Personal Validation Effect in LLMs

Eunhae Lee (MIT Media Lab), Pat Pataranutaporn (MIT Media Lab), Judith Amores (Microsoft Research), and Pattie Maes (MIT Media Lab)

Corresponding Authors

Eunhae Lee (eunhae@mit.edu) & Pat Pataranutaporn (patpat@media.mit.edu)

This repository accompanies the following study:

Personal Validation Effect in LLMs: Positive AI Responses Bias Perceptions of Validity, Personalization, Reliability, and Usefulness of Fictitious Predictions

Abstract

Large Language Models (LLMs) are becoming increasingly ubiquitous in daily life, impacting decision-making across various domains. A substantial body of prior work has shown that, across a range of non-AI prediction sources, individuals tend to evaluate positive predictions more favorably than negative ones, a phenomenon often referred to as the personal validation effect. Building on this foundation, this study extends the well-established psychological effect to LLM-based predictions, examining how prediction valence influences users' perceptions when the source is an AI system. We investigate how positive AI-generated responses affect the perceived validity, personalization, reliability, and usefulness of chatbot predictions, even when those predictions are demonstrably false. In a study of 238 participants, positive predictions were perceived as significantly more valid (36% increase), personalized (42% increase), reliable (27% increase), and useful (22% increase) than negative predictions. These findings demonstrate that the personal validation effect persists in interactions with LLMs and underscore the substantial role of prediction valence in shaping user perceptions, with important implications for the design and deployment of AI systems across diverse applications.

Repository Structure

├── Data/
│   ├── Raw/
│   ├── Processed/
│   └── Code/
├── Prototype/
│   └── Web_Application/
└── Supplementary/
    └── Survey/

Repository Contents

Data

  • Raw: Original, unprocessed, and de-identified data collected during the study.
  • Processed: Cleaned and formatted data used for analysis.
  • Code: Scripts and notebooks used for data analysis and visualization.
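
For illustration only, a minimal Python sketch of how the processed data might be loaded and summarized; the file name study_data.csv, the valence column, and the four outcome columns are assumptions, so refer to Data/Processed/ and the scripts in Data/Code/ for the actual names.

# Minimal sketch (hypothetical file and column names): compare mean ratings
# of positive vs. negative predictions on the four perception measures.
import pandas as pd

df = pd.read_csv("Data/Processed/study_data.csv")  # hypothetical path

measures = ["validity", "personalization", "reliability", "usefulness"]
print(df.groupby("valence")[measures].mean())  # 'valence': positive/negative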

Prototype

  • Web Application: Implementation of assessments, simulated investment game, and prophecy generation.

Supplementary

  • Survey: Survey materials and questionnaires used in the study.

Usage

[Provide instructions on how to use the code and data in this repository]

Citation

[Provide the citation for the paper once published]

License

[Specify the license under which this research and its materials are released]

Acknowledgements

[Include any acknowledgements, funding sources, or other credits]
