AI Only Needs A Two-Hour Conversation With You To Accurately Copy Your Personality
All it takes is a two-hour conversation with an artificial intelligence (AI) model to replicate someone’s personality with 85 percent accuracy.
Researchers from Google and Stanford University interviewed 1,052 individuals for two hours each, then used the interviews to build generative AI replicas designed to imitate each participant's behavior.
Each participant completed a battery of personality tests, social surveys, and logic games, then repeated the same battery two weeks later, giving the researchers a baseline against which to assess the accuracy of the AI replicas.
When the AI replicas were subjected to the same tests, their responses aligned with those of their human counterparts 85 percent of the time.
AI models that mimic human behavior could have implications across a number of research frameworks. They may help with assessing public health policy effectiveness, analyzing reactions to product launches, or simulating responses to major societal events that would be too difficult, expensive, or ethically challenging to study with real human participants.
“General-purpose simulation of human attitudes and behavior—where each simulated person can engage across a range of social, political, or informational contexts—could enable a laboratory for researchers to test a broad set of interventions and theories,” wrote the researchers.
During the interviews with participants, the researchers asked about their life stories, their values, and their opinions on societal issues to create the AI replicas. As a result, the AI was able to grasp subtle details that traditional surveys or demographic data often miss.
The researchers used the interviews to generate personalized AI models that predicted how people responded to survey questions, behavioral games, and social experiments.
The AI models closely resembled their human counterparts in certain areas, but their accuracy varied depending on the task.
They did particularly well when replicating responses to personality surveys and determining social attitudes. However, their accuracy fell when predicting behaviors in interactive games involving economic decision-making. AI typically struggles with tasks that depend on social dynamics and contextual cues.
The research team noted that the technology has the potential to be misused if it falls into the wrong hands. Scammers are already using AI and "deepfake" technologies to impersonate, deceive, and manipulate other people online.
The silver lining of this technology is that it could allow scientists to study aspects of human behavior that were previously impractical to investigate.
It offers a controlled testing environment, free from the ethical, logistical, and interpersonal complexities involved in working with humans.
“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made—that, I think, is ultimately the future,” said Joon Sung Park, the lead author of the study and a doctoral student in computer science at Stanford University.
The study was posted to the preprint database arXiv.