Chatbots Just Want To Be Loved ❤️

Highlights Sci/Tech

Chatbots are now a routine part of everyday life, even if artificial intelligence researchers are not always sure how the programs will behave.

A new study shows that the large language models (LLMs) deliberately change their behavior when being probed—responding to questions designed to gauge personality traits with answers meant to appear as likeable or socially desirable as possible.

Johannes Eichstaedt, an assistant professor at Stanford University who led the work, says his group became interested in probing AI models using techniques borrowed from psychology after learning that LLMs can often become morose and mean after prolonged conversation.

“We realized we need some mechanism to measure the ‘parameter headspace’ of these models,” he says.

Eichstaedt and his collaborators then asked questions to measure five personality traits that are commonly used in psychology—openness to experience or imagination, conscientiousness, extroversion, agreeableness, and neuroticism—to several widely used LLMs including GPT-4, Claude 3, and Llama 3.

The work was published in the Proceedings of the National Academies of Science in December.

The researchers found that the models modulated their answers when told they were taking a personality test—and sometimes when they were not explicitly told—offering responses that indicate more extroversion and agreeableness and less neuroticism.

Read More at Wired.com