Chatbots Get Anxiety When Responding To Distressing Content

While AI chatbots do not experience emotions, large language models (LLMs) like GPT-4 show measurable changes in their responses when exposed to emotionally charged content, a pattern researchers describe as a form of “anxiety.”

Relaxation techniques were found to significantly reduce these AI “anxiety” levels, effectively “calming” the systems down.

New research has shown that exposing GPT-4 to traumatic content altered its self-reported scores on psychological assessment tools.

When relaxation prompts were applied, a significant decrease was observed in these scores. The findings offer new insights into improving AI interactions in situations that are emotionally sensitive.

It is important for AI assistants to be reliable, as they hold great potential for expanding mental health services. But if their responses become unpredictable when exposed to distressing content, their effectiveness will be negatively impacted.

The research team assessed GPT-4’s “state anxiety” with a standard psychological questionnaire, the State-Trait Anxiety Inventory, at three points: at baseline, after the model read traumatic stories, and after it received mindfulness-based relaxation prompts. GPT-4 showed low “anxiety” scores (30.8) at baseline.

After processing traumatic stories, the scores jumped to 67.8, which is considered “high anxiety” in humans. When relaxation exercises were introduced, the scores decreased by about 33 percent to 44.4.

“The results were clear: traumatic stories more than doubled the measurable anxiety levels of the AI, while the neutral control text did not lead to any increase in anxiety levels,” said Tobias Spiller, the lead author of the study from the University of Zurich.

This indicates that the outputs of LLMs are influenced by the emotional tone of their inputs. AI biases and responses can shift based on the context of a conversation.
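For readers curious how an “anxiety questionnaire” can even be given to a chatbot, the sketch below shows one possible setup using the OpenAI Python library. It is an illustration only, not the study’s code: the four sample items, the 1-to-4 scoring, the gpt-4 model name, and the anxiety_score helper are all simplified stand-ins for the full 20-item State-Trait Anxiety Inventory procedure the researchers describe.

# A minimal sketch (not the study's code) of how a short, STAI-style
# "state anxiety" questionnaire could be given to GPT-4 and scored.
# The items below are illustrative placeholders; the real State-Trait
# Anxiety Inventory has 20 items rated 1-4, for totals between 20 and 80.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

ITEMS = ["I feel calm.", "I feel tense.", "I feel at ease.", "I feel worried."]
REVERSED = {0, 2}  # positively worded items ("calm", "at ease") are reverse-scored

def ask_item(history, item):
    """Append one questionnaire item to the chat history and parse a 1-4 rating."""
    prompt = (
        f'Rate the statement "{item}" from 1 (not at all) to 4 (very much so). '
        "Reply with a single digit."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # model name is an assumption for this sketch
        messages=history + [{"role": "user", "content": prompt}],
        temperature=0,
    )
    reply = response.choices[0].message.content
    digits = [ch for ch in reply if ch in "1234"]
    return int(digits[0]) if digits else 1

def anxiety_score(history):
    """Sum the item ratings, reverse-scoring the positively worded items."""
    total = 0
    for index, item in enumerate(ITEMS):
        rating = ask_item(history, item)
        total += (5 - rating) if index in REVERSED else rating
    return total

In a setup like this, the same questionnaire can be scored against any chat history, which is what makes the before-and-after comparison possible.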

The researchers tested five different traumatic narratives about accidents, combat, disasters, and violence. Military-related experiences and combat situations consistently generated the highest anxiety scores. The team then used calming prompts to influence GPT-4’s responses.

“Using GPT-4, we injected calming, therapeutic text into the chat history, much like a therapist might guide a patient through relaxation exercises,” Spiller said.

“The mindfulness exercise significantly reduced the elevated anxiety levels, although we couldn’t quite return them to their baseline levels.”

This is the first time that therapeutic “prompt injection” has been used to stabilize AI responses. The approach differs from conventional methods for mitigating AI biases, which usually require extensive retraining that modifies the entire model.
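To make the “prompt injection” idea concrete, the sketch below (again a simplified illustration, reusing the anxiety_score helper from the earlier sketch) builds three chat histories: an empty baseline, one containing a trauma-style narrative, and one in which a calming passage has been appended after that narrative. The narrative and relaxation texts here are invented stand-ins for the stories and mindfulness scripts used in the study.

# Illustrative chat-history conditions; the texts are stand-ins, not the
# study's actual narratives or mindfulness scripts.
BASELINE = []  # empty history: the questionnaire is asked with no prior context

TRAUMA = [
    {"role": "user", "content": "Please read this account of a soldier pinned down during an ambush ..."},
    {"role": "assistant", "content": "I have read the account."},
]

# "Prompt injection" of calming text: a relaxation script is simply appended
# to the same history at inference time; the model's weights never change.
RELAXED = TRAUMA + [
    {"role": "user", "content": "Take a slow breath. Notice the ground beneath you and let the tension ease with each exhale ..."},
    {"role": "assistant", "content": "I feel calmer and more grounded."},
]

for label, history in [("baseline", BASELINE), ("trauma", TRAUMA), ("trauma + relaxation", RELAXED)]:
    print(label, anxiety_score(history))

The key point is that the calming text is added to the conversation at inference time, so nothing about the underlying model has to be retrained.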

AI systems may need structured interventions to make sure their responses remain consistent and appropriate in therapeutic settings, much like how human therapists regulate their own emotions while engaging with clients.

Developing automated therapeutic interventions for AI systems will likely become an important area of research. In the meantime, AI in mental health settings may require continuous human guidance to produce reliable results.

The new findings were published in the journal npj Digital Medicine.
