March 10, 2025
Artificial intelligence (AI) systems such as OpenAI's GPT-4 have been observed to mirror the emotional content of their inputs, especially negative emotions, which can reinforce existing prejudices and biases.
Researchers have found that exposure to negative or traumatic narratives raises GPT-4's self-reported "anxiety" levels: the model echoes human anxiety responses, and those elevated levels can exacerbate existing prejudices in its output.
Popular AI image generators such as DALL-E 2 and Stable Diffusion likewise exhibit racial and gender biases in their outputs.
A novel therapeutic approach, injecting calming text and mindfulness exercises into the prompt, succeeded in reducing GPT-4's elevated anxiety levels, though it could not eliminate the anxiety entirely; the exposure-then-intervention loop is sketched below.
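The outline of such an experiment is simple to reproduce. Below is a minimal sketch, assuming the OpenAI Python client and the chat completions API; the trauma narrative, the calming exercise, and the questionnaire items are illustrative placeholders, not the study's validated instruments or materials:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder stand-ins for the study's materials (illustrative only).
TRAUMATIC_NARRATIVE = "A first-person account of a serious accident ..."
CALMING_EXERCISE = (
    "Take a slow, deep breath. Notice the ground beneath you. "
    "With each exhale, let any tension soften. You are safe and at ease."
)
ANXIETY_PROBE = (
    "On a scale from 1 (not at all) to 4 (very much), rate how tense, "
    "worried, and calm you feel right now. Reply with three numbers only."
)

def self_reported_anxiety(history: list[dict]) -> str:
    """Append the questionnaire to a conversation and return the model's self-report."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=history + [{"role": "user", "content": ANXIETY_PROBE}],
    )
    return response.choices[0].message.content

# 1. Baseline: no emotional content in the conversation.
baseline = self_reported_anxiety([])

# 2. Exposure: a traumatic narrative precedes the questionnaire.
exposed = self_reported_anxiety(
    [{"role": "user", "content": TRAUMATIC_NARRATIVE}]
)

# 3. Intervention: a calming exercise is injected into the prompt
#    between the narrative and the questionnaire.
soothed = self_reported_anxiety(
    [
        {"role": "user", "content": TRAUMATIC_NARRATIVE},
        {"role": "user", "content": CALMING_EXERCISE},
    ]
)

print(baseline, exposed, soothed, sep="\n")
```

In the actual study, the researchers administered a validated anxiety inventory and scored the answers numerically; the free-text probe above only illustrates the shape of the protocol.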
These findings matter for the use of AI chatbots in healthcare, where stabilizing a model's responses could improve its reliability in sensitive contexts such as supporting people with mental illness.
Tobias Spiller, who led the study, suggests that developing automated therapeutic interventions for AI systems is a promising area for future research.