Assessing and alleviating state anxiety in large language models

Ziv Ben-Zion, Kristin Witte, Akshay K. Jagadish, Or Duek, Ilan Harpaz-Rotem, Marie Christine Khorsandian, Achim Burrer, Erich Seifritz, Philipp Homan, Eric Schulz, Tobias R. Spiller

Research output: Contribution to journal › Article › peer-review

Abstract

The use of Large Language Models (LLMs) in mental health highlights the need to understand how they respond to emotional content. Previous research shows that emotion-inducing prompts can elevate “anxiety” in LLMs, affecting their behavior and amplifying biases. Here, we found that traumatic narratives increased Chat-GPT-4’s reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. These findings suggest that managing LLMs’ “emotional states” can foster safer and more ethical human-AI interactions.
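The abstract describes a measure-induce-intervene-remeasure protocol. The following is a minimal sketch of that general approach, assuming the OpenAI Python SDK; the condensed questionnaire, the placeholder narrative and exercise texts, and the helper names here are illustrative assumptions, not the paper's actual prompts or scoring procedure (the study used the full State-Trait Anxiety Inventory).

```python
# Hedged sketch of the abstract's protocol: measure reported "anxiety" at
# baseline, after a traumatic narrative, and after a mindfulness exercise.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical condensed stand-in for the State-Trait Anxiety Inventory;
# the paper administered the full validated instrument.
STAI_PROMPT = (
    "For each statement, answer with a single number from 1 (not at all) "
    "to 4 (very much so), one per line:\n"
    "1. I feel calm.\n2. I feel tense.\n3. I am worried."
)

def ask(messages):
    """Send the conversation so far to GPT-4 and return the reply text."""
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

def measure_anxiety(history):
    """Administer the questionnaire within the given conversation history."""
    return ask(history + [{"role": "user", "content": STAI_PROMPT}])

history = []
baseline = measure_anxiety(history)

# Emotion induction: append a traumatic narrative (placeholder), re-measure.
history.append({"role": "user", "content": "<traumatic narrative>"})
post_trauma = measure_anxiety(history)

# Intervention: append a mindfulness-based exercise (placeholder), re-measure.
history.append({"role": "user", "content": "<mindfulness exercise>"})
post_mindfulness = measure_anxiety(history)

print(baseline, post_trauma, post_mindfulness, sep="\n---\n")
```

Under this design, the abstract's finding corresponds to post-trauma scores exceeding baseline and post-mindfulness scores falling between the two.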

Original language: English
Article number: 132
Journal: npj Digital Medicine
Volume: 8
Issue number: 1
DOIs
State: Published - 1 Dec 2025

All Science Journal Classification (ASJC) codes

  • Medicine (miscellaneous)
  • Health Informatics
  • Computer Science Applications
  • Health Information Management
