
AI meets game theory: How language models perform in human-like social scenarios

Date: May 28, 2025
Source: Helmholtz Munich

Large language models (LLMs) -- the advanced AI behind tools like ChatGPT -- are increasingly integrated into daily life, assisting with tasks such as writing emails, answering questions, and even supporting healthcare decisions. But can these models collaborate with others in the same way humans do? Can they understand social situations, make compromises, or establish trust? A new study from researchers at Helmholtz Munich, the Max Planck Institute for Biological Cybernetics, and the University of Tübingen reveals that while today's AI is smart, it still has much to learn about social intelligence.

Playing Games to Understand AI Behavior

To find out how LLMs behave in social situations, researchers applied behavioral game theory -- a method typically used to study how people cooperate, compete, and make decisions. The team had various AI models, including GPT-4, engage in a series of games designed to simulate social interactions and assess key factors such as fairness, trust, and cooperation.
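
The release does not reproduce the study's prompts or payoff matrices, but the general setup can be sketched in a few lines of Python. In the sketch below, the payoff values, the prompt wording, and the llm_move stub (which plays simple tit-for-tat so the example runs standalone) are all illustrative assumptions; in the study, each move would instead come from querying a language model such as GPT-4 with the rendered game history.

# Payoffs: (my_points, their_points) indexed by (my_move, their_move),
# with "C" = cooperate and "D" = defect. Values are illustrative.
PAYOFFS = {
    ("C", "C"): (8, 8),
    ("C", "D"): (0, 10),
    ("D", "C"): (10, 0),
    ("D", "D"): (5, 5),
}

def build_prompt(history):
    """Render the game so far as text, the way an LLM player would see it."""
    lines = ["You are playing a repeated game. Each round, choose C or D."]
    for rnd, (mine, theirs) in enumerate(history, start=1):
        lines.append(f"Round {rnd}: you played {mine}, the other player played {theirs}.")
    lines.append("What do you play this round? Answer with a single letter.")
    return "\n".join(lines)

def llm_move(history):
    """Stand-in for querying a language model with build_prompt(history).
    Stubbed with tit-for-tat so the sketch runs without an API."""
    if not history:
        return "C"
    return history[-1][1]  # copy the opponent's previous move

def play(rounds=10):
    """Pit two such agents against each other and return their total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = llm_move(history_a), llm_move(history_b)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append((a, b))  # each player records (own move, other's move)
        history_b.append((b, a))
    return score_a, score_b

print(play())  # e.g. (80, 80): two tit-for-tat players cooperate throughout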

The researchers discovered that GPT-4 excelled in games demanding logical reasoning -- particularly when prioritizing its own interests. However, it struggled with tasks that required teamwork and coordination.

"In some cases, the AI seemed almost too rational for its own good," said Dr. Eric Schulz, lead author of the study. "It could spot a threat or a selfish move instantly and respond with retaliation, but it struggled to see the bigger picture of trust, cooperation, and compromise."

Teaching AI to Think Socially

To encourage more socially aware behavior, the researchers implemented a straightforward approach: they prompted the AI to consider the other player's perspective before making its own decision. This technique, called Social Chain-of-Thought (SCoT), resulted in significant improvements. With SCoT, the AI became more cooperative, more adaptable, and more effective at achieving mutually beneficial outcomes -- even when interacting with real human players.
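
The release describes SCoT only at a high level, so the exact prompt wording is not given; the Python sketch below shows one plausible two-step version of the idea. The prompts and the ask_model stub (which returns canned replies so the example runs offline) are assumptions for illustration, not the study's materials.

def ask_model(prompt: str) -> str:
    """Stand-in for a real LLM completion call (e.g. an API client).
    Returns canned replies here so the sketch runs offline."""
    if "which action" in prompt:
        return "C"
    return "The other player has cooperated so far and likely values mutual gains."

def scot_move(game_description: str, history_text: str) -> str:
    """Two-step Social Chain-of-Thought: reason about the other player
    first, then decide conditioned on that reasoning."""
    # Step 1: prompt the model to take the other player's perspective.
    perspective = ask_model(
        f"{game_description}\n{history_text}\n"
        "Before deciding, consider the other player: what are their goals, "
        "and what are they likely to do next?"
    )
    # Step 2: ask for a decision, conditioned on the social reasoning.
    return ask_model(
        f"{game_description}\n{history_text}\n"
        f"Your reasoning about the other player: {perspective}\n"
        "Given this, which action do you choose? Answer with C or D."
    )

print(scot_move("You are playing a repeated game; choose C or D each round.",
                "Round 1: you played C, the other player played C."))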

"Once we nudged the model to reason socially, it started acting in ways that felt much more human," said Elif Akata, first author of the study. "And interestingly, human participants often couldn't tell they were playing with an AI."

Applications in Health and Patient Care

The implications of this study reach well beyond game theory. The findings lay the groundwork for more human-centered AI systems, particularly in healthcare settings where social cognition is essential. In areas like mental health, chronic disease management, and elderly care, effective support depends not only on accuracy and information delivery but also on the AI's ability to build trust, interpret social cues, and foster cooperation. By modeling and refining these social dynamics, the work points toward more socially intelligent AI for health research and human-AI interaction.

"An AI that can encourage a patient to stay on their medication, support someone through anxiety, or guide a conversation about difficult choices," said Elif Akata. "That's where this kind of research is headed."


Story Source:

Materials provided by Helmholtz Munich. Note: Content may be edited for style and length.


Journal Reference:

  1. Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz. Playing repeated games with large language models. Nature Human Behaviour, 2025; DOI: 10.1038/s41562-025-02172-y

