STEER: Assessing the Economic Rationality of Large Language Models

Narun Raman, Taylor Lundy, Samuel Joseph Amouyal, Yoav Levine, Kevin Leyton-Brown, Moshe Tennenholtz

Research output: Contribution to journal › Conference article › peer-review

Abstract

There is increasing interest in using LLMs as decision-making “agents.” Doing so includes many degrees of freedom: which model should be used; how should it be prompted; should it be asked to introspect, conduct chain-of-thought reasoning, etc.? Settling these questions—and more broadly, determining whether an LLM agent is reliable enough to be trusted—requires a methodology for assessing such an agent's economic rationality. In this paper, we provide one. We begin by surveying the economic literature on rational decision making, taxonomizing a large set of fine-grained “elements” that an agent should exhibit, along with dependencies between them. We then propose a benchmark distribution called STEER (Systematic and Tuneable Evaluation of Economic Rationality) that quantitatively scores an LLM's performance on these elements and, combined with a user-provided rubric, produces a “STEER report card.” Finally, we describe the results of a large-scale empirical experiment with 14 different LLMs, characterizing both the current state of the art and the impact of different model sizes on models' ability to exhibit rational behavior.
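As a rough illustration of the idea of combining per-element scores with a user-provided rubric into a single report-card number, the sketch below shows one way such an aggregation might look. It is not the paper's implementation; the element names, weights, and the weighted-average aggregation are all assumptions made for this example.

```python
# Hypothetical sketch: combining per-element rationality scores with a
# user-provided rubric into an aggregate "report card" score.
# Element names and weights below are illustrative only, not from the paper.

from typing import Dict


def report_card_score(element_scores: Dict[str, float],
                      rubric_weights: Dict[str, float]) -> float:
    """Return the rubric-weighted average of the scored elements."""
    total_weight = sum(rubric_weights.get(e, 0.0) for e in element_scores)
    if total_weight == 0:
        raise ValueError("Rubric assigns zero weight to all scored elements.")
    weighted_sum = sum(element_scores[e] * rubric_weights.get(e, 0.0)
                       for e in element_scores)
    return weighted_sum / total_weight


# Example usage with made-up element scores in [0, 1]:
scores = {"expected_utility": 0.82,
          "bayesian_updating": 0.64,
          "time_consistency": 0.71}
rubric = {"expected_utility": 0.5,
          "bayesian_updating": 0.3,
          "time_consistency": 0.2}
print(round(report_card_score(scores, rubric), 3))  # -> 0.744
```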

Original language: English
Pages (from-to): 42026-42047
Number of pages: 22
Journal: Proceedings of Machine Learning Research
Volume: 235
State: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 – 27 Jul 2024

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
