Using large language models to evaluate alternative uses task flexibility score

Eran Hadas, Arnon Hershkovitz

Research output: Contribution to journal › Article › peer-review

Abstract

In the Alternative Uses Task (AUT), participants are asked to list as many uses as possible for a simple object. The task measures Divergent Thinking (DT), which involves exploring possible solutions across various semantic domains. In this study, we employ a Machine Learning approach to automatically generate suitable categories for object uses and to classify given responses into them. We show that the results yielded by this automated approach are correlated with human ratings and can be used to predict expected behavior in the field. Educators and researchers may utilize this approach to address the limitations of subjective scoring, save time, and use the AUT as a tool for cultivating creativity.
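The flexibility dimension of the AUT is commonly scored as the number of distinct semantic categories a participant's responses span. The following is a minimal sketch of that counting step, assuming the responses have already been classified into categories (in the paper, this classification is done automatically; the category labels below are purely illustrative):

```python
def flexibility_score(response_categories):
    """Flexibility = number of distinct categories among the
    classified responses of a single participant."""
    return len(set(response_categories))

# Illustrative example: five uses for a brick, each already
# mapped to a (hypothetical) category by the classifier.
categories = ["building", "weight", "weight", "art", "tool"]
print(flexibility_score(categories))  # 4 distinct categories
```

Note that fluency (total number of responses) and flexibility differ: repeated uses within the same category add to fluency but not to flexibility.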

Original language: English
Article number: 101549
Journal: Thinking Skills and Creativity
Volume: 52
State: Published - Jun 2024

Keywords

  • Alternative uses task
  • Creativity
  • Divergent thinking
  • Flexibility
  • Large language models

All Science Journal Classification (ASJC) codes

  • Education
