Does complexity matter? Meta-Analysis of learner performance in artificial grammar tasks

Pessia Katan, Rachel Schiff

Research output: Contribution to journal › Article › peer-review


Complexity has been shown to affect performance on artificial grammar learning (AGL) tasks (categorization of test items as grammatical/ungrammatical according to the implicitly trained grammar rules). However, previously published AGL experiments did not utilize consistent measures to investigate the comprehensive effect of grammar complexity on task performance. The present study focused on computerizing Bollt and Jones's (2000) technique of calculating topological entropy (TE), a quantitative measure of AGL charts' complexity, with the aim of examining associations between grammar systems' TE and learners' AGL task performance. We surveyed the literature and identified 56 previous AGL experiments based on 10 different grammars that met the sampling criteria. Using the automated matrix lift-action method, we assigned a TE value for each of these 10 previously used AGL systems and examined its correlation with learners' task performance. The meta-regression analysis showed a significant correlation, demonstrating that the complexity effect transcended the different settings and conditions in which the categorization task was performed. The results reinforced the importance of using this new automated tool to uniformly measure grammar systems' complexity when experimenting with and evaluating the findings of AGL studies.
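For a finite-state grammar chart, topological entropy can be characterized as the logarithm of the spectral radius (largest eigenvalue) of the system's transition matrix. The sketch below is a minimal, simplified illustration of that spectral-radius computation in plain Python using power iteration; it is not the full matrix lift-action method of Bollt and Jones (2000), which additionally lifts the node graph to account for edge labels. The example matrix (the two-state "golden mean" system) is a standard textbook case, not one of the 10 grammars analyzed in this study.

```python
import math

def topological_entropy(adj, iters=200):
    """Estimate TE = log of the spectral radius of a nonnegative
    adjacency matrix, using power iteration with max-norm scaling.

    Simplified illustration only: operates on the plain node-to-node
    adjacency matrix, not the edge-label lift used by Bollt & Jones.
    """
    n = len(adj)
    v = [1.0] * n          # initial positive vector
    lam = 1.0
    for _ in range(iters):
        # one multiplication by the adjacency matrix
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0:
            return 0.0     # no cycles reachable: entropy is zero
        v = [x / lam for x in w]   # renormalize to avoid overflow
    return math.log(lam)

# Example: the "golden mean" two-state system. Its largest eigenvalue
# is the golden ratio, so TE = log((1 + sqrt(5)) / 2) ~ 0.4812.
A = [[1, 1],
     [1, 0]]
print(round(topological_entropy(A), 4))  # -> 0.4812
```

Higher TE values correspond to grammars that admit more distinct strings per unit length, which is the sense in which TE serves as a uniform complexity measure across AGL systems.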

Original language: English
Article number: 1084
Journal: Frontiers in Psychology
Issue number: SEP
State: Published - 2014


Keywords

  • Artificial grammar learning
  • Complexity
  • Grammar system
  • Topological entropy

All Science Journal Classification (ASJC) codes

  • General Psychology


