Putting cognitive tasks on trial: A measure of reliability convergence

Jan Kadlec, Catherine Walsh, Uri Sadé, Ariel Amir, Jesse Rissman, Michal Ramot

Research output: Contribution to journal › Article


The surge in interest in individual differences has coincided with the latest replication crisis, centered around brain-wide association studies of brain-behavior correlations. Yet the reliability of the measures we use in cognitive neuroscience, a crucial component of this brain-behavior relationship, is often assumed but not directly tested. Here, we evaluate the reliability of different cognitive tasks on a large dataset of over 250 participants, who each completed a multi-day task battery. We show how reliability improves as a function of the number of trials, and describe the convergence of the reliability curves for the different tasks, allowing us to score tasks according to their suitability for studies of individual differences. To improve the accessibility of these findings, we designed a simple web-based tool that implements this function to calculate the convergence factor and predict the expected reliability for any given number of trials and participants, even based on limited pilot data.

Competing Interest Statement: The authors have declared no competing interest.
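The abstract describes estimating reliability from trial-level data and extrapolating it to other trial counts. The paper's own convergence-factor function is not reproduced here; as a minimal sketch of the general idea, the snippet below computes permutation-based split-half reliability and applies the standard Spearman-Brown prophecy formula to predict reliability at a different test length. All function names and the simulated-data setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def split_half_reliability(trials, n_splits=100, seed=0):
    """Permutation-based split-half reliability (illustrative sketch).

    trials: array of shape (n_participants, n_trials).
    Returns the mean Spearman-Brown-corrected split-half correlation.
    """
    rng = np.random.default_rng(seed)
    n_participants, n_trials = trials.shape
    rs = []
    for _ in range(n_splits):
        # Randomly split trials into two halves and score each participant
        order = rng.permutation(n_trials)
        half_a = trials[:, order[: n_trials // 2]].mean(axis=1)
        half_b = trials[:, order[n_trials // 2:]].mean(axis=1)
        r = np.corrcoef(half_a, half_b)[0, 1]
        # Correct the half-length correlation up to the full test length
        rs.append(2 * r / (1 + r))
    return float(np.mean(rs))

def predicted_reliability(r_observed, length_factor):
    """Spearman-Brown prophecy: expected reliability when the test is
    lengthened (or shortened) by `length_factor` times."""
    return length_factor * r_observed / (1 + (length_factor - 1) * r_observed)
```

For example, a pilot measure with reliability 0.5 would be predicted to reach about 0.67 if the number of trials were doubled, under the (strong) assumption that added trials behave like the existing ones.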
Original language: English
State: In preparation - 3 Jul 2023

