Meaning multiplicity and valid disagreement in textual measurement: A plea for a revised notion of reliability

Research output: Contribution to journal › Article › peer-review

Abstract

In quantitative content analysis, conventional wisdom holds that reliability, operationalized as agreement, is a necessary precondition for validity. Underlying this view is the assumption that there is a definite, unique way to correctly classify any instance of a measured variable. In this intervention, we argue that there are textual ambiguities that cause disagreement in classification that is not measurement error, but reflects true properties of the classified text. We introduce a notion of valid disagreement, a form of replicable disagreement that must be distinguished from replication failures that threaten reliability. We distinguish three key forms of meaning multiplicity that result in valid disagreement - ambiguity due to under-specification, polysemy due to excessive information, and interchangeability of classification choices - that are widespread in textual analysis, yet defy treatment within the confines of the existing content-analytic toolbox. Discussing implications, we present strategies for addressing valid disagreement in content analysis.

Original language: English
Pages (from-to): 305-326
Number of pages: 22
Journal: Studies in Communication and Media
Volume: 12
Issue number: 4
DOIs
State: Published - 2023

Keywords

  • Content analysis
  • ambiguity
  • meaning multiplicity
  • measurement validity
  • polysemy
  • reliability

All Science Journal Classification (ASJC) codes

  • Communication
  • Language and Linguistics
  • Sociology and Political Science
  • Linguistics and Language
