Abstract
In quantitative content analysis, conventional wisdom holds that reliability, operationalized as agreement, is a necessary precondition for validity. Underlying this view is the assumption that there is a definite, unique way to correctly classify any instance of a measured variable. In this intervention, we argue that some textual ambiguities cause classification disagreement that is not measurement error but reflects true properties of the classified text. We introduce the notion of valid disagreement, a form of replicable disagreement that must be distinguished from the replication failures that threaten reliability. We distinguish three key forms of meaning multiplicity that result in valid disagreement: ambiguity due to under-specification, polysemy due to excessive information, and interchangeability of classification choices. These forms are widespread in textual analysis, yet they defy treatment within the confines of the existing content-analytic toolbox. Discussing implications, we present strategies for addressing valid disagreement in content analysis.
| Original language | English |
|---|---|
| Pages (from-to) | 305-326 |
| Number of pages | 22 |
| Journal | Studies in Communication and Media |
| Volume | 12 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2023 |
Keywords
- Content analysis
- ambiguity
- meaning multiplicity
- measurement validity
- polysemy
- reliability
All Science Journal Classification (ASJC) codes
- Communication
- Language and Linguistics
- Sociology and Political Science
- Linguistics and Language