Abstract
Background: Cardiopulmonary exercise testing (CPET) is used in the evaluation of unexplained dyspnea. However, its interpretation requires expertise that is often unavailable. We aimed to evaluate the utility of ChatGPT (GPT) in interpreting CPET results.

Research Design and Methods: This cross-sectional study included 150 patients who underwent CPET. Two expert pulmonologists categorized each result as normal or abnormal (cardiovascular, pulmonary, or other exercise limitation), and their categorization served as the gold standard. GPT versions 3.5 (GPT-3.5) and 4 (GPT-4) analyzed the same data using pre-defined structured inputs.

Results: GPT-3.5 correctly interpreted 67% of cases, achieving a sensitivity of 75% and a specificity of 98% in identifying normal CPET results. Its performance on abnormal tests varied with the limiting etiology. In contrast, GPT-4 showed improved interpretation of abnormal tests, with sensitivities of 83% and 92% for respiratory and cardiovascular limitations, respectively. Combining the normal CPET interpretations from both models yielded 91% sensitivity and 98% specificity. Low work rate and low peak oxygen consumption were independent predictors of inaccurate interpretation.

Conclusions: Both GPT-3.5 and GPT-4 succeeded in ruling out abnormal CPET results. This tool could be used to differentiate between normal and abnormal results.
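As a rough illustration of the reported metrics, the sketch below scores a binary normal-vs-abnormal classification against an expert gold standard and combines two models' calls. The either-model OR rule, the toy data, and all names are assumptions for illustration; the abstract does not specify the exact combination procedure.

```python
# Hypothetical sketch: scoring "normal vs. abnormal" CPET calls against the
# expert gold standard and combining two models. The OR-combination rule and
# all data here are assumptions, not the paper's actual method.

def sensitivity_specificity(predictions, gold):
    """Sensitivity/specificity for detecting a NORMAL CPET (positive class)."""
    tp = sum(p and g for p, g in zip(predictions, gold))              # normal, called normal
    fn = sum((not p) and g for p, g in zip(predictions, gold))        # normal, missed
    tn = sum((not p) and (not g) for p, g in zip(predictions, gold))  # abnormal, called abnormal
    fp = sum(p and (not g) for p, g in zip(predictions, gold))        # abnormal, called normal
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: True = "normal CPET", False = "abnormal" (any limitation).
gold  = [True, True, True, False, False, False]
gpt35 = [True, False, True, False, False, False]
gpt4  = [True, True, False, False, False, False]

# One plausible combination rule: call a study normal if EITHER model does,
# which raises sensitivity while leaving specificity largely intact.
combined = [a or b for a, b in zip(gpt35, gpt4)]

sens, spec = sensitivity_specificity(combined, gold)
print(f"combined sensitivity={sens:.0%}, specificity={spec:.0%}")
```

On the toy data, each model alone misses one normal study, while the OR combination recovers both, mirroring the direction of the sensitivity gain reported in the abstract (75% alone vs. 91% combined).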
Original language | American English |
---|---|
Pages (from-to) | 371-378 |
Number of pages | 8 |
Journal | Expert Review of Respiratory Medicine |
Volume | 19 |
Issue number | 4 |
DOIs | |
State | Published - 1 Jan 2025 |
Keywords
- Artificial Intelligence
- ChatGPT
- Generative AI
- cardiopulmonary exercise test (CPET)
- large language model
- pulmonary function test
All Science Journal Classification (ASJC) codes
- Immunology and Allergy
- Pulmonary and Respiratory Medicine
- Public Health, Environmental and Occupational Health