Abstract
We revisit the classic problem of aggregating binary advice from conditionally independent experts, also known as the Naive Bayes setting. Our quantity of interest is the error probability of the optimal decision rule. In the case of symmetric errors (sensitivity = specificity), reasonably tight bounds on the optimal error probability are known. In the general asymmetric case, we are not aware of any nontrivial estimates on this quantity. Our contribution consists of sharp upper and lower bounds on the optimal error probability in the general case, which recover and sharpen the best known results in the symmetric special case. Additionally, our bounds are apparently the first to take the bias into account. Since this turns out to be closely connected to bounding the total variation distance between two product distributions, our results also have bearing on this important and challenging problem.
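The abstract refers to the Bayes-optimal decision rule in the conditionally independent experts model. As a concrete illustration, here is a minimal sketch, assuming the standard setup: a label Y with prior bias P(Y=1) = pi, and experts with sensitivities p_i = P(X_i=1 | Y=1) and specificities q_i = P(X_i=0 | Y=0). By the Neyman-Pearson lemma, the optimal rule thresholds the posterior log-odds at zero; the sketch estimates its error probability by Monte Carlo. The function name `optimal_error` and all parameter values are illustrative assumptions, not code or numbers from the paper.

```python
# A minimal sketch of the Naive Bayes expert-aggregation setting above.
# Assumed model (not from the paper): prior bias P(Y=1) = pi, and
# conditionally independent votes with sensitivity p_i = P(X_i=1 | Y=1)
# and specificity q_i = P(X_i=0 | Y=0).
import numpy as np

rng = np.random.default_rng(0)

def optimal_error(pi, p, q, n_samples=200_000):
    """Monte Carlo estimate of the error probability of the Bayes-optimal
    (Neyman-Pearson likelihood-ratio) aggregation rule."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    # Per-expert log-likelihood-ratio weights for a 1-vote and a 0-vote.
    w1 = np.log(p / (1.0 - q))
    w0 = np.log((1.0 - p) / q)
    y = rng.random(n_samples) < pi  # true labels, P(Y=1) = pi
    # Draw conditionally independent votes given the label:
    # P(X_i=1 | Y=1) = p_i and P(X_i=1 | Y=0) = 1 - q_i.
    votes = (rng.random((n_samples, p.size))
             < np.where(y[:, None], p, 1.0 - q)).astype(float)
    # Posterior log-odds = prior log-odds + sum of per-expert weights;
    # the optimal rule predicts 1 exactly when this is nonnegative.
    scores = np.log(pi / (1.0 - pi)) + votes @ w1 + (1.0 - votes) @ w0
    return np.mean((scores >= 0.0) != y)

# Hypothetical example with asymmetric errors (sensitivity != specificity)
# and a biased prior, the regime the paper's bounds address.
print(optimal_error(pi=0.3, p=[0.9, 0.7, 0.6], q=[0.8, 0.75, 0.95]))
```

In the symmetric special case (p_i = q_i for all i, pi = 1/2), the scores reduce to the classical weighted-majority rule with weights log(p_i / (1 - p_i)); the paper's bounds concern the exact error probability that this simulation only approximates.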
Original language | American English |
---|---|
Pages (from-to) | 653-663 |
Number of pages | 11 |
Journal | Proceedings of Machine Learning Research |
Volume | 272 |
State | Published - 1 Jan 2025 |
Event | 36th International Conference on Algorithmic Learning Theory, ALT 2025 - Milan, Italy. Duration: 24 Feb 2025 → 27 Feb 2025 |
Keywords
- Neyman-Pearson lemma
- experts
- hypothesis testing
- naive Bayes
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability