Abstract
How many bits of information are revealed by a learning algorithm for a concept class of VC-dimension d? Previous works have shown that even for d = 1 the amount of information may be unbounded (it tends to ∞ with the size of the universe). Can it be that all concepts in the class require leaking a large amount of information? We show that typically concepts do not require such leakage: there exists a proper learning algorithm that reveals O(d) bits of information for most concepts in the class. This result is a special case of a more general phenomenon we explore: if there is a low-information learner when the algorithm knows the underlying distribution on inputs, then there is a learner that reveals little information about an average concept without knowing the distribution on inputs.
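A toy illustration (not from the paper) of the VC-dimension parameter d that the abstract refers to: threshold concepts on a finite universe form the canonical class of VC-dimension 1, the simplest setting in which the information revealed can already be unbounded. The universe size and concept class below are hypothetical choices for the sketch.

```python
from itertools import combinations

# Hypothetical toy example: threshold concepts on a 10-point universe.
# c_t labels x as 1 iff x >= t; one concept per threshold t.
universe = list(range(10))
concepts = [[1 if x >= t else 0 for x in universe] for t in range(11)]

def shattered(points):
    """True iff every 0/1 labeling of `points` is realized by some concept."""
    realized = {tuple(c[p] for p in points) for c in concepts}
    return len(realized) == 2 ** len(points)

# Every single point is shattered (both labels 0 and 1 are achievable) ...
assert all(shattered([p]) for p in universe)
# ... but no pair is: the labeling (1, 0) on x1 < x2 is never realized,
# so the VC-dimension of the threshold class is exactly 1.
assert not any(shattered(list(pair)) for pair in combinations(universe, 2))
print("VC-dimension of thresholds on a 10-point universe: 1")
```

The same check generalizes: the largest set size for which `shattered` returns `True` is, by definition, the VC-dimension d appearing in the O(d) bound of the abstract.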
| Original language | American English |
|---|---|
| Pages (from-to) | 633-646 |
| Number of pages | 14 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 98 |
| State | Published - 2019 |
| Event | 30th International Conference on Algorithmic Learning Theory (ALT 2019), Chicago, United States |
| Duration | 22 Mar 2019 → 24 Mar 2019 |
| URL | https://proceedings.mlr.press/v98 |
Keywords
- Learning Theory
- Information Theory
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability