TY - GEN
T1 - Superpolynomial Lower Bounds for Learning Monotone Classes
AU - Bshouty, Nader H.
N1 - Publisher Copyright: © 2023 Schloss Dagstuhl – Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing. All rights reserved.
PY - 2023/9
Y1 - 2023/9
AB - Koch, Strassle, and Tan [SODA 2023] show that, under the randomized exponential time hypothesis, there is no distribution-free PAC-learning algorithm that runs in time n^{Õ(log log s)} for the classes of n-variable size-s DNF, size-s Decision Tree, and log s-Junta by DNF (i.e., that returns a DNF hypothesis). Assuming a natural conjecture on the hardness of set cover, they give the lower bound n^{Ω(log s)}. This matches the best known upper bound for n-variable size-s Decision Tree and log s-Junta. In this paper, we give the same lower bounds for PAC-learning of n-variable size-s Monotone DNF, size-s Monotone Decision Tree, and Monotone log s-Junta by DNF. This solves the open problem posed by Koch, Strassle, and Tan and subsumes the above results. The lower bound holds even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time, and can compute the target function on all points in the support of the distribution in polynomial time.
KW - Lower Bound
KW - Monotone DNF
KW - Monotone Decision Tree
KW - Monotone Junta
KW - PAC Learning
UR - http://www.scopus.com/inward/record.url?scp=85171989122&partnerID=8YFLogxK
DO - 10.4230/LIPIcs.APPROX/RANDOM.2023.34
M3 - Conference contribution
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2023
A2 - Megow, Nicole
A2 - Smith, Adam
T2 - 26th International Conference on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2023 and the 27th International Conference on Randomization and Computation, RANDOM 2023
Y2 - 11 September 2023 through 13 September 2023
ER -