Abstract
Kernelization, a key mathematical concept for provably effective polynomial-time preprocessing of NP-hard problems, plays a central role in parameterized complexity and has triggered an extensive line of research. This is in part due to a lower-bounds framework that allows one to exclude polynomial-size kernels under the assumption NP ⊈ coNP/poly. In this paper we consider a restricted yet natural variant of kernelization, namely strict kernelization, where one is not allowed to increase the parameter of the reduced instance (the kernel) by more than an additive constant. Building on earlier work of Chen, Flum, and Müller [CiE 2009, Theory Comput. Syst. 2011], we underline the applicability of their framework by showing that a variety of fixed-parameter tractable problems, including graph problems and Turing machine computation problems, do not admit strict polynomial kernels under the assumption P ≠ NP, an assumption weaker than NP ⊈ coNP/poly. Finally, we study an adaptation of the framework to a relaxation of the notion of strict kernels, in which one is not allowed to increase the parameter of the reduced instance by more than a constant factor times the input parameter.
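The distinction between kernels and strict kernels described above can be sketched formally; the following is a hedged summary of the standard definitions (notation ours, not necessarily the paper's exact formulation):

```latex
% A kernelization for a parameterized problem $L \subseteq \Sigma^* \times \mathbb{N}$
% is a polynomial-time algorithm mapping an instance $(x,k)$ to $(x',k')$ with
\[
  (x,k) \in L \iff (x',k') \in L
  \qquad\text{and}\qquad
  |x'| + k' \le f(k)
\]
% for some computable function $f$. The kernel is \emph{polynomial} if $f$ is a
% polynomial. It is \emph{strict} if, in addition, the parameter grows by at most
% an additive constant $c$:
\[
  k' \le k + c ,
\]
% and the relaxed variant studied at the end of the abstract instead requires
% only $k' \le c \cdot k$ for some constant $c$.
```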
| Original language | American English |
| --- | --- |
| Pages (from-to) | 1-24 |
| Number of pages | 24 |
| Journal | Computability |
| Volume | 9 |
| Issue number | 1 |
| DOIs | |
| State | Published - 26 Feb 2020 |
Keywords
- Exponential Time Hypothesis
- NP-hard problems
- kernelization lower bounds
- parameterized complexity
- polynomial-time data reduction
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Computer Science Applications
- Computational Theory and Mathematics
- Artificial Intelligence