# Bounds for list-decoding and list-recovery of random linear codes

Venkatesan Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, Mary Wootters

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

## Abstract

A family of error-correcting codes is list-decodable from error fraction p if, for every code in the family, the number of codewords in any Hamming ball of fractional radius p is less than some integer L that is independent of the code length. It is said to be list-recoverable for input list size ℓ if, for every sufficiently large subset of codewords (of size L or more), there is a coordinate where the codewords take more than ℓ values. The parameter L is the "list size" in either case. The capacity, i.e., the largest possible rate for these notions as the list size L → ∞, is known to be 1 − h_q(p) for list-decoding and 1 − log_q ℓ for list-recovery, where q is the alphabet size of the code family. In this work, we study the list size of random linear codes for both list-decoding and list-recovery as the rate approaches capacity. We show that the following claims hold with high probability over the choice of the code (below, q is the alphabet size and ε > 0 is the gap to capacity).

- A random linear code of rate 1 − log_q(ℓ) − ε requires list size L ≥ ℓ^Ω(1) for list-recovery from input list size ℓ. This is in surprising contrast to completely random codes, where L = O(ℓ/ε) suffices w.h.p.
- A random linear code of rate 1 − h_q(p) − ε requires list size L ≥ ⌊h_q(p)/ε + 0.99⌋ for list-decoding from error fraction p, when ε is sufficiently small.
- A random binary linear code of rate 1 − h_2(p) − ε is list-decodable from average error fraction p with list size L ≤ ⌊h_2(p)/ε⌋ + 2. (The average error version measures the average Hamming distance of the codewords from the center of the Hamming ball, rather than the maximum distance as in list-decoding.)

The second and third results together precisely pin down the list sizes of binary random linear codes, for both list-decoding and average-radius list-decoding, to three possible values.
Our lower bounds follow by exhibiting an explicit subset of codewords such that this subset, or some symbol-wise permutation of it, lies in a random linear code with high probability. This uses a recent characterization by Mosheiff, Resch, Ron-Zewi, Silas, and Wootters (2019) of configurations of codewords that are contained in random linear codes. Our upper bound follows from a refinement of the techniques of Guruswami, Håstad, Sudan, and Zuckerman (2002) and strengthens a previous result of Li and Wootters (2018), which applied to list-decoding rather than average-radius list-decoding.
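The bounds in the abstract are simple functions of the q-ary entropy h_q(p) and the capacity gap ε. The following sketch (function names are ours, not from the paper) evaluates the list-size lower bound ⌊h_q(p)/ε + 0.99⌋ and the average-radius upper bound ⌊h_2(p)/ε⌋ + 2 for concrete parameters:

```python
import math

def q_ary_entropy(p: float, q: int = 2) -> float:
    """q-ary entropy h_q(p) = p*log_q(q-1) - p*log_q(p) - (1-p)*log_q(1-p)."""
    if p == 0.0:
        return 0.0
    if p == 1.0:
        return math.log(q - 1, q)
    return (p * math.log(q - 1, q)
            - p * math.log(p, q)
            - (1 - p) * math.log(1 - p, q))

def list_size_bounds(p: float, eps: float, q: int = 2) -> tuple[int, int]:
    """List-size bounds from the abstract for a random linear code of rate
    1 - h_q(p) - eps: the lower bound holds for list-decoding (small eps),
    and the upper bound holds for q = 2, average-radius list-decoding."""
    h = q_ary_entropy(p, q)
    lower = math.floor(h / eps + 0.99)  # L >= floor(h_q(p)/eps + 0.99)
    upper = math.floor(h / eps) + 2     # L <= floor(h_2(p)/eps) + 2
    return lower, upper

print(list_size_bounds(p=0.1, eps=0.05))  # → (10, 11)
```

Since ⌊x⌋ + 2 exceeds ⌊x + 0.99⌋ by at most 2, the two bounds confine the binary list size to at most three values, matching the abstract's claim.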

Original language: American English
Title of host publication: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2020
Editors: Jaroslaw Byrka, Raghu Meka
Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
ISBN: 9783959771641
DOI: https://doi.org/10.4230/LIPIcs.APPROX/RANDOM.2020.9
Publication status: Published - 1 Aug 2020
Peer-reviewed: Yes
Event: 23rd International Conference on Approximation Algorithms for Combinatorial Optimization Problems and 24th International Conference on Randomization and Computation, APPROX/RANDOM 2020 - Virtual, Online, United States
Duration: 17 Aug 2020 → 19 Aug 2020

### Publication series

Name: Leibniz International Proceedings in Informatics, LIPIcs
Volume: 176

### Conference

Conference: 23rd International Conference on Approximation Algorithms for Combinatorial Optimization Problems and 24th International Conference on Randomization and Computation, APPROX/RANDOM 2020
Country: United States
City: Virtual, Online
Period: 17/08/20 → 19/08/20

## Keywords

• Coding theory
• List-decoding
• List-recovery
• Random linear codes

### Subject areas

• Software
