Space Bounds for Reliable Storage: Fundamental Limits of Coding

Alexander Spiegelman, Yuval Cassuto, Gregory Chockler, Idit Keidar

Research output: Working paper › Preprint


We study the inherent space requirements of shared storage algorithms in asynchronous fault-prone systems. Previous works use codes to achieve a better storage cost than the well-known replication approach. However, a closer look reveals that they incur extra costs elsewhere: some use unbounded storage in communication links, while others assume bounded concurrency or synchronous periods. We prove here that this is inherent: if there is no bound on the concurrency level, then the storage cost of any reliable storage algorithm is at least f+1 times the data size, where f is the number of tolerated failures. We further present a technique for combining erasure codes with full replication so as to obtain the best of both. Specifically, we present a storage algorithm whose storage cost is close to the lower bound in the worst case and adapts to the concurrency level.
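The cost gap the abstract refers to can be made concrete with a back-of-the-envelope comparison. The sketch below is our own illustration, not the paper's algorithm: it contrasts the worst-case storage cost of full replication, which matches the stated (f+1)-times-data-size lower bound, with that of a hypothetical (n, k) MDS erasure code, whose cost is (n/k) times the data size.

```python
# Illustrative cost comparison (assumption: an (n, k) MDS erasure code
# with n coded fragments, any k of which suffice to recover the data).

def replication_cost(data_size, f):
    # Full replication keeps a complete copy at f + 1 servers,
    # so total storage is (f + 1) * data_size -- the lower bound
    # the paper proves for unbounded concurrency.
    return (f + 1) * data_size

def erasure_code_cost(data_size, n, k):
    # An (n, k) MDS code splits the data into k fragments and stores
    # n coded fragments of size data_size / k each, for a total of
    # (n / k) * data_size -- cheaper than replication when n / k < f + 1.
    return (n / k) * data_size

D, f = 1000, 2
print(replication_cost(D, f))          # 3000
print(erasure_code_cost(D, n=5, k=3))  # ≈ 1666.7
```

Under bounded concurrency the coded cost can thus be well below (f+1)·D, which is why an adaptive algorithm that falls back to replication only under high concurrency can approach the best of both regimes.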
Original language: English
State: Published - 18 Jul 2015


  • cs.DC


