Abstract
Erasure codes are used in large-scale storage systems to allow recovery of data from a failed node. A recently developed class of codes, locally repairable codes (LRCs), offers tradeoffs between storage overhead and repair cost. LRCs facilitate efficient recovery by adding parity blocks to the system. However, these additional blocks may eventually increase the number of blocks that must be reconstructed. Existing LRCs differ in their use of the parity blocks, in their locality semantics, and in their parameter space. Thus, existing theoretical models cannot be used to directly compare different LRCs to determine which code offers the best recovery performance, and at what cost. We perform the first systematic comparison of existing LRC approaches. We analyze Xorbas, Azure's LRCs, and Optimal-LRCs in light of two new metrics: average degraded read cost and normalized repair cost. We show the tradeoff between these costs and the code's fault tolerance, and that different approaches offer different choices in this tradeoff. Our experimental evaluation on a Ceph cluster further demonstrates the different effects of realistic system bottlenecks on the benefit from each LRC approach. Despite these differences, the normalized repair cost metric can reliably identify the LRC approach that would achieve the lowest repair cost in each setup.
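The precise metric definitions appear in the paper itself; as a rough illustration of the kind of per-block accounting such repair-cost metrics involve, the sketch below averages the number of blocks read to repair any single lost block in a simple LRC layout with local groups and global parities. The layout, the group sizes, the cost model, and the `average_repair_cost` helper are illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch (not the paper's exact metrics): for an LRC that splits
# k data blocks into local groups, each protected by one local parity, plus
# num_global_parities global parities, estimate the average number of blocks
# read to repair a single lost block.

def average_repair_cost(k, local_group_size, num_global_parities):
    """Average blocks read to repair one lost block, assuming a block in a
    local group is rebuilt from the rest of its group, and a lost global
    parity is rebuilt from all k data blocks (an assumed cost model)."""
    num_groups = -(-k // local_group_size)          # ceiling division
    costs = []
    # Data blocks and the local parity of each group: repairing any one of
    # them reads the remaining blocks of that group.
    for g in range(num_groups):
        group_data = min(local_group_size, k - g * local_group_size)
        group_total = group_data + 1                # data blocks + local parity
        costs += [group_total - 1] * group_total    # one cost entry per block
    # Global parities: assume repair reads all k data blocks.
    costs += [k] * num_global_parities
    return sum(costs) / len(costs)

if __name__ == "__main__":
    # Example: k=10 data blocks, local groups of 5, 4 global parities.
    print(average_repair_cost(10, 5, 4))
```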
| Original language | English |
| --- | --- |
| Article number | 11 |
| Journal | ACM Transactions on Storage |
| Volume | 16 |
| Issue number | 2 |
| DOIs | |
| State | Published - 13 Jun 2020 |
Keywords
- Erasure codes
- local repair
All Science Journal Classification (ASJC) codes
- Hardware and Architecture