Abstract
Deduplication is a special case of data compression in which repeated chunks of data are stored only once. The input data is cut into chunks, and a cryptographically strong hash value of each distinct chunk is stored. To confine the influence of small insertions and deletions to local perturbations, the chunk boundaries are usually defined in a data-dependent way, which implies that the chunks are of variable length. The chunk sizes may then spread over a large range, which can have a negative impact on storage performance. This is commonly dealt with by imposing artificial lower and upper bounds on the chunk size. This paper proposes an alternative by which the chunk size distribution is controlled in a natural way. Some analytical and experimental results are given.
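The abstract does not spell out the algorithm, but the mechanism it refers to is content-defined chunking: a rolling hash over a sliding window declares a chunk boundary whenever the hash matches a target pattern, and artificial minimum/maximum bounds clip the resulting size distribution. The sketch below is a minimal, illustrative Python version of that generic baseline scheme, not the paper's proposed alternative; all parameter names and values (`MIN_CHUNK`, `MAX_CHUNK`, `WINDOW`, `MODULUS`, `PRIME`) are assumptions chosen for illustration.

```python
import hashlib

# Illustrative parameters (assumptions, not from the paper).
MIN_CHUNK = 2 * 1024    # artificial lower bound on the chunk size
MAX_CHUNK = 16 * 1024   # artificial upper bound on the chunk size
WINDOW = 48             # rolling-hash window length in bytes
MODULUS = 4096          # boundary hit rate ~ 1/MODULUS per position
PRIME = 257             # base of the polynomial rolling hash

_MASK = (1 << 32) - 1
_POW_OUT = pow(PRIME, WINDOW - 1, 1 << 32)  # weight of the oldest byte


def chunk(data: bytes):
    """Yield variable-length chunks whose boundaries depend on content."""
    start = 0
    h = 0
    for i, b in enumerate(data):
        # Roll the hash: drop the byte leaving the window, add the new one.
        # (For simplicity the window restarts at each boundary; production
        # chunkers usually keep rolling across boundaries.)
        if i - start >= WINDOW:
            h = (h - data[i - WINDOW] * _POW_OUT) & _MASK
        h = (h * PRIME + b) & _MASK
        length = i - start + 1
        # Declare a boundary when the hash hits the target pattern,
        # subject to the artificial lower and upper bounds.
        if (length >= MIN_CHUNK and h % MODULUS == 0) or length >= MAX_CHUNK:
            yield data[start:i + 1]
            start = i + 1
            h = 0
    if start < len(data):
        yield data[start:]


def dedup_store(data: bytes):
    """Map each distinct chunk to its SHA-256 digest; store each chunk once."""
    store = {}
    recipe = []
    for c in chunk(data):
        digest = hashlib.sha256(c).hexdigest()
        store.setdefault(digest, c)  # repeated chunks are stored only once
        recipe.append(digest)
    return store, recipe
```

Because the boundary test depends only on the window content, a small insertion or deletion shifts at most the chunks near the edit point, which is the locality property the abstract mentions; the min/max clipping is exactly the artificial bounding that the paper's approach aims to replace.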
| Original language | English |
| --- | --- |
| Pages (from-to) | 81-91 |
| Number of pages | 11 |
| Journal | Discrete Applied Mathematics |
| Volume | 274 |
| DOIs | |
| State | Published - 15 Mar 2020 |
Keywords
- Chunk size
- Compression
- Deduplication
All Science Journal Classification (ASJC) codes
- Discrete Mathematics and Combinatorics
- Applied Mathematics