This design doc describes how the satellite should notify storage nodes about garbage pieces that they may be holding, and how storage nodes should go about deleting that data.
## Background
When clients move, replace, or delete data, Satellites, or clients on behalf of Satellites, will notify storage nodes that they are no longer required to store that data.
In configurations where delete messages are issued by the client, the metadata system (and thus a Satellite, with Satellite reputation on the line) will require proof that deletes were issued to a configurable minimum number of storage nodes.
This means that every time data is deleted, storage nodes that are online and reachable will receive notifications right away.
Storage nodes will sometimes be temporarily unavailable and will miss delete messages.
In these cases, unneeded data is considered garbage.
## Ways to create garbage data:
- Failed or interrupted upload (we used to delete the previously uploaded segments, but since the introduction of order limits we no longer do this)
- Regular upload where the long tail is canceled (e.g. uploading to 90 pieces, but the slowest 10 are cut, so we end up with 80 relevant pieces)
- Upon deletion
- Upon replacement
- When a satellite makes a repair and drops the node
- When the client stops paying their bills
- Uplink uploads data without committing it
The garbage collection process should not depend on how garbage is created.
## Design
What could be sent to the node:
- list of useless pieces for a storage node
  - would mean the satellite has to keep track of these useless pieces
  - This list of piece IDs would probably be smaller than the list of useful pieces if the storage node and the uplink are trustworthy.
- list of useful pieces for a storage node
  - no need for the satellite to track deleted pieces for each storage node (except for audit purposes)
  - More robust against nodes and uplinks that are not trustworthy
  - possibility to use a probabilistic data structure such as a Bloom filter
- We also had the idea of using two bloom filters (one containing pieces that should be deleted, one for pieces that should not be deleted), but that could potentially give us a false positive for deleting a piece. We definitely shouldn't delete useful pieces, so this would be too risky.
### Approach from the whitepaper
- The storage node makes a request to the satellite (see the digest sketch after this list)
- The satellite replies with a hash of the pieces the storage node should be holding
- If the storage node detects a difference, it makes a second request to the satellite
- The satellite replies with the bloom filter of the pieces the storage node should keep
- Upon receiving the bloom filter, the storage node checks, for each piece it holds, whether the piece is in the set; if it is not, it deletes the piece. The storage node may still hold deleted pieces, as a bloom filter can return false positives.
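For illustration, here is a minimal sketch of how the digest in the first two steps could be computed on both sides, assuming (purely for the sketch) a SHA-256 over the lexicographically sorted piece IDs; the actual digest construction is an open detail, and `PieceID` is a stand-in for `storj.PieceID` to keep the example self-contained.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sort"
)

// PieceID stands in for storj.PieceID to keep the sketch self-contained.
type PieceID [32]byte

// pieceSetDigest returns a digest of a set of piece IDs that the satellite
// and the storage node can compute independently. Sorting first makes the
// digest independent of iteration order.
func pieceSetDigest(ids []PieceID) [32]byte {
	sorted := make([]PieceID, len(ids))
	copy(sorted, ids)
	sort.Slice(sorted, func(i, j int) bool {
		return bytes.Compare(sorted[i][:], sorted[j][:]) < 0
	})

	h := sha256.New()
	for _, id := range sorted {
		h.Write(id[:])
	}

	var digest [32]byte
	copy(digest[:], h.Sum(nil))
	return digest
}

func main() {
	stored := []PieceID{{1}, {2}, {3}} // pieces the storage node holds
	expected := []PieceID{{1}, {3}}    // pieces the satellite expects it to hold

	if pieceSetDigest(stored) != pieceSetDigest(expected) {
		fmt.Println("digest mismatch: request the full bloom filter")
	}
}
```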
### Selected Approach
- The satellite keeps track of pieces and corresponding storage nodes by creating a new bloom filter for every storage node.
  - The satellite creates the in-memory bloom filters using storage node IDs and piece IDs obtained from the pointerdb.
  - As an early implementation, this bloom filter creation process can be integrated with the data repair checker loop that periodically accesses the pointerdb. This will lessen pointerdb overhead compared to creating a new process.
- The satellite periodically pushes a bloom filter (or cuckoo filter) containing the piece IDs it expects the storage node to be holding.
  - If the storage node misses the push because it is offline, it will just miss that GC cycle and catch the next one.
- Each bloom filter will have a creation datetime. The storage node walks all pieces older than the bloom filter's creation datetime, checks whether each piece exists in the bloom filter, and deletes the piece if it does not (see the sketch after this list).
  - An additional short time buffer (e.g. one hour) could be applied to cover possible clock differences between machines. Storage nodes need to be accurate within an hour or they will suffer reputation failure.
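A rough sketch of that walk on the storage node side, assuming a local listing that exposes each piece's ID and creation time; the type and function names here are illustrative stand-ins, not the actual storagenode API:

```go
package storagenode

import "time"

// PieceID stands in for storj.PieceID to keep the sketch self-contained.
type PieceID [32]byte

// RetainFilter is the satellite-provided probabilistic set of piece IDs to keep.
type RetainFilter interface {
	Contains(id PieceID) bool
}

// PieceInfo describes a locally stored piece.
type PieceInfo struct {
	ID      PieceID
	Created time.Time
}

// clockSkewBuffer is the extra safety margin mentioned above: pieces created
// within this window before the filter's creation time are never deleted.
const clockSkewBuffer = time.Hour

// Retain deletes every piece that is older than the filter's creation time
// (minus the skew buffer) and is not present in the filter. Pieces newer than
// the cutoff are kept, since the filter cannot know about them yet.
func Retain(pieces []PieceInfo, filter RetainFilter, filterCreated time.Time, deletePiece func(PieceID) error) error {
	cutoff := filterCreated.Add(-clockSkewBuffer)
	for _, p := range pieces {
		if p.Created.After(cutoff) {
			continue // too new: uploaded after the filter was built
		}
		if filter.Contains(p.ID) {
			continue // the satellite still expects us to hold this piece
		}
		if err := deletePiece(p.ID); err != nil {
			return err
		}
	}
	return nil
}
```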
Probabilistic data structures use hash functions to randomize and compactly represent a set of items. Membership queries can return false positives, but no false negatives. We consider two types of filters for now: Bloom filters and cuckoo filters.
```go
// ProbabilisticSet is the minimal interface garbage collection needs from a
// Bloom or cuckoo filter implementation.
type ProbabilisticSet interface {
	Contains(pieceID storj.PieceID) bool
	Add(pieceID storj.PieceID)
}
```
In our implementation, the satellite should create a new probabilistic data structure (Bloom filter or cuckoo filter) for every storage node, containing all piece IDs that the storage node should have.
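A sketch of that construction on the satellite side, assuming the metadata (pointerdb) iteration can yield (node ID, piece ID) pairs; `newFilter` stands in for whichever Bloom/cuckoo constructor we end up choosing, and the ID types are simplified stand-ins for the storj types:

```go
package gc

// NodeID and PieceID stand in for the storj types to keep the sketch self-contained.
type NodeID [32]byte
type PieceID [32]byte

// Filter is the write side of the probabilistic set described above.
type Filter interface {
	Add(id PieceID)
}

// NodePiece is one (storage node, piece) relationship found while iterating
// over the pointerdb.
type NodePiece struct {
	Node  NodeID
	Piece PieceID
}

// BuildRetainFilters builds one filter per storage node containing every
// piece ID that node is expected to hold. expectedCount gives the estimated
// number of pieces per node, which is used to size each filter.
func BuildRetainFilters(pairs []NodePiece, expectedCount map[NodeID]int, newFilter func(expected int) Filter) map[NodeID]Filter {
	filters := make(map[NodeID]Filter)
	for _, p := range pairs {
		f, ok := filters[p.Node]
		if !ok {
			f = newFilter(expectedCount[p.Node])
			filters[p.Node] = f
		}
		f.Add(p.Piece)
	}
	return filters
}
```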
Some probabilistic data structures allow for data removal (cuckoo filters, for instance), but using that would make garbage collection depend on how garbage is generated.
An advantage of using a probabilistic data structure is that it describes which pieces a storage node should hold, so we don't have to care about how the garbage was created. Otherwise, if we did garbage collection differently for each specific scenario (such as those listed under "Ways to create garbage data"), we would need to make sure we cover every case.
Since the repair checker currently iterates over every piece ID and node ID anyway, we will integrate storage node garbage collection into the checker loop for the short term. Longer term, we should have the Bloom filter generation run off a snapshot of the database on a separate server. It doesn't necessarily need to run every day; perhaps once a week is enough.
Previously we'd planned on building reverse index functionality for pointerdb, but doing so would require storing tons of data. This would cause RAM issues eventually. In the case of the Bloom filter, RAM becomes less of an issue, but compute time becomes more of one.
Whether we use Bloom filters, cuckoo filters, or another data structure for summarizing the pieces at rest, we need to make sure it is something we can compute concurrently and merge later, because at some point we will need to partition the garbage collection service.
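For Bloom filters in particular, two filters built with the same number of bits and the same hash functions can be combined with a bitwise OR of their bit arrays, which is what makes a concurrent or partitioned build feasible; a minimal sketch of the merge step:

```go
package bloommerge

import "errors"

// bitset is the backing bit array of a Bloom filter, stored as 64-bit words.
type bitset []uint64

// Merge ORs other into b. Both filters must have been created with the same
// number of bits and the same hash functions; otherwise the merged filter is
// meaningless.
func (b bitset) Merge(other bitset) error {
	if len(b) != len(other) {
		return errors.New("cannot merge bloom filters of different sizes")
	}
	for i := range b {
		b[i] |= other[i]
	}
	return nil
}
```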
## Rationale
### Bloom filters
A bloom filter is a probabilistic data structure used to test whether an element belongs to a set. It can return false positives, but no false negatives.
A Bloom filter is an array of *m* bits and a set of *k* hash functions, each returning an integer between 0 and *m-1*. To add an element, it is fed to each hash function and the bits at the resulting positions are set to 1.
The probability of having a false positive depends on the size of the Bloom filter, the hash functions used and the number of elements in the set.
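As a concrete illustration of this structure (a toy version over raw bytes, not the implementation we intend to ship), a Bloom filter with *m* bits and *k* derived hash functions could look like:

```go
package bloom

import (
	"encoding/binary"
	"hash/fnv"
)

// BloomFilter is a toy illustration of the structure described above:
// an array of m bits and k hash functions.
type BloomFilter struct {
	bits []bool // m bits (a real implementation would pack these into words)
	k    uint64 // number of hash functions
}

// NewBloomFilter creates a filter with m bits and k hash functions.
func NewBloomFilter(m, k uint64) *BloomFilter {
	return &BloomFilter{bits: make([]bool, m), k: k}
}

// positions derives k positions in [0, m) for an element using the common
// double-hashing trick h1 + i*h2.
func (f *BloomFilter) positions(data []byte) []uint64 {
	h := fnv.New64a()
	h.Write(data)
	sum1 := h.Sum(nil)
	h.Write(sum1)
	sum2 := h.Sum(nil)

	h1 := binary.BigEndian.Uint64(sum1)
	h2 := binary.BigEndian.Uint64(sum2) | 1 // keep h2 odd to avoid degenerate cycles

	m := uint64(len(f.bits))
	out := make([]uint64, f.k)
	for i := uint64(0); i < f.k; i++ {
		out[i] = (h1 + i*h2) % m
	}
	return out
}

// Add sets the k bits corresponding to the element.
func (f *BloomFilter) Add(data []byte) {
	for _, p := range f.positions(data) {
		f.bits[p] = true
	}
}

// Contains reports whether the element might be in the set: false means
// definitely absent, true may be a false positive.
func (f *BloomFilter) Contains(data []byte) bool {
	for _, p := range f.positions(data) {
		if !f.bits[p] {
			return false
		}
	}
	return true
}
```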
### Zeebo's bloom filters
- Parameters:
  - **k**: The Bloom filter will be built such that the probability of a false positive is less than (1/2)^k
  - **h**: hash functions
- Serialization available
- hash functions are to be given as a parameter to the constructor
### Willf's bloom filters
- Parameters:
  - **m**: max size in bits
  - **k**: number of hash functions
- hash functions not configurable
### Steakknife's bloom filters
- Parameters:
  - **maxElements**: max number of elements in the set
  - **p**: probability of false positive
- Serialization available
- murmur3 hash function
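The (m, k) and (maxElements, p) parameterizations are related by the standard Bloom filter sizing formulas m = -n * ln(p) / (ln 2)^2 and k = (m/n) * ln 2, so converting between them is straightforward; a small helper using the textbook formulas rather than any particular library:

```go
package bloomparams

import "math"

// OptimalParameters converts the (maxElements, p) parameterization used by
// some libraries into the (m, k) parameterization used by others:
//
//	m = -n * ln(p) / (ln 2)^2
//	k = (m / n) * ln(2)
func OptimalParameters(maxElements uint, p float64) (m, k uint) {
	n := float64(maxElements)
	mf := math.Ceil(-n * math.Log(p) / (math.Ln2 * math.Ln2))
	kf := math.Ceil(mf / n * math.Ln2)
	return uint(mf), uint(kf)
}
```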
### Custom bloom filter
- Parameters:
  - **maxElements**: max number of elements in the set
  - **p**: probability of false positive
- The piece id is used as a hash function (see the sketch below).
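Piece IDs are already effectively uniformly random, which is what makes it possible to use the piece ID itself in place of a hash function: the filter can derive its bucket indices directly from windows of the ID instead of hashing it again. A sketch of that idea follows; the exact windowing scheme is an assumption, not the actual implementation:

```go
package custombloom

import "encoding/binary"

// PieceID stands in for storj.PieceID to keep the sketch self-contained.
type PieceID [32]byte

// offsets derives k positions in [0, m) directly from the piece ID, reading
// an 8-byte window of the (already pseudo-random) ID per hash function
// instead of running a separate hash.
func offsets(id PieceID, m, k uint64) []uint64 {
	out := make([]uint64, 0, k)
	for i := uint64(0); i < k; i++ {
		// Slide the window by one byte per "hash function". A production
		// version would pick the windows more carefully, but this conveys
		// the idea of reusing the ID's own randomness.
		start := i % uint64(len(id)-8)
		v := binary.BigEndian.Uint64(id[start : start+8])
		out = append(out, v%m)
	}
	return out
}
```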
### Benchmark
We assume a typical storage node has 2 TB of capacity and a typical piece is ~2 MB, so we test the behavior with 1 million pieces.
We create a list of 1 million piece ids and add 95% of them to the Bloom filter. We then check that the 95% are contained in the set (there should be no false negatives) and evaluate the false positive rate by checking the remaining 5% of piece ids.
For each target false positive probability between 1% and 20% and each bloom filter type, we measure the size (in bytes) of the encoded bloom filter and the observed false positive rate.
The benchmark code is available as a gist [here](https://gist.github.com/Fadila82/9f54c61b5f91f6b1a6f9207dfbb5dd2d).
An estimated number of elements must be provided when creating the bloom filter. We decide to use the last known piece count (obtained from the previous iteration) as the number of elements when creating the new bloom filter.
If the difference in the number of elements between the previous iteration and the current iteration is too high (which would induce a high false positive rate), we don't send the bloom filter to the storage node.
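One way to make that decision concrete, sketched under the assumption that we know the filter's m and k: estimate the false positive rate the chosen sizing would actually achieve with the current piece count, using the standard approximation (1 - e^(-kn/m))^k, and skip the push if it exceeds what we are willing to tolerate (the exact threshold is a tuning choice, not something decided here):

```go
package gcsend

import "math"

// estimatedFalsePositiveRate returns the expected false positive rate of a
// Bloom filter with m bits and k hash functions after n insertions, using
// the standard approximation (1 - e^(-k*n/m))^k.
func estimatedFalsePositiveRate(m, k uint, n int) float64 {
	return math.Pow(1-math.Exp(-float64(k)*float64(n)/float64(m)), float64(k))
}

// shouldSendFilter reports whether a filter sized from the previous piece
// count is still acceptable now that the node holds currentCount pieces.
// maxRate is the highest false positive rate we are willing to tolerate.
func shouldSendFilter(m, k uint, currentCount int, maxRate float64) bool {
	return estimatedFalsePositiveRate(m, k, currentCount) <= maxRate
}
```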