Commit 1b8bd6c082
The repair checker and repair worker both need to determine which pieces are healthy, which are retrievable, and which should be replaced, but they have been doing it in different ways in different code, which has been the cause of bugs. The same term could have very similar but subtly different meanings between the two, causing much confusion.

With this change, the piece- and node-classification logic is consolidated into one place within the satellite/repair package, so that both subsystems can use it. This ought to make decision-making code more concise and more readable.

The consolidated classification logic has been expanded to create more sets, so that the decision-making code does not need to do as much precalculation. It should now be clearer in comments and code that a piece can belong to multiple sets arbitrarily (except where the definition of the sets makes this logically impossible), and what the precise meaning of each set is. These sets include Missing, Suspended, Clumped, OutOfPlacement, InExcludedCountry, ForcingRepair, UnhealthyRetrievable, Unhealthy, Retrievable, and Healthy.

Some other side effects of this change:

* CreatePutRepairOrderLimits no longer needs to special-case excluded countries; it can just create as many order limits as requested (by way of len(newNodes)).
* The repair checker will now queue a segment for repair when there are any pieces out of placement. The code calls this "forcing a repair".
* The checker.ReliabilityCache is now accessed by way of a GetNodes() function similar to the one on the overlay. The classification methods like MissingPieces(), OutOfPlacementPieces(), and PiecesNodesLastNetsInOrder() are removed in favor of the classification logic in satellite/repair/classification.go. This means the reliability cache no longer needs access to the placement rules or excluded countries list.

Change-Id: I105109fb94ee126952f07d747c6e11131164fadb
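To make the relationships among these sets concrete, here is a minimal Go sketch of what a shared classification result could look like. The set names come from the change description above; the type names, field types, per-set comments, and the helper method are illustrative assumptions, not the actual satellite/repair API.

```go
package repair

// PieceSet is a hypothetical set of piece numbers within one segment.
type PieceSet map[uint16]struct{}

// ClassifiedPieces groups one segment's pieces into the sets named in the
// change description. A piece may belong to several sets at once, except
// where the definitions make that logically impossible (a piece cannot be
// both Missing and Retrievable, for example).
type ClassifiedPieces struct {
	Missing              PieceSet // pieces that cannot be fetched at all
	Suspended            PieceSet // pieces held by suspended nodes
	Clumped              PieceSet // pieces sharing a network with another piece
	OutOfPlacement       PieceSet // pieces on nodes violating the segment's placement
	InExcludedCountry    PieceSet // pieces on nodes in repair-excluded countries
	ForcingRepair        PieceSet // pieces that by themselves force the segment into repair
	UnhealthyRetrievable PieceSet // still downloadable, but should be replaced
	Unhealthy            PieceSet // not counted toward segment health
	Retrievable          PieceSet // still downloadable
	Healthy              PieceSet // counted toward segment health
}

// NumHealthy shows how decision-making code could consume the sets directly,
// without recomputing intermediate values.
func (c ClassifiedPieces) NumHealthy() int { return len(c.Healthy) }
```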
.github
certificate
cmd
crashcollect
docs
installer/windows
multinode
private
resources
satellite
scripts
storagenode
testsuite
versioncontrol
web
.dockerignore
.earthlyignore
.gitattributes
.gitignore
.gitreview
CODE_OF_CONDUCT.md
CODEOWNERS
CONTRIBUTING.md
DEVELOPING.md
docker-compose.tests.yaml
Earthfile
go.mod
go.sum
Jenkinsfile
Jenkinsfile.premerge
Jenkinsfile.public
Jenkinsfile.verify
LICENSE
MAINTAINERS.md
Makefile
monkit.lock
proto.lock
README.md
Storj V3 Network
Storj is building a distributed cloud storage network. Check out our white paper for more info!
Storj is an S3-compatible platform and suite of distributed applications that allows you to store data in a secure and distributed manner. Your files are encrypted, broken into little pieces and stored in a global distributed network of computers. Luckily, we also support allowing you (and only you) to retrieve those files!
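For a sense of what this looks like from a client's point of view, here is a minimal sketch that uploads one object using the storj.io/uplink Go library (the uplink client library mentioned under the versioning note below). The satellite address, API key, passphrase, and bucket name are placeholders, and exact signatures may differ between releases.

```go
package main

import (
	"context"
	"log"

	"storj.io/uplink"
)

func main() {
	ctx := context.Background()

	// Placeholder credentials; obtain real values from your satellite project.
	access, err := uplink.RequestAccessWithPassphrase(ctx,
		"satellite-address", "my-api-key", "my-encryption-passphrase")
	if err != nil {
		log.Fatal(err)
	}

	project, err := uplink.OpenProject(ctx, access)
	if err != nil {
		log.Fatal(err)
	}
	defer project.Close()

	// Upload a small object; the data is encrypted, broken into pieces,
	// and stored across nodes in the network.
	if _, err := project.EnsureBucket(ctx, "demo-bucket"); err != nil {
		log.Fatal(err)
	}
	upload, err := project.UploadObject(ctx, "demo-bucket", "hello.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := upload.Write([]byte("hello, storj")); err != nil {
		log.Fatal(err)
	}
	if err := upload.Commit(); err != nil {
		log.Fatal(err)
	}
}
```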
Contributing to Storj
All of our code for Storj v3 is open source. If anything feels off, or if you feel that some functionality is missing, please check out the contributing page. There you will find instructions for sharing your feedback, building the tool locally, and submitting pull requests to the project.
A Note about Versioning
While we are practicing semantic versioning for our client libraries such as uplink, we are not practicing semantic versioning in this repo, as we do not intend for it to be used via Go modules. We may have backwards-incompatible changes between minor and patch releases in this repo.
Start using Storj
Our wiki has documentation and tutorials. Check out these three tutorials:
License
This repository is currently licensed under AGPLv3.
For code released under the AGPLv3, we request that contributors sign our Contributor License Agreement (CLA) so that we can relicense the code under Apache v2, or other licenses in the future.
Support
If you have any questions or suggestions, please reach out to us on our community forum or file a ticket at https://support.storj.io/.