Refactor the 'defer' function logic so it contains only what must not be forgotten before returning, simplifying it and making the overall function logic easier to understand.
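A minimal sketch of the idea, with hypothetical names (this is not the actual refactored function): the deferred function keeps only the cleanup that must never be skipped, while the rest of the error handling stays in the main body.

```go
package sketch

import "io"

func upload(w io.WriteCloser, data []byte) (err error) {
	defer func() {
		// Only the must-not-forget step lives here: always close the writer
		// and keep the first error encountered.
		if closeErr := w.Close(); err == nil {
			err = closeErr
		}
	}()

	_, err = w.Write(data)
	return err
}
```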
The uplink must verify that every piece upload to a storage node returns a
hash whose timestamp isn't older than the maximum elapsed time allowed by
the Satellite.
We cannot leave this check to the Satellite side alone, because if no error
is reported at upload time, the uplink cuts down the long tail.
When the uplink then submits the upload results, including the invalid ones,
the Satellite filters out the invalid results and may end up with fewer
valid upload results than the optimal threshold, so it rejects the request.
Detecting the error at this stage lets the uplink mark these uploads as
invalid and avoid cutting down the long tail prematurely.
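A minimal sketch of the kind of check this adds, with hypothetical names; the real uplink code wires the limit and the hash timestamp in from its own types:

```go
package sketch

import (
	"fmt"
	"time"
)

// verifyHashTimestamp rejects a piece hash whose timestamp is older than the
// maximum elapsed time allowed by the Satellite.
func verifyHashTimestamp(hashTimestamp time.Time, maxElapsed time.Duration) error {
	if elapsed := time.Since(hashTimestamp); elapsed > maxElapsed {
		return fmt.Errorf("piece hash timestamp too old: %s elapsed, limit %s", elapsed, maxElapsed)
	}
	return nil
}
```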
This change adds a trusted registry (in the source code) of node address to node id mappings (currently only for well-known Satellites) to defeat MITM attacks against Satellites. It also extends the uplink UI so that, when entering a satellite address by hand, a node id prefix can also be added to defeat MITM attacks with unknown satellites.
When running uplink setup, satellite addresses can now be of the form 12EayRS2V1k@us-central-1.tardigrade.io (not even using a full node id) to ensure that the peer contacted is the peer that was expected. When using a known satellite address, the known node ids are used if no override is provided.
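A rough sketch of parsing such an address, splitting an optional node id prefix from the host; the helper name and error handling are illustrative, not the actual setup code:

```go
package sketch

import (
	"fmt"
	"strings"
)

// parseSatelliteAddr splits an address of the form "<node id prefix>@<host:port>".
func parseSatelliteAddr(addr string) (idPrefix, host string, err error) {
	parts := strings.SplitN(addr, "@", 2)
	if len(parts) == 1 {
		// No prefix given: fall back to the trusted registry for known Satellites.
		return "", parts[0], nil
	}
	if parts[0] == "" {
		return "", "", fmt.Errorf("empty node id prefix in %q", addr)
	}
	return parts[0], parts[1], nil
}
```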
all of the packages and tests work with both grpc and
drpc. we'll probably need to do some jenkins pipelines
to run the tests with drpc as well.
most of the changes are really due to a bit of cleanup
of the pkg/transport.Client api into an rpc.Dialer in
the spirit of a net.Dialer. now that we don't need
observers, we can pass around stateless configuration
to everything rather than stateful things that issue
observations. it also adds a DialAddressID for the
case where we don't have a pb.Node, but we do have an
address and want to assert some ID. this happened
pretty frequently, and now there's no more weird
contortions creating custom tls options, etc.
a lot of the other changes are about being consistent and using
the abstractions in the rpc package to do rpc-style things like
finding peer information or checking status codes.
Change-Id: Ief62875e21d80a21b3c56a5a37f45887679f9412
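A rough sketch of the idea, with guessed names and signatures rather than the exact rpc.Dialer API: stateless dial configuration in the spirit of net.Dialer, plus a DialAddressID for the case where we have an address and an id but no pb.Node.

```go
package sketch

import (
	"context"
	"crypto/tls"
	"net"
	"time"
)

// Dialer carries only configuration; copies can be passed around freely
// instead of sharing a stateful transport that issues observations.
type Dialer struct {
	TLSConfig   *tls.Config
	DialTimeout time.Duration
}

// DialAddressID dials an address and asserts that the peer presents the
// expected node id.
func (d Dialer) DialAddressID(ctx context.Context, address, expectedID string) (net.Conn, error) {
	ctx, cancel := context.WithTimeout(ctx, d.DialTimeout)
	defer cancel()

	var nd net.Dialer
	conn, err := nd.DialContext(ctx, "tcp", address)
	if err != nil {
		return nil, err
	}
	// The real dialer verifies during the TLS handshake that the peer's node
	// id matches expectedID; that verification is elided here.
	_ = expectedID
	return conn, nil
}
```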
What: we move api keys out of the grpc connection-level metadata on the client side and into the request protobufs directly. the server side still supports both mechanisms for backwards compatibility.
Why: dRPC won't support connection-level metadata. the only thing we currently use connection-level metadata for is api keys. we need to move all information needed by a request into the request protobuf itself for drpc support. check out the .proto changes for the main details.
One fun side-fact: Did you know that protobuf fields 1-15 are special and only use one byte for both the field number and type? Additionally did you know we don't use field 15 anywhere yet? So the new request header will use field 15, and should use field 15 on all protobufs going forward.
Please describe the tests: all existing tests should pass
Please describe the performance impact: none
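A hypothetical sketch of the client-side shape of this change: instead of attaching the api key as connection-level metadata, it travels inside the request message itself. The names (RequestHeader, ApiKey, BeginObjectRequest) are illustrative guesses at the generated types, not the exact protobufs.

```go
package sketch

// RequestHeader carries per-request information that used to live in
// connection-level metadata; in the .proto it sits at field 15 so the tag
// still fits in a single byte.
type RequestHeader struct {
	ApiKey []byte
}

type BeginObjectRequest struct {
	Header *RequestHeader
	Bucket []byte
}

func newBeginObjectRequest(apiKey, bucket []byte) *BeginObjectRequest {
	return &BeginObjectRequest{
		Header: &RequestHeader{ApiKey: apiKey},
		Bucket: bucket,
	}
}
```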
* add outline for ECRepairer
* add description of process in TODO comments
* begin download/getting hash for a single piece
* verify piece hash and order limit during download (see the sketch after this list)
* fix download piece
* begin filling out ECRepairer.Get
* wip move ecclient.Repair to ecrepairer.Repair
* pass satellite signee into repairer
* reconstruct original stripe from pieces
* move rebuildStripe()
* calculate piece size differently, increment successful count
* fix shares slices initialization
* rename stripeData to segment
* do not pad reader in Repair()
* temp debug
* create unsafeRSScheme
* use decode reader
* rename file name to be all lowercase
* make repair downloader async
* declare condition variable inside Get method
* set downloadAndVerifyPiece's in-memory buffer to be share size
* update unusedLimits var
* address comments
* remove unnecessary comments
* move initialization of segmentRepairer to be outside of repairer service
* use ReadAll during download
* remove dots and move hashing to after validating for order limit signature
* wip test
* make sure files exactly at min threshold are repaired
* remove unused code
* use corrupt data and write back to storagenode
* only create corrupted node and piece ids once
* add comment
* address nat's comment
* fix linting and checker_test
* update comment
* add comments
* remove "copied from ecclient" comments
* add clarification comments in ec.Repair
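A rough sketch of the per-piece verification step referenced above (verify the piece hash during download). All names, and the choice of SHA-256, are assumptions; the real logic also checks the order limit and the storage node's signature.

```go
package sketch

import (
	"bytes"
	"crypto/sha256"
	"errors"
)

type pieceHash struct {
	PieceID string
	Hash    []byte
}

// verifyPieceHash checks that the downloaded data matches the hash the
// storage node returned and that the hash refers to the expected piece.
func verifyPieceHash(expectedPieceID string, data []byte, hash pieceHash) error {
	if hash.PieceID != expectedPieceID {
		return errors.New("piece hash refers to a different piece")
	}
	sum := sha256.Sum256(data)
	if !bytes.Equal(sum[:], hash.Hash) {
		return errors.New("downloaded data does not match piece hash")
	}
	return nil
}
```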
What: this change makes sure the count of segments is not encrypted.
Why: having the segment count encrypted just makes things hard for no reason - a satellite operator can figure out how many segments an object has by looking at the other segments in the database. but if a user has access but has lost their encryption key, they now can't clean up or delete old segments because they can't know how many there are without just guessing until they get errors. :(
Backwards compatibility: clients will still understand old pointers and will still write old pointers. at some point in the future perhaps we can do a migration for remaining old pointers so we can delete the old code.
Please describe the tests: covered by existing tests
Please describe the performance impact: none
What: Change cmd/uplink to use scopes
It moves the fields that will be subsumed by scopes into an explicit legacy section and hides their configuration flags.
Why: So that it can read scopes in from files and other sources
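A guess at the shape of the config change: the fields that scopes subsume move under an explicit legacy section whose flags are hidden. The field names below are illustrative only, not the actual cmd/uplink config.

```go
package sketch

type LegacyConfig struct {
	APIKey        string `hidden:"true"`
	SatelliteAddr string `hidden:"true"`
	EncryptionKey string `hidden:"true"`
}

type Config struct {
	Scope  string       // serialized scope, readable from a file or flag
	Legacy LegacyConfig // superseded fields, kept for compatibility
}
```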
* storagenode/piecestore: Unexport endpoint method
Make an exported endpoint method unexported because it's only used within the
same package; this makes it easier to change without worrying about breaking
changes.
* uplink/ecclient: Use structured logger
Swap the sugared logger for the normal structured logger so the debug message
includes the full stack trace of the error.
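A minimal before/after sketch with go.uber.org/zap, assuming that is the logger in use; the message and error here are made up:

```go
package main

import (
	"errors"

	"go.uber.org/zap"
)

func main() {
	logger, _ := zap.NewDevelopment()
	defer logger.Sync()

	err := errors.New("piece upload failed")

	// Before: sugared logger with a formatted message only.
	logger.Sugar().Debugf("upload error: %v", err)

	// After: structured logger; zap.Error attaches the error as a structured
	// field, including verbose/stack details when the error provides them.
	logger.Debug("upload error", zap.Error(err))
}
```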
* storagenode/piecestore: Send gRPC error codes on upload
Refactor storagenode/piecestore to send gRPC status error codes when any of
the methods involved in upload returns an error.
The upload-related uplink code has also been modified to retrieve the gRPC
status code when the server returns an error.
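A minimal sketch of the pattern, using the standard google.golang.org/grpc status and codes packages; which concrete code each failure maps to is an assumption:

```go
package sketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Server side: wrap upload errors with an explicit gRPC status code.
func uploadError(err error) error {
	if err == nil {
		return nil
	}
	return status.Error(codes.Internal, err.Error())
}

// Client side: recover the status code from the returned error.
func codeOf(err error) codes.Code {
	return status.Code(err) // codes.OK when err is nil
}
```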
* rename pkg/linksharing to linksharing
* rename pkg/httpserver to linksharing/httpserver
* rename pkg/eestream to uplink/eestream
* rename pkg/stream to uplink/stream
* rename pkg/metainfo/kvmetainfo to uplink/metainfo/kvmetainfo
* rename pkg/auth/signing to pkg/signing
* rename pkg/storage to uplink/storage
* rename pkg/accounting to satellite/accounting
* rename pkg/audit to satellite/audit
* rename pkg/certdb to satellite/certdb
* rename pkg/discovery to satellite/discovery
* rename pkg/overlay to satellite/overlay
* rename pkg/datarepair to satellite/repair
* Added the ability to pass timeout settings from cmd/uplink to libuplink.
* Removed commented out code.
* Updated 2min timeouts for the uplink CLI.
* Removed comment.
* Made transport defaultDialTimeout and defaultRequestTimeout public
* Added comments to describe where these defaults apply.
* Added new defaults to libuplink and added tests.
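A sketch of how dial/request timeouts might be threaded from the CLI into libuplink; the type, field names, and the 20 s dial default are assumptions (only the 2 min request timeout is mentioned above):

```go
package sketch

import "time"

type TimeoutConfig struct {
	DialTimeout    time.Duration
	RequestTimeout time.Duration
}

// DefaultTimeouts mirrors the idea of exported transport defaults that
// callers can override from the command line.
var DefaultTimeouts = TimeoutConfig{
	DialTimeout:    20 * time.Second,
	RequestTimeout: 2 * time.Minute,
}

func timeoutsFromFlags(dial, request time.Duration) TimeoutConfig {
	cfg := DefaultTimeouts
	if dial > 0 {
		cfg.DialTimeout = dial
	}
	if request > 0 {
		cfg.RequestTimeout = request
	}
	return cfg
}
```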
If we verify that the size matches reality, we can then rely on the
filesystem to store the piece size that is used in the signed PieceHash
from the uplink. Otherwise, the uplink might send a garbage size value,
leaving the storagenode with no good way to verify the uplink signature
on the piece at a later date.
Also fix the code in uplink/piecestore/ so that it sends a valid size,
because it was being rude and sending 0.
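A rough sketch of the storagenode-side check, with hypothetical names: compare the size claimed in the signed PieceHash against the number of bytes actually received before accepting the piece.

```go
package sketch

import "fmt"

func verifyPieceSize(claimedSize, receivedSize int64) error {
	if claimedSize != receivedSize {
		return fmt.Errorf("size in piece hash (%d) does not match received bytes (%d)",
			claimedSize, receivedSize)
	}
	return nil
}
```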