The fundamental problem is that both drpc and grpc servers
want to close the listener and they both want to ignore the
error from Accept after the listener is closed. There's no
way to do this race-free. Fortunately, the mux
hands out listeners that can be independently closed. That
means they can both do their own shutdown logic where they
ignore the error, and then after they're closed, the code
orchestrating the servers can close the listeners.
The final weird bit is that the server's Close method is
required to wait until the Run method has exited (or at
least enough for the listeners to definitely be closed)
because tests depend on that behavior, so we have to add
some channels/mutexes/onces to ensure that Run has exited
and that a new call can't start after Close is called.
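For illustration, a minimal sketch of that synchronization with
hypothetical names (the real server tracks more state):

```go
package server

import (
	"context"
	"sync"
)

// Server is a toy model of the Run/Close coordination described above.
type Server struct {
	mu      sync.Mutex
	started bool
	closed  bool
	done    chan struct{} // closed when Run returns
}

func New() *Server { return &Server{done: make(chan struct{})} }

func (s *Server) Run(ctx context.Context) error {
	s.mu.Lock()
	if s.closed {
		s.mu.Unlock()
		return nil // Close already happened; a new Run must not start
	}
	s.started = true
	s.mu.Unlock()

	defer close(s.done) // signal that Run has exited
	// ... run the drpc and grpc servers on their own mux listeners,
	// each ignoring the Accept error after its listener is closed ...
	return nil
}

func (s *Server) Close() error {
	s.mu.Lock()
	s.closed = true
	started := s.started
	s.mu.Unlock()

	// ... close the mux listeners here ...
	if started {
		<-s.done // wait until Run has exited and listeners are closed
	}
	return nil
}
```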
Change-Id: I7c4ef293f7963f83138815f51824fd5b8d09ce15
this is a trivial operation for storagenode/console, as it doesn't
really need or use kademlia in the first place.
What:
Removes kademlia from storagenode/console
Why:
We are in the process of getting rid of kademlia, and this is one place where it's particularly easy.
Please describe the tests:
Existing tests exercise storagenode/console behavior; if they continue to work, everything here should be tested satisfactorily.
Please describe the performance impact:
None
* add init contact proto
* make 2 svcs
* add generated proto code
* fix conflict with NodeRequest name
* use correct version on proto
* rm extra node fields
* add protolock
* update field names so they are better
* rm node id since we don't need it
the current prometheus help messages have enough unexpected
characters that they are breaking prometheus parsing. they
may also be triggering prometheus to expect more from us (type
annotations) than we have to offer.
we're really not adding a lot of value with these help messages,
so just take them out
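For reference, a minimal exposition-format sample (the metric name is
made up); the format allows bare samples without annotations, in which
case the metric is simply treated as untyped:

```
# before: annotated output; unexpected characters in the HELP text can
# break strict parsers
# HELP download_success_count number of successful downloads
download_success_count 42

# after: a bare sample, which the exposition format also accepts
download_success_count 42
```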
Change-Id: I9b723447a294bb492a6292480e9f88634346a80b
What: this change makes sure the count of segments is not encrypted.
Why: having the segment count encrypted just makes things hard for no reason - a satellite operator can figure out how many segments an object has by looking at the other segments in the database. but if a user has access but has lost their encryption key, they now can't clean up or delete old segments because they can't know how many there are without just guessing until they get errors. :(
Backwards compatibility: clients will still understand old pointers and will still write old pointers. at some point in the future perhaps we can do a migration for remaining old pointers so we can delete the old code.
Please describe the tests: covered by existing tests
Please describe the performance impact: none
* pkg/process: Fatal shows complete error information
Change the general process execution function to not use the sugared
logger for outputting the full error information.
Delete some unreachable code, because the Zap logger's Fatal method
calls os.Exit(1) internally.
* storagenode/storagenodedb: Add info to error
Add more information to an error returned due to some data
inconsistency.
* storagenode/orders: Don't use sugared logger
Don't use the sugared logger and provide better-contextualized error
messages in the settle method (a sketch of the change follows this list).
* storagenode/orders: Add some log fields to error msgs
Add some relevant log fields to some logged errors of the sender settle
method.
* satellite/orders: Remove always nil error from debug
Remove an error which was logged at debug level but was always nil,
and make the logic that used this variable clearer.
* storagenode/orders: Don't return error Archiving unsent
Don't stop the process which archives unsent orders if some of them
aren't found in the DB, because that caused the Storage Node to stop
with a fatal error.
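A minimal sketch of the sugared-vs-structured difference these commits
are about (the satellite ID string is a placeholder):

```go
package main

import (
	"errors"

	"go.uber.org/zap"
)

func main() {
	log, _ := zap.NewDevelopment()
	err := errors.New("order not found")
	satellite := "satellite-id-goes-here" // placeholder

	// sugared style being removed: a format string, no structure
	log.Sugar().Errorf("settle failed for %s: %v", satellite, err)

	// structured style being adopted: typed, greppable fields
	log.Error("failed to settle orders",
		zap.String("satellite", satellite),
		zap.Error(err),
	)
}
```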
this avoids a problem where setting a flag isn't sufficient
to express complex data structures like []string.
Change-Id: I06f13656996d658b4c7a957451cb253728a67eda
* satellitedb/certDB: refactors of the node certificate storage DB table
The existing implementation doesn't allow storing the complete certificate chain of uplinkIDs or storagenodeIDs, so the current table is dropped and a new table is added which addresses the storage and retrieval of certificates
pkg/identity: fixes spelling mistakes that I missed on PR#2754
Fixes V3-1992/V3-2388
Deprecate the pieceinfo database, and start storing piece info as a header to
piece files. Institute a "storage format version" concept allowing us to handle
pieces stored under multiple different types of storage. Add a piece_expirations
table which will still be used to track expiration times, so we can query it, but
which should be much smaller than the pieceinfo database would be for the
same number of pieces. (Only pieces with expiration times need to be stored in piece_expirations, and we don't need to store large byte blobs like the serialized
order limit, etc.) Use specialized names for accessing any functionality related
only to dealing with V0 pieces (e.g., `store.V0PieceInfo()`). Move SpaceUsed-
type functionality under the purview of the piece store. Add some generic
interfaces for traversing all blobs or all pieces. Add lots of tests.
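A condensed sketch of the storage-format-version idea; the identifiers
here are illustrative rather than the actual piece store API:

```go
package pieces

// FormatVersion says how a piece is laid out on disk.
type FormatVersion int

const (
	// FormatV0 pieces have no header; their info lives in the
	// (now deprecated) pieceinfo database.
	FormatV0 FormatVersion = iota
	// FormatV1 pieces carry a serialized header (hash, order limit,
	// expiration, ...) at the front of the piece file.
	FormatV1
)

// assumed fixed space reserved for the V1 header, for illustration
const v1HeaderSize = 512

// contentOffset returns where piece content begins for each format.
func contentOffset(v FormatVersion) int64 {
	if v == FormatV0 {
		return 0 // V0 files are all content
	}
	return v1HeaderSize // V1+ files start with the header
}
```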
What: Change cmd/uplink to use scopes
It moves the fields that will be subsumed by scopes into an explicit legacy section and hides their configuration flags.
Why: So that it can read scopes in from files and other sources
* storagenode/piecestore: track live requests together
Change-Id: I9ed44e4484b97bcbe076c222450c3449fe8b1075
* show grpc status codes in monkit failures
Change-Id: I68bc3a8d24a372e8147ef2a74636fc3e40fa799a
* small nit
Change-Id: I722b09345377b079e41c5a3dc86d7fd6232c9d24
* pkg/server: don't use global logger
* satellite/overlay: use correct logger
* pkg/kademlia: use correct logger
* linksharing: use conventional way to pass in logger
* use zaptest in tests
* rename pkg/linksharing to linksharing
* rename pkg/httpserver to linksharing/httpserver
* rename pkg/eestream to uplink/eestream
* rename pkg/stream to uplink/stream
* rename pkg/metainfo/kvmetainfo to uplink/metainfo/kvmetainfo
* rename pkg/auth/signing to pkg/signing
* rename pkg/storage to uplink/storage
* rename pkg/accounting to satellite/accounting
* rename pkg/audit to satellite/audit
* rename pkg/certdb to satellite/certdb
* rename pkg/discovery to satellite/discovery
* rename pkg/overlay to satellite/overlay
* rename pkg/datarepair to satellite/repair
* Added a gc package at satellite/gc, which contains the gc.Service, which runs garbage collection integrated with the metainfo loop, and the gc PieceTracker, which implements the metainfo loop Observer interface and stores all of the filters (about which pieces are good) for each node (see the sketch after this list).
* Added a gc config located at satellite/gc/service.go (loop disabled by default in release)
* Creates bloom filters with pieces to be retained inside the metainfo loop
* Sends RetainRequests (or filters with good piece ids) to all storage nodes.
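A sketch of the shape of the PieceTracker; the Observer callback
signature and filter type here are stand-ins for the real ones:

```go
package gc

import "context"

type NodeID [32]byte
type PieceID [32]byte

// BloomFilter is a stand-in for the real retain filter type.
type BloomFilter interface{ Add(PieceID) }

// PieceTracker records, per node, which pieces are still referenced.
type PieceTracker struct {
	Filters   map[NodeID]BloomFilter
	newFilter func() BloomFilter
}

// RemoteSegment would be called by the metainfo loop for every remote
// segment; it adds each piece to the filter of the node holding it.
func (pt *PieceTracker) RemoteSegment(ctx context.Context, pieces map[NodeID]PieceID) error {
	for nodeID, pieceID := range pieces {
		filter, ok := pt.Filters[nodeID]
		if !ok {
			filter = pt.newFilter()
			pt.Filters[nodeID] = filter
		}
		filter.Add(pieceID)
	}
	return nil
}
```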
* pkg/datarepair/repairer: Always track time for repair
Make a minor change in the worker function of the repairer so that,
when successful, it always tracks the time-for-repair metric,
independently of whether the time-since-checker-queue metric can be
tracked.
* storage/postgreskv: Wrap error in Get func
Wrap the returned error of the Get function as it is done when the
query doesn't return any row.
* satellite/metainfo: Move debug msg to the right place
NewStore function was writing a debug log message when the DB was
connected; however, it was always writing it out even if an error
happened when getting the connection.
* pkg/datarepair/repairer: Wrap error before logging it
Wrap the error returned by process which is executed by the Run method
of the repairer service to add context to the error log message.
* pkg/datarepair/repairer: Make errors more specific in worker
Make the error messages of the "worker" method of the Service, and
the messages logged for such errors, more specific.
* pkg/storage/repair: Improve error reporting of Repair
In order to improve the error reporting of the
pkg/storage/repair.Repair method, several errors of this method and of
the functions/methods it relies on have been updated to be wrapped
into their corresponding classes (a sketch of the pattern follows this
list).
* pkg/storage/segments: Track path param of Repair method
Track in monkit the path parameter passed to the Repair method.
* satellite/satellitedb: Wrap Error returned by Delete
Wrap the error returned by the repairQueue.Delete method to enhance
it with a class and stack, so that the
pkg/storage/segments.Repairer.Repair method gets a more contextualized
error from it.
Create a new variable rather than reusing the existing one, because the
name of the existing one is confusing when reading the logic and it
takes more time to verify that the logic doesn't have a bug.
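The wrapping pattern in question, sketched with github.com/zeebo/errs
(the package and function names are illustrative):

```go
package repairer

import "github.com/zeebo/errs"

// Error is the error class for this package; Wrap annotates an error
// with the class name and a stack trace.
var Error = errs.Class("repairer")

func doWork() error { return nil } // stand-in for the real work

func process() error {
	if err := doWork(); err != nil {
		// surfaces as "repairer: <cause>", with the stack attached,
		// so callers log a contextualized error
		return Error.Wrap(err)
	}
	return nil
}
```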
* Added the ability to pass timeout settings from cmd/uplink to libuplink.
* Removed commented out code.
* Updated 2min timeouts for the uplink CLI.
* Removed comment.
* Made transport defaultDialTimeout and defaultRequestTimeout public
* Added comments to describe where these defaults apply.
* Added new defaults to libuplink and added tests.
* pkg/datarepair: Add test to check num upload pieces
Add a new test ensuring the number of pieces that the repair process
uploads when a segment is injured.
* satellite/orders: Don't create "put order limits" over total
Repair must not create more "put order limits" than the total count.
* pkg/datarepair: Update upload repair pieces test
Update the test which checks the number of pieces which are uploaded
during a repair to use the same excess over the success threshold
value as the implementation.
* satellite/orders: Limit repair put orders to below the total
Limit the number of put orders used by repair to upload pieces only up
to a % excess over the success threshold.
* pkg/datarepair: Change DataRepair test to pass again
Make some changes in the DataRepair test so it passes again after
repair uploads repaired pieces only up to a % excess over the success
threshold.
Also update the step descriptions of the DataRepair test after the
change, to match current behavior and to leave them more generic,
avoiding updates on minimal future refactorings.
* satellite: Make repair excess optimal threshold configurable
Add a new configuration parameter to the satellite to allow
configuring the percentage excess over the optimal threshold used for
determining how many pieces should be repaired/uploaded, rather than
having the value hard-coded (see the sketch after this list).
* repairer: Add configurable param to segments/repairer
Add a new parameter to the segments/repairer to calculate the maximum
number of excess nodes, based on the optimal threshold, to which
repaired pieces can be uploaded.
This new parameter was added so as not to return more nodes than the
number of upload orders the satellite's data repair service calculates
for repairing pieces.
* pkg/storage/ec: Update log message in client.Repair
* satellite: Update configuration lock file
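A minimal sketch of how the configurable excess might turn into a cap
on repair uploads (names and rounding are illustrative):

```go
package repair

import "math"

// maxRepairUploads caps how many nodes repair may upload to: the
// optimal (success) threshold plus a configurable fractional excess.
func maxRepairUploads(optimalThreshold int, excess float64) int {
	return int(math.Ceil(float64(optimalThreshold) * (1 + excess)))
}

// e.g. optimalThreshold=80, excess=0.05 -> at most 84 put order limits
```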
checker_segment_total_count - Number of total segments in pointer during checker iteration
checker_segment_healthy_count - Number of healthy segments in pointer during checker iteration
time_since_checker_queue - Seconds elapsed between checker queue and beginning repair
time_for_repair - Seconds elapsed between beginning repair and ending repair/dequeueing
* add db interface and methods, add sa metainfo endpoints and svc
* add bucket metainfo svc funcs
* add sadb buckets
* bucket list gets all buckets
* filter buckets list on macaroon restrictions
* update pb cipher suite to be enum
* add conversion funcs
* updates per comments
* bucket settings should say default
* add direction to list buckets, add tests
* fix test bucket names
* lint err
* only support forward direction
* add comments
* minor refactoring
* make sure list up to limit
* update test
* update protolock file
* fix lint
* change per PR
* Fix some log messages to actually report the number of pieces needed
to be repaired for reaching the successful/optimal threshold.
* Remove an unneeded `nil` check conditional.
* monitor optimal wait fraction
Change-Id: I1c76da5e8031237cf78ce5a0774732dd5e558ea1
* monitor other times about the upload
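A sketch of the kind of instrumentation these commits add, using
monkit (the metric names and function are illustrative):

```go
package piecestore

import (
	"time"

	"github.com/spacemonkeygo/monkit/v3"
)

var mon = monkit.Package()

// recordUploadTimes reports how long an upload took and what fraction
// of it was spent waiting after reaching the optimal threshold.
func recordUploadTimes(total, wait time.Duration) {
	if total <= 0 {
		return
	}
	mon.FloatVal("upload_time_seconds").Observe(total.Seconds())
	mon.FloatVal("optimal_wait_fraction").Observe(wait.Seconds() / total.Seconds())
}
```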
Change-Id: Iae81c80fb1446fbf4b3dd04fc6b238f2ede96545
* fix ordersDB methods to take correct args
* update tally to save projectID in correct format
* update var names in splitBucket test
* changes per CR comments
* pkg/process/metrics: add an instance prefix
the distinction between which satellite is sending which
data should go in the instance field, not the suffix or application
fields. (un)fortunately, the instance id is deliberately not
configurable because we don't want it to be easy to accidentally
have multiple applications collide with the same instance id.
so we're currently stuffing the human-readable instance in the
suffix. :(
perhaps a reasonable tradeoff would be an optional instance
prefix that allows operators to put their domain name in
the instance
Change-Id: I6fcc8498be908c5740439cc00f77474ad151febd
* linting
Change-Id: I9f9a44fa9a2634ef5e4f89548d42d57ce9e4450e
* add path implementation
This commit adds a pkg/paths package which contains two types,
Encrypted and Unencrypted, to statically enforce what is contained
in a path. It's part of a refactoring of the code base to be more
clear about what is contained in a storj.Path at all the layers.
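A condensed sketch of the two types (the real package adds iterators
and validity checks):

```go
package paths

// Unencrypted is a path in its plaintext form.
type Unencrypted struct{ raw string }

// Encrypted is a path after path encryption has been applied.
type Encrypted struct{ raw string }

func NewUnencrypted(raw string) Unencrypted { return Unencrypted{raw: raw} }
func NewEncrypted(raw string) Encrypted     { return Encrypted{raw: raw} }

func (p Unencrypted) Raw() string { return p.raw }
func (p Encrypted) Raw() string   { return p.raw }

// A signature like
//   func EncryptPath(bucket string, path Unencrypted) (Encrypted, error)
// now can't be handed the wrong kind of path by mistake.
```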
Change-Id: Ifc4d4932da26a97ea99749b8356b4543496a8864
* add encryption store
This change adds an encryption.Store type to keep a collection
of root keys for arbitrary locations in some buckets. It allows
one to look up all of the necessary information to encrypt paths,
decrypt paths and decrypt list operations.
It adds some exported functions to perform encryption on paths
using a Store.
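A toy model of the Store's lookup behavior: root keys registered per
(bucket, path prefix), found by walking the path upward to the longest
registered base. The real API differs in detail (it also tracks the
encrypted form of each base so list results can be decrypted):

```go
package main

import (
	"fmt"
	"strings"
)

type Key [32]byte

// Store maps (bucket, unencrypted path prefix) to a root key.
type Store struct {
	keys map[string]*Key
}

func NewStore() *Store { return &Store{keys: map[string]*Key{}} }

func (s *Store) Add(bucket, prefix string, key *Key) {
	s.keys[bucket+"\x00"+prefix] = key
}

// LookupUnencrypted walks the path upward until a registered base is
// found, returning the key and the base it was registered under.
func (s *Store) LookupUnencrypted(bucket, path string) (*Key, string) {
	for {
		if key, ok := s.keys[bucket+"\x00"+path]; ok {
			return key, path
		}
		i := strings.LastIndex(path, "/")
		if i < 0 {
			return s.keys[bucket+"\x00"], "" // bucket default key, if any
		}
		path = path[:i]
	}
}

func main() {
	s := NewStore()
	s.Add("photos", "raw", &Key{1})
	key, base := s.LookupUnencrypted("photos", "raw/2019/cat.jpg")
	fmt.Println(key != nil, base) // true raw
}
```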
Change-Id: I1a3d230c521d65f0ede727f93e1cb389f8be9497
* add shim around streams store
This commit changes no functionality, but just reorganizes the code
so that changes can be made directly to the streams store
implementation without affecting callers.
It also adds a Path type that will be used at the interface boundary
for the streams store so that it can be sure that it's getting the
well-formed paths that it expects.
Change-Id: I50bd682995b185beb653b00562fab62ef11f1ab5
* refactor streams to use encryption store
This commit changes the streams store to use the path type as
well as the encryption store to handle all of its encryption
and decryption.
Some changes were made to how the default key is returned in
the encryption store to have it include the case when the bucket
exists but no paths matched. The path iterator could also be
simplified to not report if a consume was valid: that information
is no longer necessary.
The kvmetainfo tests were changed to appropriately pass the
subtests *testing.T rather than having the closure it executes
use the parent one. The test framework now correctly reports
which test did the failing.
There are still some latent issues with listing in that listing
for "a/" and listing for "a" are not the same operation, but we
treat them as such. I suspect that there are also issues with
paths like "/" or "//foo", but that's for another time.
Change-Id: I81cad4ba2850c3d14ba7e632777c4cac93db9472
* use an encryption store at the upper layers
Change-Id: Id9b4dd5f27b3ecac863de586e9ae076f4f927f6f
* fix linting failures
Change-Id: Ifb8378879ad308d4d047a0483850156371a41280
* fix linting in encryption test
Change-Id: Ia35647dfe18b0f20fe13763b28e53294f75c38fa
* get rid of kvmetainfo rootKey
Change-Id: Id795ca03d9417e3fe9634365a121430eb678d6d5
* Fix linting failure for return with else
Change-Id: I0b9ffd92be42ffcd8fef7ea735c5fc114a55d3b5
* fix some bugs adding enc store to kvmetainfo
Change-Id: I8e765970ba817289c65ec62971ae3bfa2c53a1ba
* respond to review feedback
Change-Id: I43e2ce29ce2fb6677b1cd6b9469838d80ec92c86
* add voucher service on storage node
* config field tag syntax, go routines for requests
* hook up voucher service in storagenode/peer.go
* add voucher config to testplanet
* add voucher response status INVALID, ACCEPTED, REJECTED
* add a test for vouchers service
* handle no row from GetValid, test it
* add trust pool to voucher service
* use trusted list to get satellites
* verify vouchers upon receipt
* test VerifyVoucher
This commit adds two functions that implement the algorithms
described in the password key derivation design document. They
will be used during setup to derive bucket level root keys or
default passwords to use when buckets do not have their own
independent key.
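The general shape of password-based key derivation, sketched here with
argon2id; the actual KDF, parameters, and salt handling are whatever
the design document specifies:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/argon2"
)

// deriveRootKey stretches a passphrase into a 32-byte root key.
// The cost parameters here are illustrative.
func deriveRootKey(passphrase, salt []byte) []byte {
	return argon2.IDKey(passphrase, salt, 1, 64*1024, 4, 32)
}

func main() {
	key := deriveRootKey([]byte("correct horse battery staple"), []byte("per-project-salt"))
	fmt.Printf("%x\n", key)
}
```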
Change-Id: Ie7fb2d8d549ba7465d0722716a2c1ac0ad907286
* pkg/audit: Add DQ test for too many failed audits
Add an integration test which checks that a node which fails several
audits gets disqualified but not before it reaches the audit reputation
disqualification cut-off.
* internal/testplanet: Set DQ cut-off config values
Set the values of the Overlay cache DQ cut-off configuration parameters
used by testplanet.
Move 2 helper functions used by tests which rely on testplanet from
the test file where they were created to a separate file, because they
are no longer used only in the test file where they were initially
created.