This change ensures that the upload timer of ECClient is always stopped once no more status is expected from uploaded pieces. It also ensures that the "Timer expired" message is logged only if the context is not already cancelled.
This avoids confusing logs where a "Timer expired" message appears significantly later and mixes with similar messages logged during the upload of the next file's segments.
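A minimal sketch of the intended pattern, with hypothetical names (the real ECClient types and logger differ):

```go
// Hypothetical sketch (names are illustrative, not the real ECClient API).
package sketch

import (
	"context"
	"log"
	"time"
)

// putWithTimer stops the timer once no more piece statuses are expected and
// logs "Timer expired" only while the surrounding context is still live.
func putWithTimer(ctx context.Context, timeout time.Duration, statuses <-chan error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	timer := time.AfterFunc(timeout, func() {
		if ctx.Err() == nil { // don't log if the upload is already done/cancelled
			log.Println("Timer expired")
		}
		cancel()
	})
	defer timer.Stop() // always stop the timer when no more statuses are expected

	for {
		select {
		case _, ok := <-statuses:
			if !ok {
				return // all statuses received; deferred Stop prevents the late log
			}
		case <-ctx.Done():
			return
		}
	}
}
```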
* psclient receives the storage node hash and compares it to its own hash for verification
* uplink sends delete request when hashes don't match
* valid hashes are propagated up to segments.Store for future sending to satellite
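A minimal sketch of that verification flow; the real psclient and message types differ, and deletePiece stands in for the uplink's delete request:

```go
// Hypothetical sketch of the hash verification described above.
package sketch

import (
	"bytes"
	"context"
	"errors"
)

func verifyPieceHash(ctx context.Context, localHash, nodeHash []byte,
	deletePiece func(context.Context) error) error {
	if !bytes.Equal(localHash, nodeHash) {
		// hashes don't match: ask the storage node to delete the piece
		if err := deletePiece(ctx); err != nil {
			return err
		}
		return errors.New("piece hash mismatch")
	}
	// valid hash is kept and propagated up to segments.Store
	return nil
}
```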
Removes most instances of pb.SignedMessage (there are more to take out, but they shouldn't hurt anyone as is).
There used to be places in psserver where a PieceID was hmac'd with the SatelliteID, which was obtained from a SignedMessage. This PR makes it so some functions access the SatelliteID from the Payer Bandwidth Allocation instead (see the sketch after the list below).
This requires passing a SatelliteID into psserver functions where they weren't before, so the following proto messages have been changed:
* PieceId - satellite_id field added
This is so the psserver.Piece function has access to the SatelliteID when it needs to get the namespaced pieceID.
This proto message should probably be renamed to PieceRequest, or a new PieceRequest message should be created so this isn't misnamed.
* PieceDelete - satellite_id field added
This is so the psserver.Delete function has access to the SatelliteID when receiving a request to Delete.
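A minimal sketch of the namespacing mentioned above, assuming SHA-256 as the HMAC hash (the actual hash function is not stated in this change):

```go
// Hypothetical sketch of HMAC'ing a piece ID with the satellite ID, whether it
// comes from the payer bandwidth allocation or the added satellite_id field.
package sketch

import (
	"crypto/hmac"
	"crypto/sha256"
)

func namespacedPieceID(pieceID, satelliteID []byte) []byte {
	mac := hmac.New(sha256.New, satelliteID)
	mac.Write(pieceID)
	return mac.Sum(nil)
}
```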
* separate TLS options from server options (because we need them for dialing too)
* stop creating transports in multiple places
* ensure that we actually check revocation, whitelists, certificate signing, etc, for all connections.
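A minimal sketch, with illustrative types, of sharing one TLS options value between serving and dialing so the same checks apply to every connection:

```go
// Hypothetical sketch; the real options also carry revocation checks, the peer
// CA whitelist, certificate signing verification, and custom verify hooks.
package sketch

import "crypto/tls"

type tlsOptions struct {
	cert tls.Certificate
}

func (o tlsOptions) serverConfig() *tls.Config {
	return &tls.Config{
		Certificates: []tls.Certificate{o.cert},
		ClientAuth:   tls.RequireAnyClientCert, // peers verified by custom hooks
	}
}

func (o tlsOptions) dialConfig() *tls.Config {
	// same identity and verification hooks reused on the client side
	return &tls.Config{Certificates: []tls.Certificate{o.cert}}
}
```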
This change removes the cryptopasta dependency.
A couple of possible sources of problems with this change:
* the encoding used for ECDSA signatures on SignedMessage has changed.
The encoding employed by cryptopasta was workable, but not the same
as the encoding used for such signatures in the rest of the world
(most particularly, on ECDSA signatures in X.509 certificates). I
think we'll be best served by using one ECDSA signature encoding from
here on, but if we need to use the old encoding for backwards
compatibility with existing nodes, that can be arranged.
* since there's already a breaking change in SignedMessage, I changed
it to send and receive public keys in raw PKIX format, instead of
PEM. PEM just adds unhelpful overhead for this case.
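A sketch of the encodings referred to above, using only the standard library: an ASN.1 DER ECDSA signature (the same form used in X.509 certificates) and a raw PKIX/DER public key with no PEM wrapping. The function name is illustrative:

```go
// Sketch of the "rest of the world" encodings for signatures and public keys.
package sketch

import (
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
)

func signAndEncode(priv *ecdsa.PrivateKey, msg []byte) (sig, pubPKIX []byte, err error) {
	digest := sha256.Sum256(msg)
	// ASN.1 DER-encoded signature, as in X.509 certificates
	sig, err = ecdsa.SignASN1(rand.Reader, priv, digest[:])
	if err != nil {
		return nil, nil, err
	}
	// raw PKIX (DER) public key bytes, no PEM framing
	pubPKIX, err = x509.MarshalPKIXPublicKey(&priv.PublicKey)
	return sig, pubPKIX, err
}
```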
* Add test for aio
* Don't trust the user to have images built for a version
* Make travis run the aio test
* Add missing values to docker-compose, sort some things, consider the gateway image
* today's changes
* config changed, again
* more fixes
* Expose satellite port on localhost:7778
* Add retries and a timeout around the big-testfile test in AIO
* Another config value changed
* Make this error message a little more useful
* Fix nil condition
* preparing for use of `customtype` gogo extension with `NodeID` type
* review changes
* preparing for use of `customtype` gogo extension with `NodeID` type
* review changes
* wip
* tests passing
* wip fixing tests
* more wip test fixing
* remove NodeIDList from proto files
* linter fixes
* linter fixes
* linter/review fixes
* more freaking linter fixes
* omg just kill me - linterrrrrrrr
* travis linter, i will murder you and your family in your sleep
* goimports everything - burn in hell travis
* goimports update
* go mod tidy
* add storeConfig struct and getSegmentStore helper for creating a segment store
* implement segment store in repairer, remove unnecessary repairer Repair method
* change repair method parameter from int to int32 to match type being passed in
* implement repairer service in captplanet
* rework Config, set Config defaults in captplanet/setup
* add filter field into OverlayOptions message
* chooseFiltered method, add excluded parameter in populate method
* change excluded type to []dht.NodeID in ChooseFiltered, change comment
* change name filter to excluded_nodes in proto
* implement helper function contains
* delete ChooseFiltered and add its functionality into Choose method to keep original author's history, add excluded argument into Choose calls
* regenerate mock_client.go
* regenerate protobuf
* adding the repair() func
* update test case to use new IDFromString function
* modified the repair() and updated streams mock
* modified the repair() and updated streams mock
* Options struct
* adding the repair() func
* modified the repair() and updated streams mock
* modified the repair() and updated streams mock
* integrating the segment repair()
* development repair with hack working
* repair segment changes
* integrated with mini hacks and rigged up test case with dev debug info
* integrated with ec and overlay
* added repair test case
* made getNewUniqueNodes() recursively go through Choose() to get the required number of unique nodes
* cleaned up code
The old paths.Path type is now replaced with the new storj.Path.
storj.Path is simply an alias to the built-in string type. As such it can be used just like any string, which greatly simplifies working with paths. No more conversions with paths.New and path.String().
As an alias, storj.Path does not define any methods. However, any functions applying to strings (like those from the strings package) gracefully apply to storj.Path too. In addition, we have a few more functions defined:
storj.SplitPath
storj.JoinPaths
encryption.EncryptPath
encryption.DecryptPath
encryption.DerivePathKey
encryption.DeriveContentKey
All code in master is migrated to the new storj.Path type.
The Path example is also updated and is good for reference: /pkg/encryption/examples_test.go
This PR also resolves a nonce misuse issue in path encryption: https://storjlabs.atlassian.net/browse/V3-545
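A small sketch of why the alias helps (Path is declared locally here for the example; the real alias lives in the storj package):

```go
// Path is a type alias, not a distinct type, so string operations apply directly.
package sketch

import "strings"

type Path = string

func example() (Path, []string) {
	var p Path = "bucket/folder/file.txt"
	parts := strings.Split(p, "/") // no paths.New / path.String() conversions
	return strings.Join(parts, "/"), parts
}
```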
* handle nil nodes in ec Put
* read and discard readers for nil nodes
* test 2 nil nodes, unique won't return false with nil nodes
* Discard reader data for nil nodes
* edit control flow
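A hypothetical sketch of the discard step: for a nil target node the corresponding piece reader is still drained and closed so the erasure-encoding pipeline doesn't block:

```go
// Illustrative only; the real ec Put works with ranger/piece readers.
package sketch

import (
	"io"
	"io/ioutil"
)

func discardPiece(r io.ReadCloser) error {
	defer func() { _ = r.Close() }()
	_, err := io.Copy(ioutil.Discard, r) // read and throw away the data
	return err
}
```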
* add filter field into OverlayOptions message
* chooseFiltered method, add excluded parameter in populate method
* change excluded type to []dht.NodeID in ChooseFiltered, change comment
* change name filter to excluded_nodes in proto
* implement helper function contains
* delete ChooseFiltered and add its functionality into Choose method to keep original author's history, add excluded argument into Choose calls
* regenerate mock_client.go
* regenerate protobuf
* update test case to use new IDFromString function
* remove old kademlia test code
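A minimal sketch of the exclusion filter these commits describe, using plain strings instead of dht.NodeID for brevity (the real Choose signature differs):

```go
// Hypothetical sketch of the contains helper and the excluded-nodes filter.
package sketch

func contains(excluded []string, id string) bool {
	for _, e := range excluded {
		if e == id {
			return true
		}
	}
	return false
}

// choose picks up to n candidates that are not in the excluded set.
func choose(candidates []string, n int, excluded []string) []string {
	picked := make([]string, 0, n)
	for _, c := range candidates {
		if len(picked) == n {
			break
		}
		if !contains(excluded, c) {
			picked = append(picked, c)
		}
	}
	return picked
}
```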
* begin adding path encryption
* do not encrypt/decrypt first element of path (bucket)
* add path encryption for delete and list
* use encrypted paths in streamstore.Meta
* fix listing with encrypted paths
* move encrypt/decryptAfterBucket to streamstore
* fix listing with no prefix
* remove duplicate logic for listing with no prefix
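A minimal sketch of the "don't encrypt the bucket" rule: the first path element stays in the clear and only the part after it is encrypted. The encrypt callback stands in for the real path-encryption function:

```go
// Hypothetical sketch of encryptAfterBucket-style handling.
package sketch

import "strings"

func encryptAfterBucket(path string, encrypt func(string) string) string {
	parts := strings.SplitN(path, "/", 2)
	if len(parts) == 1 {
		return path // bucket only, nothing to encrypt
	}
	return parts[0] + "/" + encrypt(parts[1]) // bucket stays plaintext
}
```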
* Initial Layout
* Commit to test OS-independent file handling
* Hide struct properties to prevent manual interaction
* Fix Linting Errors
* 1st Working Windows Version
* Add missing Error Handling
* Fix Linting Errors
* Remove dependencies
* Further Improvements
* Remove commented code
* Improve comments and error messages
* No pointers to FPath
* Improve comment
* Do not filepath.ToSlash URL path
* Extract helper functions for parsing local path and Storj path
* Minor Improvements based on PR Comments
* Fix Linting Error and make Regex private
* Improve Layout
* Rework FPath and add tests
* Add more tests cases for windows
* Use for-loop instead of goto
* Use FPath in all uplink commands
* Add guard checks
* Add Test Cases and add comments
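A hypothetical sketch of the FPath idea: one value that knows whether it refers to a local file or a remote Storj location (the sj:// scheme is assumed here for illustration; the real parsing and validation are more involved):

```go
// Illustrative FPath, not the real uplink type.
package sketch

import "strings"

type FPath struct {
	local  bool
	bucket string
	path   string
}

func parseFPath(arg string) FPath {
	const scheme = "sj://" // assumed remote scheme
	if !strings.HasPrefix(arg, scheme) {
		return FPath{local: true, path: arg}
	}
	parts := strings.SplitN(strings.TrimPrefix(arg, scheme), "/", 2)
	f := FPath{bucket: parts[0]}
	if len(parts) > 1 {
		f.path = parts[1]
	}
	return f
}
```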
* merged the latest master changes
* debug working of handling ctrl+c
* Handling of clean up of partially uploaded segments and pieces
* code cleanup per code comment
* updates based on code review comments
* Clean up last segment handling
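A hypothetical sketch of the ctrl+c handling: a SIGINT cancels the upload context, and the upload code uses that cancellation to clean up partially uploaded segments and pieces:

```go
// Illustrative only; the real command wiring differs.
package sketch

import (
	"context"
	"os"
	"os/signal"
)

func withInterruptCancel(ctx context.Context) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithCancel(ctx)
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, os.Interrupt)
	go func() {
		<-ch
		cancel() // callers watch ctx.Done() and delete partial uploads
	}()
	return ctx, cancel
}
```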
* Fix increment for AES-GCM nonce
* Fix stream size calculation
* Adapt stream store tests
* Fix Delete method
* Rename info callback to segmentInfo
* Clearer calculation for offset in Nonce.AESGCMNonce()
* Adapt to the new little-endian nonce increment
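A sketch of a little-endian increment over a 12-byte AES-GCM nonce, illustrating the fix described above (the real method lives on the Nonce type and its AESGCMNonce view):

```go
// Little-endian add-with-carry over the nonce bytes.
package sketch

func incrementNonce(nonce *[12]byte, amount uint64) {
	var carry uint64
	for i := 0; i < len(nonce); i++ { // least-significant byte first
		sum := uint64(nonce[i]) + (amount & 0xFF) + carry
		nonce[i] = byte(sum)
		carry = sum >> 8
		amount >>= 8
	}
}
```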
* begin adding encryption for remote pieces
* begin adding decryption
* add encryption key as arg to Put and Get
* move encryption/decryption to object store
* Add encryption key to object store constructor
* Add the erasure scheme to object store constructor
* Ensure decrypter is initialized with the stripe size used by encrypter
* Revert "Ensure decrypter is initialized with the stripe size used by encrypter"
This reverts commit 07272333f461606edfb43ad106cc152f37a3bd46.
* Revert "Add the erasure scheme to object store constructor"
This reverts commit ea5e793b536159d993b96e3db69a37c1656a193c.
* move encryption to stream store
* move decryption stuff to stream store
* revert changes in object store
* add encryptedBlockSize and close rangers on error during Get
* calculate padding sizes correctly
* encryptedBlockSize -> encryptionBlockSize
* pass encryption key and block size into stream store
* remove encryption key and block size from object store constructor
* move encrypter/decrypter initialization
* remove unnecessary cast
* Fix padding issue
* Fix linter
* add todos
* use random encryption key for data encryption; store an encrypted copy of this key in segment metadata (see the sketch after this commit list)
* use different encryption key for each segment
* encrypt data in one step if it is small enough
* refactor and move encryption stuff
* fix errors related to nil slices passed to copy
* fix encrypter vs. decrypter bug
* put encryption stuff in eestream
* get captplanet test to pass
* fix linting errors
* add types for encryption keys/nonces and clean up
* fix tests
* more review changes
* add Cipher type for encryption stuff
* fix rs_test
* Simplify type casting of key and nonce
* Init starting nonce to the segment index
* don't copy derived key
* remove default encryption key; force user to explicitly set it
* move getSegmentPath to streams package
* dont require user to specify encryption key for captplanet
* rename GenericKey and GenericNonce to Key and Nonce
* review changes
* fix linting error
* Download uses the encryption type from metadata
* Store enc block size in metadata and use it for download
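A hypothetical sketch of the key handling described in the commits above: each segment gets its own random content key, and only an encrypted copy of that key (sealed with the path-derived key) is stored in the segment metadata. The encrypt callback stands in for the real cipher:

```go
// Illustrative only; the real code lives in eestream/streams.
package sketch

import "crypto/rand"

type segmentMeta struct {
	EncryptedKey []byte // random content key, encrypted with the derived key
}

func encryptSegment(data, derivedKey []byte,
	encrypt func(plaintext, key []byte) []byte) ([]byte, segmentMeta, error) {
	contentKey := make([]byte, 32)
	if _, err := rand.Read(contentKey); err != nil {
		return nil, segmentMeta{}, err
	}
	ciphertext := encrypt(data, contentKey) // data encrypted with the random key
	meta := segmentMeta{EncryptedKey: encrypt(contentKey, derivedKey)}
	return ciphertext, meta, nil
}
```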
* storage node quick check and startup validation
* rearranged the startup validation and quick check logic
* travis lint warning fixes
* travis lint warning fixes
* travis lint warning fixes
* code changes per review comments
* code clean dev debug info
* travis lint warnings
* code changes per code review comments
* code changes per code review comments
* code update per review
* sqlite SUM was having an issue when getting the SUM of an empty column; filepath was checking a directory that doesn't exist when starting the server; example updated to print allocated and used space (sketched after the commit list below)
* storage node quick check and startup validation
* rearranged the startup validation and quick check logic
* travis lint warning fixes
* travis lint warning fixes
* travis lint warning fixes
* code changes per review comments
* code clean dev debug info
* travis lint warnings
* code changes per code review comments
* code changes per code review comments
* code update per review
* no file or directory error
* Updated mock PSClient
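A minimal sketch of the sqlite SUM issue noted above: SUM over an empty column yields NULL, so coalesce it in SQL (or scan into a nullable type). Table and column names here are assumed:

```go
// Illustrative only; the real query and schema differ.
package sketch

import "database/sql"

func usedSpace(db *sql.DB) (int64, error) {
	var total int64
	// COALESCE turns the NULL from an empty column into 0
	err := db.QueryRow(`SELECT COALESCE(SUM(size), 0) FROM ttl`).Scan(&total)
	return total, err
}
```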
* add mb command
* forgot colon
* add command descriptions
* use utils.ParseURL in commands
* return error message instead of minio.BucketAlreadyExists in mb
* ls command with bucket store functionality
* rb command with bucket store functionality
* rm command with bucket store functionality
* newline
* use print rather than errs for messages, add no buckets message
* cp command with bucket store functionality
* remove deprecated getStorjObjects function
* defer utils.LogClose(f) instead of defer f.Close()
* Check for no buckets after for loop
* add checks for unspecified bucket in bucket store methods
* fix incorrect return types
* add no path error messages in object store methods
* split copy into helpers
* srcObj scheme check in download
* print buckets instead of appending to slice
* check if destObj.Host != srcObj.Host
* better method of handling destination name if not specified
* uplink rename
* final cleanups
* trailing slash fixes
* linting
* more linting
* helpful error messages
* Adjust startAfter after merging #328
* Improve output messages
* Improved error check for empty bucket and path
* No page limit on client side. Rely on server side limit.
* Better time formatting
* Fix paths in recursive list results
* Added initial functions for signing and verifying
* whoops
* Get client up to speed
* Added initial functions for signing and verifying
* whoops
* Get client up to speed
* wip
* wip
* actual signatures in tests
(cherry picked from commit 1464853b737f1d712d64fbf90147f535525c8fd9)
* bugfixing
* Generate private key in example
* Generate signatures for pieceranger tests
* Update examples to use TLS
* Use private key from identity inside of example
* Use crypto.PrivateKey interface
* Change err name in defers
* Pass tests
* Pass identity Key to PSClient
* Get tests passing on travis
* Resolve linter complaints
* first stab at PUT
* only PUT
* working on PUT
* Put with LimitReader
* start of Get
* reorder of files and proto meta
* working on Meta
* working on Meta
* add aware limit reader
* add size from segment put
* rm if for eof
* update to proto meta
* update gen proto file
* working on get
* working on get
* working on get
* working on list
* working on delete
* working on list
* working on meta method
* fix merge error and working on feedback from PR
* update to proto file
* rm size tuple
* mv eof limit reader to new file
* add toMeta
* rm variable names
* add updates from PR feedback
* updates from PR feedback
* updates from PR feedback
* add toMeta size based on total size
* update toMeta size calculation
* rm passthrough
* add default to config for segment size
* fix get method ranger bug
* add object support for nested stream proto
* rm nested stream meta data
* rm test for another PR
* pointerdb: separate client and server packages
the reason for this is so that things the server needs (bolt, auth)
don't get imported if all you need is the client. This will result in
smaller binaries and fewer flag definitions.
* review comments
* captplanet standalone farmer setup
* Bandwidth Allocation
* utils.Close method changed to utils.LogClose
* Get build temporarily working
* Get/Put for PSClient should take payer bandwidth allocations rather than the NewPSClient function
* Update example client to reflect changes in client API
* Update ecclient to use latest PSClient, Make NewPSClient return error also
* Updated pieceranger tests to check for errors; sign method should take byte array
* Handle defers in store.go better
* Fix defer functions in psdb.go
* fun times
* Protobuf bandwidthallocation data is now a byte array
* Remove psservice package and merge it into pstore server
* Write wrapper for database calls
* Change all expiration names in protobuf to be more informative; add defer in retrieve; remove old comment
* Make PSDB tests implementation independent rather than method independent
* get rid of payer, renter in ecclient
* add context monitoring in store and retrieve
* adds foundation for bucketStore
* adds prefixedObjStore to buckets package, adjusts gateway-storj accordingly
* fixes multi value assignment problems in gateway-storj
* fixes more multi value assignment errors in gateway-storj
* starts changing miniogw tests to accommodate buckets
* creates bucket store mock
* wip - fixing test cases in object tests
* adds get, put, and list object tests, comments out two test cases
* adds happy scenario tests for bucket methods
* fixes bug in list, removes redundant parts from gateway tests
* fixes nit
* Clean up tests from #188
* Fix bug with timestamp conversion in segment store
* fixes segments.Meta test
* Fix regression in listing objects in a bucket
* adds check to see if bucket is empty before deleting
* updates DeleteBucket test to account for empty/full bucket
* adds TODOs for DeleteBucket and MakeBucket for some cases, adjusts tests, filters out minio errors in logging.go
* adds checks for if buckets already exist or not in DeleteBucket and MakeBucket functions; adjusts tests
* adds BucketNotFound error check in bucket store, removes todo
* adds make_bucket to Travis test, updates boltdb client constructor to always create a bucket (table)
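A hypothetical sketch of the DeleteBucket guard: refuse to delete a non-empty bucket. The listOne and del callbacks stand in for the real bucket store interfaces:

```go
// Illustrative only; minio maps this error to BucketNotEmpty in the gateway.
package sketch

import (
	"context"
	"errors"
)

func deleteBucket(ctx context.Context, bucket string,
	listOne func(context.Context, string) (int, error),
	del func(context.Context, string) error) error {
	n, err := listOne(ctx, bucket)
	if err != nil {
		return err
	}
	if n > 0 {
		return errors.New("bucket not empty")
	}
	return del(ctx, bucket)
}
```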
* starts adding segmentStore tests
* adds mocked interfaces for segmentStore tests
* adds tests for put, get, delete, and list
* regenerates pointerdb mock and updates calls to accommodate new changes
* begins adding inline segment support for segmentstore
* adds PeekThresholdReader struct plus Read and isInline methods
* moves PeekThresholdReader to peek.go, adds more simplified Read function
* adds PeekThresholdReader tests
* reverts Read function to earlier version, updates tests to use ReadFull instead
* Get function now handles inline type pointers
* adds correct type Size and ExpirationDate to inline segment
* fixes return value in Put func error condition
* moves thresholdBuf and Read tests into a table test
* adds border case test, fixes redundant parts
* passes sizedReader size to makeRemotePointer
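A hypothetical sketch of the peek-threshold idea behind PeekThresholdReader: buffer up to a threshold and, if the stream ends within it, store the segment inline instead of remotely (the real type and methods differ):

```go
// Illustrative only.
package sketch

import (
	"bytes"
	"io"
)

// peekThreshold reads at most threshold+1 bytes. If the source is exhausted
// within the threshold it returns (data, inline=true); otherwise it returns a
// reader that replays the buffered bytes followed by the rest of the stream.
func peekThreshold(r io.Reader, threshold int) (data []byte, inline bool, rest io.Reader, err error) {
	buf := make([]byte, threshold+1)
	n, err := io.ReadFull(r, buf)
	if err == io.EOF || err == io.ErrUnexpectedEOF {
		return buf[:n], true, nil, nil // small enough for an inline segment
	}
	if err != nil {
		return nil, false, nil, err
	}
	return nil, false, io.MultiReader(bytes.NewReader(buf[:n]), r), nil
}
```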