* preparing for use of `customtype` gogo extension with `NodeID` type
* review changes
* wip
* tests passing
* wip fixing tests
* more wip test fixing
* remove NodeIDList from proto files
* linter fixes
* linter fixes
* linter/review fixes
* more freaking linter fixes
* omg just kill me - linterrrrrrrr
* travis linter, i will murder you and your family in your sleep
* goimports everything - burn in hell travis
* goimports update
* go mod tidy
* initial commit of inspector gadget wireup
* change name of command line tool, set up grpc server
* Get inspector cli working with grpc client
* Wired up CountNodes command
* WIP getting buckets response working
* Added GetBucket command
* WIP working on get buckets command
* WIP working on bucket list
* Still WIP
* WIP getting bucket counts to work
* Some clean up of unnecessary changes
* List Buckets and Get Bucket are working
* Removing logs, getting ready for review
* Fix error return
* Trying to get tests passing
* Adds method on dht mock for tests
* Add dbx files back
* Fix package import error in dbx file
* Adds copyrights to pass linter
* tidy go mod
* Updates from code review
* Updates inspector to take flag arguments for address
* Format list-buckets output more nicely
* Wiring up PING in kad inspector tools
* remove api key from statdb server reqs; add statdb UpdateUptime and UpdateAuditSuccess to server
* update api key authentication in statdb server
* add todos for future statdb updates
* add UpdateUptime and UpdateAuditSuccess to statdb server
* fix apikey stuff in config.go and statdb_test.go
* fix tests
* update sdbclient.NewClient call in audit package
* fix UpdateUptime and UpdateAuditSuccess in sdbclient
* set api key from statdb/config.go
* change package for statdb tests
* linter fixes
* remove todo comments
* fix sdbclient err checking
* move validate auth functionality to auth package
* update description for statdb api key
* remove import
* Signature verification
* Clean up agreement sender to have less errors
* overlay address in captplanet
* Refactor bandwidth.proto to not use streams
* Make sure the send worked
* Handle connection to satellite
* Save renter public key inside of renter bandwidth allocations
* Default diag to sqlite. Make configurable
* Separate bw server and dbm; regenerate dbx files
* Make sure test uses protobufs
* Demonstrate creating bandwidth allocations
* add sdbclient.UpdateUptime; update args for sdbclient.CreateEntryIfNotExists
* add auditcount to node stats; restructure statdb.CreateEntryIfNotExists
* add noop mock sdbclient
* add the ability to create a node in statdb without "default" stats
* update statdb.CreateEntryIfNotExists
* take fewer args for sdbclient.CreateWithStats/FindValidNodes
* add sdbclient.UpdateAuditSuccess
* update sdbclient.Update so that all fields are updated when called (reduce args)
* update error checking in statdb.Create
* creates separate tally and rollup packages and writes skeleton for rollup
* TODO add rollupDB and rawDB to rollup struct
* TODO add rawDB to tally struct
* WIP starting to wire up the kademlia CLI tool
* WIP wiring up kad cli tools
* Got everything wired up
* merge in upstream
* WIP trying to get CLI to connect
* Inspector connects to overlay now
* Some refactoring
* Linter fixes
* Linter fixes
* Switch to pkg/process instead of using rootCmd.Execute
* fix audit stripe selector to work if last segment is smaller than stripe size
* fix audit bug related to indexing an incomplete list of nodes returned by overlay
* add storeConfig struct and getSegmentStore helper for creating a segment store
* implement segment store in repairer, remove unnecessary repairer Repair method
* change repair method parameter from int to int32 to match type being passed in
* implement repairer service in captplanet
* rework Config, set Config defaults in captplanet/setup
* protobuf for sending bandwidth agreements to satellite from storage nodes
* Setup process for sending agreements
* Add payer_id to db with bandwidth agreements for better sorting
* Linter errors
* Read agreements from PSDB
* Try writing message to server
* Cleanup
* Basic functionality
* Better error handling
* Fix test
* setup config and server structure for receiving bandwidth agreements
* Resolve linter issues
* Optional commit in case we want to handle deletes all at once
* add identity to Server, add logic for receiving bandwidth messages
* Bandwidth agreement DBX creation and integration with bw agreement endpoint
Co-authored-by: Kishore <kishore@storj.io>
Co-authored-by: Cam <cameron@storj.io>
* added postgres create/read/delete test function
Co-authored-by: Kishore <kishore@storj.io>
Co-authored-by: Cam <cameron@storj.io>
* edit comment
* removed sqlite3 driver from dbx
* remove generated sqlite code, add dbx read limitoffset
* remove getServerAndDB function, rename getDBPath to getPSQLInfo
* WIP writing server endpoint test
* code review changes
* Renamed payer to satellite in psdb
* add filter field into OverlayOptions message
* chooseFiltered method, add excluded parameter in populate method
* change excluded type to []dht.NodeID in ChooseFiltered, change comment
* change name filter to excluded_nodes in proto
* implement helper function contains
* delete ChooseFiltered and fold its functionality into the Choose method to keep the original author's history; add the excluded argument to Choose calls (see the sketch below)
* regenerate mock_client.go
* regenerate protobuf
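
A minimal sketch of the excluded-nodes filtering described above. The node IDs are shown as plain strings and the function names are illustrative; the actual overlay cache works with dht.NodeID values and the OverlayOptions excluded_nodes field.

```go
package overlay

// contains reports whether id is present in the excluded list.
func contains(excluded []string, id string) bool {
	for _, ex := range excluded {
		if ex == id {
			return true
		}
	}
	return false
}

// choose returns up to limit candidate IDs, skipping anything in excluded.
func choose(candidates, excluded []string, limit int) []string {
	picked := make([]string, 0, limit)
	for _, id := range candidates {
		if len(picked) == limit {
			break
		}
		if contains(excluded, id) {
			continue
		}
		picked = append(picked, id)
	}
	return picked
}
```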
* adding the repair() func
* update test case to use new IDFromString function
* modified the repair() and updated streams mock
* modified the repair() and updated streams mock
* Options struct
* integrating the segment repair()
* development repair with hack working
* repair segment changes
* integrated with mini hacks and rigged up test case with dev debug info
* integrated with ec and overlay
* added repair test case
* made getNewUniqueNodes() recursively go through choose() until it gets the required number of unique nodes (see the sketch below)
* cleaned up code
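
A rough sketch of that recursive retry, assuming a Choose-style function that may return duplicates or excluded nodes. Names, signatures, and the string IDs are illustrative stand-ins, not the actual segment repair code.

```go
package repair

// chooseFunc stands in for the overlay Choose call used by the repairer; it
// returns up to n candidate node IDs, possibly overlapping with excluded ones.
type chooseFunc func(n int, excluded []string) []string

// getNewUniqueNodes recursively calls choose until `need` unique node IDs,
// none of which are excluded, have been collected.
func getNewUniqueNodes(choose chooseFunc, excluded []string, need int) []string {
	seen := make(map[string]bool, len(excluded))
	for _, id := range excluded {
		seen[id] = true
	}
	return gather(choose, seen, nil, need)
}

func gather(choose chooseFunc, seen map[string]bool, have []string, need int) []string {
	if len(have) >= need {
		return have[:need]
	}
	before := len(have)
	for _, id := range choose(need-len(have), keys(seen)) {
		if !seen[id] {
			seen[id] = true
			have = append(have, id)
		}
	}
	if len(have) == before {
		return have // no progress: the overlay has no more unique nodes to offer
	}
	return gather(choose, seen, have, need)
}

func keys(m map[string]bool) []string {
	out := make([]string, 0, len(m))
	for k := range m {
		out = append(out, k)
	}
	return out
}
```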
* disconnect from nodeclient
* cleanup connections in tests
* kademlia disconnects from nodeclient
* updating disconnect method for mocks
* creates separate disconnect and removeAll methods for tests
* adds init to connection pool
* fix folder cleanup and disconnect
* creates and cleans up test db files and disconnects kad
* removes db/.keep
* includes disconnect within cleanup methods
* creates a public Init method on the connection pool to handle mutex copy issues (see the sketch after this list)
* remove all after disconnect
* pair creation and destruction
* checks disconnect error
* remove ctx
* fixes mock kad
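
A minimal sketch of the kind of Init method described above. Keeping the mutex and cache behind reference types that Init allocates means a copy of the pool value shares them instead of duplicating a sync.Mutex. The types here are illustrative, not the actual kademlia connection pool.

```go
package pool

import "sync"

// ConnectionPool tracks open node connections keyed by node ID.
type ConnectionPool struct {
	mu    *sync.Mutex
	cache map[string]interface{}
}

// Init must be called before use; it allocates the mutex and cache so that
// copies of the pool share the same lock rather than copying it.
func (p *ConnectionPool) Init() {
	p.mu = &sync.Mutex{}
	p.cache = make(map[string]interface{})
}

// Add stores a connection under the given id.
func (p *ConnectionPool) Add(id string, conn interface{}) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.cache[id] = conn
}

// DisconnectAll drops every tracked connection.
func (p *ConnectionPool) DisconnectAll() {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.cache = make(map[string]interface{})
}
```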
The old paths.Path type is now replaced with the new storj.Path.
storj.Path is simply an alias for the built-in string type. As such, it can be used just like any string, which greatly simplifies working with paths. No more conversions with paths.New and path.String().
As an alias, storj.Path does not define any methods. However, any functions that apply to strings (like those from the strings package) apply to storj.Path too. In addition, we have a few more functions defined:
storj.SplitPath
storj.JoinPaths
encryption.EncryptPath
encryption.DecryptPath
encryption.DerivePathKey
encryption.DeriveContentKey
All code in master is migrated to the new storj.Path type.
The Path example is also updated and is good for reference: /pkg/encryption/examples_test.go
This PR also resolves a nonce misuse issue in path encryption: https://storjlabs.atlassian.net/browse/V3-545
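
Because storj.Path is a string alias, ordinary string operations apply directly. The following is a minimal illustration of the idea using a local alias and local stand-ins for SplitPath/JoinPaths; the real helpers live in the storj and encryption packages and may differ in detail.

```go
package main

import (
	"fmt"
	"strings"
)

// Path mirrors storj.Path: an alias to the built-in string type, so any
// string function (strings.HasPrefix, strings.Split, ...) applies directly.
type Path = string

// SplitPath and JoinPaths are local stand-ins for the helpers listed above.
func SplitPath(p Path) []string { return strings.Split(p, "/") }

func JoinPaths(segments ...string) Path { return strings.Join(segments, "/") }

func main() {
	p := JoinPaths("bucket", "folder", "object")
	fmt.Println(SplitPath(p))                   // [bucket folder object]
	fmt.Println(strings.HasPrefix(p, "bucket")) // true, no paths.New / path.String() needed
}
```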
...although it ought to work for other storage.KeyValueStore needs as well. It's just optimized to work pretty well for a largish hierarchy of paths.
This includes the addition of "long benchmarks" for KeyValueStore
testing. These will only be run when -test-bench-long is added to the
test flags. In these benchmarks, a large corpus of paths matching a
natural ("real-life") hierarchy is read from paths.data.gz (which you
can get from https://github.com/storj/path-test-corpus) and imported
into a particular KeyValueStore. Recursive and non-recursive queries are
run on it to detect performance problems that arise only at scale.
This also includes alternate implementation of the postgreskv client,
which works in a less-bizarre way for non-recursive queries, but suffers
from poor performance in tests such as the long benchmarks. Once this
alternate impl is committed to the tree, we can remove it again; I just
want it to be available for future reference.
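
A sketch of how a benchmark can be gated behind the -test-bench-long flag mentioned above; the corpus loading and the actual queries are elided, since the flag wiring is the point here.

```go
package kvstore

import (
	"flag"
	"testing"
)

// Long benchmarks only run when -test-bench-long is passed to `go test`.
var benchLong = flag.Bool("test-bench-long", false,
	"run long KeyValueStore benchmarks against the large path corpus")

func BenchmarkListNonRecursive(b *testing.B) {
	if !*benchLong {
		b.Skip("skipping long benchmark; pass -test-bench-long to enable")
	}
	// The real benchmark imports the corpus from paths.data.gz and issues
	// non-recursive list queries against the store under test.
	for i := 0; i < b.N; i++ {
		_ = i // placeholder for a list query against the KeyValueStore
	}
}
```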
This is an old definition from the very early stage of development. It
is not used anymore.
Change-Id: I6a033e4006e6edfa7c18acc6ae91c9e4e1df0e6a
Signed-off-by: Kaloyan Raev <kaloyan@storj.io>
Reviewed-on: https://review.gerrithub.io/429582
Reviewed-by: JT Olio <hello@jtolio.com>
Tested-by: JT Olio <hello@jtolio.com>
* Travis uses Go 1.11
* Use go modules instead of storj-vendor
* Automatic caching of downloaded dependencies
* Ensures that module-incompatible linters run with modules
* handle nil nodes in ec Put
* read and discard readers for nil nodes
* test 2 nil nodes, unique won't return false with nil nodes
* Discard reader data for nil nodes
* edit control flow
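
A rough sketch of the nil-node handling these bullets describe: skip the upload for a nil node but still drain its reader so the upstream erasure encoder is not blocked. The Node type and function signatures are simplified stand-ins, not the actual ecclient API.

```go
package ec

import (
	"io"
	"io/ioutil"
)

// Node is a stand-in for the overlay node type.
type Node struct{ ID string }

// putPieces uploads one reader per node; entries in nodes may be nil when the
// overlay could not supply enough targets.
func putPieces(nodes []*Node, readers []io.Reader) {
	for i, n := range nodes {
		if n == nil {
			// No target for this piece: read and discard the data so the
			// erasure encoder feeding this reader can keep making progress.
			_, _ = io.Copy(ioutil.Discard, readers[i])
			continue
		}
		upload(n, readers[i])
	}
}

// upload is a placeholder for the real piece store call; here it just
// consumes the reader.
func upload(n *Node, r io.Reader) {
	_, _ = io.Copy(ioutil.Discard, r)
}
```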