* separate TLS options from server options (because we need them for dialing too)
* stop creating transports in multiple places
* ensure that we actually check revocation, whitelists, certificate signing, etc, for all connections.
this change removes the cryptopasta dependency.
a couple of possible sources of problems with this change:
* the encoding used for ECDSA signatures on SignedMessage has changed.
the encoding employed by cryptopasta was workable, but not the same
as the encoding used for such signatures in the rest of the world
(most particularly, on ECDSA signatures in X.509 certificates). I
think we'll be best served by using one ECDSA signature encoding from
here on, but if we need to use the old encoding for backwards
compatibility with existing nodes, that can be arranged.
* since there's already a breaking change in SignedMessage, I changed
it to send and receive public keys in raw PKIX format, instead of
PEM. PEM just adds unhelpful overhead for this case.
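For context, a hedged sketch of the two encodings in play (not the project's actual helper code): cryptopasta-style ECDSA signatures are a fixed-width r||s concatenation, while X.509 and most other tooling expect an ASN.1 DER SEQUENCE of two INTEGERs; likewise, a public key can travel as raw PKIX DER instead of PEM, which only wraps that DER in base64 plus headers.
```
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"encoding/asn1"
	"encoding/pem"
	"fmt"
	"math/big"
)

// ecdsaSignature mirrors the ASN.1 structure used for ECDSA signatures
// in X.509 certificates: SEQUENCE { r INTEGER, s INTEGER }.
type ecdsaSignature struct {
	R, S *big.Int
}

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	digest := sha256.Sum256([]byte("signed message payload"))

	r, s, _ := ecdsa.Sign(rand.Reader, key, digest[:])

	// cryptopasta-style encoding: fixed-width r||s concatenation.
	curveBytes := (key.Curve.Params().BitSize + 7) / 8
	raw := make([]byte, 2*curveBytes)
	r.FillBytes(raw[:curveBytes])
	s.FillBytes(raw[curveBytes:])

	// DER encoding: the form X.509 (and most of the world) expects.
	der, _ := asn1.Marshal(ecdsaSignature{R: r, S: s})
	fmt.Printf("raw r||s: %d bytes, DER: %d bytes\n", len(raw), len(der))

	// Public key as raw PKIX DER vs. PEM (PEM is base64 + headers on top).
	pkix, _ := x509.MarshalPKIXPublicKey(&key.PublicKey)
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: pkix})
	fmt.Printf("PKIX DER: %d bytes, PEM: %d bytes\n", len(pkix), len(pemBytes))
}
```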
* got tests passing
* wire up paginate function for cache node retrieval
* Add tests for paginate but they're failing
* fix the test arguments
* Updates paginate function to return more variables
* Updates
* Some test and logic tweaks
* improves config handling in discovery
* adds refresh offset to discovery struct
* add config fields for satellite restriction on psserver
* add whitelistedSatIDs to psserver Server struct
* check pbwa satellite ID against whitelist
* add whitelist to psserver tests
* reword help message, make approved() a method on server
* Consolidate identity management:
Move identity creation/signing out of storagenode setup command.
* fixes
* linters
* Consolidate identity management:
Move identity creation/signing out of storagenode setup command.
* fixes
* save backups before saving signed certs
* add "-prebuilt-test-cmds" test flag
* linters
* prepare cli tests for travis
* linter fixes
* more fixes
* linter gods
* sp/sdk/sim
* remove ca.difficulty
* remove unused difficulty
* return setup to its rightful place
* wip travis
* Revert "wip travis"
This reverts commit 56834849dcf066d3cc0a4f139033fc3f6d7188ca.
* typo in travis.yaml
* remove tests
* remove more
* make it only create one identity at a time for consistency
* add config-dir for consistency
* add identity creation to storj-sim
* add flags
* simplify
* fix nolint and compile
* prevent overwrite and pass difficulty, concurrency, and parent creds
* goimports
* Add more info to SN logs
* remove config-dir from user config
* add output where config was stored
* add message for successful connection
* fix linter
* remove storage.path from user config
* resolve config path
* move success message to info
* log improvements
This change removes automatic metrics reporting for everything going
through process.Exec(), and re-adds metrics reporting for those commands
which are expected to be long-lived. Other commands (which may have been
intermittently sending metrics before this, if they ran unusually long)
will no longer send any metrics.
For commands where it makes sense, a node ID is used as the metrics ID.
* Edit config on Setup
* Default to 1TiB storage space and 500GiB bandwidth
* Use human readable formats
* Use memory
* units of 1024 are measured with KiB/MiB etc
* pkg/cfgstruct: allow values to be configured with human readable sizes
Change-Id: Ic4e9ae461516d1d26fb81f6e44c5ac5cfccf777f
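A minimal sketch of the 1024-based size parsing described above; `parseSize` is a hypothetical helper, not the actual cfgstruct/memory integration in the repo.
```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize turns strings like "500GiB" or "1TiB" into a byte count,
// using 1024-based units as the config values above do.
func parseSize(s string) (int64, error) {
	units := []struct {
		suffix string
		factor int64
	}{
		{"TiB", 1 << 40},
		{"GiB", 1 << 30},
		{"MiB", 1 << 20},
		{"KiB", 1 << 10},
		{"B", 1},
	}
	s = strings.TrimSpace(s)
	for _, u := range units {
		if strings.HasSuffix(s, u.suffix) {
			n, err := strconv.ParseFloat(strings.TrimSuffix(s, u.suffix), 64)
			if err != nil {
				return 0, err
			}
			return int64(n * float64(u.factor)), nil
		}
	}
	return strconv.ParseInt(s, 10, 64) // plain byte count with no suffix
}

func main() {
	for _, v := range []string{"1TiB", "500GiB", "1024"} {
		n, err := parseSize(v)
		fmt.Println(v, "=", n, "bytes", err)
	}
}
```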
* Modify tests
* Removed comments
* More merge conflict stuff resolved
* Fix lint
* test fixin
Change-Id: I3a008206bf03a4446da19f642a2f9c1f9acaae36
* Remove commented code but secretly leave it in the history forever
* Move flag definition to struct
now kad inspector features exist on every server that has
kademlia running. likewise, overlay and statdb.
this means kad inspection features are now available on
storage nodes
Change-Id: I343c873552341de13302bfb7a5d79cccc84fc6b8
* add 45 day expiration to PBAs
* add expiration field to relevant areas, DeleteExpired placeholder
* reject expired BWAs
* test for expired BWAs
* add BwExpiration config value
* fix - Satellite crashing on receiving a manipulated bandwidthagreement
* provider.PeerIdentityFromContext called twice. Remove one
* add storage node ID to serial number
* remove serialNum query and transaction
* add uuid to GeneratePayerBandwidthAllocation for testing
* enable expected failure on duplicate serialnum cases
* Revert "enable expected failure on duplicate serialnum cases"
This reverts commit 5948f43ed1741c280f0bb34a86c1c490365417bc.
* enable expected failure on duplicate serialnum cases
* Process for updating node stats in kademlia
* Mutex lock
* Rename and set up the refresher
* Wrap errors
* Lock should be around the if
* Address comments; Build updateSelf function in Kademlia Routing Table
* Added test
* Address comments
* pkg/identity: use sha256 instead of sha3 for pow
Change-Id: I9b7a4f2c3e624a6e248a233e3653eaccaf23c6f3
* pkg/identity: restructure key generation a bit
Change-Id: I0061a5cc62f04b0c86ffbf046519d5c0a154e896
* cmd/identity: indefinite key generation command
you can start this command and leave it running and it will fill up your
hard drive with node certificate authority private keys ordered by
difficulty.
Change-Id: I61c7a3438b9ff6656e74b8d74fef61e557e4d95a
* pkg/storj: more node id difficulty testing
Change-Id: Ie56b1859aa14ec6ef5973caf42aacb4c494b87c7
* review comments
Change-Id: Iff019aa8121a7804f10c248bf2e578189e5b829d
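A hedged illustration of the idea behind the key-mining command above: hash the public key with SHA-256 (rather than SHA-3, per the change above) and treat trailing zero bits as difficulty. The repo's exact node ID derivation and difficulty definition may differ; names here are illustrative.
```
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"fmt"
	"math/bits"
)

// difficulty counts trailing zero bits of id; this mirrors the general
// proof-of-work idea, not necessarily the repo's exact definition.
func difficulty(id []byte) int {
	d := 0
	for i := len(id) - 1; i >= 0; i-- {
		if id[i] == 0 {
			d += 8
			continue
		}
		d += bits.TrailingZeros8(id[i])
		break
	}
	return d
}

func main() {
	best := -1
	for i := 0; i < 1000; i++ {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		der, _ := x509.MarshalPKIXPublicKey(&key.PublicKey)
		id := sha256.Sum256(der) // sha256 instead of sha3, per the change above
		if d := difficulty(id[:]); d > best {
			best = d
			fmt.Printf("new best difficulty %d after %d keys\n", d, i+1)
		}
	}
}
```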
* Add test for aio
* Don't trust the user to have images built for a version
* Make travis run the aio test
* Add missing values to docker-compose, sort some things, consider the gateway image
* today's changes
* config changed, again
* more fixes
* Expose satellite port on localhost:7778
* Add retries and a timeout around the big-testfile test in AIO
* Another config value changed
* Make this error message a little more useful
* Fix nil condition
* wires up first draft of lifecycle methods
* creates interface on transport
* node lifecycle hooks works
* linter fixes
* adds error log at connection success
* changes Observer interface to use context
* Makes Discovery take its own logger
* WIP
* linter fixes
* Test fixes
* adds in ConnFailure code for cache
Update statdb args/return values to minimize structs
Simplify statdb.Update() to update all stats instead of an arbitrary subset determined by flags
Remove CreateIfNotExists logic from statdb.Update()
Simplify audit code structure
* Add '--dir' param for all CLI parts (replace --base-path)
* FindDirParam method moved
* fix compilation error
* make param global
* remove unused fields
* rename param
* remove config flag
* goimports
* initial changes to migrate statdb to masterdb framework
* statdb refactor compiles
* added TestCreateDoesNotExist testcase
* Initial port of statdb to masterdb framework working
* refactored statdb proto def to pkg/statdb
* removed statdb/proto folder
* moved pb.Node to storj.NodeID
* CreateEntryIfNotExistsRequest moved pb.Node to storj.NodeID
* moved the fields from pb.Node to statdb.UpdateRequest
ported TestUpdateExists, TestUpdateUptimeExists, TestUpdateAuditSuccessExists, TestUpdateBatchExists
* WIP possible discovery service impl
* Adds discovery service to CaptPlanet
* Updates the config and server for discovery service
* updates testplanet to use discovery package
* update satellite imports
* Removes unnecessary cache test
* linter fixes
* adds discovery startup to captplanet
* invoke refresh
* updates to discovery refresh cycle
* Make implementation more consistent with previous implementation
* add wait before trying to upload
* sleep a bit more
* remove kademlia bootstrap
* updates
* remove comments
* tallies up data stored on each node in pointerdb
* adds comments for data type enums
* changes Open to BeginTx because Go convention
* removes online status check from identify active nodes
* changes identifyactivenodes to calculatestaticdata
* updates accounting dbx names
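The "Open to BeginTx" item refers to the standard database/sql convention of starting a context-aware transaction; a small sketch with hypothetical table and column names:
```
package accounting // hypothetical package name

import (
	"context"
	"database/sql"
)

// saveTallies writes per-node totals inside a single transaction, using
// BeginTx so the operation respects context cancellation.
func saveTallies(ctx context.Context, db *sql.DB, totals map[string]int64) (err error) {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer func() {
		if err != nil {
			_ = tx.Rollback() // roll back on any failure
		}
	}()

	for nodeID, total := range totals {
		// table/column names are illustrative only
		if _, err = tx.ExecContext(ctx,
			`INSERT INTO raw_tallies (node_id, data_total) VALUES ($1, $2)`,
			nodeID, total,
		); err != nil {
			return err
		}
	}
	return tx.Commit()
}
```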
* cmd/statreceiver: lua-scriptable stat receiver
Change-Id: I3ce0fe3f1ef4b1f4f27eed90bac0e91cfecf22d7
* some updates
Change-Id: I7c3485adcda1278fce01ae077b4761b3ddb9fb7a
* more comments
Change-Id: I0bb22993cd934c3d40fc1da80d07e49e686b80dd
* linter fixes
Change-Id: Ied014304ecb9aadcf00a6b66ad28f856a428d150
* catch errors
Change-Id: I6e1920f1fd941e66199b30bc427285c19769fc70
* review feedback
Change-Id: I9d4051851eab18970c5f5ddcf4ff265508e541d3
* errorgroup improvements
Change-Id: I4699dda3022f0485fbb50c9dafe692d3921734ff
* too tricky
the previous thing was better for memory with lots of errors at a time
but https://play.golang.org/p/RweTMRjoSCt is too much of a foot gun
Change-Id: I23f0b3d77dd4288fcc20b3756a7110359576bf44
* Merge bwagreement db into satellite master db
* adjust to recent tally changes
* linter problems
* linter problems
* returning db structs in a more optimal way
* added pointer for assignment
* error message changed
* better param message
* adds channel for getting node out of lookup
* WIP adding the channels to lookups
* WIP adding channel to node lookups
* Wires up FindNodes method with channels
* WIP adds a test suite for lookup - tests are still failing
* WIP wires up use of testplanet for kademlia lookup tests
* WIP merging in node id changes
* Merges in pkg/storj node type changes
* Tests passing
* Lookup node working via Inspector now
* updates
* WIP working on getting tests passing
* WIP getting tests passing
* FindNode works
* Linter fix
* Adds copyrights to lookup_test
* removes a fmt.Printf I missed
* Removes commented out lines
* Pulls statdb stats into overlay cache whenever cache.Put() is called
* Updates overlay.FindStorageNodes()/overlayClient.Choose() to filter based on node stats
* Updates overlay.FindStorageNodes()/overlayClient.Choose() to exclude duplicate IP addresses
* preparing for use of `customtype` gogo extension with `NodeID` type
* review changes
* preparing for use of `customtype` gogo extension with `NodeID` type
* review changes
* wip
* tests passing
* wip fixing tests
* more wip test fixing
* remove NodeIDList from proto files
* linter fixes
* linter fixes
* linter/review fixes
* more freaking linter fixes
* omg just kill me - linterrrrrrrr
* travis linter, i will muder you and your family in your sleep
* goimports everything - burn in hell travis
* goimports update
* go mod tidy
* initial commit of inspector gadget wireup
* change name of command line tool, setup grpc server
* Get inspector cli working with grpc client
* Wired up CountNodes command
* WIP getting buckets response working
* Added GetBucket command
* WIP working on get buckets command
* WIP working on bucket list
* Still WIP
* WIP getting bucket counts to work
* Some clean up of unnecessary changes
* List Buckets and Get Bucket are working
* Removing logs, getting ready for review
* initial commit of inspector gadget wireup
* change name of command line tool, setup grpc server
* Get inspector cli working with grpc client
* Wired up CountNodes command
* WIP getting buckets response working
* Added GetBucket command
* WIP working on get buckets command
* WIP working on bucket list
* Still WIP
* WIP getting bucket counts to work
* Some clean up of unnecessary changes
* List Buckets and Get Bucket are working
* Removing logs, getting ready for review
* Fix error return
* Trying to get tests passing
* Adds method on dht mock for tests
* Add dbx files back
* Fix package import error in dbx file
* Adds copyrights to pass linter
* tidy go mod
* Updates from code review
* Updates inspector to take flag arguments for address
* Format list-buckets output to be prettier
* Wiring up PING in kad inspector tools
* remove api key from statdb server reqs; add statdb UpdateUptime and UpdateAuditSuccess to server
* update api key authentication in statdb server
* add todos for future statdb updates
* add UpdateUptime and UpdateAuditSuccess to statdb server
* fix apikey stuff in config.go and statdb_test.go
* fix tests
* update sdbclient.NewClient call in audit package
* fix UpdateUptime and UpdateAuditSuccess in sdbclient
* set api key from statdb/config.go
* change package for statdb tests
* linter fixes
* remove todo comments
* fix sdbclient err checking
* move validate auth functionality to auth package
* update description for statdb api key
* remove import
* initial commit of inspector gadget wireup
* change name of command line tool, setup grpc server
* Get inspector cli working with grpc client
* Wired up CountNodes command
* WIP getting buckets response working
* Added GetBucket command
* WIP working on get buckets command
* WIP working on bucket list
* Still WIP
* WIP getting bucket counts to work
* Some clean up of unnecessary changes
* List Buckets and Get Bucket are working
* Removing logs, getting ready for review
* initial commit of inspector gadget wireup
* change name of command line tool, setup grpc server
* Get inspector cli working with grpc client
* Wired up CountNodes command
* WIP getting buckets response working
* Added GetBucket command
* WIP working on get buckets command
* WIP working on bucket list
* Still WIP
* WIP getting bucket counts to work
* Some clean up of unnecessary changes
* List Buckets and Get Bucket are working
* Removing logs, getting ready for review
* Fix error return
* Trying to get tests passing
* Adds method on dht mock for tests
* Add dbx files back
* Fix package import error in dbx file
* Adds copyrights to pass linter
* tidy go mod
* Updates from code review
* Updates inspector to take flag arguments for address
* Format list-buckets output to be prettier
* Signature verification
* Clean up agreement sender to have less errors
* overlay address in captnplanet
* Refactor bandwidth.proto to not use streams
* Make sure the send worked
* Handle connection to satellite
* Save renter public key inside of renter bandwidth allocations
* Default diag to sqlite. Make configurable
* Separate bw server and dbm; regenerate dbx files
* Make sure test uses protobufs
* Demonstrate creating bandwidth allocations
* add sdbclient.UpdateUptime; update args for sdbclient.CreateEntryIfNotExists
* add auditcount to node stats; restructure statdb.CreateEntryIfNotExists
* add noop mock sdbclient
* add the ability to create a node in statdb without "default" stats
* update statdb.CreateEntryIfNotExists
* take fewer args for sdbclient.CreateWithStats/FindValidNodes
* add sdbclient.UpdateAuditSuccess
* update sdbclient.Update so that all fields are updated when called (reduce args)
* update error checking in statdb.Create
* creates separate tally and rollup packages and writes skeleton for rollup
* TODO add rollupDB and rawDB to rollup struct
* TODO add rawDB to tally struct
* WIP starting to wire up the kademlia CLI tool
* WIP wiring up kad cli tools
* WIP starting to wire up the kademlia CLI tool
* WIP wiring up kad cli tools
* Got everything wired up
* WIP starting to wire up the kademlia CLI tool
* WIP wiring up kad cli tools
* merge in upstream
* WIP wiring up kad cli tools
* Got everything wired up
* WIP trying to get CLI to connect
* Inspector connects to overlay now
* Some refactoring
* Linter fixes
* Linter fixes
* Switch to pkg/process instead of using rootCmd.Execute
* fix audit stripe selector to work if last segment is smaller than stripe size
* fix audit bug related to indexing an incomplete list of nodes returned by overlay
* add storeConfig struct and getSegmentStore helper for creating a segment store
* implement segment store in repairer, remove unnecessary repairer Repair method
* change repair method parameter from int to int32 to match type being passed in
* implement repairer service in captplanet
* rework Config, set Config defaults in captplanet/setup
* protobuf for sending bandwidth agreements to satellite from storage nodes
* Setup process for sending agreements
* Add payer_id to db with bandwidth agreements for better sorting
* Linter errors
* Read agreements from PSDB
* Try writing message to server
* Cleanup
* Basic functionality
* Better error handling
* Fix test
* setup config and server structure for receiving bandwidth agreements
* Resolve linter issues
* Optional commit for if we want to handle deletes all at once
* add identity to Server, add logic for receiving bandwidth messages
* Bandwidth agreement DBX creation and integration with bw agreement endpoint
Co-authored-by: Kishore <kishore@storj.io>
Co-authored-by: Cam <cameron@storj.io>
* protobuf for sending bandwidth agreements to satellite from storage nodes
* Setup process for sending agreements
* Add payer_id to db with bandwidth agreements for better sorting
* Linter errors
* Read agreements from PSDB
* Try writing message to server
* Cleanup
* Basic functionality
* Better error handling
* Fix test
* setup config and server structure for receiving bandwidth agreements
* Resolve linter issues
* Optional commit for if we want to handle deletes all at once
* add identity to Server, add logic for receiving bandwidth messages
* Bandwidth agreement DBX creation and integration with bw agreement endpoint
Co-authored-by: Kishore <kishore@storj.io>
Co-authored-by: Cam <cameron@storj.io>
* added postgres create/read/delete test function
Co-authored-by: kishore <kishore@storj.io>
Co-authored-by: cam <cameron@storj.io>
* edit comment
* removed sqlite3 driver from dbx
* remove generated sqlite code, add dbx read limitoffset
* remove getServerAndDB function, rename getDBPath to getPSQLInfo
* WIP writing server endpoint test
* code review changes
* protobuf for sending bandwidth agreements to satellite from storage nodes
* Setup process for sending agreements
* Add payer_id to db with bandwidth agreements for better sorting
* Renamed payer to satellite in psdb
* add filter field into OverlayOptions message
* chooseFiltered method, add excluded parameter in populate method
* change excluded type to []dht.NodeID in ChooseFiltered, change comment
* change name filter to excluded_nodes in proto
* implement helper function contains
* delete ChooseFiltered and add its functionality into Choose method to keep original author's history, add excluded argument into Choose calls
* regenerate mock_client.go
* regenerate protobuf
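The `contains` helper above is a plain membership check used to skip excluded nodes during selection; a simplified sketch with string IDs standing in for the real node ID types:
```
package main

import "fmt"

// contains reports whether id is in the excluded list; the real Choose
// method works on node ID types rather than plain strings.
func contains(excluded []string, id string) bool {
	for _, e := range excluded {
		if e == id {
			return true
		}
	}
	return false
}

func main() {
	candidates := []string{"node-a", "node-b", "node-c"}
	excluded := []string{"node-b"}

	var chosen []string
	for _, c := range candidates {
		if !contains(excluded, c) { // skip nodes the caller asked to exclude
			chosen = append(chosen, c)
		}
	}
	fmt.Println(chosen) // [node-a node-c]
}
```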
* adding the repair() func
* update test case to use new IDFromString function
* modified the repair() and updated streams mock
* modified the repair() and updated streams mock
* Options struct
* adding the repair() func
* modified the repair() and updated streams mock
* modified the repair() and updated streams mock
* integrating the segment repair()
* development repair with hack working
* repair segment changes
* integrated with mini hacks and rigged up test case with dev debug info
* integrated with ec and overlay
* added repair test case
* made getNewUniqueNodes() recursively go through choose() to get the required number of unique nodes
* cleaned up code
* disconnect from nodeclient
* cleanup connections in tests
* kademlia disconnects from nodeclient
* updating disconnect method for mocks
* creates separate disconnect and removeAll methods for tests
* adds init to connection pool
* fix folder cleanup and disconnect
* creates and cleans up test db files and disconnects kad
* removes db/.keep
* includes disconnect within cleanup methods
* creates public init method on connection pool to handle mutex copy issues
* remove all after disconnect
* pair creation and destruction
* checks disconnect error
* remove ctx
* fixes mock kad
The old paths.Path type is now replaced with the new storj.Path.
storj.Path is simply an alias to the built-in string type. As such it can be used just like any string, which greatly simplifies working with paths. No more conversions with paths.New and path.String().
As an alias storj.Path does not define any methods. However, any functions applying to strings (like those from the strings package) gracefully apply to storj.Path too. In addition we have a few more functions defined:
storj.SplitPath
storj.JoinPaths
encryption.EncryptPath
encryption.DecryptPath
encryption.DerivePathKey
encryption.DeriveContentKey
All code in master is migrated to the new storj.Path type.
The Path example is also updated and is good for reference: /pkg/encryption/examples_test.go
This PR also resolves a nonce misuse issue in path encryption: https://storjlabs.atlassian.net/browse/V3-545
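A brief sketch of what the alias buys: storj.Path values are plain strings, so strings-package helpers apply directly. The JoinPaths/SplitPath stand-ins below are simplified; the real helpers live in the storj and encryption packages listed above.
```
package main

import (
	"fmt"
	"strings"
)

// Path mirrors the new storj.Path: a plain alias, not a distinct type,
// so no conversions are needed when passing strings around.
type Path = string

// JoinPaths and SplitPath are simplified stand-ins for the helpers named
// above; the real implementations live in the storj package.
func JoinPaths(segments ...Path) Path { return strings.Join(segments, "/") }
func SplitPath(p Path) []Path         { return strings.Split(p, "/") }

func main() {
	p := JoinPaths("bucket", "folder", "object")
	fmt.Println(p)                         // bucket/folder/object
	fmt.Println(strings.HasPrefix(p, "b")) // strings functions apply directly
	fmt.Println(SplitPath(p))
}
```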
..although it ought to work for other storage.KeyValueStore needs as
well. it's just optimized to work pretty well for a largish hierarchy of
paths.
This includes the addition of "long benchmarks" for KeyValueStore
testing. These will only be run when -test-bench-long is added to the
test flags. In these benchmarks, a large corpus of paths matching a
natural ("real-life") hierarchy is read from paths.data.gz (which you
can get from https://github.com/storj/path-test-corpus) and imported
into a particular KeyValueStore. Recursive and non-recursive queries are
run on it to detect performance problems that arise only at scale.
This also includes alternate implementation of the postgreskv client,
which works in a less-bizarre way for non-recursive queries, but suffers
from poor performance in tests such as the long benchmarks. Once this
alternate impl is committed to the tree, we can remove it again; I just
want it to be available for future reference.
This is an old definition from the very early stage of development. It
is not used anymore.
Change-Id: I6a033e4006e6edfa7c18acc6ae91c9e4e1df0e6a
Signed-off-by: Kaloyan Raev <kaloyan@storj.io>
Reviewed-on: https://review.gerrithub.io/429582
Reviewed-by: JT Olio <hello@jtolio.com>
Tested-by: JT Olio <hello@jtolio.com>
* Travis uses Go 1.11
* Use go modules instead of storj-vendor
* Automatic caching of downloaded dependencies
* Ensures that module-incompatible linters run with modules
* handle nil nodes in ec Put
* read and discard readers for nil nodes
* test 2 nil nodes, unique won't return false with nil nodes
* Discard reader data for nil nodes
* edit control flow
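A minimal sketch of the "read and discard" idea for nil nodes in the erasure-coded Put: a share with no target node still has to be drained so the shared upstream reader keeps making progress. Names here are illustrative.
```
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"
	"sync"
)

func ptr(s string) *string { return &s }

func main() {
	// One reader per erasure share; a nil "node" means nobody will store
	// that share, but its reader must still be drained.
	readers := []io.Reader{
		strings.NewReader("share 0"),
		strings.NewReader("share 1"),
		strings.NewReader("share 2"),
	}
	nodes := []*string{ptr("node-a"), nil, ptr("node-c")}

	var wg sync.WaitGroup
	for i, r := range readers {
		wg.Add(1)
		go func(i int, r io.Reader) {
			defer wg.Done()
			if nodes[i] == nil {
				n, _ := io.Copy(ioutil.Discard, r) // drain and drop
				fmt.Printf("share %d: discarded %d bytes (nil node)\n", i, n)
				return
			}
			fmt.Printf("share %d: would upload to %s\n", i, *nodes[i])
		}(i, r)
	}
	wg.Wait()
}
```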
* add filter field into OverlayOptions message
* chooseFiltered method, add excluded parameter in populate method
* change excluded type to []dht.NodeID in ChooseFiltered, change comment
* change name filter to excluded_nodes in proto
* implement helper function contains
* delete ChooseFiltered and add its functionality into Choose method to keep original author's history, add excluded argument into Choose calls
* regenerate mock_client.go
* regenerate protobuf
* update test case to use new IDFromString function
* remove old kademlia test code
* begin adding path encryption
* do not encrypt/decrypt first element of path (bucket)
* add path encryption for delete and list
* use encrypted paths in streamstore.Meta
* fix listing with encrypted paths
* move encrypt/decryptAfterBucket to streamstore
* fix listing with no prefix
* remove duplicate logic for listing with no prefix
* Initial Layout
* Commit to test OS-independent file handling
* Hide struct properties to prevent manual interaction
* Fix Linting Errors
* 1st Working Windows Version
* Add missing Error Handling
* Fix Linting Errors
* Remove dependencies
* Further Improvements
* Remove commented code
* Improve comments and error messages
* No pointers to FPath
* Improve comment
* Do not filepath.ToSlash URL path
* Extract helper functions for parsing local path and Storj path
* Minor Improvements based on PR Comments
* Fix Linting Error and make Regex private
* Improve Layout
* Rework FPath and add tests
* Add more test cases for windows
* Use for-loop instead of goto
* Use FPath in all uplink commands
* Add guard checks
* Add Test Cases and add comments
* Added a new table 'mib' with 'data', 'size' and 'method' columns
* added AddMIB() function and test case TestMIBHappyPath()
* added function and a test case to add entries into bandwidth usage table
* added functionality to create an entry, update it, and read it back from the bandwidth table based on a given date
* added initial SumBandwidthSizes()
* added the functionality to retrieve the total bw usage based on start and end date
* Added the unit test case for AddBwUsageTbl
* changed the arguments to take time format as an arg rather than Unix format
* changed the arguments to take time format as an arg rather than Unix format
* changes per code review comments
* adding back go.sum
* changes per code review comments
* changes per code review comments
* changes per code review comments
* creates checker
* tests offline nodes
* test identifies injured segs
* Adds healthy pieces to injured segment struct
* changes inequality
* creates common files
* adds checker benchmarking
* creates more common files
* Replaces pointerdb direct db with api call to a new iterate method on pointerdb
* move monkit
* removes identifyrequest proto
* remove healthypieces
* adds benchmarking
creates common file for datarepair
* recreates proto file
* api key on ctx
* create db directory if it does not exist
* linter fix
* pass db path in from config
* change mkdir to mkdirAll
* windows love
* PR comments
* changing the path
* change the config default to $CONFDIR/kademlia
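The "change mkdir to mkdirAll" item above is the usual fix for nested database paths; a small sketch (the path is illustrative, the real value comes from config):
```
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Illustrative path; the real value comes from config ($CONFDIR/kademlia).
	dbPath := filepath.Join(os.TempDir(), "storj-example", "kademlia", "routing.db")

	// MkdirAll creates every missing parent directory (Mkdir would fail if
	// more than one level is missing) and is a no-op if they already exist.
	if err := os.MkdirAll(filepath.Dir(dbPath), 0700); err != nil {
		log.Fatal(err)
	}
	fmt.Println("db directory ready:", filepath.Dir(dbPath))
}
```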
* Let's do it right this time
* Oh travis...
* Handle redis URL
* Travis... why u gotta be like this?
* Handle when address does not use redis scheme
* Start repairer
* Match provider.Responsibility interface
* Simplify if statement
* Config doesn't need to be a pointer
* Initialize doesn't need to be exported
* Don't run checker or repairer on startup
* Fix travis complaints
* initial commit- wip, working on testing and library
* wip working on testing library function to get pointer
* working on nil reference for testing
* tests wip
* wip-working on getting tests to work
* working on tests
* put test passes
* working on test- need to export
* created pdclient, and now working on testing function
* tests working for list- getting object back
* wip - got derived piece id
* fixed making grpc public
* fixed linter errors and minor added method for size
* need to work on testing, added random integer function
* got psc server working for testing
* working on ranger test and ranger method
* testing creds for new computer
* working on getting segment metadata
* get random stripe
* added caveat to random fn
* fixed data types
* modified library to have one public function that returns a random stripe
* removed extra comments
* added commons.go file for audit
* added last path to be remembered
* changed random function to crypto/rand & worked on tests passing
* working on testing to get analysis of randomness
* changed to track last item in pagination
* finished testing randomness, cleaned up code
* fixed rebase errors
* removed error, kept common file
* fixed travis errors
* attempt to fix overlay issue
* fixed travis error
* updated pointer parameters
* made smaller functions, renamed audit
* made changes per suggestions
* removed gosum
* fixed pr per suggestions
* removed comment
* Creates cron-job for checker, adds it to captplanet and satellite
* removes datarepair from satellite & captplanet run
* Delete config.go
* removes unused datarepair imports
* adds comments to fix linter
* Loads cache from context for PointerDB access
* WIP adds overlay lookups to pointerdb requests
* Pointer lookup code is added for Get
* adds feature flag for pointerdb return
* refactors pointerdb code
* removes some unnecessary debug logs
* Fixes indent in config
* adds early return for non-remote pointers
* formats code, removes some comments
* Fixes tests broken by pointer proto changes
* adds error check and merges variable declaration
* removes commented out proto import
* adds error check to pdbclient
* merged the latest master changes
* got debug handling of ctrl+c working
* Handling of clean up of partially uploaded segments and pieces
* code cleanup per code comment
* updates based on code review comments
* Clean up last segment handling
* Fix increment for AES-GCM nonce
* Fix stream size calculation
* Adapt stream store tests
* Fix Delete method
* Rename info callback to segmentInfo
* Clearer calculation for offset in Nonce.AESGCMNonce()
* Adapt to the new little-endian nonce increment
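A hedged sketch of the little-endian nonce increment mentioned above: the nonce is treated as a little-endian counter and incremented with carry from byte 0 upward. The repo's Nonce/AESGCMNonce types have their own methods; this shows only the arithmetic.
```
package main

import "fmt"

// incrementNonce adds amount to nonce interpreted as a little-endian
// counter, carrying from the lowest byte upward.
func incrementNonce(nonce []byte, amount uint64) {
	for i := 0; i < len(nonce) && amount > 0; i++ {
		sum := uint64(nonce[i]) + (amount & 0xff)
		nonce[i] = byte(sum)
		amount = (amount >> 8) + (sum >> 8) // propagate the carry
	}
}

func main() {
	nonce := make([]byte, 12) // AES-GCM uses 12-byte nonces
	incrementNonce(nonce, 1)
	fmt.Printf("% x\n", nonce) // 01 00 00 ...

	nonce[0] = 0xff
	incrementNonce(nonce, 1) // carries into the second byte
	fmt.Printf("% x\n", nonce)
}
```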
* setup repairer loop
* added read from queue
* Refactor to make things easier to import
* add more control flow to repairer
* add comment
* basic interval structure for running check/repair
* change function name GetNext to Dequeue
* better increment/decrement syntax
* export Repairer struct
* delete 'unreachable code'
* add mon.Task() to Repairer.Repair
* remove 24 hour interval
* set maxRepair on Config as well as Repairer
* add comment for Repairer struct, check err
* comment out runCfg.Repair in cmd/satellite/main.go because it is NI yet
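A sketch of the interval-driven repair loop described above, with hypothetical queue and repairer types; the real code also wires in mon.Task() and a max-repair limit.
```
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// ErrQueueEmpty signals that there is nothing to repair right now
// (hypothetical; the real queue has its own error semantics).
var ErrQueueEmpty = errors.New("repair queue empty")

type queue interface {
	Dequeue() (segmentPath string, err error)
}

type repairer struct {
	queue    queue
	interval time.Duration
}

// Run dequeues injured segments on a fixed interval until the context ends.
func (r *repairer) Run(ctx context.Context) error {
	ticker := time.NewTicker(r.interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			path, err := r.queue.Dequeue()
			if errors.Is(err, ErrQueueEmpty) {
				continue // nothing injured; wait for the next tick
			}
			if err != nil {
				return err
			}
			fmt.Println("repairing segment:", path)
		}
	}
}

type sliceQueue struct{ items []string }

func (q *sliceQueue) Dequeue() (string, error) {
	if len(q.items) == 0 {
		return "", ErrQueueEmpty
	}
	head := q.items[0]
	q.items = q.items[1:]
	return head, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()
	r := &repairer{
		queue:    &sliceQueue{items: []string{"segments/a", "segments/b"}},
		interval: 100 * time.Millisecond,
	}
	if err := r.Run(ctx); err != nil && !errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("repairer stopped:", err)
	}
}
```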
* pkg structure and repair queue implementation
* adds zeebo
* gets redis working with queue
* modifies interface
* changes re feedback
* PR changes with encoding and enqueue/dequeue modifications
* test force error
* concurrent enqueue/dequeue
* refactor sequential to use only 1 slice
* added token for time conflicts
* begin adding encryption for remote pieces
* begin adding decryption
* add encryption key as arg to Put and Get
* move encryption/decryption to object store
* Add encryption key to object store constructor
* Add the erasure scheme to object store constructor
* Ensure decrypter is initialized with the stripe size used by encrypter
* Revert "Ensure decrypter is initialized with the stripe size used by encrypter"
This reverts commit 07272333f461606edfb43ad106cc152f37a3bd46.
* Revert "Add the erasure scheme to object store constructor"
This reverts commit ea5e793b536159d993b96e3db69a37c1656a193c.
* move encryption to stream store
* move decryption stuff to stream store
* revert changes in object store
* add encryptedBlockSize and close rangers on error during Get
* calculate padding sizes correctly
* encryptedBlockSize -> encryptionBlockSize
* pass encryption key and block size into stream store
* remove encryption key and block size from object store constructor
* move encrypter/decrypter initialization
* remove unnecessary cast
* Fix padding issue
* Fix linter
* add todos
* use random encryption key for data encryption. Store an encrypted copy of this key in segment metadata
* use different encryption key for each segment
* encrypt data in one step if it is small enough
* refactor and move encryption stuff
* fix errors related to nil slices passed to copy
* fix encrypter vs. decrypter bug
* put encryption stuff in eestream
* get captplanet test to pass
* fix linting errors
* add types for encryption keys/nonces and clean up
* fix tests
* more review changes
* add Cipher type for encryption stuff
* fix rs_test
* Simplify type casting of key and nonce
* Init starting nonce to the segment index
* don't copy derived key
* remove default encryption key; force user to explicitly set it
* move getSegmentPath to streams package
* don't require user to specify encryption key for captplanet
* rename GenericKey and GenericNonce to Key and Nonce
* review changes
* fix linting error
* Download uses the encryption type from metadata
* Store enc block size in metadata and use it for download
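A hedged stand-in for the key handling described above (random per-segment content key, stored encrypted in the segment metadata): the repo uses its own eestream/encryption types, while this sketch uses standard-library AES-GCM just to show the key-wrapping idea.
```
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext with key using AES-GCM and a random nonce,
// returning nonce||ciphertext.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	// derivedKey stands in for the key derived from the user's root key and
	// the path; contentKey is the random per-segment key.
	derivedKey := make([]byte, 32)
	contentKey := make([]byte, 32)
	rand.Read(derivedKey)
	rand.Read(contentKey)

	segmentData := []byte("segment payload")

	encryptedData, _ := seal(contentKey, segmentData) // data encrypted with the random key
	encryptedKey, _ := seal(derivedKey, contentKey)   // random key wrapped with the derived key

	// encryptedKey would be stored in the segment metadata next to encryptedData.
	fmt.Printf("encrypted data: %d bytes, wrapped key: %d bytes\n", len(encryptedData), len(encryptedKey))
}
```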
* storage node quick check and startup validation
* rearranged the startup validation and quick check logic
* travis lint warning fixes
* travis lint warning fixes
* travis lint warning fixes
* code changes per review comments
* code clean dev debug info
* travis lint warnings
* code changes per code review comments
* code changes per code review comments
* code update per review
* sqlite SUM has an issue when getting the SUM of an empty column; filepath was checking a directory that doesn't exist when starting the server; example updated to print allocated and used space
* storage node quick check and startup validation
* rearranged the startup validation and quick check logic
* travis lint warning fixes
* travis lint warning fixes
* travis lint warning fixes
* code changes per review comments
* code clean dev debug info
* travis lint warnings
* code changes per code review comments
* code changes per code review comments
* code update per review
* no file or directory error
* Updated mock PSClient
* Limit to only 1 database write
* Check file system rather than database
* Move check to storefile. We need to figure out how to fix this mess
* piecestore should not overwrite data, it should fail when trying to write to a file that already exists
* Format errors, delete unused function in psdb for checking if TTL exists
* Combine errors better
* Moving retrieve into multiple goroutines
* Make sure we pass nil errors into err channel
* restore tests
* incorporate locks in retrieve.go
* deserialize data only if we have something to deserialize when receiving bandwidth allocation in server store
* Adding logic for retrieve to be more efficient
* Add channel?
* hmm
* implement Throttle concurrency primitive
* using throttle
* Remove unused variables
* Egon comments addressed
* Get ba total correct
* Consume without waiting
* incrementally increase signing size
* Get downloads working with throttle
* Removed logging
* Make sure we handle errors properly
* Fix tests
Co-authored-by: Kaloyan <kaloyan@storj.io>
* Can't Fatalf in goroutine
* Add missing returns to tests
* add capacity to channel, smarter allocations
* rename things and don't use size as limit
* replace things with sync2.Throttle
* fix compilation errors
* add note about security
* fix ordering
* Max length is actually 64 bytes for piece ID
* Max length is actually 64 bytes for piece ID
* fix limit
* error comes from pending allocs, so no need to relog
* Optimize throughput
* TODO
* Deleted allocation manager
* Return when someone sends a smaller bandwidth allocation than the previous message
* review comments
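The throttle mentioned above coordinates how far sending may run ahead of what has been allocated; the real sync2.Throttle API differs, but the produce/consume idea looks roughly like this sketch:
```
package main

import (
	"fmt"
	"sync"
)

// throttle lets a producer grant capacity (allocated bandwidth) and a
// consumer block until enough capacity is available — a simplified take
// on the sync2.Throttle idea.
type throttle struct {
	mu        sync.Mutex
	cond      *sync.Cond
	available int64
}

func newThrottle() *throttle {
	t := &throttle{}
	t.cond = sync.NewCond(&t.mu)
	return t
}

// Produce adds newly allocated capacity and wakes any waiting consumer.
func (t *throttle) Produce(amount int64) {
	t.mu.Lock()
	t.available += amount
	t.mu.Unlock()
	t.cond.Broadcast()
}

// ConsumeOrWait blocks until at least amount capacity is available,
// then deducts it.
func (t *throttle) ConsumeOrWait(amount int64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	for t.available < amount {
		t.cond.Wait()
	}
	t.available -= amount
}

func main() {
	t := newThrottle()
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // consumer: sends pieces only as bandwidth is allocated
		defer wg.Done()
		for i := 0; i < 3; i++ {
			t.ConsumeOrWait(1024)
			fmt.Println("sent 1KiB chunk", i)
		}
	}()
	for i := 0; i < 3; i++ {
		t.Produce(1024) // producer: receives bandwidth allocations
	}
	wg.Wait()
}
```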
* add mb command
* forgot colon
* add command descriptions
* use utils.ParseURL in commands
* return error message instead of minio.BucketAlreadyExists in mb
* ls command with bucket store functionality
* rb command with bucket store functionality
* rm command with bucket store functionality
* newline
* use print rather than errs for messages, add no buckets message
* cp command with bucket store functionality
* remove deprecated getStorjObjects function
* defer utils.LogClose(f) on instead of defer f.Close()
* Check for no buckets after for loop
* add checks for unspecified bucket in bucket store methods
* fix incorrect return types
* add no path error messages in object store methods
* split copy into helpers
* srcObj scheme check in download
* print buckets instead of appending to slice
* check if destObj.Host != srcObj.Host
* better method of handling destination name if not specified
* uplink rename
* final cleanups
* trailing slash fixes
* linting
* more linting
* helpful error messages
* Adjust startAfter after merging #328
* Improve output messages
* Improved error check for empty bucket and path
* No page limit on client side. Rely on server side limit.
* Better time formatting
* Fix paths in recursive list results
1. Added KeyValueStore.Iterate for implementing the different List, ListV2 etc. implementations. This allows for more efficient use of memory depending on the situation.
2. Implemented an inmemory teststore for running tests. This should allow to replace MockKeyValueStore in most places.
3. Rewrote tests
4. Pulled out logger from bolt implementation so it can be used for all other storage implementations.
5. Fixed multiple things in bolt and redis implementations.
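A hypothetical illustration of the Iterate-style API from point 1: callers drive iteration with a callback, so List/ListV2-style helpers can be layered on top without materializing the whole keyspace. Names and signatures are illustrative, not the repo's actual storage interface.
```
package main

import (
	"fmt"
	"sort"
	"strings"
)

// memStore is a toy in-memory key/value store (like the teststore idea).
type memStore struct{ data map[string]string }

// Iterate visits keys with the given prefix in order, stopping early if
// the callback returns false. List-style helpers can be layered on this.
func (m *memStore) Iterate(prefix string, fn func(key, value string) bool) {
	keys := make([]string, 0, len(m.data))
	for k := range m.data {
		if strings.HasPrefix(k, prefix) {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)
	for _, k := range keys {
		if !fn(k, m.data[k]) {
			return
		}
	}
}

// list builds a limited listing on top of Iterate without the listing
// layer holding the whole keyspace in memory.
func list(s *memStore, prefix string, limit int) []string {
	var out []string
	s.Iterate(prefix, func(key, _ string) bool {
		out = append(out, key)
		return len(out) < limit
	})
	return out
}

func main() {
	s := &memStore{data: map[string]string{
		"bucket/a": "1", "bucket/b": "2", "bucket/c": "3", "other/x": "4",
	}}
	fmt.Println(list(s, "bucket/", 2)) // [bucket/a bucket/b]
}
```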
* add CopyObject method
* use utils.LogClose
* extract common code from GetObject to getObject helper
* remove rr.Range from getObject helper, create helper putObject
* return rr, err in getObject helper
* extract code from PutObject into putObject helper
* remove commented out text
* remove other commented out code
* WIP trying to get storj cp command to work with copyObject
* fix typo in rb and now it works
* use rr.Size() instead of srcInfo.Size
* Revert "WIP trying to get storj cp command to work with copyObject"
This reverts commit e256b9f9a0fda728d41eb5b9d7a98b5446825842.
* add CopyObject test
* rebase and fix merge conflicts
* check error in gateway-storj test
* fix typo
* wip ca/ident cmds
* minor improvements and commenting
* combine id and ca commands and add $CONFDIR
* add `NewIdentity` test
* refactor `NewCA` benchmarks
* linter fixes
* Added initial functions for signing and verifying
* whoops
* Get client up to speed
* Added initial functions for signing and verifying
* whoops
* Get client up to speed
* wip
* wip
* actual signatures in tests
(cherry picked from commit 1464853b737f1d712d64fbf90147f535525c8fd9)
* bugfixing
* Generate private key in example
* Generate signatures for pieceranger tests
* Update examples to use TLS
* Use private key from identity inside of example
* Use crypto.PrivateKey interface
* Change err name in defers
* Pass tests
* Pass identity Key to PSClient
* Get tests passing on travis
* Resolve linter complaints
* Optimize DecodeReader performance
* A little bit better locking in PieceBuffer.Write
* Fix race issues
* Better fix for race condition in rs_test.go
* Improve PieceBuffer.Read to read the max available in one call
* PieceBuffer.Skip for more efficient discarding of old shares
* Rename bytesRead to nn
* Notify cvNewData only if a complete new share is available
* Small correction in PieceBuffer.Read
* Rename some fields to have longer names
* begin adding tls
* remove incomplete line in gw/main.go
* identity fixes:
+ fix `peertls.NewCert` public key issue
+ fix `peertls.verfiyChain` issue
+ fix identity dial option
+ rename `GenerateCA` to `NewCA` and `generateCAWorker` to `newCAWorker` for better consistency/convention
* use pdbclient instead of pointerdb in miniogw
* fix tests
* go fmt
* make review changes
* modify how context.Background() is used
* more context stuff
* first stab at PUT
* only PUT
* working on PUT
* Put with LimitReader
* start of Get
* reorder of files and proto meta
* working on Meta
* working on Meta
* add aware limit reader
* add size from segment put
* rm if for eof
* update to proto meta
* update gen proto file
* working on get
* working on get
* working on get
* working on list
* working on delete
* working on list
* working on meta method
* fix merge error and working on feedback from PR
* update to proto file
* rm size tuple
* mv eof limit reader to new file
* add toMeta
* rm variable names
* add updates from PR feedback
* updates from PR feedback
* updates from PR feedback
* add toMeta size based on total size
* update toMeta size calculation
* rm passthrough
* add default to config for segment size
* fix get method ranger bug
* add object support for nested stream proto
* rm nested stream meta data
* rm test for another PR
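The "aware limit reader" bullets above describe a reader that caps a segment at the configured size while remembering whether the underlying stream actually ended; a hedged sketch with illustrative names:
```
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

// eofAwareLimitReader reads at most limit bytes and records whether the
// underlying reader reached EOF within that limit — useful for deciding
// whether an upload needs another segment.
type eofAwareLimitReader struct {
	r         io.Reader
	remaining int64
	sawEOF    bool
}

func (e *eofAwareLimitReader) Read(p []byte) (int, error) {
	if e.remaining <= 0 {
		return 0, io.EOF
	}
	if int64(len(p)) > e.remaining {
		p = p[:e.remaining]
	}
	n, err := e.r.Read(p)
	e.remaining -= int64(n)
	if err == io.EOF {
		e.sawEOF = true
	}
	return n, err
}

func main() {
	src := strings.NewReader("hello world") // 11 bytes
	seg := &eofAwareLimitReader{r: src, remaining: 64}
	data, _ := ioutil.ReadAll(seg)
	fmt.Printf("read %d bytes, input exhausted: %v\n", len(data), seg.sawEOF)
}
```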
* Don't use url.Parse for bolt paths: filepaths may not be valid URL-s.
* go.mod: update dependencies
* README.md: add Windows instructions
* pkg/overlay: check for the correct path and text in error
* pkg/overlay: fix tests for windows
* pkg/piecestore: make windows tests pass
* pkg/telemetry: skip test, as it doesn't shutdown nicely
* storage/redis: ensure that redis is clean before running tests
* pointerdb: separate client and server packages
the reason for this is so that things the server needs (bolt, auth)
don't get imported if all you need is the client. will result in
smaller binaries and less flag definitions
* review comments
Fixes go1.11 vet warnings.
Cancel on WithTimeout must always be called to avoid memory leak:
pkg/provider/provider.go:73: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
Range over non-copyable things:
pkg/pool/connection_pool_test.go:32: range var v copies lock: struct{pool pool.ConnectionPool; key string; expected pool.TestFoo; expectedError error} contains pool.ConnectionPool contains sync.RWMutex
pkg/pool/connection_pool_test.go:56: range var v copies lock: struct{pool pool.ConnectionPool; key string; value pool.TestFoo; expected pool.TestFoo; expectedError error} contains pool.ConnectionPool contains sync.RWMutex
pkg/pool/connection_pool_test.go:83: range var v copies lock: struct{pool pool.ConnectionPool; key string; value pool.TestFoo; expected interface{}; expectedError error} contains pool.ConnectionPool contains sync.RWMutex
zeebo/errs package always requires formatting directives:
pkg/peertls/peertls.go:50: Class.New call has arguments but no formatting directives
pkg/peertls/utils.go:47: Class.New call has arguments but no formatting directives
pkg/peertls/utils.go:87: Class.New call has arguments but no formatting directives
pkg/overlay/cache.go:94: Class.New call has arguments but no formatting directives
pkg/provider/certificate_authority.go:98: New call has arguments but no formatting directives
pkg/provider/identity.go:96: New call has arguments but no formatting directives
pkg/provider/utils.go:124: New call needs 1 arg but has 2 args
pkg/provider/utils.go:136: New call needs 1 arg but has 2 args
storage/redis/client.go:44: Class.New call has arguments but no formatting directives
storage/redis/client.go:64: Class.New call has arguments but no formatting directives
storage/redis/client.go:75: Class.New call has arguments but no formatting directives
storage/redis/client.go:80: Class.New call has arguments but no formatting directives
storage/redis/client.go:92: Class.New call has arguments but no formatting directives
storage/redis/client.go:96: Class.New call has arguments but no formatting directives
storage/redis/client.go:102: Class.New call has arguments but no formatting directives
storage/redis/client.go:126: Class.New call has arguments but no formatting directives
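For reference, the first two classes of warnings have standard fixes: always defer the cancel returned by context.WithTimeout, and range by index so a struct embedding a mutex is never copied. A small sketch of both (the zeebo/errs formatting-directive fixes are not shown):
```
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type testCase struct {
	mu       sync.RWMutex // embedding a lock makes the struct non-copyable
	key      string
	expected string
}

func main() {
	// WithTimeout: always call cancel, even on the happy path, to release
	// the timer and avoid the context leak vet warns about.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	_ = ctx

	cases := []testCase{{key: "a", expected: "1"}, {key: "b", expected: "2"}}

	// Range by index so the lock-bearing struct is never copied.
	for i := range cases {
		c := &cases[i]
		fmt.Println(c.key, c.expected)
	}
}
```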
* implements connection success and fail on kad routing table
* modifications from code review
* todo
* test fixes
* passes in node rather than id
* removes rpath
* test fix
* move mock overlay from client to server
this doesn't really change much, but it does allow you to
run a standalone gateway against captain planet. it still does
not allow you to run a standalone gateway against a standalone
heavy client
* pointerdb: small error fixes
* some cleanups
* fix tests
* captplanet standalone farmer setup
* Bandwidth Allocation
* utils.Close method changed to utils.LogClose
* Get build temporarily working
* Get/Put for PSClient should take payer bandwidth allocations rather than the NewPSClient function
* Update example client to reflect changes in client API
* Update ecclient to use latest PSClient, Make NewPSClient return error also
* Updated pieceranger tests to check for errors; sign method should take byte array
* Handle defers in store.go better
* Fix defer functions in psdb.go
* fun times
* Protobuf bandwidthallocation data is now a byte array
* Remove psservice package and merge it into pstore server
* Write wrapper for database calls
* Change all expiration names in protobuf to be more informative; add defer in retrieve; remove old comment
* Make PSDB tests implementation independent rather than method independent
* get rid of payer, renter in ecclient
* add context monitoring in store and retrieve
* adds foundation for bucketStore
* adds prefixedObjStore to buckets package, adjusts gateway-storj accordingly
* fixes multi value assignment problems in gateway-storj
* fixes more multi value assignment errors in gateway-storj
* starts changing miniogw tests to accommodate buckets
* creates bucket store mock
* wip - fixing test cases in object tests
* adds get, put, and list object tests, comments out two test cases
* adds happy scenario tests for bucket methods
* fixes bug in list, removes redundant parts from gateway tests
* fixes nit
* Clean up tests from #188
* Fix bug with timestamp conversion in segment store
* fixes segments.Meta test
* Fix regression in listing objects in a bucket
* adds check to see if bucket is empty before deleting
* updates DeleteBucket test to account for empty/full bucket
* adds TODOs for DeleteBucket and MakeBucket for some cases, adjusts tests, filters out minio errors in logging.go
* adds checks for if buckets already exist or not in DeleteBucket and MakeBucket functions; adjusts tests
* adds BucketNotFound error check in bucket store, removes todo
* adds make_bucket to Travis test, updates boltdb client constructor to always create a bucket (table)
* adds comment
* runs deps
* adds print statements for debugging add node bkad
* more print statements
* removes bkad from routing and integrates on disk routing table
tests failing :(
wip
* removes testbootstrap
* kademlia_test not working
* adds kad tests back in
* Adds skips for tests broken due to wip kademlia
* starts adding segmentStore tests
* adds mocked interfaces for segmentStore tests
* adds tests for put, get, delete, and list
* regenerates pointerdb mock and updates calls to accommodate new changes
This is a naive implementation of the overlay worker.
Future improvements / to dos:
- Walk through the cache and remove nodes that don't respond
- Better look ups for new nodes
- Better random ID generation
- Kademlia hooks for automatically adding new nodes to the cache
* adding refresh cache functionality, added schedule function
* update put in db
* Tests passing
* wip overlay tests for cache refresh
* update scheduler code
* update refresh function
* WIP adding random lookups to refresh worker
* remove quit channel
* updates fire on schedule and the refresh function finds near nodes
* updates to refresh function, getting more buckets and nodes
* updates to refresh function and cache operations
* add cancellation to context, fix k number of nodes in lookups
* Unit test coverage increased for kademlia pkg
go style formatting added
Removed DHT param from newTestKademlia method, added comments to Bucket methods noting that these tests will need to be updated
unnecessary comment deleted from newTestKademlia
Adjust Segment Store to the updated interface (#160)
* Adjust Segment Store to the updated interface
* Move /pkg/storage/segment to /pkg/storage/segments
* Fix overlay client tests
* Revert changes in NewOverlayClient return value
* Rename `rem` to `seg`
* Implement Meta()
captplanet (#159)
* captplanet
I kind of went overboard this weekend.
The major goal of this changeset is to provide an environment
for local development where all of the various services can
be easily run together. Developing on Storj v3 should be as
easy as running a setup command and a run command!
To do this, this changeset introduces a new tool called
captplanet, which combines the powers of the Overlay Cache,
the PointerDB, the PieceStore, Kademlia, the Minio Gateway,
etc.
Running 40 farmers and a heavy client inside the same process
forced a rethinking of the "services" that we had. To
avoid confusion by reusing prior terms, this changeset
introduces two new types: Providers and Responsibilities.
I wanted to avoid as many merge conflicts as possible, so
I left the existing Services and code for now, but if people
like this route we can clean up the duplication.
A Responsibility is a collection of gRPC methods and
corresponding state. The following systems are examples of
Responsibilities:
* Kademlia
* OverlayCache
* PointerDB
* StatDB
* PieceStore
* etc.
A Provider is a collection of Responsibilities that
share an Identity, such as:
* The heavy client
* The farmer
* The gateway
An Identity is a public/private key pair, a node id, etc.
Farmers all need different Identities, so captplanet
needs to support running multiple concurrent Providers
with different Identities.
Each Responsibility and Provider should allow for configuration
of multiple copies on its own so creating Responsibilities and
Providers use a new workflow.
To make a Responsibility, one should create a "config"
struct, such as:
```
type Config struct {
	RepairThreshold  int `help:"If redundancy falls below this number of pieces, repair is triggered" default:"30"`
	SuccessThreshold int `help:"If redundancy is above this number then no additional uploads are needed" default:"40"`
}
```
To use "config" structs, this changeset introduces another
new library called 'cfgstruct', which allows for the configuration
of arbitrary structs through flagsets, and thus through cobra and
viper.
cfgstruct relies on Go's "struct tags" feature to document
help information and default values. Config structs can be
configured via cfgstruct.Bind for binding the struct to a flagset.
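A toy stand-in for cfgstruct.Bind, assuming a Bind(flagset, &config) shape; the real library is more general, but the struct-tag-driven registration looks roughly like this:
```
package main

import (
	"flag"
	"fmt"
	"reflect"
	"strconv"
	"strings"
)

// ExampleConfig reuses the struct-tag convention shown above.
type ExampleConfig struct {
	RepairThreshold  int `help:"repair is triggered below this count" default:"30"`
	SuccessThreshold int `help:"no additional uploads needed above this count" default:"40"`
}

// bind is a toy stand-in for cfgstruct.Bind: it registers one flag per
// struct field, taking the default value and help text from struct tags.
func bind(flags *flag.FlagSet, cfg interface{}) {
	v := reflect.ValueOf(cfg).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		field := t.Field(i)
		name := strings.ToLower(field.Name)
		if field.Type.Kind() == reflect.Int {
			def, _ := strconv.Atoi(field.Tag.Get("default"))
			flags.IntVar(v.Field(i).Addr().Interface().(*int), name, def, field.Tag.Get("help"))
		}
	}
}

func main() {
	var cfg ExampleConfig
	flags := flag.NewFlagSet("example", flag.ExitOnError)
	bind(flags, &cfg)
	_ = flags.Parse([]string{"--repairthreshold=25"})
	fmt.Printf("%+v\n", cfg) // {RepairThreshold:25 SuccessThreshold:40}
}
```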
Because this configuration system makes setup and configuration
easier *in general*, additional commands are provided that allow
for easy standup of separate Providers. Please make sure to
check out:
* cmd/captplanet/farmer/main.go (a new farmer binary)
* cmd/captplanet/hc/main.go (a new heavy client binary)
* cmd/captplanet/gw/main.go (a new minio gateway binary)
Usage:
```
$ go install -v storj.io/storj/cmd/captplanet
$ captplanet setup
$ captplanet run
```
Configuration is placed by default in `~/.storj/capt/`
Other changes:
* introduces new config structs for currently existing
Responsibilities that conform to the new Responsibility
interface. Please see the `pkg/*/config.go` files for
examples.
* integrates the PointerDB API key with other global
configuration via flags, instead of through environment
variables through viper like it's been doing. (ultimately
this should also change to use the PointerDB config
struct but this is an okay short-term solution).
* changes the Overlay cache to use a URL for database
configuration instead of separate redis and bolt config
settings.
* stubs out some peer identity skeleton code (but not the
meat).
* Fixes the SegmentStore to use the overlay client and
pointerdb clients instead of gRPC client code directly
* Leaves a very clear spot where we need to tie the object to
stream to segment store together. There's sort of a "golden
spike" opportunity to connect all the train tracks together
at the bottom of pkg/miniogw/config.go, labeled with a
bunch of TODOs.
Future stuff:
* I now prefer this design over the original
pkg/process.Service thing I had been pushing before (sorry!)
* The experience of trying to have multiple farmers
configurable concurrently led me to prefer config structs
over global flags (I finally came around) or using viper
directly. I think global flags are okay sometimes but in
general going forward we should try and get all relevant
config into config structs.
* If you all like this direction, I think we can go delete my
old Service interfaces and a bunch of flags and clean up a
bunch of stuff.
* If you don't like this direction, it's no sweat at all, and
despite how much code there is here I'm not very tied to any
of this! Considering a lot of this was written between midnight
and 6 am, it might not be any good!
* bind tests
Add files for testing builds in docker (#161)
* Add files for testing builds in docker
* Make tests check for redis running before trying to start redis-server, which may not exist.
* Clean redis server before any tests use it.
* Add more debugging for travis
* Explicitly requiring redis for travis
pkg/provider: with pkg/provider merged, make a single heavy client binary, gateway binary, and deprecate old services (#165)
* pkg/provider: with pkg/provider merged, make a single heavy client binary and deprecate old services
* add setup to gw binary too
* captplanet: output what addresses everything is listening on
* revert peertls/io_util changes
* define config flag across all commands
* use trimsuffix
fix docker makefile (#170)
* fix makefile
protos: update protobufs with go generate (#169)
the import for timestamp and duration should use
the path provided by a standard protocol buffer library
installation
Refactor List in PointerDB (#163)
* Refactor List in Pointer DB
* Fix pointerdb-client example
* Fix issue in Path type related to empty paths
* Test for the PointerDB service with some fixes
* Fixed debug message in example: truncated --> more
* GoDoc comments for unexported methods
* TODO comment to check if Put is overwriting
* Log warning if protobuf timestamp cannot be converted
* TODO comment to make ListPageLimit configurable
* Rename 'segment' package to 'segments' to reflect folder name
Minio integration with Object store (#156)
* initial WIP integration with Object store
* List WIP
* minio listobject function changes complete
* Code review changes and work in progress for the mock objectstore unit testing cases
* Warning fix redeclaration of err
* Warning fix redeclaration of err
* code review comments & unit testing inprogress
* fix compilation bug
* Fixed code review comments & added GetObject Mock test case
* rearranged the mock test file and gateway storj test file into the proper directory
* added the missing file
* code clean up
* fix lint error on the mock generated code
* modified per code review comments
* added the PutObject mock test case
* added the GetObjectInfo mock test case
* added listobject mock test case
* fixed package from storj to miniogw
* resolved the gateway-storj.go initialization merge conflict
update readme (#174)
added assertion for unused errors (#152)
merging this PR to avoid future issues
updating github user to personal account (#171)
Test coverage ranger (#168)
* Fixed go panic for corner case
* Initial test coverage for ranger pkg
streamstore: add passthrough implementation (#176)
this doesn't implement streamstore, this just allows us to try and
get the june demo working again in the meantime
StatDB (#144)
* add statdb proto and example client
* server logic
* update readme
* remove boltdb from service.go
* sqlite3
* add statdb server executable file
* create statdb node table if it does not exist already
* get UpdateBatch working
* update based on jt review
* remove some commented lines
* fix linting issues
* reformat
* apiKey -> APIKey
* update statdb client apiKey->APIKey
Update README.md
Update README.md
overlay: correct dockerfile db (#179)
cmd/hc, cmd/gw, cmd/captplanet: simplify setup/run commands (#178)
also allows much more customization of services within captain planet,
such as reconfiguring the overlay service to use redis
pkg/process: don't require json formatting (#177)
Cleanup metadata across layers (#180)
* Cleanup metadata across layers
* Fix pointer db tests
Kademlia Routing Table (#164)
* adds comment
* runs deps
* creates boltdb kademlia routing table
* protobuf updates
* adds reverselist to mockkeyvaluestore interface
* xor wip
* xor wip
* fixes xor sort
* runs go fmt
* fixes
* goimports again
* trying to fix travis tests
* fixes mock tests
Ranger refactoring (#158)
* Fixed go panic for corner case
* Cosmetic changes, and small error fixes
miniogw: log all errors (#182)
* miniogw: log all errors
* tests added
* doc comment to satisfy linter
* fix test failure
Jennifer added to CLA list
* Temporary fix for storage/redis list method test