* protobuf for sending bandwidth agreements to satellite from storage nodes
* Setup process for sending agreements
* Add payer_id to db with bandwidth agreements for better sorting
* Renamed payer to satellite in psdb
* add filter field into OverlayOptions message
* chooseFiltered method, add excluded parameter in populate method
* change excluded type to []dht.NodeID in ChooseFiltered, change comment
* change name filter to excluded_nodes in proto
* implement helper function contains
* delete ChooseFiltered and add its functionality into Choose method to keep original author's history, add excluded argument into Choose calls
* regenerate mock_client.go
* regenerate protobuf
* adding the repair() func
* update test case to use new IDFromString function
* modified the repair() and updated streams mock
* Options struct
* adding the repair() func
* modified the repair() and updated streams mock
* integrating the segment repair()
* development repair with hack working
* repair segment changes
* integrated with mini hacks and rigged up test case with dev debug info
* integrated with ec and overlay
* added repair test case
* made getNewUniqueNodes() recursively go through Choose() to get the required number of unique nodes
* cleaned up code
* disconnect from nodeclient
* cleanup connections in tests
* kademlia disconnects from nodeclient
* updating disconnect method for mocks
* creates separate disconnect and removeAll methods for tests
* adds init to connection pool
* fix folder cleanup and disconnect
* creates and cleans up test db files and disconnects kad
* removes db/.keep
* includes disconnect within cleanup methods
* creates public init method on connection pool to handle mutex copy issues
* remove all after disconnect
* pair creation and destruction
* checks disconnect error
* remove ctx
* fixes mock kad
The old paths.Path type is now replaced with the new storj.Path.
storj.Path is simply an alias to the built-in string type. As such it can be used just like any string, which greatly simplifies working with paths. No more conversions with paths.New and path.String().
As an alias storj.Path does not define any methods. However, any functions applying to strings (like those from the strings package) gracefully apply to storj.Path too. In addition we have a few more functions defined:
storj.SplitPath
storj.JoinPaths
encryption.EncryptPath
encryption.DecryptPath
encryption.DerivePathKey
encryption.DeriveContentKey
All code in master is migrated to the new storj.Path type.
The Path example is also updated and is good for reference: /pkg/encryption/examples_test.go
This PR also resolves a nonce misuse issue in path encryption: https://storjlabs.atlassian.net/browse/V3-545
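To make the ergonomics concrete, here is a minimal sketch of what the string alias enables. The helper bodies below are illustrative stand-ins, not the actual storj package implementations:
```
package main

import (
	"fmt"
	"strings"
)

// Path mirrors storj.Path: a plain alias to string, so ordinary string
// functions work on it without any conversion.
type Path = string

// JoinPaths and SplitPath sketch the helpers listed above; the real
// implementations live in the storj package.
func JoinPaths(segments ...string) Path { return strings.Join(segments, "/") }
func SplitPath(p Path) []string         { return strings.Split(p, "/") }

func main() {
	p := JoinPaths("bucket", "folder", "object.txt")
	fmt.Println(strings.HasPrefix(p, "bucket/")) // true, no conversion needed
	fmt.Println(SplitPath(p))                    // [bucket folder object.txt]
}
```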
...although it ought to work for other storage.KeyValueStore needs as
well. It's just optimized to work pretty well for a largish hierarchy of
paths.
This includes the addition of "long benchmarks" for KeyValueStore
testing. These will only be run when -test-bench-long is added to the
test flags. In these benchmarks, a large corpus of paths matching a
natural ("real-life") hierarchy is read from paths.data.gz (which you
can get from https://github.com/storj/path-test-corpus) and imported
into a particular KeyValueStore. Recursive and non-recursive queries are
run on it to detect performance problems that arise only at scale.
This also includes alternate implementation of the postgreskv client,
which works in a less-bizarre way for non-recursive queries, but suffers
from poor performance in tests such as the long benchmarks. Once this
alternate impl is committed to the tree, we can remove it again; I just
want it to be available for future reference.
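A rough sketch of how such a flag-gated long benchmark can be wired up follows; the flag name matches the description above, while the corpus loading is reduced to a placeholder:
```
package storetest

import (
	"flag"
	"testing"
)

// The real suite reads a large path corpus from paths.data.gz; here the
// corpus is a tiny stand-in and only the gating is shown.
var longBench = flag.Bool("test-bench-long", false, "run long KeyValueStore benchmarks")

func BenchmarkNonRecursiveList(b *testing.B) {
	if !*longBench {
		b.Skip("skipping long benchmark; pass -test-bench-long to enable")
	}
	corpus := []string{"a/b/c", "a/b/d", "a/e"} // placeholder for the real corpus
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = corpus // run the non-recursive query against the store under test here
	}
}
```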
This is an old definition from the very early stage of development. It
is not used anymore.
Change-Id: I6a033e4006e6edfa7c18acc6ae91c9e4e1df0e6a
Signed-off-by: Kaloyan Raev <kaloyan@storj.io>
Reviewed-on: https://review.gerrithub.io/429582
Reviewed-by: JT Olio <hello@jtolio.com>
Tested-by: JT Olio <hello@jtolio.com>
* Travis uses Go 1.11
* Use go modules instead of storj-vendor
* Automatic caching of downloaded dependencies
* Ensures that modules incompatible linters run with modules
* handle nil nodes in ec Put
* read and discard readers for nil nodes
* test 2 nil nodes; unique won't return false with nil nodes
* Discard reader data for nil nodes
* edit control flow
* remove old kademlia test code
* begin adding path encryption
* do not encrypt/decrypt first element of path (bucket)
* add path encryption for delete and list
* use encrypted paths in streamstore.Meta
* fix listing with encrypted paths
* move encrypt/decryptAfterBucket to streamstore
* fix listing with no prefix
* remove duplicate logic for listing with no prefix
* Initial Layout
* Commit to test OS-independent file handling
* Hide struct properties to prevent manual interaction
* Fix Linting Errors
* 1st Working Windows Version
* Add missing Error Handling
* Fix Linting Errors
* Remove dependencies
* Further Improvements
* Remove commented code
* Improve comments and error messages
* No pointers to FPath
* Improve comment
* Do not filepath.ToSlash URL path
* Extract helper functions for parsing local path and Storj path
* Minor Improvements based on PR Comments
* Fix Linting Error and make Regex private
* Improve Layout
* Rework FPath and add tests
* Add more tests cases for windows
* Use for-loop instead of goto
* Use FPath in all uplink commands
* Add guard checks
* Add Test Cases and add comments
* Added a new table 'mib' with 'data', 'size' and 'method' columns
* added AddMIB() function and test case TestMIBHappyPath()
* added function and a test case to add entries into bandwidth usage table
* added functionality to create an entry, update the entry and read back the entry based on a given date into/from the bandwidth table
* added initial SumBandwidthSizes()
* added the functionality to retrieve the total bw usage based on start and end date
* Added the unit test case for AddBwUsageTbl
* changed the arguments to take a time format arg rather than Unix format
* changes per code review comments
* adding back go.sum
* changes per code review comments
* changes per code review comments
* changes per code review comments
* creates checker
* tests offline nodes
* test identifies injured segments
* Adds healthy pieces to injured segment struct
* changes inequality
* creates common files
* adds checker benchmarking
* creates more common files
* Replaces pointerdb direct db access with an api call to a new iterate method on pointerdb
* move monkit
* removes identifyrequest proto
* remove healthypieces
* adds benchmarking
creates common file for datarepair
* recreates proto file
* api key on ctx
* create db directory if it does not exist
* linter fix
* pass db path in from config
* change mkdir to mkdirAll
* windows love
* PR comments
* changing the path
* change the config default to $CONFDIR/kademlia
* Let's do it right this time
* Oh travis...
* Handle redis URL
* Travis... why u gotta be like this?
* Handle when address does not use redis scheme
* Start repairer
* Match provider.Responsibility interface
* Simplify if statement
* Config doesn't need to be a pointer
* Initialize doesn't need to be exported
* Don't run checker or repairer on startup
* Fix travis complaints
* initial commit- wip, working on testing and library
* wip working on testing library function to get pointer
* working on nil reference for testing
* tests wip
* wip-working on getting tests to work
* working on tests
* put test passes
* working on test- need to export
* created pdclient, and now working on testing function
* tests working for list- getting object back
* wip - got derived piece id
* fixed making grpc public
* fixed linter errors and minor added method for size
* need to work on testing, added random integer function
* got psc server working for testing
* working on ranger test and ranger method
* testing creds for new computer
* working on getting segment metadata
* get random stripe
* added caveat to random fn
* fixed data types
* modified library to have one public function that returns a random stripe
* removed extra comments
* added commons.go file for audit
* added last path to be remembered
* changed random function to crypto/rand & worked on tests passing
* working on testing to get analysis of randomness
* changed to track last item in pagination
* finished testing randomness, cleaned up code
* fixed rebase errors
* removed error, kept common file
* fixed travis errors
* attempt to fix overlay issue
* fixed travis error
* updated pointer parameters
* made smaller functions, renamed audit
* made changes per suggestions
* removed go.sum
* fixed pr per suggestions
* removed comment
* Creates cron-job for checker, adds it to captplanet and satellite
* removes datarepair from satellite & captplanet run
* Delete config.go
* removes unused datarepair imports
* adds comments to fix linter
* Loads cache from context for PointerDB access
* WIP adds overlay lookups to pointerdb requests
* Pointer lookup code is added for Get
* adds feature flag for pointerdb return
* refactors pointerdb code
* removes some unnecessary debug logs
* Fixes indent in config
* adds early return for non-remote pointers
* formats code, removes some comments
* Fixes tests broken by pointer proto changes
* adds error check and merges variable declaration
* removes commented out proto import
* adds error check to pdbclient
* merged the latest master changes
* debug working of handling ctrl+c
* Handling of clean up of partially uploaded segments and pieces
* code cleanup per code comment
* updates based on code review comments
* Clean up last segment handling
* Fix increment for AES-GCM nonce
* Fix stream size calculation
* Adapt stream store tests
* Fix Delete method
* Rename info callback to segmentInfo
* Clearer calculation for offset in Nonce.AESGCMNonce()
* Adapt to the new little-endian nonce increment
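A rough sketch of the little-endian nonce increment described in the last two items; the nonce size and method name here are illustrative assumptions, not the exact implementation:
```
package noncesketch

// Nonce is a stand-in for the real nonce type (assumed 24 bytes here).
type Nonce [24]byte

// Increment adds amount to the nonce interpreted as a little-endian
// unsigned integer, carrying into higher bytes as needed.
func (n *Nonce) Increment(amount uint64) {
	var carry uint16
	for i := 0; i < len(n); i++ {
		sum := uint16(n[i]) + uint16(amount&0xff) + carry
		n[i] = byte(sum)
		carry = sum >> 8
		amount >>= 8
		if amount == 0 && carry == 0 {
			break
		}
	}
}
```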
* setup repairer loop
* added read from queue
* Refactor to make things easier to import
* add more control flow to repairer
* add comment
* basic interval structure for running check/repair
* change function name GetNext to Dequeue
* better increment/decrement syntax
* export Repairer struct
* delete 'unreachable code'
* add mon.Task() to Repairer.Repair
* remove 24 hour interval
* set maxRepair on Config as well as Repairer
* add comment for Repairer struct, check err
* comment out runCfg.Repair in cmd/satellite/main.go because it is not implemented yet
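A minimal sketch of the interval-driven check/repair loop these commits describe, with the queue and the repair call left as assumed placeholder types:
```
package repairsketch

import (
	"context"
	"time"
)

// segment and queue stand in for the real injured-segment record and repair queue.
type segment struct{ Path string }

type queue interface {
	Dequeue() (segment, error)
}

// run checks the queue on every tick and repairs whatever it finds.
func run(ctx context.Context, q queue, interval time.Duration, repair func(context.Context, segment) error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			seg, err := q.Dequeue()
			if err != nil {
				continue // empty queue or transient error; wait for the next tick
			}
			if err := repair(ctx, seg); err != nil {
				return err
			}
		}
	}
}
```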
* pkg structure and repair queue implementation
* adds zeebo
* gets redis working with queue
* modifies interface
* changes re feedback
* PR changes with encoding and enqueue/dequeue modifications
* test force error
* concurrent enqueue/dequeue
* refactor sequential to use only 1 slice
* added token for time conflicts
* begin adding encryption for remote pieces
* begin adding decryption
* add encryption key as arg to Put and Get
* move encryption/decryption to object store
* Add encryption key to object store constructor
* Add the erasure scheme to object store constructor
* Ensure decrypter is initialized with the stripe size used by encrypter
* Revert "Ensure decrypter is initialized with the stripe size used by encrypter"
This reverts commit 07272333f461606edfb43ad106cc152f37a3bd46.
* Revert "Add the erasure scheme to object store constructor"
This reverts commit ea5e793b536159d993b96e3db69a37c1656a193c.
* move encryption to stream store
* move decryption stuff to stream store
* revert changes in object store
* add encryptedBlockSize and close rangers on error during Get
* calculate padding sizes correctly
* encryptedBlockSize -> encryptionBlockSize
* pass encryption key and block size into stream store
* remove encryption key and block size from object store constructor
* move encrypter/decrypter initialization
* remove unnecessary cast
* Fix padding issue
* Fix linter
* add todos
* use random encryption key for data encryption. Store an encrypted copy of this key in segment metadata (sketched after this list)
* use different encryption key for each segment
* encrypt data in one step if it is small enough
* refactor and move encryption stuff
* fix errors related to nil slices passed to copy
* fix encrypter vs. decrypter bug
* put encryption stuff in eestream
* get captplanet test to pass
* fix linting errors
* add types for encryption keys/nonces and clean up
* fix tests
* more review changes
* add Cipher type for encryption stuff
* fix rs_test
* Simplify type casting of key and nonce
* Init starting nonce to the segment index
* don't copy derived key
* remove default encryption key; force user to explicitly set it
* move getSegmentPath to streams package
* dont require user to specify encryption key for captplanet
* rename GenericKey and GenericNonce to Key and Nonce
* review changes
* fix linting error
* Download uses the encryption type from metadata
* Store enc block size in metadata and use it for download
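The per-segment key scheme sketched in this list (a random content key per segment, an encrypted copy of that key stored in the segment metadata, and a starting nonce derived from the segment index) could look roughly like the following, using stdlib AES-GCM. The real code lives in the eestream/encryption packages and supports multiple ciphers, so treat this purely as an illustration under those assumptions:
```
package segmentencsketch

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/binary"
)

// encryptSegment gives each segment its own random content key, then wraps
// that key with a key derived from the user's secret (derivedKey, assumed to
// be 32 bytes). The wrapped key is what would be stored in segment metadata.
func encryptSegment(derivedKey []byte, segmentIndex uint64, plaintext []byte) (ciphertext, wrappedKey []byte, err error) {
	contentKey := make([]byte, 32)
	if _, err = rand.Read(contentKey); err != nil {
		return nil, nil, err
	}

	// Starting nonce is the segment index, little-endian, as in the commits above.
	nonce := make([]byte, 12)
	binary.LittleEndian.PutUint64(nonce, segmentIndex)

	if ciphertext, err = seal(contentKey, nonce, plaintext); err != nil {
		return nil, nil, err
	}
	if wrappedKey, err = seal(derivedKey, nonce, contentKey); err != nil {
		return nil, nil, err
	}
	return ciphertext, wrappedKey, nil
}

func seal(key, nonce, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return gcm.Seal(nil, nonce, data, nil), nil
}
```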
* storage node quick check and startup validation
* rearranged the startup validation and quick check logic
* travis lint warning fixes
* code changes per review comments
* code clean dev debug info
* travis lint warnings
* code changes per code review comments
* code update per review
* sqlite SUM had an issue when getting the SUM of an empty column; filepath was checking a directory that doesn't exist when starting the server; example updated to print allocated and used space
* no file or directory error
* Updated mock PSClient
* Limit to only 1 database write
* Check file system rather than database
* Move check to storefile. We need to figure out how to fix this mess
* piecestore should not overwrite data, it should fail when trying to write to a file that already exists
* Format errors, delete unused function in psdb for checking if TTL exists
* Combine errors better
* Moving retrieve into multiple goroutines
* Make sure we pass nil errors into err channel
* restore tests
* incorporate locks in retrieve.go
* deserialize data only if we have something to deserialize when receiving bandwidth allocation in server store
* Adding logic for retrieve to be more efficient
* Add channel?
* hmm
* implement Throttle concurrency primitive (see the sketch after this list)
* using throttle
* Remove unused variables
* Egon comments addressed
* Get bandwidth allocation total correct
* Consume without waiting
* incrementally increase signing size
* Get downloads working with throttle
* Removed logging
* Make sure we handle errors properly
* Fix tests
Co-authored-by: Kaloyan <kaloyan@storj.io>
* Can't Fatalf in goroutine
* Add missing returns to tests
* add capacity to channel, smarter allocations
* rename things and don't use size as limit
* replace things with sync2.Throttle
* fix compilation errors
* add note about security
* fix ordering
* Max length is actually 64 bytes for piece ID
* fix limit
* error comes from pending allocs, so no need to relog
* Optimize throughput
* TODO
* Deleted allocation manager
* Return when someone sends a smaller bandwidth allocation than the previous message
* review comments
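A hypothetical sketch of the throttle idea referenced in this list: one side produces an allowance (for example, signed bandwidth allocations) and the other side consumes it, blocking while nothing is available. The real sync2.Throttle API differs; this only illustrates the concept:
```
package sync2sketch

import "sync"

// Throttle tracks an allowance shared between a producer and a consumer.
type Throttle struct {
	mu        sync.Mutex
	cond      *sync.Cond
	available int64
	closed    bool
}

func NewThrottle() *Throttle {
	t := &Throttle{}
	t.cond = sync.NewCond(&t.mu)
	return t
}

// Produce adds amount to the allowance and wakes any waiting consumer.
func (t *Throttle) Produce(amount int64) {
	t.mu.Lock()
	t.available += amount
	t.mu.Unlock()
	t.cond.Broadcast()
}

// ConsumeOrWait takes up to max from the allowance, waiting until at least
// something is available or the throttle is closed.
func (t *Throttle) ConsumeOrWait(max int64) int64 {
	t.mu.Lock()
	defer t.mu.Unlock()
	for t.available == 0 && !t.closed {
		t.cond.Wait()
	}
	n := t.available
	if n > max {
		n = max
	}
	t.available -= n
	return n
}

// Close unblocks any waiting consumer.
func (t *Throttle) Close() {
	t.mu.Lock()
	t.closed = true
	t.mu.Unlock()
	t.cond.Broadcast()
}
```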
* add mb command
* forgot colon
* add command descriptions
* use utils.ParseURL in commands
* return error message instead of minio.BucketAlreadyExists in mb
* ls command with bucket store functionality
* rb command with bucket store functionality
* rm command with bucket store functionality
* newline
* use print rather than errs for messages, add no buckets message
* cp command with bucket store functionality
* remove deprecated getStorjObjects function
* defer utils.LogClose(f) on instead of defer f.Close()
* Check for no buckets after for loop
* add checks for unspecified bucket in bucket store methods
* fix incorrect return types
* add no path error messages in object store methods
* split copy into helpers
* srcObj scheme check in download
* print buckets instead of appending to slice
* check if destObj.Host != srcObj.Host
* better method of handling destination name if not specified
* uplink rename
* final cleanups
* trailing slash fixes
* linting
* more linting
* helpful error messages
* Adjust startAfter after merging #328
* Improve output messages
* Improved error check for empty bucket and path
* No page limit on client side. Rely on server side limit.
* Better time formatting
* Fix paths in recursive list results
1. Added KeyValueStore.Iterate as the basis for the different List, ListV2, etc. implementations. This allows for more efficient use of memory depending on the situation.
2. Implemented an in-memory teststore for running tests. This should allow replacing MockKeyValueStore in most places.
3. Rewrote tests
4. Pulled out logger from bolt implementation so it can be used for all other storage implementations.
5. Fixed multiple things in bolt and redis implementations.
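A sketch of the shape such an Iterate method can take and how a List helper can be layered on top of it; the types and signatures here are simplified assumptions, not the actual storage package API:
```
package storagesketch

// Key, Value and ListItem stand in for the storage package types.
type (
	Key   []byte
	Value []byte
)

type ListItem struct {
	Key   Key
	Value Value
}

// Iterator yields items one at a time, so callers never need to hold a
// full result slice in memory.
type Iterator interface {
	Next(item *ListItem) bool
}

// KeyValueStore shows the rough shape of the interface with Iterate added.
type KeyValueStore interface {
	Get(key Key) (Value, error)
	Put(key Key, value Value) error
	Delete(key Key) error
	Iterate(prefix, first Key, fn func(Iterator) error) error
}

// List can then be written on top of Iterate.
func List(store KeyValueStore, prefix Key, limit int) ([]Key, error) {
	var keys []Key
	err := store.Iterate(prefix, nil, func(it Iterator) error {
		var item ListItem
		for len(keys) < limit && it.Next(&item) {
			keys = append(keys, append(Key(nil), item.Key...))
		}
		return nil
	})
	return keys, err
}
```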
* add CopyObject method
* use utils.LogClose
* extract common code from GetObject to getObject helper
* remove rr.Range from getObject helper, create helper putObject
* return rr, err in getObject helper
* extract code from PutObject into putObject helper
* remove commented out text
* remove other commented out code
* WIP trying to get storj cp command to work with copyObject
* fix typo in rb and now it works
* use rr.Size() instead of srcInfo.Size
* Revert "WIP trying to get storj cp command to work with copyObject"
This reverts commit e256b9f9a0fda728d41eb5b9d7a98b5446825842.
* add CopyObject test
* rebase and fix merge conflicts
* check error in gateway-storj test
* fix typo
* wip ca/ident cmds
* minor improvements and commenting
* combine id and ca commands and add $CONFDIR
* add `NewIdentity` test
* refactor `NewCA` benchmarks
* linter fixes
* Added initial functions for signing and verifying
* whoops
* Get client up to speed
* wip
* wip
* actual signatures in tests
(cherry picked from commit 1464853b737f1d712d64fbf90147f535525c8fd9)
* bugfixing
* Generate private key in example
* Generate signatures for pieceranger tests
* Update examples to use TLS
* Use private key from identity inside of example
* Use crypto.PrivateKey interface
* Change err name in defers
* Pass tests
* Pass identity Key to PSClient
* Get tests passing on travis
* Resolve linter complaints
* Optimize DecodeReader performance
* A little bit better locking in PieceBuffer.Write
* Fix race issues
* Better fix for race condition in rs_test.go
* Improve PieceBuffer.Read to read the max available in one call
* PieceBuffer.Skip for more efficient discarding of old shares
* Rename bytesRead to nn
* Notify cvNewData only if a complete new share is available
* Small correction in PieceBuffer.Read
* Rename some fields to have longer names
* begin adding tls
* remove incomplete line in gw/main.go
* identity fixes:
+ fix `peertls.NewCert` public key issue
+ fix `peertls.verifyChain` issue
+ fix identity dial option
+ rename `GenerateCA` to `NewCA` and `generateCAWorker` to `newCAWorker` for better consistency/convention
* use pdbclient instead of pointerdb in miniogw
* fix tests
* go fmt
* make review changes
* modify how context.Background() is used
* more context stuff
* first stab at PUT
* only PUT
* working on PUT
* Put with LimitReader
* start of Get
* reorder of files and proto meta
* working on Meta
* working on Meta
* add aware limit reader
* add size from segment put
* rm if for eof
* update to proto meta
* update gen proto file
* working on get
* working on get
* working on get
* working on list
* working on delete
* working on list
* working on meta method
* fix merge error and working on feedback from PR
* update to proto file
* rm size tuple
* mv eof limit reader to new file
* add toMeta
* rm variable names
* add updates from PR feedback
* updates from PR feedback
* updates from PR feedback
* add toMeta size based on total size
* update toMeta size calculation
* rm passthrough
* add default to config for segment size
* fix get method ranger bug
* add object support for nested stream proto
* rm nested stream meta data
* rm test for another PR
* Don't use url.Parse for bolt paths: filepaths may not be valid URLs (see the sketch after this list).
* go.mod: update dependencies
* README.md: add Windows instructions
* pkg/overlay: check for the correct path and text in error
* pkg/overlay: fix tests for windows
* pkg/piecestore: make windows tests pass
* pkg/telemetry: skip test, as it doesn't shutdown nicely
* storage/redis: ensure that redis is clean before running tests
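The sketch referenced in the first item of this list: a quick demonstration of why url.Parse is the wrong tool for bolt file paths. On Windows the drive letter is silently treated as a URL scheme:
```
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// A Windows filepath is not a valid URL: the drive letter parses as a scheme.
	u, err := url.Parse(`C:\storj\pointerdb.db`)
	fmt.Println(err)                // <nil> -- parsing "succeeds"...
	fmt.Println(u.Scheme, u.Opaque) // ...but as scheme "c" with an opaque path
}
```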
* pointerdb: separate client and server packages
the reason for this is so that things the server needs (bolt, auth)
don't get imported if all you need is the client. this will result in
smaller binaries and fewer flag definitions
* review comments
Fixes go1.11 vet warnings.
Cancel on WithTimeout must always be called to avoid memory leak:
pkg/provider/provider.go:73: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
Range over non-copyable things:
pkg/pool/connection_pool_test.go:32: range var v copies lock: struct{pool pool.ConnectionPool; key string; expected pool.TestFoo; expectedError error} contains pool.ConnectionPool contains sync.RWMutex
pkg/pool/connection_pool_test.go:56: range var v copies lock: struct{pool pool.ConnectionPool; key string; value pool.TestFoo; expected pool.TestFoo; expectedError error} contains pool.ConnectionPool contains sync.RWMutex
pkg/pool/connection_pool_test.go:83: range var v copies lock: struct{pool pool.ConnectionPool; key string; value pool.TestFoo; expected interface{}; expectedError error} contains pool.ConnectionPool contains sync.RWMutex
zeebo/errs package always requires formatting directives:
pkg/peertls/peertls.go:50: Class.New call has arguments but no formatting directives
pkg/peertls/utils.go:47: Class.New call has arguments but no formatting directives
pkg/peertls/utils.go:87: Class.New call has arguments but no formatting directives
pkg/overlay/cache.go:94: Class.New call has arguments but no formatting directives
pkg/provider/certificate_authority.go:98: New call has arguments but no formatting directives
pkg/provider/identity.go:96: New call has arguments but no formatting directives
pkg/provider/utils.go:124: New call needs 1 arg but has 2 args
pkg/provider/utils.go:136: New call needs 1 arg but has 2 args
storage/redis/client.go:44: Class.New call has arguments but no formatting directives
storage/redis/client.go:64: Class.New call has arguments but no formatting directives
storage/redis/client.go:75: Class.New call has arguments but no formatting directives
storage/redis/client.go:80: Class.New call has arguments but no formatting directives
storage/redis/client.go:92: Class.New call has arguments but no formatting directives
storage/redis/client.go:96: Class.New call has arguments but no formatting directives
storage/redis/client.go:102: Class.New call has arguments but no formatting directives
storage/redis/client.go:126: Class.New call has arguments but no formatting directives
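The fixes for the first two classes of warnings look roughly like this (simplified examples, not the actual Storj code):
```
package vetsketch

import (
	"context"
	"sync"
	"time"
)

// Always call (usually defer) the cancel returned by context.WithTimeout,
// even on the happy path, so the context's resources are released.
func dialWithTimeout(ctx context.Context, dial func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel() // previously discarded, which vet flags as a context leak
	return dial(ctx)
}

type testCase struct {
	mu       sync.RWMutex
	expected string
}

// Ranging by value copies the embedded lock; iterate by index (or pointer)
// instead so the mutex is never copied.
func runCases(cases []testCase, run func(*testCase)) {
	for i := range cases {
		run(&cases[i])
	}
}
```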
* implements connection success and fail on kad routing table
* modifications from code review
* todo
* test fixes
* passes in node rather than id
* removes rpath
* test fix
* move mock overlay from client to server
this doesn't really change much, but it does allow you to
run a standalone gateway against captain planet. it still does
not allow you to run a standalone gateway against a standalone
heavy client
* pointerdb: small error fixes
* some cleanups
* fix tests
* captplanet standalone farmer setup
* Bandwidth Allocation
* utils.Close method changed to utils.LogClose
* Get build temporarily working
* Get/Put for PSClient should take payer bandwidth allocations rather than the NewPSClient function
* Update example client to reflect changes in client API
* Update ecclient to use latest PSClient, Make NewPSClient return error also
* Updated pieceranger tests to check for errors; sign method should take byte array
* Handle defers in store.go better
* Fix defer functions in psdb.go
* fun times
* Protobuf bandwidthallocation data is now a byte array
* Remove psservice package and merge it into pstore server
* Write wrapper for database calls
* Change all expiration names in protobuf to be more informative; add defer in retrieve; remove old comment
* Make PSDB tests implementation independent rather than method independent
* get rid of payer, renter in ecclient
* add context monitoring in store and retrieve
* adds foundation for bucketStore
* adds prefixedObjStore to buckets package, adjusts gateway-storj accordingly
* fixes multi value assignment problems in gateway-storj
* fixes more multi value assignment errors in gateway-storj
* starts changing miniogw tests to accommodate buckets
* creates bucket store mock
* wip - fixing test cases in object tests
* adds get, put, and list object tests, comments out two test cases
* adds happy scenario tests for bucket methods
* fixes bug in list, removes redundant parts from gateway tests
* fixes nit
* Clean up tests from #188
* Fix bug with timestamp conversion in segment store
* fixes segments.Meta test
* Fix regression in listing objects in a bucket
* adds check to see if bucket is empty before deleting
* updates DeleteBucket test to account for empty/full bucket
* adds TODOs for DeleteBucket and MakeBucket for some cases, adjusts tests, filters out minio errors in logging.go
* adds checks for if buckets already exist or not in DeleteBucket and MakeBucket functions; adjusts tests
* adds BucketNotFound error check in bucket store, removes todo
* adds make_bucket to Travis test, updates boltdb client constructor to always create a bucket (table)
* adds comment
* runs deps
* adds print statements for debugging add node bkad
* more print statements
* removes bkad from routing and integrates on disk routing table
tests failing :(
wip
* removes testbootstrap
* kademlia_test not working
* adds kad tests back in
* Adds skips for tests broken due to wip kademlia
* starts adding segmentStore tests
* adds mocked interfaces for segmentStore tests
* adds tests for put, get, delete, and list
* regenerates pointerdb mock and updates calls to accommodate new changes
This is a naive implementation of the overlay worker.
Future improvements / to dos:
- Walk through the cache and remove nodes that don't respond
- Better look ups for new nodes
- Better random ID generation
- Kademlia hooks for automatically adding new nodes to the cache
* adding refresh cache functionality, added schedule function
* update put in db
* Tests passing
* wip overlay tests for cache refresh
* update scheduler code
* update refresh function
* WIP adding random lookups to refresh worker
* remove quit channel
* updates fire on schedule and the refresh function finds near nodes
* updates to refresh function, getting more buckets and nodes
* updates to refresh function and cache operations
* add cancellation to context, fix k number of nodes in lookups
* Unit test coverage increased for kademlia pkg
go style formatting added
Removed DHT param from newTestKademlia method, added comments for Bucket methods noting that these tests will need to be updated
unnecessary comment deleted from newTestKademlia
Adjust Segment Store to the updated interface (#160)
* Adjust Segment Store to the updated interface
* Move /pkg/storage/segment to /pkg/storage/segments
* Fix overlay client tests
* Revert changes in NewOverlayClient return value
* Rename `rem` to `seg`
* Implement Meta()
captplanet (#159)
* captplanet
I kind of went overboard this weekend.
The major goal of this changeset is to provide an environment
for local development where all of the various services can
be easily run together. Developing on Storj v3 should be as
easy as running a setup command and a run command!
To do this, this changeset introduces a new tool called
captplanet, which combines the powers of the Overlay Cache,
the PointerDB, the PieceStore, Kademlia, the Minio Gateway,
etc.
Running 40 farmers and a heavy client inside the same process
forced a rethinking of the "services" that we had. To
avoid confusion by reusing prior terms, this changeset
introduces two new types: Providers and Responsibilities.
I wanted to avoid as many merge conflicts as possible, so
I left the existing Services and code for now, but if people
like this route we can clean up the duplication.
A Responsibility is a collection of gRPC methods and
corresponding state. The following systems are examples of
Responsibilities:
* Kademlia
* OverlayCache
* PointerDB
* StatDB
* PieceStore
* etc.
A Provider is a collection of Responsibilities that
share an Identity, such as:
* The heavy client
* The farmer
* The gateway
An Identity is a public/private key pair, a node id, etc.
Farmers all need different Identities, so captplanet
needs to support running multiple concurrent Providers
with different Identities.
Each Responsibility and Provider should allow for configuration
of multiple copies on its own so creating Responsibilities and
Providers use a new workflow.
To make a Responsibility, one should create a "config"
struct, such as:
```
type Config struct {
	RepairThreshold  int `help:"If redundancy falls below this number of pieces, repair is triggered" default:"30"`
	SuccessThreshold int `help:"If redundancy is above this number then no additional uploads are needed" default:"40"`
}
```
To use "config" structs, this changeset introduces another
new library called 'cfgstruct', which allows for the configuration
of arbitrary structs through flagsets, and thus through cobra and
viper.
cfgstruct relies on Go's "struct tags" feature to document
help information and default values. Config structs can be
configured via cfgstruct.Bind for binding the struct to a flagset.
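As a rough illustration (not the actual cfgstruct code) of how a struct-tag-driven binder can work, one can reflect over the fields and register flags from the help/default tags:
```
package cfgsketch

import (
	"flag"
	"reflect"
	"strconv"
	"strings"
)

// Bind registers a flag for every int field of cfg (a pointer to a struct),
// using the `default` and `help` struct tags. Only int fields are handled
// here to keep the sketch short; the real cfgstruct supports many types.
func Bind(flags *flag.FlagSet, cfg interface{}) {
	v := reflect.ValueOf(cfg).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		field := t.Field(i)
		if field.Type.Kind() != reflect.Int {
			continue
		}
		def, _ := strconv.Atoi(field.Tag.Get("default"))
		help := field.Tag.Get("help")
		name := strings.ToLower(field.Name)
		flags.IntVar(v.Field(i).Addr().Interface().(*int), name, def, help)
	}
}
```
Binding the Config struct above through something like this would register repairthreshold and successthreshold flags with their documented defaults.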
Because this configuration system makes setup and configuration
easier *in general*, additional commands are provided that allow
for easy standup of separate Providers. Please make sure to
check out:
* cmd/captplanet/farmer/main.go (a new farmer binary)
* cmd/captplanet/hc/main.go (a new heavy client binary)
* cmd/captplanet/gw/main.go (a new minio gateway binary)
Usage:
```
$ go install -v storj.io/storj/cmd/captplanet
$ captplanet setup
$ captplanet run
```
Configuration is placed by default in `~/.storj/capt/`
Other changes:
* introduces new config structs for currently existing
Responsibilities that conform to the new Responsibility
interface. Please see the `pkg/*/config.go` files for
examples.
* integrates the PointerDB API key with other global
configuration via flags, instead of through environment
variables through viper like it's been doing. (ultimately
this should also change to use the PointerDB config
struct but this is an okay shortterm solution).
* changes the Overlay cache to use a URL for database
configuration instead of separate redis and bolt config
settings.
* stubs out some peer identity skeleton code (but not the
meat).
* Fixes the SegmentStore to use the overlay client and
pointerdb clients instead of gRPC client code directly
* Leaves a very clear spot where we need to tie the object to
stream to segment store together. There's sort of a "golden
spike" opportunity to connect all the train tracks together
at the bottom of pkg/miniogw/config.go, labeled with a
bunch of TODOs.
Future stuff:
* I now prefer this design over the original
pkg/process.Service thing I had been pushing before (sorry!)
* The experience of trying to have multiple farmers
configurable concurrently led me to prefer config structs
over global flags (I finally came around) or using viper
directly. I think global flags are okay sometimes but in
general going forward we should try and get all relevant
config into config structs.
* If you all like this direction, I think we can go delete my
old Service interfaces and a bunch of flags and clean up a
bunch of stuff.
* If you don't like this direction, it's no sweat at all, and
despite how much code there is here I'm not very tied to any
of this! Considering a lot of this was written between midnight
and 6 am, it might not be any good!
* bind tests
Add files for testing builds in docker (#161)
* Add files for testing builds in docker
* Make tests check for redis running before trying to start redis-server, which may not exist.
* Clean redis server before any tests use it.
* Add more debugging for travis
* Explicitly requiring redis for travis
pkg/provider: with pkg/provider merged, make a single heavy client binary, gateway binary, and deprecate old services (#165)
* pkg/provider: with pkg/provider merged, make a single heavy client binary and deprecate old services
* add setup to gw binary too
* captplanet: output what addresses everything is listening on
* revert peertls/io_util changes
* define config flag across all commands
* use trimsuffix
fix docker makefile (#170)
* fix makefile
protos: update protobufs with go generate (#169)
the import for timestamp and duration should use
the path provided by a standard protocol buffer library
installation
Refactor List in PointerDB (#163)
* Refactor List in Pointer DB
* Fix pointerdb-client example
* Fix issue in Path type related to empty paths
* Test for the PointerDB service with some fixes
* Fixed debug message in example: trancated --> more
* GoDoc comments for unexported methods
* TODO comment to check if Put is overwriting
* Log warning if protobuf timestamp cannot be converted
* TODO comment to make ListPageLimit configurable
* Rename 'segment' package to 'segments' to reflect folder name
Minio integration with Object store (#156)
* initial WIP integration with Object store
* List WIP
* minio listobject function changes complete
* Code review changes and work in progress for the mock objectstore unit testing cases
* Warning fix redeclaration of err
* code review comments & unit testing inprogress
* fix compilation bug
* Fixed code review comments & added GetObject Mock test case
* rearranged the mock test file and gateway storj test file into the proper directory
* added the missing file
* code clean up
* fix lint error on the mock generated code
* modified per code review comments
* added the PutObject mock test case
* added the GetObjectInfo mock test case
* added listobject mock test case
* fixed package from storj to miniogw
* resolved the gateway-storj.go initialization merge conflict
update readme (#174)
added assertion for unused errors (#152)
merging this PR to avoid future issues
updating github user to personal account (#171)
Test coverage ranger (#168)
* Fixed go panic for corner case
* Initial test coverage for ranger pkg
streamstore: add passthrough implementation (#176)
this doesn't implement streamstore, this just allows us to try and
get the june demo working again in the meantime
StatDB (#144)
* add statdb proto and example client
* server logic
* update readme
* remove boltdb from service.go
* sqlite3
* add statdb server executable file
* create statdb node table if it does not exist already
* get UpdateBatch working
* update based on jt review
* remove some commented lines
* fix linting issues
* reformat
* apiKey -> APIKey
* update statdb client apiKey->APIKey
Update README.md
Update README.md
overlay: correct dockerfile db (#179)
cmd/hc, cmd/gw, cmd/captplanet: simplify setup/run commands (#178)
also allows much more customization of services within captain planet,
such as reconfiguring the overlay service to use redis
pkg/process: don't require json formatting (#177)
Cleanup metadata across layers (#180)
* Cleanup metadata across layers
* Fix pointer db tests
Kademlia Routing Table (#164)
* adds comment
* runs deps
* creates boltdb kademlia routing table
* protobuf updates
* adds reverselist to mockkeyvaluestore interface
* xor wip
* xor wip
* fixes xor sort
* runs go fmt
* fixes
* goimports again
* trying to fix travis tests
* fixes mock tests
Ranger refactoring (#158)
* Fixed go panic for corner case
* Cosmetic changes, and small error fixes
miniogw: log all errors (#182)
* miniogw: log all errors
* tests added
* doc comment to satisfy linter
* fix test failure
Jennifer added to CLA list
* Temporary fix for storage/redis list method test
* begins adding inline segment support for segmentstore
* adds PeekThresholdReader struct plus Read and isInline methods
* moves PeekThresholdReader to peek.go, adds more simplified Read function
* adds PeekThresholdReader tests
* reverts Read function to earlier version, updates tests to use ReadFull instead
* Get function now handles inline type pointers
* adds correct type Size and ExpirationDate to inline segment
* fixes return value in Put func error condition
* moves thresholdBuf and Read tests into a table test
* adds border case test, fixes redundant parts
* passes sizedReader size to makeRemotePointer
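A simplified sketch of the PeekThresholdReader idea from this list: peek up to the inline threshold, and if the source ends first the content is small enough to store inline. The real type's API differs; this is only the core mechanism:
```
package segmentsketch

import (
	"bytes"
	"io"
)

// PeekThresholdReader reads ahead up to thresholdSize bytes to decide whether
// the content can be stored inline, then replays everything on Read.
type PeekThresholdReader struct {
	io.Reader
	inline bool
}

func NewPeekThresholdReader(source io.Reader, thresholdSize int) (*PeekThresholdReader, error) {
	buf := make([]byte, thresholdSize)
	n, err := io.ReadFull(source, buf)
	switch err {
	case io.EOF, io.ErrUnexpectedEOF:
		// Everything fit under the threshold: the content can be stored inline.
		return &PeekThresholdReader{Reader: bytes.NewReader(buf[:n]), inline: true}, nil
	case nil:
		// More data follows: replay the peeked bytes, then the rest of the source.
		return &PeekThresholdReader{Reader: io.MultiReader(bytes.NewReader(buf[:n]), source)}, nil
	default:
		return nil, err
	}
}

// IsInline reports whether the full content fit under the threshold.
func (p *PeekThresholdReader) IsInline() bool { return p.inline }
```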
* unit tests for service.go in overlay package added
* comments removed
* unit tests for overlay package updated
* updated code for starting redis server
* working on put request for nsclient
* netstate put
* wip testing client
* wip - testing client
and working through some errors
* put request works
* put request works for client
* get request working
* get request working-minor edit
* list request works
* working through delete error
* fixed exp client, still working through delete error
* delete works; fixed formatting issues
* deleted comment
* resolving merge conflicts
* resolving merge conflict
* fixing merge conflict
* implemented and modified Kaloyan's paths file
* working on testing
* added test for path_test.go
* fixed string, read through netstate test
* deleted env variables
* initial commit for mocking out grpc client- got it working
* mocked grpc client
* mock put passed test
* 2 tests pass for PUT with mock
* put requests test pass, wip- want mini review
* get tests pass mock
* list test working
* initial commit for list test
* all list req. working, starting on delete tests
* delete tests passed
* cleaned up tests
* resolved merge conflicts
* fixed linter errors
* fixed error found in travis
* initial commit for fixes from PR comments
* fixed pr comments and linting
* added error handling for api creds, and rebased
* fixes from dennis comments
* fixed PR with Dennis's suggestion
* added copyrights to files
* fixed casing per Dennis's great comment
* fixed travis complaint on sprintf
* Add dockerfile and yaml for setting up piecestore servers
* Fix dockerfile for @aleitner (#115)
* Fix dockerfile for @aleitner
* Move files for @coyle
* Update yaml
* My linter had some errors so I resolved them
* Make jenkins do the needful
* Make piecestore-farmer look like overlay's build process
* Fix service spec to work in staging
* Make Jenkins push images, but not deploy them, yet.
* Modify entrypoint to fit new verbs
* Update piecestore-farmer entrypoint script to handle new app output
* Updates to config handling
- Add functions to load in configs
- Standardize location and naming of config files
- Configuration over convention style of config file handling for each
service
* update config handling and correctly handle cli flags being set
* generate configs from default if no config is found
- renamed pointerdbDB to pointerdb for clarity in config file
- set sane default for pkg/overlay boltDB file
- set srvPort to default to 8082 to avoid port collision on default
setting
* linter updates
* move boltdb path vars into function
* update tests to handle config environment changes
* --fix exec test mocks
* update tests to use viper instead of flag library
* fix typo
* add redis-server to services in travis for tests
* update examples with new config env function signature
* fix tests
* WIP ObjectStore
* Remove methods for extended attributes
* List returns metadata too
* No real need to prepend "object" in path
* Serialize metadata
* List returns []ListItem instead of []Path
* lays out SegmentStore functions to implement
* Merge branch 'master' into segment-store
* adds overlay calls to put
* allows SegmentStore Put to upload a file to ecclient, then save pointer to pointerdb
* Merge branch 'master' into segment-store
* removes new overlay client instance in Put
* fixes syntax
* fixes syntax again
* fixes imports
* fixes typo
* removes pointerdb client from segmentStore struct for now
* changes SegmentStore to segmentStore
* changing types in parameters to fit other function calls
* takes RedundancyStrategy out of Put params
* changes NewClient param back to take an interface (not pointer to interface)
* fixes types
* moves pointer into PutRequest in SegmentStore Put
* passes interface, not pointer to interface to NewSegmentStore
* fixes some types
* Get returns an instance of Meta
* fixes PutRequest fields
* adds remotePieces slice to pointerdb PutRequest
* ecClient Put now takes *proto.Nodes instead of proto.Nodes
* fixes syntax
* changes ec client dial interface to use *proto.Node
* changes other instances of proto.Node to *proto.Node in ecclient pkg
* adds *proto.Node to Get and Delete functions in ecclient pkg
* changes proto.Node to pointer in ec client_test
* adds ecclient and pointerdb client to the segmentstore constructor
* adds ecclient and pointerDBClient to segmentStore constructor
* port changes
* Task monitor and setup merge from the staging
* Restructure + additional interface
* Add NewOverlayClient
* integrated DHT client interface
* added test for interface
* PR comments addressed
* lint issue
* added generated protobuf
* adding new interface
* added the interface framework
* deleted file
* fixes compilation errors and integrates new dht client interface
* merged latest netstate changes and new dht interface changes
* fixed the address's port
* adding comments
* PR comments addressed
* netclient interface dial method added
* rename and integrated transportclient with minio gateway
* rename and code clean up
* made changes based on the Dennis's changes on the kad-client
* Code review comment changes based on kaloyan review comments
* reverted the changes to be similar to master
* removed unused file
* renamed to transportclient
* added the review changes
* store the address of the client
* updates per the code review comments: added error retry connection attempt logic, added error conditions including nil parameters
* updated the test case to test the bad address passed condition
* updated the code per code review comments
* Bolt backed overlay cache (#94)
* wip
* add separate `Process` tests for bolt and redis-backed overlay
* more testing
* fix gitignore
* fix linter error
* goimports goimports GOIMPORTS GoImPortS!!!!
* fix port madness
* forgot to add
* add `mux` as handler and shorten context timeouts
* gofreakingimports
* fix comments
* refactor test & add logger/monkit registry
* debugging travis
* add comment
* Set redisAddress to empty string for bolt-test
* travis experiment
* refactoring tests
* Merge remote-tracking branch 'upstream/master' into bolt-backed-overlay-cache
* Automatically build, tag and push docker images on merge to master (#103)
* port changes
* build overlay on successful merge to master
* fixes to Makefile
* permissions
* dep ensure
* gopath
* let's try vgo
* remove dep
* maybe alpine is the issue
* tagging go version on build
* stupid vgo
* vgo
* adding tags to push
* quotes
* local linting fixes & stupid travis
* prepend storjlabs to docker tag (#108)
* port changes
* fixing tag name
* Use continue instead of return in table tests (#106)
I made a dumb mistake in some of the table tests, which caused some of the
test cases not to be executed.
* pkg/kademlia tests and restructuring (#97)
* port changes
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* files created
* Merge remote-tracking branch 'upstream/master' into coyle/kad-tests
* wip
* Merge remote-tracking branch 'upstream/master' into coyle/kad-tests
* wip
* remove bkad dependencie from tests
* wip
* wip
* wip
* wip
* wip
* updated coyle/kademlia
* wip
* cleanup
* ports
* overlay upgraded
* linter fixes
* piecestore kademlia newID
* add changes from kad demo
* PR comments addressed
* go func
* force travis build
* fixed merge conflicts
* fixed merge conflicts
* Merge branch 'coyle/kad-tests' of https://github.com/coyle/storj into coyle/kad-tests
* linter issues
* linting issues
* fixed merge conflicts
* linter is stupid
* Coyle/docker fix (#109)
* port changes
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge branch 'master' of https://github.com/storj/storj
* fixing tag name
* no idea
* testing
* changes
* testing on travis
* testing
* changes to travis build
* new approach
* Merge branch 'master' into coyle/docker-fix
* hardcode version (#111)
* hardcode version
* adding coveralls / code coverage (#112)
* adding coveralls
* adding code coverage badge
* fixing badges
* verbose
* swap tests and coverage
* extra line
* maybe
* maybe
* moar
* gover maybe
* testing
* cleanup
* protos/netstate: remove stuff we're not using (#100)
* protos/netstate: remove stuff we're not using
* protos/netstate: add metadata field for segmentstore
* fix netstate client test
* pkg/process: start replacing pkg/process with cobra helpers (#98)
* Implement psclient interface (#107)
* Implement psclient interface
* Add string method to pieceID type
* try to fix linter errors
* Whoops missed an error
* More linter errors
* Typo
* Lol double typo
* Get everything working, begin adding tests for psclient rpc
* goimports
* Forgot to change the piecestore cli when changed the piecestore code
* Fix CLI
* remove ID length, added validator to pieceID
* Move grpc ranger to client
Change client PUT api to take a reader rather than return a writer
* GRPCRanger -> PieceRanger; Make PieceRanger a RangeCloser
* Forgot to remove offset
* Added message upon successful store
* Do that thing dennis and kaloyan wanted
* goimports
* Make closeConn a part of the interface for psclient
* Use interface
* Removed unnecessary new lines
* goimport
* Whoops
* Actually we don't want to use the interface in Piece Ranger
* Renamed piecestore in examples to piecestore-client; moved piecestore-cli to examples
* Make comments look nicer
* modified transport client based on the design discussion
* added the connection cache interface functionality, as discussed
* transport client changes
* transport client per code review changes
* per the code review comments
* transport client incorporates review comments
* fixes lint warnings
* lint warning fixes....client interface has to be Client
* initial draft of Objectstore
* transport client review changes
* client.go changes
* transport.go changes
* added test case
* added test cases
* streams iface
* comment fix
* object store changes
* comment fix
* initialized the objectstore in gw
* Added PutObject with test support for encryption file
* added object store test cases
* tested & integrated the objectstore with miniogw
* handled the ranger and paths
* indentation change
* Kaloyan's code review comments resolved after the 30-minute code review meeting
* Compilation error fix
* fixes the travis build warnings
* corrects the ListObject return type to be a slice of slices
* added the serialization using protobuf
* added the unmarshalling of data in getobject()
* Jt's Review comments
* Kaloyan's review comments, moved the unmarshalling logic and other minor code indentation fixes
* more code review
* Changes the expiration time to zeroTime and added error check in putObject function
* minor warning fix- had to add a comment and fix the wording
* added a TODO comment per kaloyan
* code clean up removed unused variables
* WIP creating admin node service
- WIP changing the process pkg to accept multiple services
- WIP looping over services passed to process
- add netstate/service.go file and abstract it for service processing
* implement goroutine to launch each process (see the sketch after this list)
* goroutines working with multiple services
* code review fixes
* more code updates for review
* Add pkg lock and mod files back in
* code review updates
* update process.Main with better concurrent error handling
* Update error handling and pass ctx to StartService
* Update error handling with channel implementation
* Merge in upstream changes
- Simplify error handling channels
* updates
* Updates per reviewable
* fix test
* Setup test exec
* Scaffold test setup
* process main test working
* update admin process test
* Test multiple processes done
* Add error classes for testing, test main logger error
* Updates to tests
* Update how process.Main() handles configs
* Complete merge
* Update Gopkg and add Copyright
* Fix cyclical import issue
- Added .coverprofile to gitignore
- Update admin main.go function call
* remove unnecessary line
* Updates
* DRY up cmd/netstate package
* update service function calls
* updates
* Trying no-ops in examples
* rename netstate to pointerdb
* trying to fix merge
* dep ensure and run tests
* remove flag.Parse
* Update deps
* Skip offending test in pkg/process, to be fixed later
* adds netstate rpc server pagination, mocks pagination in test/util.go
* updates ns client example, combines ns client and server test to netstate_test, adds pagination to bolt client
* better organizes netstate test calls
* wip breaking netstate test into smaller tests
* wip modularizing netstate tests
* adds some test panics
* wip netstate test attempts
* testing bug in netstate TestDeleteAuth
* wip fixes global variable problem, still issues with list
* wip fixes get request params and args
* fixes bug in path when using MakePointers helper fn
* updates mockdb list func, adds test, changes Limit to int
* fixes merge conflicts
* fixes broken tests from merge
* remove unnecessary PointerEntry struct
* removes error when Get returns nil value from boltdb
* breaks boltdb client tests into smaller tests
* renames AssertNoErr test helper to HandleErr
* adds StartingKey and Limit parameters to redis list func, adds beginning of redis tests
* adds helper func for mockdb List function
* if no starting key is provided for netstate List, the first value in storage is used (see the pagination sketch after this list)
* adds basic pagination for redis List function, adds tests
* adds list limit to call in overlay/server.go
* streamlines/fixes some nits from review
* removes use of obsolete EncryptedUnencryptedSize
* uses MockKeyValueStore instead of redis instance in redis client test
* changes test to expect nil returned for getting missing key
* remove error from `KeyValueStore#Get`
* fix bolt test
* Merge pull request #1 from bryanchriswhite/nat-pagination
remove error from `KeyValueStore#Get`
* adds Get returning error back to KeyValueStore interface and affected clients
* trying to appease travis: returns errors in Get calls in overlay/cache and cache_test
* handles redis get error when no key found
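The pagination commits above describe a List that takes an optional starting key and a limit, defaulting to the first stored key when no starting key is given. A minimal sketch of that shape over a hypothetical in-memory store (the real bolt and redis clients differ in detail):

```go
package main

import (
	"fmt"
	"sort"
)

// list returns up to limit keys in order, starting at startingKey; with an
// empty startingKey it begins at the first stored key (hypothetical helper,
// for illustration only).
func list(store map[string]string, startingKey string, limit int) []string {
	keys := make([]string, 0, len(store))
	for k := range store {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	var out []string
	for _, k := range keys {
		if startingKey != "" && k < startingKey {
			continue // skip keys before the requested starting point
		}
		if len(out) == limit {
			break
		}
		out = append(out, k)
	}
	return out
}

func main() {
	store := map[string]string{"a/1": "x", "a/2": "y", "b/1": "z"}
	fmt.Println(list(store, "", 2))    // [a/1 a/2]
	fmt.Println(list(store, "a/2", 2)) // [a/2 b/1]
}
```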
* port changes
* Task monitor and setup merge from the staging
* Restructure + additional interface
* Add NewOverlayClient
* integrated DHT client interface
* added test for interface
* PR comments addressed
* lint issue
* added generated protobuf
* adding new interface
* added the interface framework
* deleted file
* fixes compilation errors and integrates new dhtclient interface
* merged the latest netstate changes and the new dht interface changes
* fixed the address's port
* adding comments
* PR comments addressed
* netclient interface dial method added
* rename and integrated transportclient with minio gateway
* rename and code clean up
* made changes based on Dennis's changes to the kad-client
* Code review comment changes based on kaloyan review comments
* reverted the changes to be similar to master
* removed unused file
* renamed to transportclient
* added the review changes
* store the address of the client
* updates per the code review comments: added connection retry logic on error and added error conditions, including nil parameters
* updated the test case to test the bad address passed condition
* updated the code per code review comments
* Bolt backed overlay cache (#94)
* wip
* add separate `Process` tests for bolt and redis-backed overlay
* more testing
* fix gitignore
* fix linter error
* goimports goimports GOIMPORTS GoImPortS!!!!
* fix port madness
* forgot to add
* add `mux` as handler and shorten context timeouts
* gofreakingimports
* fix comments
* refactor test & add logger/monkit registry
* debugging travis
* add comment
* Set redisAddress to empty string for bolt-test
* travis experiment
* refactoring tests
* Merge remote-tracking branch 'upstream/master' into bolt-backed-overlay-cache
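The Bolt-backed overlay cache above (#94) keeps cache entries in a boltdb bucket. A minimal sketch of that storage pattern with the boltdb API; the bucket name, key, and value below are illustrative, not the repository's actual schema:

```go
package main

import (
	"fmt"
	"log"

	"github.com/boltdb/bolt"
)

func main() {
	db, err := bolt.Open("overlay.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Write a cache entry into a bucket inside a read-write transaction.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("overlay"))
		if err != nil {
			return err
		}
		return b.Put([]byte("node-id"), []byte(`{"address":"127.0.0.1:7777"}`))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Read it back in a read-only transaction.
	err = db.View(func(tx *bolt.Tx) error {
		fmt.Printf("%s\n", tx.Bucket([]byte("overlay")).Get([]byte("node-id")))
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```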
* Automatically build, tag and push docker images on merge to master (#103)
* port changes
* build overlay on successful merge to master
* fixes to Makefile
* permissions
* dep ensure
* gopath
* let's try vgo
* remove dep
* maybe alpine is the issue
* tagging go version on build
* stupid vgo
* vgo
* adding tags to push
* quotes
* local linting fixes & stupid travis
* prepend storjlabs to docker tag (#108)
* port changes
* fixing tag name
* Use continue instead of return in table tests (#106)
I made a dumb mistake in some of the table tests, which caused some of the
test cases not to be executed.
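A minimal sketch of the fix in #106, using a hypothetical table rather than the affected tests: inside a table-test loop, `return` ends the whole test function, while `continue` only skips the current case.

```go
package example

import "testing"

func TestDouble(t *testing.T) {
	for _, tt := range []struct {
		name string
		in   int
		want int
		skip bool
	}{
		{"zero", 0, 0, true},
		{"two", 2, 4, false},
	} {
		if tt.skip {
			continue // with `return` here, later cases would never run
		}
		if got := tt.in * 2; got != tt.want {
			t.Errorf("%s: got %d, want %d", tt.name, got, tt.want)
		}
	}
}
```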
* pkg/kademlia tests and restructuring (#97)
* port changes
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* files created
* Merge remote-tracking branch 'upstream/master' into coyle/kad-tests
* wip
* Merge remote-tracking branch 'upstream/master' into coyle/kad-tests
* wip
* remove bkad dependencie from tests
* wip
* wip
* wip
* wip
* wip
* updated coyle/kademlia
* wip
* cleanup
* ports
* overlay upgraded
* linter fixes
* piecestore kademlia newID
* add changes from kad demo
* PR comments addresses
* go func
* force travis build
* fixed merge conflicts
* fixed merge conflicts
* Merge branch 'coyle/kad-tests' of https://github.com/coyle/storj into coyle/kad-tests
* linter issues
* linting issues
* fixed merge conflicts
* linter is stupid
* Coyle/docker fix (#109)
* port changes
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge branch 'master' of https://github.com/storj/storj
* fixing tag name
* no idea
* testing
* changes
* testing on travis
* testing
* changes to travis build
* new approach
* Merge branch 'master' into coyle/docker-fix
* hardcode version (#111)
* hardcode version
* adding coveralls / code coverage (#112)
* adding coveralls
* adding code coverage badge
* fixing badges
* verbose
* swap tests and coverage
* extra line
* maybe
* maybe
* moar
* gover maybe
* testing
* cleanup
* protos/netstate: remove stuff we're not using (#100)
* protos/netstate: remove stuff we're not using
* protos/netstate: add metadata field for segmentstore
* fix netstate client test
* pkg/process: start replacing pkg/process with cobra helpers (#98)
* Implement psclient interface (#107)
* Implement psclient interface
* Add string method to pieceID type
* try to fix linter errors
* Whoops missed an error
* More linter errors
* Typo
* Lol double typo
* Get everything working, begin adding tests for psclient rpc
* goimports
* Forgot to change the piecestore cli when changed the piecestore code
* Fix CLI
* remove ID length, added validator to pieceID
* Move grpc ranger to client
Change client PUT api to take a reader rather than return a writer
* GRPCRanger -> PieceRanger; Make PieceRanger a RangeCloser
* Forgot to remove offset
* Added message upon successful store
* Do that thing dennis and kaloyan wanted
* goimports
* Make closeConn a part of the interface for psclient
* Use interface
* Removed unnecessary newlines
* goimport
* Whoops
* Actually we don't want to use the interface in Piece Ranger
* Renamed piecestore in examples to piecestore-client; moved piecestore-cli to examples
* Make comments look nicer
* modified transport client based on the design discussion
* modified transport client based on the design discussion
* added the connection cache interface functionality, as discussed
* added the connection cache interface functionality, as discussed
* transport client changes
* transport client per code review changes
* per the code review comments
* transport client incorporates review comments
* fixes lint warnings
* lint warning fixes....client interface has to be Client
* client.go changes
* transport.go changes
* added test case
* added test cases
* comment fix
* comment fix
* Implement psclient interface
* Add string method to pieceID type
* try to fix linter errors
* Whoops missed an error
* More linter errors
* Typo
* Lol double typo
* Get everything working, begin adding tests for psclient rpc
* goimports
* Forgot to change the piecestore cli when changed the piecestore code
* Fix CLI
* remove ID length, added validator to pieceID
* Move grpc ranger to client
Change client PUT api to take a reader rather than return a writer
* GRPCRanger -> PieceRanger; Make PieceRanger a RangeCloser
* Forgot to remove offset
* Added message upon successful store
* Do that thing dennis and kaloyan wanted
* goimports
* Make closeConn a part of the interface for psclient
* Use interface
* Removed unnecessary newlines
* goimport
* Whoops
* Actually we don't want to use the interface in Piece Ranger
* Renamed piecestore in examples to piecestore-client; moved piecestore-cli to examples
* Make comments look nicer
* internal/test: switch errors to error classes
if you construct an error directly at package init time, you won't get
useful stack traces. zeebo/errs expects that you construct an error
(with .New) only when the error actually happens; instead, you call
Class at init time, then use (Class).Has to test for error type
membership (sketched below).
* fix linter
* fix test
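A minimal sketch of the error-class pattern described in the commit above, using github.com/zeebo/errs:

```go
package main

import (
	"fmt"

	"github.com/zeebo/errs"
)

// Declared once at init time; no error (and no stack trace) is captured here.
var TestError = errs.Class("test error")

func doWork(ok bool) error {
	if !ok {
		// Constructed when the failure actually happens, so the stack trace
		// points at this call site.
		return TestError.New("work failed")
	}
	return nil
}

func main() {
	err := doWork(false)
	fmt.Println(TestError.Has(err)) // true: membership check via (Class).Has
}
```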
* piecestore: connect to kad
* piecestore: Linter errors
* piecestore: added pstore config command line utility
* piecestore: removed main.go, implement methods and structs
* piecestore: Import Cam's config code into piecestore-farmer
* piecestore: moving farmer to urfavecli
* piecestore: added create command
* piecestore: Removed old config, added server start code to cli
* piecestore: Get server code working
* piecestore: Changed default dir for storing piece store data; added ttl to config
* piecestore: Generate id; add bootstrap ip for kad
* piecestore: Separate kad port and server port
* piecestore: goimports
* piecestore: Removed print
* piecestore: use pkg/process
* piecestore: Better config file
* piecestore: base58 encode for id
* piecestore: base58 encode and clean up cli
* piecestore: Typo
* piecestore: removed unnecessary variable
* piecestore: Fixed more typos
* piecestore: Place data in a directory based on nodeid
* piecestore: base58 encode instead
* piecestore: Add dependency to go.mod
* piecestore: Fix typo in rpc server start; clear data on failed piece upload
* add reference to dht to overlay client struct
* wip
* wip
* Implement FindNode
* get nodes
* WIP
* Merge in Dennis kademlia code, get it working with our code
* ping and moar
* WIP trying to get cache working with kademlia
* WIP more wiring up
* WIP
* Update service cli commands
* WIP
* added GetNodes
* added nodes to Kbucket
* default transport changed to TCP
* GetBuckets interface changed
* filling in more routing
* timestamp methods
* removed store
* Added initial network overlay explorer page
* Updating and building with dockerfile
* Working on adding bootstrap node code
* WIP merging in dennis' code
* WIP
* connects cache to pkg/kademlia implementation
* WIP redis cache
* testing
* Add bootstrap network function for CLI usage
* cleanup
* call bootstrap on init network
* Add BootstrapNetwork function to interface
* Merge in dennis kad code
* WIP updates to redis/overlay client interface
* WIP trying to get the DHT connected to the cache
* go mod & test
* deps
* Bootstrap node now setting up correctly
- Need to pass it through CLI commands better
* WIP adding refresh and walk functions, added cli flags
- added cli flags for custom bootstrap port and ip
* PR comments addressed
* adding FindStorageNodes to overlay cache
* fix GetBucket
* using SplitHostPort
* Use JoinHostPort
* updates to findstoragenodes response and request
* WIP merge in progress, having issues with a panic
* wip
* adjustments
* update port for dht bootstrap test
* Docker
* wip
* dockerfile
* fixes
* makefile changes
* Update port in NewKademlia call
* Update local kademlia DHT config
* kubernetes yaml
* cleanup
* making tests pass
* k8s yaml
* lint issues
* Edit cli flags to allow for configurable bootstrap IP and Port args
* cleanup
* cache walking the network now
* Rough prototype of Walk function laid out
* Move walk function into bootstrap function
* Update dht.go
* changes to yaml
* goimports
* wip
* wip
* get nodes
* ping and moar
* added GetNodes
* added nodes to Kbucket
* default transport changed to TCP
* GetBuckets interface changed
* filling in more routing
* timestamp methods
* removed store
* testing
* cleanup
* go mod & test
* deps
* PR comments addressed
* fix GetBucket
* using SplitHostPort
* Use JoinHostPort
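A small illustration of the stdlib helpers adopted in the commits above: net.SplitHostPort and net.JoinHostPort validate the input and handle IPv6 bracketing, unlike manual string splitting.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	host, port, err := net.SplitHostPort("127.0.0.1:7777")
	if err != nil {
		panic(err)
	}
	fmt.Println(host, port)                    // 127.0.0.1 7777
	fmt.Println(net.JoinHostPort("::1", port)) // [::1]:7777
}
```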
* added spawn scripts
* wip
* Make server not require length
* Added test for 0 length
* Added test for accepting length of 0
* linter errors
* Make store not take anything more than an id and maybe a ttl if you want
* initial commit for PUT request authorization
* inital auth for put request
* auth. request working for all req
* deleted library
* removed .db files
* work in progress for modifying test suite to accommodate credentials
* modified tests
* gofmt, fixed code based on suggestions; test passed
* gofmt again
* merged nat's update on pointers, passed tests, cleanup from git rebase
* fixed fmt
* fixed fmt on tests
* work in progress
* reduced code
* fixed naming conventions
* added line in code
* fixed server bug, merged new code to server, added env
* fixed linter; getting copyright issues on piecestore
* added comments for what passes on the creds
* added spawn scripts
* Determine random id for storing
* Moved determine id to rpc example
* Added tests
* Better test
* goimports
* Updated tests
* Fix typos
* Added piecestore
* gofmt
* Added requested changes
* Added cli
* Removed ranger because I wanted something that can stand alone
* Add example of http server using piece store
* Changed piecestore code to make it more optimal for error handling
* Merged with piecestore
* Added missing package
* Forgot io import
* gofmt
* gofmt
* Forgot io
* Make path by hash exported
* updated to simplify again whoops
* Updated server to work real good
* Forgot ampersand
* Updated to match FilePiece
* Merged in cam's delete code
* Remove unused io
* Added RPC code
* Give the download request a reader
* Removed http server stuff; changed receive stream to say io.reader
* Added expiration date to shardInfo
* gRPC Ranger
* Change all instances of Shard to Piece; change protobuf name; moved client instance to outside functions
* Adapt to latest changes in piece store rpc api
* added ttl info request
* Initialize grpcRanger type with named fields
* Move scripts to http server pr; added close method for Retrieve api
* added rpc server tests for getting piece meta data and retrieval routes
* Adapt to PieceStreamReader now being a ReadCloser
* Resolved linter errors, moved the rpc server to pkg, updated go.mod to use latest protobuf
* Imported cams test
* Bump gometalinter deadline
* Adapt to package name changes
* Remove Garbage
* Adapt to latest changes in piece store rpc api
* NewCustomRoute constructor to allow mocking the gRPC client
* Name struct values in constructor.
* Added piecestore
* gofmt
* Added requested changes
* Added cli
* Removed ranger because I wanted something that can stand alone
* Add example of http server using piece store
* Changed piecestore code to make it more optimal for error handling
* Merged with piecestore
* Added missing package
* Forgot io import
* gofmt
* gofmt
* Forgot io
* Make path by hash exported
* updated to simplify again whoops
* Updated server to work real good
* Forgot ampersand
* Updated to match FilePiece
* Merged in cam's delete code
* Remove unused io
* Added RPC code
* Give the download request a reader
* Removed http server stuff; changed receive stream to say io.reader
* Added expiration date to shardInfo
* Change all instances of Shard to Piece; change protobuf name; moved client instance to outside functions
* added ttl info request
* Move scripts to http server pr; added close method for Retrieve api
* added rpc server tests for getting piece meta data and retrieval routes
* Resolved linter errors, moved the rpc server to pkg, updated go.mod to use latest protobuf
* Imported cams test
* Bump gometalinter deadline
* WIP adding tests
* added tests for store and delete routes
* Add changes as requested by Kaloyan, also cleaned up some code
* Get the code actually working whoops
* More cleanup
* Separating database calls from api.go
* need to rename expiration
* Added some changes requested by JT
* Fix read size
* Fixed total amount to read
* added tests
* Simplify protobuf, add store tests, edited api to handle invalid stores properly, return errors instead of messages
* Moved rpc client and server to piece store
* Moved piecestore protobuf to the /protos folder
* Cleaned up messages
* Clean up linter errors
* Added missing sqlite import
* Add ability to do iterative reads and writes to pstore
* Incrementally read data
* Fix linter and import errors
* Solve linter Error
* Change return types
* begin test refactor
* refactored to implement only 1 db connection, moved SQLite row checking into separate function and removed defer on rows.Close(), fixed os.tempDir in rpc_test.go
* Cleaning up tests
* Added retrieve tests
* refactored delete tests
* Deleted old tests
* Updated cmd/piecestore to reflect changes to piecestore
* Refactored server tests and server/client store code
* gofmt
* WIP implementing TTL struct
* Read 4k at a time when Retrieving
* implemented ttl struct
* Accidentally removed fpiece dependency?
* Double resolve merge conflict
* Moved client to the examples since it is an example
* Change hash to id in protobuf. TODO: Change client and server to reflect these changes
* Update client and protobuf
* changed hash to id
* Handle eof properly in retrieve
* Id -> ID
* forgot to change import for client after moving
* Moved client and server main to examples
* Made changes requested by JT
* checkEntries is now private, created currentTime variable for checkEntries, added defer rows.Close()
* Print not fatal
* Handle overflow when reading from server
* added const IDLength
* removed types from comments
* Add reader/writer for download data from and uploading data to server
* goimports and comments
* fixed nits, casts, added OK const, DBCleanup now exits program on error
* Add stream reader and writer for server
* Fix errors
* i before e except after c lol
* customizable data dir
* Forgot to rename variable
* customizable data dir
* Handle closing of server stream properly
* linter
* pass on inserting the same data twice
* linter
* linter
* Do the easy things JT asked for
* Handle overflow from reads properly; handle custom db path
* Handle overflow for server stream read; TODO Combine server and client stream reads
* Allow for TTL of 0 to stay forever
* Make Client cleaner for user
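Several commits above deal with reading 4 KiB at a time during Retrieve and clamping the final read so it does not overflow the requested length. A minimal sketch of that idea with a hypothetical helper, not the piecestore server code:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// sendChunks copies up to length bytes from src to dst in 4096-byte chunks,
// clamping the last chunk so no more than the requested length is sent.
func sendChunks(dst io.Writer, src io.Reader, length int64) error {
	buf := make([]byte, 4096)
	for length > 0 {
		n := int64(len(buf))
		if length < n {
			n = length // clamp the final chunk to the remaining request
		}
		read, err := src.Read(buf[:n])
		if read > 0 {
			if _, werr := dst.Write(buf[:read]); werr != nil {
				return werr
			}
			length -= int64(read)
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	var out bytes.Buffer
	_ = sendChunks(&out, strings.NewReader("some piece data"), 9)
	fmt.Printf("%q\n", out.String()) // "some piec"
}
```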
* integrated eestream's serve-pieces functionality
* added ref to http request function
* created dummy bucket list
* Initialized the buckets with files with hardcoded sample data
* supports upload Object(s)
* uploads to corresponding folders
* code cleanup for review
* updated based on code review comments
* updates based on missed code review comments
* updated with review comments
* implemented review comments
* merged latest and tested
* added filepath.Join()
* updates based on the comments
* fixes the eestreamer parameter due to merge
* Moved filepiece into storj
* Fix linter errors
* Seek comment for linter
* gofmt/golinter accidentally removed import
* Fix small typos
* Use the weird iota. P cool dude ✌️
* Do things the cool way
* Changes requested by kaloyan
* didn't need test main
* Path encryption library
* Use base64 instead of hex encoding
* Prepend version number to encrypted path segments
* Remove redundant var alias
* Simplified returns
* wrap errors
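The path encryption commits above switch segment encoding from hex to base64 and prepend a version number to each encrypted segment. A rough illustration of those two choices, with a purely hypothetical "1:" version prefix and no claim about the real wire format:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeSegment renders an already-encrypted path segment with URL-safe
// base64 and a hypothetical "1:" version prefix (illustration only).
func encodeSegment(ciphertext []byte) string {
	return "1:" + base64.RawURLEncoding.EncodeToString(ciphertext)
}

func main() {
	fmt.Println(encodeSegment([]byte{0xde, 0xad, 0xbe, 0xef}))
}
```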
* Buffered eestream EncodeReader
* Extracted fillBuffer() helper function
* Slow channels will be closed if there are still at least k fast channels
* Doc comment for maxBufferMemory
* Use timer more efficiently
* Timer initialization should be inside the for-loop
* Parallel copy of encoded data to reader buffer channels
* Transfer input read errors to output encoded readers
* minimum and optimum thresholds
* Use time.AfterFunc
* Simplify error handling in constructor
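The buffered EncodeReader commits above close slow piece channels once at least k fast ones have delivered, using time.AfterFunc as the deadline mechanism. A simplified sketch of that timing pattern, not the eestream implementation:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const n, k = 5, 3
	results := make(chan int, n)

	for i := 0; i < n; i++ {
		i := i
		go func() {
			if i >= k {
				time.Sleep(100 * time.Millisecond) // simulate slow pieces
			}
			results <- i
		}()
	}

	// Wait for the k fast pieces we need.
	for i := 0; i < k; i++ {
		fmt.Println("got piece", <-results)
	}

	// Give the stragglers a short grace period; when the timer fires,
	// stop waiting for them.
	giveUp := make(chan struct{})
	timer := time.AfterFunc(20*time.Millisecond, func() { close(giveUp) })
	defer timer.Stop()

	for i := k; i < n; i++ {
		select {
		case p := <-results:
			fmt.Println("got extra piece", p)
		case <-giveUp:
			fmt.Println("abandoning slow pieces")
			return
		}
	}
}
```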
* adds pointer to netstate proto file
* generated updated netstate proto
* changes boltdb netstate to save pointers as values
* updates netstate Put to save Pointers, updates client example to put a pointer, adds grpc status errors, updates tests, changes boltdb 'File' struct to 'PointerEntry'
* updates netstate client example and client test to save pointers, updates netstate List and Delete
* begins adding netstate-http tests
* removes netstate http service
* re-adds netstate auth
* updates boltdb netstate test
* changes encrypted_unencrypted_size from int64 to bytes in netstate proto
* updates READMEs
* Added piecestore
* gofmt
* Added requested changes
* Added cli
* Removed ranger because I wanted something that can stand alone
* Changed piecestore code to make it more optimal for error handling
* Added missing package
* Forgot io import
* gofmt
* Make path by hash exported
* updated to simplify again whoops
* Forgot ampersand
* Updated to match FilePiece
* Change store to use io.CopyN
* Clear golinter errors
* Updated go.mod
* Updated go.mod
* AES GCM implementation and unit test code
* modified and tested per the code review comments
* modified and tested per the code review comments
* updated to return err
* removed the debug printf commented code
* support of aes-gcm
* updated the naming convention per Go coding standards
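The commits above add an AES-GCM implementation and unit tests. A minimal standard-library sketch of AES-GCM sealing and opening, for illustration only (key and nonce handling in the repository differs):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

func main() {
	key := make([]byte, 32)   // AES-256 key
	nonce := make([]byte, 12) // standard GCM nonce size
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}

	ciphertext := gcm.Seal(nil, nonce, []byte("hello piece data"), nil)
	plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plaintext)) // hello piece data
}
```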
* Initial Go & C binding with libstorj
* Initial Go & C bindings with libstorj
* fixing the callback
* moved a millimeter :-) on c to go to gone....
* added error condition, per review comment
* removed the .idea directory and also movie.avi file
* deleting files not to be in this pull request
* Revert "deleting files not to be in this pull request"
This reverts commit 026b834fe00f6b20a7566e71973fe224c12f533f.
* deleting files not to be in this pull request
* resolving conflicts
* syncing the file to master
* Use aes gcm in eestream rs tests
* Use aes gcm in serve example
* Fixed comment
* adds proto files for netstate crud
* moves netstate grpc client lib into pkg/netstate where grpc netstate service is defined
* starts adding grpc client and server tests
* moves creation of grpc server into cmd/netstate/main.go, removes pkg/netstate/service.go, adds more client testing
* changed all 'Path' and 'Value' fields from strings to bytes, updated tests
* changes Get and Delete in proto file to receive 'requests' instead of 'file paths', adds tests for Get, List, and Delete
* changes netstate-routes to get 'fileValue' bytes not 'fileInfo'
* adds example rpc client in 'examples' and adds more specific debug logs
* adds readmes for netstate rpc services and updates netstate-routes
* WIP eestream avoid waiting for slow pieces
* Use non-blocking select to harvest all available inbufs (see the sketch at the end of this list)
* Better way to check for elapsed time in tests
* Improve comment
* Determine EOF based on expected size
* Remove unused readerError type
* Configurable readers channel size
* Close properly decodedReader in tests
* Use context for properly closing the decodedReader
* Handle infectious errors using the new descendent classes
* Refactor decodedReader.Read() into helper functions
* Reenable TestRSErrors
* Test with Rangers
* Remove obsolete comment
* Decoder can tolerate readers with unexpected EOF
* Return EOF if required number of inbufs are at EOF
* Use existing randData() to generate random data for tests
* Test case for io.ErrUnexpectedEOF
* Add TransformReaderSize constructor
* Do not fail decoding on first read error
Try decoding with the rest of the successfully read inputs.
* some small code improvements
* Allocate map memory upfront
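The decoder commits above mention harvesting all available inbufs with a non-blocking select, i.e. a select with a default case that drains whatever is already buffered without waiting on readers that have nothing ready. A minimal sketch of that pattern, with a hypothetical inbuf channel:

```go
package main

import "fmt"

func main() {
	inbuf := make(chan []byte, 4)
	inbuf <- []byte("block-1")
	inbuf <- []byte("block-2")

	// Harvest everything currently buffered; the default case means we
	// never block waiting on a reader that has nothing ready yet.
	for {
		select {
		case b := <-inbuf:
			fmt.Printf("harvested %s\n", b)
		default:
			fmt.Println("nothing more ready right now")
			return
		}
	}
}
```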