* protobuf for sending bandwidth agreements to satellite from storage nodes
* Setup process for sending agreements
* Add payer_id to the bandwidth agreements db for better sorting
* Renamed payer to satellite in psdb
* Added a new table 'mib' with 'data', 'size' and 'method' columns
* added AddMIB() function and test case TestMIBHappyPath()
* added a function and test case for adding entries to the bandwidth usage table
* added functionality to create, update, and read back an entry in the bandwidth table for a given date
* added initial SumBandwidthSizes()
* added the functionality to retrieve the total bandwidth usage between a start and end date
* Added the unit test case for AddBwUsageTbl
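A hedged sketch of what those bandwidth-table helpers might look like with database/sql and SQLite; the table name, columns, and upsert shape are assumptions rather than the actual psdb schema (note it already takes a time.Time, per the next commit):
```go
package psdb

import (
	"database/sql"
	"time"

	_ "github.com/mattn/go-sqlite3" // SQLite driver
)

// AddBandwidthUsed adds size to the running total for the day
// containing created. Assumes a UNIQUE index on daystartdate so the
// upsert applies; schema and names are illustrative.
func AddBandwidthUsed(db *sql.DB, size int64, created time.Time) error {
	day := created.UTC().Truncate(24 * time.Hour)
	_, err := db.Exec(
		`INSERT INTO bwusagetbl (size, daystartdate) VALUES (?, ?)
		 ON CONFLICT(daystartdate) DO UPDATE SET size = size + excluded.size`,
		size, day.Unix(),
	)
	return err
}
```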
* changed the arguments to take a time value rather than a Unix timestamp
* changes per code review comments
* adding back go.sum
* changes per code review comments
* changes per code review comments
* changes per code review comments
* storage node quick check and startup validation
* rearranged the startup validation and quick check logic
* travis lint warning fixes
* travis lint warning fixes
* travis lint warning fixes
* code changes per review comments
* code clean dev debug info
* travis lint warnings
* code changes per code review comments
* code changes per code review comments
* code update per review
* sqlite SUM returns NULL when summing an empty column; the filepath check was looking at a directory that doesn't exist when starting the server; example updated to print allocated and used space
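The SUM fix deserves a note: in SQLite, SUM over zero rows yields NULL rather than 0, so scanning into a plain int64 errors out. A minimal sketch of the workaround (table and helper names are assumptions):
```go
package psdb

import "database/sql"

// SumBandwidthSizes totals the size column. SQLite's SUM returns NULL
// (not 0) for an empty table, so scan into sql.NullInt64 first.
func SumBandwidthSizes(db *sql.DB) (int64, error) {
	var sum sql.NullInt64
	if err := db.QueryRow(`SELECT SUM(size) FROM bwusagetbl`).Scan(&sum); err != nil {
		return 0, err
	}
	if !sum.Valid { // no rows yet
		return 0, nil
	}
	return sum.Int64, nil
}
```
Using `COALESCE(SUM(size), 0)` in the query is an equivalent fix.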
* fix 'no such file or directory' error
* Updated mock PSClient
* Limit to only 1 database write
* Check file system rather than database
* Move check to storefile. We need to figure out how to fix this mess
* piecestore should not overwrite data, it should fail when trying to write to a file that already exists
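A sketch of the no-overwrite behavior, assuming pieces are stored one file per ID: opening with O_EXCL makes the existence check and the create a single atomic filesystem operation, so concurrent stores can't race.
```go
package pstore

import (
	"fmt"
	"os"
)

// StoreWriter refuses to overwrite an existing piece: O_EXCL makes
// OpenFile fail if the file already exists.
func StoreWriter(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
	if os.IsExist(err) {
		return nil, fmt.Errorf("piece already exists: %s", path)
	}
	return f, err
}
```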
* Format errors, delete unused function in psdb for checking if TTL exists
* Combine errors better
* Moving retrieve into multiple goroutines
* Make sure we pass nil errors into err channel
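A hedged sketch of the goroutine/err-channel pattern referenced above: every path sends on the channel, including the nil-error one, so the receiver never blocks waiting for a result.
```go
package server

// retrieveParts runs each part concurrently and collects results over
// a buffered channel; names are illustrative, not the actual retrieve.go.
func retrieveParts(parts []func() error) error {
	errs := make(chan error, len(parts)) // capacity avoids goroutine leaks
	for _, part := range parts {
		part := part
		go func() {
			errs <- part() // send nil too, not just failures
		}()
	}
	for range parts {
		if err := <-errs; err != nil {
			return err
		}
	}
	return nil
}
```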
* restore tests
* incorporate locks in retrieve.go
* deserialize data only if we have something to deserialize when receiving bandwidth allocation in server store
* Adding logic for retrieve to be more efficient
* Add channel?
* hmm
* implement Throttle concurrency primitive
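The idea behind the throttle: the producer blocks while too much data is outstanding, and the consumer releases capacity as it drains. A minimal sketch of that flow control (the real sync2.Throttle API differs; this only shows the idea):
```go
package sync2

import "sync"

// Throttle tracks outstanding work so producers can wait for consumers.
type Throttle struct {
	mu      sync.Mutex
	cond    *sync.Cond
	pending int64
}

func NewThrottle() *Throttle {
	t := &Throttle{}
	t.cond = sync.NewCond(&t.mu)
	return t
}

// Produce records amount units in flight, blocking while the
// outstanding total is at or above limit.
func (t *Throttle) Produce(amount, limit int64) {
	t.mu.Lock()
	for t.pending >= limit {
		t.cond.Wait()
	}
	t.pending += amount
	t.mu.Unlock()
}

// Consume marks amount units as handled and wakes waiting producers.
func (t *Throttle) Consume(amount int64) {
	t.mu.Lock()
	t.pending -= amount
	t.cond.Broadcast()
	t.mu.Unlock()
}
```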
* using throttle
* Remove unused variables
* Egon comments addressed
* Get bandwidth allocation total correct
* Consume without waiting
* incrementally increase signing size
* Get downloads working with throttle
* Removed logging
* Make sure we handle errors properly
* Fix tests
Co-authored-by: Kaloyan <kaloyan@storj.io>
* Can't Fatalf in goroutine
* Add missing returns to tests
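"Can't Fatalf in goroutine" is the standard Go testing rule: t.Fatal and t.FailNow call runtime.Goexit, which only stops the calling goroutine, so they must run on the test goroutine. A minimal illustration (doStore is a stand-in for the real work):
```go
package pstore_test

import "testing"

func TestStoreConcurrent(t *testing.T) {
	errs := make(chan error, 1)
	go func() {
		// t.Fatalf here would only stop this goroutine, not the
		// test; report the error back instead and return.
		errs <- doStore()
	}()
	if err := <-errs; err != nil {
		t.Fatal(err) // safe: we're on the test goroutine
	}
}

func doStore() error { return nil } // stand-in for the real work
```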
* add capacity to channel, smarter allocations
* rename things and don't use size as limit
* replace things with sync2.Throttle
* fix compilation errors
* add note about security
* fix ordering
* Max length is actually 64 bytes for piece ID
* fix limit
* error comes from pending allocs, so no need to relog
* Optimize throughput
* TODO
* Deleted allocation manager
* Return when someone sends a smaller bandwidth allocation than the previous message
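The rule here is that bandwidth allocation totals must be non-decreasing across messages; a minimal check (field names are assumptions, not the actual protobuf schema):
```go
package server

import "fmt"

// validateAllocation rejects an allocation that goes backwards: each
// message's total must be at least the previous one.
func validateAllocation(prevTotal, newTotal int64) error {
	if newTotal < prevTotal {
		return fmt.Errorf("bandwidth allocation decreased: %d < %d", newTotal, prevTotal)
	}
	return nil
}
```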
* review comments
* Added initial functions for signing and verifying
* whoops
* Get client up to speed
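A hedged sketch of what sign/verify helpers over an ECDSA identity key might look like; the actual functions and signatures may differ (SignASN1/VerifyASN1 are modern stdlib helpers used here for brevity):
```go
package auth

import (
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/sha256"
)

// Sign hashes msg and signs the digest with the identity key.
func Sign(key *ecdsa.PrivateKey, msg []byte) ([]byte, error) {
	digest := sha256.Sum256(msg)
	return ecdsa.SignASN1(rand.Reader, key, digest[:])
}

// Verify checks sig against msg using the signer's public key.
func Verify(key *ecdsa.PublicKey, msg, sig []byte) bool {
	digest := sha256.Sum256(msg)
	return ecdsa.VerifyASN1(key, digest[:], sig)
}
```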
* wip
* wip
* actual signatures in tests
(cherry picked from commit 1464853b737f1d712d64fbf90147f535525c8fd9)
* bugfixing
* Generate private key in example
* Generate signatures for pieceranger tests
* Update examples to use TLS
* Use private key from identity inside of example
* Use crypto.PrivateKey interface
* Change err name in defers
* Pass tests
* Pass identity Key to PSClient
* Get tests passing on travis
* Resolve linter complaints
* Don't use url.Parse for bolt paths: filepaths may not be valid URL-s.
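Why url.Parse is wrong for filepaths: a Windows path like `C:\...` parses as a URL whose scheme is "c", so the path component comes back mangled. A quick demonstration:
```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// url.Parse reads "C:" as a URL scheme, not a drive letter.
	u, err := url.Parse(`C:\Users\storj\overlay.db`)
	fmt.Println(u.Scheme, u.Opaque, err) // prints: c \Users\storj\overlay.db <nil>
	// Treating the string as a plain filepath avoids the problem.
}
```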
* go.mod: update dependencies
* README.md: add Windows instructions
* pkg/overlay: check for the correct path and text in error
* pkg/overlay: fix tests for windows
* pkg/piecestore: make windows tests pass
* pkg/telemetry: skip test, as it doesn't shutdown nicely
* storage/redis: ensure that redis is clean before running tests
* captplanet standalone farmer setup
* Bandwidth Allocation
* utils.Close method changed to utils.LogClose
* Get build temporarily working
* Get/Put for PSClient should take payer bandwidth allocations rather than the NewPSClient function
* Update example client to reflect changes in client API
* Update ecclient to use latest PSClient, Make NewPSClient return error also
* Updated pieceranger tests to check for errors; sign method should take byte array
* Handle defers in store.go better
* Fix defer functions in psdb.go
* fun times
* Protobuf bandwidthallocation data is now a byte array
* Remove psservice package and merge it into pstore server
* Write wrapper for database calls
* Change all expiration names in protobuf to be more informative; add defer in retrieve; remove old comment
* Make PSDB tests implementation independent rather than method independent
* get rid of payer, renter in ecclient
* add context monitoring in store and retrieve
* pkg/provider: with pkg/provider merged, make a single heavy client binary and deprecate old services
* add setup to gw binary too
* captplanet: output what addresses everything is listening on
* revert peertls/io_util changes
* define config flag across all commands
* use trimsuffix
* captplanet
I kind of went overboard this weekend.
The major goal of this changeset is to provide an environment
for local development where all of the various services can
be easily run together. Developing on Storj v3 should be as
easy as running a setup command and a run command!
To do this, this changeset introduces a new tool called
captplanet, which combines the powers of the Overlay Cache,
the PointerDB, the PieceStore, Kademlia, the Minio Gateway,
etc.
Running 40 farmers and a heavy client inside the same process
forced a rethinking of the "services" that we had. To
avoid confusion by reusing prior terms, this changeset
introduces two new types: Providers and Responsibilities.
I wanted to avoid as many merge conflicts as possible, so
I left the existing Services and code for now, but if people
like this route we can clean up the duplication.
A Responsibility is a collection of gRPC methods and
corresponding state. The following systems are examples of
Responsibilities:
* Kademlia
* OverlayCache
* PointerDB
* StatDB
* PieceStore
* etc.
A Provider is a collection of Responsibilities that
share an Identity, such as:
* The heavy client
* The farmer
* The gateway
An Identity is a public/private key pair, a node id, etc.
Farmers all need different Identities, so captplanet
needs to support running multiple concurrent Providers
with different Identities.
Each Responsibility and Provider should allow for configuration
of multiple copies on its own so creating Responsibilities and
Providers uses a new workflow.
To make a Responsibility, one should create a "config"
struct, such as:
```
type Config struct {
	RepairThreshold  int `help:"If redundancy falls below this number of pieces, repair is triggered" default:"30"`
	SuccessThreshold int `help:"If redundancy is above this number then no additional uploads are needed" default:"40"`
}
```
To use "config" structs, this changeset introduces another
new library called 'cfgstruct', which allows for the configuration
of arbitrary structs through flagsets, and thus through cobra and
viper.
cfgstruct relies on Go's "struct tags" feature to document
help information and default values. Config structs can be
configured via cfgstruct.Bind for binding the struct to a flagset.
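As a rough illustration of the cfgstruct idea (not the actual implementation), binding might look like walking the struct with reflection and registering one flag per field, pulling help text and defaults from the tags:
```go
package cfgstruct

import (
	"flag"
	"reflect"
	"strconv"
	"strings"
)

// Bind registers a flag for each field of the struct pointed to by
// config, using the `help` and `default` struct tags. The real package
// handles more types, nesting, and viper integration.
func Bind(flags *flag.FlagSet, config interface{}) {
	v := reflect.ValueOf(config).Elem()
	t := v.Type()
	for i := 0; i < v.NumField(); i++ {
		field := t.Field(i)
		name := strings.ToLower(field.Name)
		help := field.Tag.Get("help")
		switch field.Type.Kind() {
		case reflect.Int:
			def, _ := strconv.Atoi(field.Tag.Get("default"))
			flags.IntVar(v.Field(i).Addr().Interface().(*int), name, def, help)
		case reflect.String:
			flags.StringVar(v.Field(i).Addr().Interface().(*string), name, field.Tag.Get("default"), help)
		}
	}
}
```
A caller would do something like `Bind(flag.CommandLine, &Config{})` before parsing, which is what lets the same struct flow through cobra and viper.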
Because this configuration system makes setup and configuration
easier *in general*, additional commands are provided that allow
for easy standup of separate Providers. Please make sure to
check out:
* cmd/captplanet/farmer/main.go (a new farmer binary)
* cmd/captplanet/hc/main.go (a new heavy client binary)
* cmd/captplanet/gw/main.go (a new minio gateway binary)
Usage:
```
$ go install -v storj.io/storj/cmd/captplanet
$ captplanet setup
$ captplanet run
```
Configuration is placed by default in `~/.storj/capt/`
Other changes:
* introduces new config structs for currently existing
Responsibilities that conform to the new Responsibility
interface. Please see the `pkg/*/config.go` files for
examples.
* integrates the PointerDB API key with other global
configuration via flags, instead of through environment
variables through viper like it's been doing. (ultimately
this should also change to use the PointerDB config
struct, but this is an okay short-term solution).
* changes the Overlay cache to use a URL for database
configuration instead of separate redis and bolt config
settings.
* stubs out some peer identity skeleton code (but not the
meat).
* Fixes the SegmentStore to use the overlay client and
pointerdb clients instead of gRPC client code directly
* Leaves a very clear spot where we need to tie the object,
stream, and segment stores together. There's sort of a "golden
spike" opportunity to connect all the train tracks together
at the bottom of pkg/miniogw/config.go, labeled with a
bunch of TODOs.
Future stuff:
* I now prefer this design over the original
pkg/process.Service thing I had been pushing before (sorry!)
* The experience of trying to have multiple farmers
configurable concurrently led me to prefer config structs
over global flags (I finally came around) or using viper
directly. I think global flags are okay sometimes but in
general going forward we should try and get all relevant
config into config structs.
* If you all like this direction, I think we can go delete my
old Service interfaces and a bunch of flags and clean up a
bunch of stuff.
* If you don't like this direction, it's no sweat at all, and
despite how much code there is here I'm not very tied to any
of this! Considering a lot of this was written between midnight
and 6 am, it might not be any good!
* bind tests
* Add dockerfile and yaml for setting up piecestore servers
* Fix dockerfile for @aleitner (#115)
* Fix dockerfile for @aleitner
* Move files for @coyle
* Update yaml
* My linter had some errors so I resolved them
* Make jenkins do the needful
* Make piecestore-farmer look like overlay's build process
* Fix service spec to work in staging
* Make Jenkins push images, but not deploy them, yet.
* Modify entrypoint to fit new verbs
* Update piecestore-farmer entrypoint script to handle new app output
* Implement psclient interface
* Add string method to pieceID type
* try to fix linter errors
* Whoops missed an error
* More linter errors
* Typo
* Lol double typo
* Get everything working, begin adding tests for psclient rpc
* goimports
* Forgot to change the piecestore cli when changed the piecestore code
* Fix CLI
* remove ID length, added validator to pieceID
* Move grpc ranger to client
Change client PUT api to take a reader rather than return a writer
* GRPCRanger -> PieceRanger; Make PieceRanger a RangeCloser
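For context, a Ranger is something with a known size that can serve arbitrary sub-ranges as readers, and a RangeCloser additionally owns a resource to close. A sketch of the shape (signatures assumed, not the exact pkg/ranger API):
```go
package ranger

import (
	"context"
	"io"
)

// Ranger serves arbitrary sub-ranges of a blob of known size.
type Ranger interface {
	Size() int64
	Range(ctx context.Context, offset, length int64) (io.ReadCloser, error)
}

// RangeCloser is a Ranger backed by a resource (e.g. a connection)
// that must be closed when done.
type RangeCloser interface {
	Ranger
	io.Closer
}
```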
* Forgot to remove offset
* Added message upon successful store
* Do that thing dennis and kaloyan wanted
* goimports
* Make closeConn a part of the interface for psclient
* Use interface
* Removed unnecessary newlines
* goimport
* Whoops
* Actually we don't want to use the interface in Piece Ranger
* Renamed piecestore in examples to piecestore-client; moved piecestore-cli to examples
* Make comments look nicer
* piecestore: connect to kad
* piecestore: Linter errors
* piecestore: added pstore config command line utility
* piecestore: removed main.go, implement methods and structs
* piecestore: Import Cam's config code into piecestore-farmer
* piecestore: moving farmer to urfavecli
* piecestore: added create command
* piecestore: Removed old config, added server start code to cli
* piecestore: Get server code working
* piecestore: Changed default dir for storing piece store data; added ttl to config
* piecestore: Generate id; add bootstrap ip for kad
* piecestore: Separate kad port and server port
* piecestore: goimports
* piecestore: Removed print
* piecestore: use pkg/process
* piecestore: Better config file
* piecestore: base58 encode for id
* piecestore: base58 encode and clean up cli
* piecestore: Typo
* piecestore: removed unnecessary variable
* piecestore: Fixed more typos
* piecestore: Place data in a directory based on nodeid
* piecestore: base58 encode instead
* piecestore: Add dependency to go.mod
* piecestore: Fix typo in rpc server start; clear data on failed piece upload
* added spawn scripts
* wip
* Make server not require length
* Added test for 0 length
* Added test for accepting length of 0
* linter errors
* Make store not take anything more than an id and maybe a ttl if you want
* added spawn scripts
* Determine random id for storing
* Moved determine id to rpc example
* Added tests
* Better test
* goimports
* Updated tests
* Fix typos
* Added piecestore
* gofmt
* Added requested changes
* Added cli
* Removed ranger because I wanted something that can stand alone
* Add example of http server using piece store
* Changed piecestore code to make it more optimal for error handling
* Merged with piecestore
* Added missing package
* Forgot io import
* gofmt
* gofmt
* Forgot io
* Make path by hash exported
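The path-by-hash idea, roughly: derive the on-disk location from the piece ID, fanning files out into subdirectories so no single directory grows unbounded (the exact layout is assumed):
```go
package pstore

import "path/filepath"

// PathByID maps a piece ID to its on-disk location; the first
// characters become nested directories. Assumes len(id) >= 4 (real
// piece IDs are much longer).
func PathByID(dataDir, id string) string {
	return filepath.Join(dataDir, id[0:2], id[2:4], id)
}
```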
* updated to simplify again whoops
* Updated server to work real good
* Forgot ampersand
* Updated to match FilePiece
* Merged in cam's delete code
* Remove unused io
* Added RPC code
* Give the download request a reader
* Removed http server stuff; changed receive stream to take an io.Reader
* Added expiration date to shardInfo
* gRPC Ranger
* Change all instances of Shard to Piece; change protobuf name; moved client instance outside functions
* Adapt to latest changes in piece store rpc api
* added ttl info request
* Initialize grpcRanger type with named fields
* Move scripts to http server pr; added close method for Retrieve api
* added rpc server tests for getting piece meta data and retrieval routes
* Adapt to PieceStreamReader now being a ReadCloser
* Resolved linter errors, moved the rpc server to pkg, updated go.mod to use latest protobuf
* Imported cams test
* Bump gometalinter deadline
* Adapt to package name changes
* Remove Garbage
* Adapt to latest changes in piece store rpc api
* NewCustomRoute constructor to allow mocking the gRPC client
* Name struct values in constructor.
* WIP adding tests
* added tests for store and delete routes
* Add changes as requested by Kaloyan, also cleaned up some code
* Get the code actually working whoops
* More cleanup
* Separating database calls from api.go
* need to rename expiration
* Added some changes requested by JT
* Fix read size
* Fixed total amount to read
* added tests
* Simplify protobuf, add store tests, edited api to handle invalid stores properly, return errors instead of messages
* Moved rpc client and server to piece store
* Moved piecestore protobuf to the /protos folder
* Cleaned up messages
* Clean up linter errors
* Added missing sqlite import
* Add ability to do iterative reads and writes to pstore
* Incrementally read data
* Fix linter and import errors
* Solve linter Error
* Change return types
* begin test refactor
* refactored to use a single db connection, moved SQLite row checking into a separate function and removed defer on rows.Close(), fixed os.TempDir in rpc_test.go
* Cleaning up tests
* Added retrieve tests
* refactored delete tests
* Deleted old tests
* Updated cmd/piecestore to reflect changes to piecestore
* Refactored server tests and server/client store code
* gofmt
* WIP implementing TTL struct
* Read 4k at a time when Retrieving
* implemented ttl struct
* Accidentally removed fpiece dependency?
* Double resolve merge conflict
* Moved client to the examples since it is an example
* Change hash to id in protobuf. TODO: Change client and server to reflect these changes
* Update client and protobuf
* changed hash to id
* Handle EOF properly in retrieve
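The retrieve loop reads the piece in 4KB chunks and must treat io.EOF as a clean stop, remembering that Read may return data and io.EOF together. A sketch (the send callback stands in for the gRPC stream; names are illustrative):
```go
package server

import "io"

// retrieve streams src to the client 4KB at a time, stopping cleanly
// at end of file.
func retrieve(src io.Reader, send func([]byte) error) error {
	buf := make([]byte, 4096)
	for {
		n, err := src.Read(buf)
		if n > 0 { // Read can return data alongside io.EOF
			if sendErr := send(buf[:n]); sendErr != nil {
				return sendErr
			}
		}
		if err == io.EOF {
			return nil // clean end of piece
		}
		if err != nil {
			return err
		}
	}
}
```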
* Id -> ID
* forgot to change import for client after moving
* Moved client and server main to examples
* Made changes requested by JT
* checkEntries is now private, created currentTime variable for checkEntries, added defer rows.Close()
* Print not fatal
* Handle overflow when reading from server
* added const IDLength
* removed types from comments
* Add reader/writer for downloading data from and uploading data to the server
* goimports and comments
* fixed nits, casts, added OK const, DBCleanup now exits program on error
* Add stream reader and writer for server
* Fix errors
* i before e except after c lol
* customizable data dir
* Forgot to rename variable
* customizable data dir
* Handle closing of server stream properly
* linter
* pass on inserting the same data twice
* linter
* linter
* Do the easy things JT asked for
* Handle overflow from reads properly; handle custom db path
* Handle overflow for server stream read; TODO Combine server and client stream reads
* Allow for TTL of 0 to stay forever
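The TTL convention: zero means keep forever, so expiry is only computed for positive TTLs. A sketch (types assumed; the real code may store TTL as Unix seconds):
```go
package pstore

import "time"

// expiration reports when a piece expires; ok is false when the piece
// never expires (TTL of 0).
func expiration(created time.Time, ttl time.Duration) (expires time.Time, ok bool) {
	if ttl == 0 {
		return time.Time{}, false // keep forever
	}
	return created.Add(ttl), true
}
```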
* Make Client cleaner for user
* Moved filepiece into storj
* Fix linter errors
* Seek comment for linter
* gofmt/golinter accidentally removed import
* Fix small typos
* Use the weird iota. P cool dude ✌️
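For the curious, the "weird iota" is Go's auto-incrementing const counter; the OK const added above plausibly comes from a block like this (the other names are hypothetical):
```go
package pstore

// iota increments on each line of a const block, so related status
// codes get sequential values without hand-numbering.
const (
	OK        = iota // 0
	FileError        // 1
	HashError        // 2
)
```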
* Do things the cool way
* Changes requested by kaloyan
* didn't need test main
* Change store to use io.CopyN
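io.CopyN replaces a hand-rolled read/write loop with a stdlib call that copies exactly n bytes; a minimal illustration (names assumed):
```go
package pstore

import (
	"io"
	"os"
)

// storePiece copies exactly n bytes from the upload stream into the
// piece file; io.CopyN returns io.EOF if src is shorter than n.
func storePiece(dst *os.File, src io.Reader, n int64) error {
	_, err := io.CopyN(dst, src, n)
	return err
}
```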
* Clear golinter errors
* Updated go.mod