Storj V3 Network


Storj is building a decentralized cloud storage network and is launching in early 2019.


Storj is an S3-compatible platform and suite of decentralized applications that lets you store data in a secure and decentralized manner. Your files are encrypted, broken into little pieces, and stored in a global decentralized network of computers. Luckily, we also make sure that you (and only you) can retrieve those files!


Start Contributing to Storj

Install required packages

Download and install the latest release of Go, at least Go 1.11: https://golang.org/

You will also need Git (brew install git, apt-get install git, etc.).

We support the Linux, macOS, and Windows operating systems. Other operating systems supported by Go will likely work with little additional effort.
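
Once both tools are installed, a quick sanity check (exact output will vary by platform and version):

git --version
go version    # should report go1.11 or later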

Download and compile Storj

Aside about GOPATH: Go 1.11 supports a new feature called Go modules, and Storj has adopted Go module support. Unlike previous Go versions, Go modules no longer require a GOPATH environment variable. However, Go falls back to the old GOPATH behavior if you check out code inside the directory referenced by your GOPATH variable, so make sure to use another directory, unset GOPATH entirely, or set GO111MODULE=on before continuing with these instructions. If you don't have a GOPATH set, you can ignore this aside.
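
For example, if you do have a GOPATH set and want to be sure module mode is used, one option is to force it for your shell session before building:

export GO111MODULE=on    # force module-aware mode, even when inside GOPATH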

git clone git@github.com:storj/storj storj
cd storj
go install -v ./cmd/...

Configure a test network

~/go/bin/captplanet setup

Start the test network

~/go/bin/captplanet run

Run unit tests

go test -v ./...

You can execute only a single test package if you like. For example: go test ./pkg/kademlia. Add -v for more information about the executed unit tests.
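
For instance, to run a single package verbosely, or only the tests whose names match a pattern (TestExample is a placeholder name here):

go test -v ./pkg/kademlia
go test -v -run TestExample ./pkg/kademlia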

Start Using Storj via the Storj CLI

Configure the Storj CLI

  1. In a new terminal, set up the Storj CLI: $ storj setup
  2. Edit the API Key, overlay address, and pointer db address fields in the Storj CLI config file located at ~/.storj/cli/config.yaml with the values from the captplanet config file located at ~/.storj/capt/config.yaml (see the example below).
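
For example, a minimal way to do this (the exact field names in your local config files may differ between versions) is to print the captplanet config and paste the relevant values into the CLI config with any editor:

$ cat ~/.storj/capt/config.yaml     # find the API key, overlay address, and pointer db address
$ nano ~/.storj/cli/config.yaml     # or use your preferred editor to update the matching fields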

Test out some Storj CLI commands!

  1. Create a bucket: $ storj mb s3://bucket-name
  2. Upload an object: $ storj cp ~/Desktop/your-large-file.mp4 s3://bucket-name
  3. List objects in a bucket: $ storj ls s3://bucket-name/
  4. Download an object: $ storj cp s3://bucket-name/your-large-file.mp4 ~/Desktop/your-large-file.mp4
  5. Delete an object: $ storj rm s3://bucket-name/your-large-file.mp4

Start Using Storj via the AWS S3 CLI

Configure AWS CLI

Download and install the AWS S3 CLI: https://docs.aws.amazon.com/cli/latest/userguide/installing.html

In a new terminal session, configure the AWS S3 CLI:

$ aws configure
AWS Access Key ID [None]: insecure-dev-access-key
AWS Secret Access Key [None]: insecure-dev-secret-key
Default region name [None]: us-east-1
Default output format [None]:
$ aws configure set default.s3.multipart_threshold 1TB  # until we support multipart

Test out some AWS S3 CLI commands!

  1. Create a bucket: $ aws s3 --endpoint=http://localhost:7777/ mb s3://bucket-name
  2. Upload an object: $ aws s3 --endpoint=http://localhost:7777/ cp ~/Desktop/your-large-file.mp4 s3://bucket-name
  3. List objects in a bucket: $ aws s3 --endpoint=http://localhost:7777/ ls s3://bucket-name/
  4. Download an object: $ aws s3 --endpoint=http://localhost:7777/ cp s3://bucket-name/your-large-file.mp4 ~/Desktop/your-large-file.mp4
  5. Generate a URL for an object: $ aws s3 --endpoint=http://localhost:7777/ presign s3://bucket-name/your-large-file.mp4
  6. Delete an object: $ aws s3 --endpoint=http://localhost:7777/ rm s3://bucket-name/your-large-file.mp4

For more information about the AWS S3 CLI, visit: https://docs.aws.amazon.com/cli/latest/reference/s3/index.html

License

The network under construction (this repo) is currently licensed under AGPLv3. Once the network reaches its beta phase, we will license all client-side code under the Apache v2 license.

For code released under the AGPLv3, we request that contributors sign our Contributor License Agreement (CLA) so that we can relicense the code under Apache v2, or other licenses in the future.

Support

If you have any questions or suggestions please reach out to us on Rocketchat or Twitter.