// Copyright (C) 2019 Storj Labs, Inc.
// See LICENSE for copying information.

package identity

import (
	"bytes"
	"context"
	"crypto"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/asn1"
	"fmt"
	"io/ioutil"
	"path/filepath"
	"strconv"
	"strings"
	"time"

	"github.com/zeebo/errs"

	"storj.io/storj/pkg/peertls"
	"storj.io/storj/pkg/peertls/extensions"
	"storj.io/storj/pkg/pkcrypto"
	"storj.io/storj/pkg/rpc/rpcpeer"
	"storj.io/storj/pkg/storj"
)

// PeerIdentity represents another peer on the network.
type PeerIdentity struct {
	RestChain []*x509.Certificate
	// CA represents the peer's self-signed CA.
	CA *x509.Certificate
	// Leaf represents the leaf they're currently using. The leaf should be
	// signed by the CA. The leaf is what is used for communication.
	Leaf *x509.Certificate
	// ID is taken from the CA public key.
	ID storj.NodeID
}
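
// The comment on Leaf above encodes an invariant: the leaf must be signed
// by the CA. As a minimal sketch (not part of this package's API; verifyLeaf
// is a hypothetical name), that invariant can be checked with the standard
// library alone:
//
//	func verifyLeaf(peer *PeerIdentity) error {
//		// CheckSignatureFrom verifies that peer.Leaf carries a valid
//		// signature from the key in peer.CA.
//		return peer.Leaf.CheckSignatureFrom(peer.CA)
//	}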

// FullIdentity represents you on the network. In addition to a PeerIdentity,
// a FullIdentity also has a Key, which a PeerIdentity doesn't have.
type FullIdentity struct {
	RestChain []*x509.Certificate
	// CA represents the peer's self-signed CA. The ID is taken from this cert.
	CA *x509.Certificate
	// Leaf represents the leaf they're currently using. The leaf should be
	// signed by the CA. The leaf is what is used for communication.
	Leaf *x509.Certificate
	// ID is taken from the CA public key.
	ID storj.NodeID
	// Key is the key this identity uses with the leaf for communication.
	Key crypto.PrivateKey
}
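
// Since a FullIdentity carries both the certificate chain and the leaf's
// private key, it has everything needed to act as a TLS peer. A hedged
// sketch of that assembly (tlsCertificate is a hypothetical helper, and
// "crypto/tls" would need to be imported):
//
//	func tlsCertificate(ident *FullIdentity) tls.Certificate {
//		// The chain is ordered leaf first, then CA, then any rest chain.
//		chain := [][]byte{ident.Leaf.Raw, ident.CA.Raw}
//		for _, cert := range ident.RestChain {
//			chain = append(chain, cert.Raw)
//		}
//		return tls.Certificate{
//			Certificate: chain,
//			PrivateKey:  ident.Key,
//			Leaf:        ident.Leaf,
//		}
//	}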

// ManageablePeerIdentity is a `PeerIdentity` and its corresponding `FullCertificateAuthority`
// in a single struct. It is used for making changes to the identity that require CA
// authorization; e.g. adding extensions.
type ManageablePeerIdentity struct {
	*PeerIdentity
	CA *FullCertificateAuthority
}

// ManageableFullIdentity is a `FullIdentity` and its corresponding `FullCertificateAuthority`
// in a single struct. It is used for making changes to the identity that require CA
// authorization and the leaf private key; e.g. revoking a leaf cert (private key changes).
type ManageableFullIdentity struct {
	*FullIdentity
	CA *FullCertificateAuthority
}
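
// Both Manageable types are plain composition: the embedded identity's
// fields and methods are promoted, and the CA rides alongside for operations
// that must be CA-authorized. A sketch of construction (assuming a
// constructor of this shape; the exact signature may differ):
//
//	func NewManageableFullIdentity(ident *FullIdentity, ca *FullCertificateAuthority) *ManageableFullIdentity {
//		return &ManageableFullIdentity{
//			FullIdentity: ident,
//			CA:           ca,
//		}
//	}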

// SetupConfig allows you to run a set of Responsibilities with the given
// identity. You can also just load an Identity from disk.
type SetupConfig struct {
	CertPath  string `help:"path to the certificate chain for this identity" default:"$IDENTITYDIR/identity.cert" path:"true"`
	KeyPath   string `help:"path to the private key for this identity" default:"$IDENTITYDIR/identity.key" path:"true"`
	Overwrite bool   `help:"if true, existing identity certs AND keys will be overwritten" default:"false" setup:"true"`
	Version   string `help:"semantic version of identity storage format" default:"0"`
}
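
// The help/default struct tags above follow the cfgstruct convention: each
// field becomes a flag whose usage string and default value are read from
// the tags via reflection. A rough, self-contained sketch of reading those
// tags (tagDefaults is a hypothetical helper, not cfgstruct's actual API,
// and it requires importing "reflect"):
//
//	func tagDefaults(cfg interface{}) map[string]string {
//		out := map[string]string{}
//		t := reflect.TypeOf(cfg)
//		for i := 0; i < t.NumField(); i++ {
//			field := t.Field(i)
//			out[field.Name] = field.Tag.Get("default")
//		}
//		return out
//	}
//
// For example, tagDefaults(SetupConfig{})["Version"] would yield "0".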

// Config allows you to run a set of Responsibilities with the given
// identity. You can also just load an Identity from disk.
type Config struct {
	CertPath string `help:"path to the certificate chain for this identity" default:"$IDENTITYDIR/identity.cert" user:"true" path:"true"`
	KeyPath  string `help:"path to the private key for this identity" default:"$IDENTITYDIR/identity.key" user:"true" path:"true"`
}
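
// Example (illustrative sketch): loading a FullIdentity via this Config with
// explicit paths. The literal paths below are hypothetical; in practice
// cfgstruct resolves the defaults from $IDENTITYDIR.
//
//	ident, err := Config{
//		CertPath: "/tmp/identity.cert",
//		KeyPath:  "/tmp/identity.key",
//	}.Load()
//	if err != nil {
//		return err
//	}
//	nodeID := ident.ID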

// PeerConfig allows you to interact with a peer identity (cert, no key) on disk.
type PeerConfig struct {
	CertPath string `help:"path to the certificate chain for this identity" default:"$IDENTITYDIR/identity.cert" user:"true" path:"true"`
}

// FullCertificateAuthorityFromPEM loads a FullCertificateAuthority from a
// certificate chain and private key PEM-encoded bytes.
func FullCertificateAuthorityFromPEM(chainPEM, keyPEM []byte) (*FullCertificateAuthority, error) {
	peerCA, err := PeerCertificateAuthorityFromPEM(chainPEM)
	if err != nil {
		return nil, err
	}

	// NB: there shouldn't be multiple keys in the key file but if there
	// are, this uses the first one
	key, err := pkcrypto.PrivateKeyFromPEM(keyPEM)
	if err != nil {
		return nil, err
	}

	return &FullCertificateAuthority{
		RestChain: peerCA.RestChain,
		Cert:      peerCA.Cert,
		Key:       key,
		ID:        peerCA.ID,
	}, nil
}

// PeerCertificateAuthorityFromPEM loads a PeerCertificateAuthority from
// certificate chain PEM-encoded bytes.
func PeerCertificateAuthorityFromPEM(chainPEM []byte) (*PeerCertificateAuthority, error) {
	chain, err := pkcrypto.CertsFromPEM(chainPEM)
	if err != nil {
		return nil, errs.Wrap(err)
	}
	// NB: the "leaf" cert in a CA chain is the "CA" cert in an identity chain
	nodeID, err := NodeIDFromCert(chain[peertls.LeafIndex])
	if err != nil {
		return nil, err
	}

	return &PeerCertificateAuthority{
		RestChain: chain[peertls.CAIndex:],
		Cert:      chain[peertls.LeafIndex],
		ID:        nodeID,
	}, nil
}
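
// NB: chains throughout this file are ordered leaf-first. Assuming the
// peertls convention that LeafIndex precedes CAIndex, chain[peertls.LeafIndex]
// is the leaf certificate, chain[peertls.CAIndex] is its issuing CA, and any
// remaining certificates form the RestChain.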

// FullIdentityFromPEM loads a FullIdentity from a certificate chain and
// private key PEM-encoded bytes.
func FullIdentityFromPEM(chainPEM, keyPEM []byte) (*FullIdentity, error) {
	peerIdent, err := PeerIdentityFromPEM(chainPEM)
	if err != nil {
		return nil, err
	}

	// NB: there shouldn't be multiple keys in the key file but if there
	// are, this uses the first one
	key, err := pkcrypto.PrivateKeyFromPEM(keyPEM)
	if err != nil {
		return nil, err
	}

	return &FullIdentity{
		RestChain: peerIdent.RestChain,
		CA:        peerIdent.CA,
		Leaf:      peerIdent.Leaf,
		Key:       key,
		ID:        peerIdent.ID,
	}, nil
}

// PeerIdentityFromPEM loads a PeerIdentity from certificate chain
// PEM-encoded bytes.
func PeerIdentityFromPEM(chainPEM []byte) (*PeerIdentity, error) {
	chain, err := pkcrypto.CertsFromPEM(chainPEM)
	if err != nil {
		return nil, errs.Wrap(err)
	}
	if len(chain) < peertls.CAIndex+1 {
		return nil, pkcrypto.ErrChainLength.New("identity chain does not contain a CA certificate")
	}
	nodeID, err := NodeIDFromCert(chain[peertls.CAIndex])
	if err != nil {
		return nil, err
	}

	return &PeerIdentity{
		RestChain: chain[peertls.CAIndex+1:],
		CA:        chain[peertls.CAIndex],
		Leaf:      chain[peertls.LeafIndex],
		ID:        nodeID,
	}, nil
}

// PeerIdentityFromChain loads a PeerIdentity from an identity certificate chain.
func PeerIdentityFromChain(chain []*x509.Certificate) (*PeerIdentity, error) {
	nodeID, err := NodeIDFromCert(chain[peertls.CAIndex])
	if err != nil {
		return nil, err
	}

	return &PeerIdentity{
		RestChain: chain[peertls.CAIndex+1:],
		CA:        chain[peertls.CAIndex],
		ID:        nodeID,
		Leaf:      chain[peertls.LeafIndex],
	}, nil
}

// PeerIdentityFromPeer loads a PeerIdentity from a peer connection.
func PeerIdentityFromPeer(peer *rpcpeer.Peer) (*PeerIdentity, error) {
	chain := peer.State.PeerCertificates
	if len(chain)-1 < peertls.CAIndex {
		return nil, Error.New("invalid certificate chain")
	}
	pi, err := PeerIdentityFromChain(chain)
	if err != nil {
		return nil, err
	}
	return pi, nil
}

// PeerIdentityFromContext loads a PeerIdentity from the TLS credentials found
// in the given context.
func PeerIdentityFromContext(ctx context.Context) (*PeerIdentity, error) {
	peer, err := rpcpeer.FromContext(ctx)
	if err != nil {
		return nil, err
	}
	return PeerIdentityFromPeer(peer)
}
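
// Example (illustrative, with a hypothetical server type): inside an RPC
// handler, the caller's identity can be recovered from the request context.
//
//	func (srv *pingServer) Ping(ctx context.Context) error {
//		peerIdent, err := PeerIdentityFromContext(ctx)
//		if err != nil {
//			return err
//		}
//		log.Println("ping from", peerIdent.ID)
//		return nil
//	}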

// NodeIDFromCertPath loads a node ID from a certificate file path.
func NodeIDFromCertPath(certPath string) (storj.NodeID, error) {
	certBytes, err := ioutil.ReadFile(certPath)
	if err != nil {
		return storj.NodeID{}, err
	}
	return NodeIDFromPEM(certBytes)
}

// NodeIDFromPEM loads a node ID from certificate bytes.
func NodeIDFromPEM(pemBytes []byte) (storj.NodeID, error) {
	chain, err := pkcrypto.CertsFromPEM(pemBytes)
	if err != nil {
		return storj.NodeID{}, Error.New("invalid identity certificate")
	}
	if len(chain)-1 < peertls.CAIndex {
		return storj.NodeID{}, Error.New("no CA in identity certificate")
	}
	return NodeIDFromCert(chain[peertls.CAIndex])
}

// NodeIDFromCert looks for a version in an ID version extension in the passed
// cert and then calculates a versioned node ID using the certificate public key.
// NB: `cert` would typically be an identity's certificate authority certificate.
func NodeIDFromCert(cert *x509.Certificate) (id storj.NodeID, err error) {
	version, err := storj.IDVersionFromCert(cert)
	if err != nil {
		return id, Error.Wrap(err)
	}
	return NodeIDFromKey(cert.PublicKey, version)
}

// NodeIDFromKey calculates the node ID for a given public key with the passed version.
func NodeIDFromKey(k crypto.PublicKey, version storj.IDVersion) (storj.NodeID, error) {
	idBytes, err := peertls.DoubleSHA256PublicKey(k)
	if err != nil {
		return storj.NodeID{}, storj.ErrNodeID.Wrap(err)
	}
	return storj.NewVersionedID(idBytes, version), nil
}

// NewFullIdentity creates a new FullIdentity for a node using the given
// CA options (e.g. difficulty and concurrency).
func NewFullIdentity(ctx context.Context, opts NewCAOptions) (*FullIdentity, error) {
	ca, err := NewCA(ctx, opts)
	if err != nil {
		return nil, err
	}
	identity, err := ca.NewIdentity()
	if err != nil {
		return nil, err
	}
	return identity, nil
}
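
// Example (illustrative; assumes NewCAOptions exposes Difficulty and
// Concurrency fields as configured elsewhere in this package):
//
//	ident, err := NewFullIdentity(ctx, NewCAOptions{
//		Difficulty:  8,
//		Concurrency: 1,
//	})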

// ToChains takes a number of certificate chains and returns them as a 2d slice of chains of certificates.
func ToChains(chains ...[]*x509.Certificate) [][]*x509.Certificate {
	combinedChains := make([][]*x509.Certificate, len(chains))
	copy(combinedChains, chains)
	return combinedChains
}

// NewManageablePeerIdentity returns a manageable identity given a peer identity
// and a full certificate authority.
func NewManageablePeerIdentity(ident *PeerIdentity, ca *FullCertificateAuthority) *ManageablePeerIdentity {
	return &ManageablePeerIdentity{
		PeerIdentity: ident,
		CA:           ca,
	}
}

// NewManageableFullIdentity returns a manageable identity given a full identity
// and a full certificate authority.
func NewManageableFullIdentity(ident *FullIdentity, ca *FullCertificateAuthority) *ManageableFullIdentity {
	return &ManageableFullIdentity{
		FullIdentity: ident,
		CA:           ca,
	}
}

// Status returns the status of the identity cert/key files for the config.
func (is SetupConfig) Status() (TLSFilesStatus, error) {
	return statTLSFiles(is.CertPath, is.KeyPath)
}

// Create generates a new identity from the given CA and saves it using the config.
func (is SetupConfig) Create(ca *FullCertificateAuthority) (*FullIdentity, error) {
	fi, err := ca.NewIdentity()
	if err != nil {
		return nil, err
	}
	fi.CA = ca.Cert
	ic := Config{
		CertPath: is.CertPath,
		KeyPath:  is.KeyPath,
	}
	return fi, ic.Save(fi)
}

// FullConfig converts a `SetupConfig` to `Config`.
func (is SetupConfig) FullConfig() Config {
	return Config{
		CertPath: is.CertPath,
		KeyPath:  is.KeyPath,
	}
}

// Load loads a FullIdentity from the config.
func (ic Config) Load() (*FullIdentity, error) {
	c, err := ioutil.ReadFile(ic.CertPath)
	if err != nil {
		return nil, peertls.ErrNotExist.Wrap(err)
	}
	k, err := ioutil.ReadFile(ic.KeyPath)
	if err != nil {
		return nil, peertls.ErrNotExist.Wrap(err)
	}
	fi, err := FullIdentityFromPEM(c, k)
	if err != nil {
		return nil, errs.New("failed to load identity %#v, %#v: %v",
			ic.CertPath, ic.KeyPath, err)
	}
	return fi, nil
}

// Save saves a FullIdentity according to the config.
func (ic Config) Save(fi *FullIdentity) error {
	var (
		certData, keyData                                              bytes.Buffer
		writeChainErr, writeChainDataErr, writeKeyErr, writeKeyDataErr error
	)

	chain := []*x509.Certificate{fi.Leaf, fi.CA}
	chain = append(chain, fi.RestChain...)

	if ic.CertPath != "" {
		writeChainErr = peertls.WriteChain(&certData, chain...)
		writeChainDataErr = writeChainData(ic.CertPath, certData.Bytes())
	}

	if ic.KeyPath != "" {
		writeKeyErr = pkcrypto.WritePrivateKeyPEM(&keyData, fi.Key)
		writeKeyDataErr = writeKeyData(ic.KeyPath, keyData.Bytes())
	}

	writeErr := errs.Combine(writeChainErr, writeKeyErr)
	if writeErr != nil {
		return writeErr
	}

	return errs.Combine(
		writeChainDataErr,
		writeKeyDataErr,
	)
}

// SaveBackup saves the certificate of the config with a timestamped filename.
func (ic Config) SaveBackup(fi *FullIdentity) error {
	return Config{
		CertPath: backupPath(ic.CertPath),
		KeyPath:  backupPath(ic.KeyPath),
	}.Save(fi)
}

// PeerConfig converts a Config to a PeerConfig.
func (ic Config) PeerConfig() *PeerConfig {
	return &PeerConfig{
		CertPath: ic.CertPath,
	}
}

// Load loads a PeerIdentity from the config.
func (ic PeerConfig) Load() (*PeerIdentity, error) {
	c, err := ioutil.ReadFile(ic.CertPath)
	if err != nil {
		return nil, peertls.ErrNotExist.Wrap(err)
	}
	pi, err := PeerIdentityFromPEM(c)
	if err != nil {
		return nil, errs.New("failed to load identity %#v: %v", ic.CertPath, err)
	}
	return pi, nil
}

// Save saves a PeerIdentity according to the config.
func (ic PeerConfig) Save(peerIdent *PeerIdentity) error {
	chain := []*x509.Certificate{peerIdent.Leaf, peerIdent.CA}
	chain = append(chain, peerIdent.RestChain...)

	if ic.CertPath != "" {
		var certData bytes.Buffer
		err := peertls.WriteChain(&certData, chain...)
		if err != nil {
			return err
		}

		return writeChainData(ic.CertPath, certData.Bytes())
	}

	return nil
}

// SaveBackup saves the certificate of the config with a timestamped filename.
func (ic PeerConfig) SaveBackup(pi *PeerIdentity) error {
	return PeerConfig{
		CertPath: backupPath(ic.CertPath),
	}.Save(pi)
}

// Chain returns the Identity's certificate chain.
func (fi *FullIdentity) Chain() []*x509.Certificate {
	return append([]*x509.Certificate{fi.Leaf, fi.CA}, fi.RestChain...)
}

// RawChain returns all of the certificate chain as a 2d byte slice.
func (fi *FullIdentity) RawChain() [][]byte {
	chain := fi.Chain()
	rawChain := make([][]byte, len(chain))
	for i, cert := range chain {
		rawChain[i] = cert.Raw
	}
	return rawChain
}

// RawRestChain returns the rest (excluding leaf and CA) of the certificate chain as a 2d byte slice.
func (fi *FullIdentity) RawRestChain() [][]byte {
	// NB: allocate with zero length (and capacity len(fi.RestChain)) so the
	// appends below don't leave nil entries at the front of the slice.
	rawChain := make([][]byte, 0, len(fi.RestChain))
	for _, cert := range fi.RestChain {
		rawChain = append(rawChain, cert.Raw)
	}
	return rawChain
}

// PeerIdentity converts a FullIdentity into a PeerIdentity.
func (fi *FullIdentity) PeerIdentity() *PeerIdentity {
	return &PeerIdentity{
		CA:        fi.CA,
		Leaf:      fi.Leaf,
		ID:        fi.ID,
		RestChain: fi.RestChain,
	}
}

// Version looks up the version based on the certificate's ID version extension.
func (fi *FullIdentity) Version() (storj.IDVersion, error) {
	return storj.IDVersionFromCert(fi.CA)
}

// AddExtension adds extensions to the leaf cert of an identity. Extensions
// are serialized into the certificate's raw bytes, and the certificate is
// re-signed by its certificate authority.
func (manageableIdent *ManageablePeerIdentity) AddExtension(ext ...pkix.Extension) error {
	if err := extensions.AddExtraExtension(manageableIdent.Leaf, ext...); err != nil {
		return err
	}

	updatedCert, err := peertls.CreateCertificate(manageableIdent.Leaf.PublicKey, manageableIdent.CA.Key, manageableIdent.Leaf, manageableIdent.CA.Cert)
	if err != nil {
		return err
	}

	manageableIdent.Leaf = updatedCert
	return nil
}

// Revoke extends the CA certificate with a certificate revocation extension.
func (manageableIdent *ManageableFullIdentity) Revoke() error {
	ext, err := extensions.NewRevocationExt(manageableIdent.CA.Key, manageableIdent.Leaf)
	if err != nil {
		return err
	}

	revokingIdent, err := manageableIdent.CA.NewIdentity(ext)
	if err != nil {
		return err
	}

	manageableIdent.Leaf = revokingIdent.Leaf

	return nil
}
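
// backupPath derives a timestamped backup filename from the given path: for
// example, backupPath("identity.cert") produces something like
// "identity.1546300800.cert", where the middle component is the current Unix
// timestamp (the value shown is illustrative).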
func backupPath(path string) string {
	pathExt := filepath.Ext(path)
	base := strings.TrimSuffix(path, pathExt)
	return fmt.Sprintf(
		"%s.%s%s",
		base,
		strconv.Itoa(int(time.Now().Unix())),
		pathExt,
	)
}

// EncodePeerIdentity encodes the complete identity chain to bytes.
func EncodePeerIdentity(pi *PeerIdentity) []byte {
	var chain []byte
	chain = append(chain, pi.Leaf.Raw...)
	chain = append(chain, pi.CA.Raw...)
	for _, cert := range pi.RestChain {
		chain = append(chain, cert.Raw...)
	}
	return chain
}

// DecodePeerIdentity decodes the bytes into a complete identity chain.
func DecodePeerIdentity(ctx context.Context, chain []byte) (_ *PeerIdentity, err error) {
	defer mon.Task()(&ctx)(&err)

	var certs []*x509.Certificate
	for len(chain) > 0 {
		var raw asn1.RawValue
		var err error

		chain, err = asn1.Unmarshal(chain, &raw)
		if err != nil {
			return nil, Error.Wrap(err)
		}

		cert, err := pkcrypto.CertFromDER(raw.FullBytes)
		if err != nil {
			return nil, Error.Wrap(err)
		}

		certs = append(certs, cert)
	}
	if len(certs) < 2 {
		return nil, Error.New("not enough certificates")
	}
	return PeerIdentityFromChain(certs)
}
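
// Example (illustrative): EncodePeerIdentity and DecodePeerIdentity round-trip
// an identity chain through concatenated DER bytes; each x509 certificate is a
// complete ASN.1 value, so asn1.Unmarshal can split the concatenation apart.
// fullIdent and ctx are hypothetical values in scope.
//
//	encoded := EncodePeerIdentity(fullIdent.PeerIdentity())
//	decoded, err := DecodePeerIdentity(ctx, encoded)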