captplanet (#159)
* captplanet
I kind of went overboard this weekend.
The major goal of this changeset is to provide an environment
for local development where all of the various services can
be easily run together. Developing on Storj v3 should be as
easy as running a setup command and a run command!
To do this, this changeset introduces a new tool called
captplanet, which combines the powers of the Overlay Cache,
the PointerDB, the PieceStore, Kademlia, the Minio Gateway,
etc.
Running 40 farmers and a heavy client inside the same process
forced a rethinking of the "services" that we had. To avoid
the confusion of reusing prior terms, this changeset introduces
two new types: Providers and Responsibilities.
I wanted to avoid as many merge conflicts as possible, so
I left the existing Services and code for now, but if people
like this route we can clean up the duplication.
A Responsibility is a collection of gRPC methods and
corresponding state. The following systems are examples of
Responsibilities:
* Kademlia
* OverlayCache
* PointerDB
* StatDB
* PieceStore
* etc.
A Provider is a collection of Responsibilities that
share an Identity, such as:
* The heavy client
* The farmer
* The gateway
An Identity is a public/private key pair, a node id, etc.
Farmers all need different Identities, so captplanet
needs to support running multiple concurrent Providers
with different Identities.
Each Responsibility and Provider should allow multiple copies
to be configured independently, so creating Responsibilities
and Providers uses a new workflow.
To make a Responsibility, one should create a "config"
struct, such as:
```
type Config struct {
    RepairThreshold  int `help:"If redundancy falls below this number of pieces, repair is triggered" default:"30"`
    SuccessThreshold int `help:"If redundancy is above this number then no additional uploads are needed" default:"40"`
}
```
To use "config" structs, this changeset introduces another
new library called 'cfgstruct', which allows for the configuration
of arbitrary structs through flagsets, and thus through cobra and
viper.
cfgstruct relies on Go's "struct tags" feature to document
help information and default values. Config structs can be
configured via cfgstruct.Bind for binding the struct to a flagset.
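As a rough sketch (not taken from any of the real binaries; the
command and variable names here are made up), wiring such a Config
struct into a cobra command with cfgstruct.Bind looks something
like this:
```
package main

import (
    "fmt"

    "github.com/spf13/cobra"

    "storj.io/storj/pkg/cfgstruct"
)

// Config is the example struct from above.
type Config struct {
    RepairThreshold  int `help:"If redundancy falls below this number of pieces, repair is triggered" default:"30"`
    SuccessThreshold int `help:"If redundancy is above this number then no additional uploads are needed" default:"40"`
}

var (
    exampleCfg Config

    exampleCmd = &cobra.Command{
        Use:   "example",
        Short: "Print the bound configuration",
        RunE: func(cmd *cobra.Command, args []string) error {
            // By the time RunE executes, exampleCfg has been populated
            // from the struct tag defaults and any command line flags.
            fmt.Println("repair threshold:", exampleCfg.RepairThreshold)
            return nil
        },
    }
)

func init() {
    // Bind reads the help/default struct tags and registers a matching
    // flag for each field on the command's flag set.
    cfgstruct.Bind(exampleCmd.Flags(), &exampleCfg)
}

func main() {
    _ = exampleCmd.Execute()
}
```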
Because this configuration system makes setup and configuration
easier *in general*, additional commands are provided that allow
for easy standup of separate Providers. Please make sure to
check out:
* cmd/captplanet/farmer/main.go (a new farmer binary)
* cmd/captplanet/hc/main.go (a new heavy client binary)
* cmd/captplanet/gw/main.go (a new minio gateway binary)
Usage:
```
$ go install -v storj.io/storj/cmd/captplanet
$ captplanet setup
$ captplanet run
```
Configuration is placed by default in `~/.storj/capt/`
Other changes:
* introduces new config structs for currently existing
Responsibilities that conform to the new Responsibility
interface. Please see the `pkg/*/config.go` files for
examples.
* integrates the PointerDB API key with the other global
configuration via flags, instead of through viper environment
variables as before. (Ultimately this should also change to use
the PointerDB config struct, but this is an okay short-term
solution.)
* changes the Overlay cache to use a URL for database
configuration instead of separate redis and bolt config
settings.
* stubs out some peer identity skeleton code (but not the
meat).
* fixes the SegmentStore to use the overlay client and
pointerdb clients instead of gRPC client code directly.
* leaves a very clear spot where we need to tie the object
store, stream store, and segment store together. There's sort
of a "golden spike" opportunity to connect all the train tracks
together at the bottom of pkg/miniogw/config.go, labeled with a
bunch of TODOs.
Future stuff:
* I now prefer this design over the original
pkg/process.Service thing I had been pushing before (sorry!)
* The experience of trying to have multiple farmers
configurable concurrently led me to prefer config structs
over global flags (I finally came around) or using viper
directly. I think global flags are okay sometimes but in
general going forward we should try and get all relevant
config into config structs.
* If you all like this direction, I think we can go delete my
old Service interfaces and a bunch of flags and clean up a
bunch of stuff.
* If you don't like this direction, it's no sweat at all, and
despite how much code there is here I'm not very tied to any
of this! Considering a lot of this was written between midnight
and 6 am, it might not be any good!
* bind tests

// Copyright (C) 2019 Storj Labs, Inc.
// See LICENSE for copying information.
package main

import (
    "context"
    "fmt"
    "io"
    "os"
    "os/exec"
    "path/filepath"
    "runtime"
    "sort"
    "text/tabwriter"

    "github.com/fatih/color"
    "github.com/gogo/protobuf/proto"
    "github.com/golang/protobuf/ptypes"
    "github.com/spf13/cobra"
    "github.com/zeebo/errs"
    "go.uber.org/zap"

    "storj.io/storj/internal/fpath"
    "storj.io/storj/pkg/cfgstruct"
    "storj.io/storj/pkg/identity"
    "storj.io/storj/pkg/pb"
    "storj.io/storj/pkg/piecestore/psclient"
    "storj.io/storj/pkg/piecestore/psserver/psdb"
    "storj.io/storj/pkg/process"
    "storj.io/storj/pkg/storj"
    "storj.io/storj/pkg/transport"
    "storj.io/storj/storagenode"
    "storj.io/storj/storagenode/storagenodedb"
)
// StorageNodeFlags defines storage node configuration
type StorageNodeFlags struct {
    EditConf        bool `default:"false" help:"open config in default editor"`
    SaveAllDefaults bool `default:"false" help:"save all default values to config.yaml file" setup:"true"`

    storagenode.Config
}
// Inspector holds the kad client for node inspection
type Inspector struct {
    kad pb.KadInspectorClient
}
var (
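    // rootCmd is the top-level "storagenode" command; the run, setup, config,
    // diag, and dashboard subcommands declared below are attached to it in init().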
    rootCmd = &cobra.Command{
        Use:   "storagenode",
        Short: "StorageNode",
    }
    runCmd = &cobra.Command{
        Use:   "run",
        Short: "Run the storagenode",
        RunE:  cmdRun,
    }
    setupCmd = &cobra.Command{
        Use:         "setup",
        Short:       "Create config files",
        RunE:        cmdSetup,
        Annotations: map[string]string{"type": "setup"},
    }
    configCmd = &cobra.Command{
        Use:         "config",
        Short:       "Edit config files",
        RunE:        cmdConfig,
        Annotations: map[string]string{"type": "setup"},
    }
    diagCmd = &cobra.Command{
        Use:   "diag",
        Short: "Diagnostic Tool support",
        RunE:  cmdDiag,
    }
    dashboardCmd = &cobra.Command{
        Use:   "dashboard",
Short: "Display a dashbaord",
        RunE:  dashCmd,
    }

    runCfg   StorageNodeFlags
    setupCfg StorageNodeFlags

    dashboardCfg struct {
        Address         string `default:":28967" help:"address for dashboard service"`
        ExternalAddress string `default:":28967" help:"address that your node is listening on if using a tunneling service"`
        BootstrapAddr   string `default:"bootstrap.storj.io:8888" help:"address of server the storage node was bootstrapped against"`
    }

    diagCfg struct {
    }

    defaultConfDir = fpath.ApplicationDir("storj", "storagenode")
    // TODO: this path should be defined somewhere else
    defaultIdentityDir = fpath.ApplicationDir("storj", "identity", "storagenode")
    defaultDiagDir     string
    confDir            string
    identityDir        string
)
const (
    defaultServerAddr = ":28967"
)
func init() {
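    // Pick up --config-dir and --identity-dir from the raw command line so the
    // directory defaults used for the flags registered below reflect them.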
    confDirParam := cfgstruct.FindConfigDirParam()
    if confDirParam != "" {
        defaultConfDir = confDirParam
    }
    identityDirParam := cfgstruct.FindIdentityDirParam()
    if identityDirParam != "" {
        defaultIdentityDir = identityDirParam
    }

    rootCmd.PersistentFlags().StringVar(&confDir, "config-dir", defaultConfDir, "main directory for storagenode configuration")
    err := rootCmd.PersistentFlags().SetAnnotation("config-dir", "setup", []string{"true"})
    if err != nil {
        zap.S().Error("Failed to set 'setup' annotation for 'config-dir'")
    }
    rootCmd.PersistentFlags().StringVar(&identityDir, "identity-dir", defaultIdentityDir, "main directory for storagenode identity credentials")
    err = rootCmd.PersistentFlags().SetAnnotation("identity-dir", "setup", []string{"true"})
    if err != nil {
zap.S().Error("Failed to set 'setup' annotation for 'config-dir'")
    }

    defaultDiagDir = filepath.Join(defaultConfDir, "storage")
    rootCmd.AddCommand(runCmd)
    rootCmd.AddCommand(setupCmd)
    rootCmd.AddCommand(configCmd)
    rootCmd.AddCommand(diagCmd)
rootCmd.AddCommand(dashboardCmd)
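
    // Bind each config struct to its command's flag set. The setup and config
    // commands use BindSetup; note that SaveAllDefaults above is tagged setup:"true".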
    cfgstruct.Bind(runCmd.Flags(), &runCfg, cfgstruct.ConfDir(defaultConfDir), cfgstruct.IdentityDir(defaultIdentityDir))
    cfgstruct.BindSetup(setupCmd.Flags(), &setupCfg, cfgstruct.ConfDir(defaultConfDir), cfgstruct.IdentityDir(defaultIdentityDir))
    cfgstruct.BindSetup(configCmd.Flags(), &setupCfg, cfgstruct.ConfDir(defaultConfDir), cfgstruct.IdentityDir(defaultIdentityDir))
    cfgstruct.Bind(diagCmd.Flags(), &diagCfg, cfgstruct.ConfDir(defaultDiagDir), cfgstruct.IdentityDir(defaultIdentityDir))
    cfgstruct.Bind(dashboardCmd.Flags(), &dashboardCfg, cfgstruct.ConfDir(defaultDiagDir))
}
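
// cmdRun loads the node identity, verifies the run configuration, opens the
// storagenode master database (creating its tables if needed), and then runs
// the storagenode peer, combining the run, close, and database-close errors
// into a single return value. It is presumably wired up as a cobra RunE
// handler elsewhere in this file.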
func cmdRun(cmd *cobra.Command, args []string) (err error) {
	log := zap.L()

	identity, err := runCfg.Identity.Load()
	if err != nil {
		zap.S().Fatal(err)
	}

	if err := runCfg.Verify(log); err != nil {
		log.Sugar().Error("Invalid configuration: ", err)
		return err
	}

	ctx := process.Ctx(cmd)
	if err := process.InitMetricsWithCertPath(ctx, nil, runCfg.Identity.CertPath); err != nil {
		zap.S().Error("Failed to initialize telemetry batcher: ", err)
	}

	db, err := storagenodedb.New(storagenodedb.Config{
		Storage:  runCfg.Storage.Path,
		Info:     filepath.Join(runCfg.Storage.Path, "piecestore.db"),
		Kademlia: runCfg.Kademlia.DBPath,
	})
	if err != nil {
		return errs.New("Error starting master database on storagenode: %+v", err)
	}

	err = db.CreateTables()
	if err != nil {
		return errs.New("Error creating tables for master database on storagenode: %+v", err)
	}

	peer, err := storagenode.New(log, identity, db, runCfg.Config)
	if err != nil {
		return err
	}

	runError := peer.Run(ctx)
	closeError := peer.Close()

	return errs.Combine(runError, closeError, db.Close())
}
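
// cmdSetup creates the configuration directory for a new storage node and
// writes an initial config.yaml there, refusing to run if a configuration
// already exists. The generated config pins log.level to info and fills in
// the default server address unless the server.address flag was set
// explicitly; with EditConf set it opens the new file in an editor.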
func cmdSetup(cmd *cobra.Command, args []string) (err error) {
	setupDir, err := filepath.Abs(confDir)
	if err != nil {
		return err
	}

	valid, _ := fpath.IsValidSetupDir(setupDir)
	if !valid {
		return fmt.Errorf("storagenode configuration already exists (%v)", setupDir)
	}

	err = os.MkdirAll(setupDir, 0700)
	if err != nil {
		return err
	}

	overrides := map[string]interface{}{
		"log.level": "info",
	}
	serverAddress := cmd.Flag("server.address")
	if !serverAddress.Changed {
		overrides[serverAddress.Name] = defaultServerAddr
	}

	configFile := filepath.Join(setupDir, "config.yaml")
	if setupCfg.SaveAllDefaults {
		err = process.SaveConfigWithAllDefaults(cmd.Flags(), configFile, overrides)
	} else {
		err = process.SaveConfig(cmd.Flags(), configFile, overrides)
	}
	if err != nil {
		return err
	}

	if setupCfg.EditConf {
		return fpath.EditFile(configFile)
	}

	return err
}
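
// cmdConfig opens the existing config.yaml in an editor, falling back to
// cmdSetup when no configuration file is present yet.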
func cmdConfig(cmd *cobra.Command, args []string) (err error) {
	setupDir, err := filepath.Abs(confDir)
	if err != nil {
		return err
	}

	// run setup if we can't access the config file
	conf := filepath.Join(setupDir, "config.yaml")
	if _, err := os.Stat(conf); err != nil {
		return cmdSetup(cmd, args)
	}

	return fpath.EditFile(conf)
}
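
// cmdDiag prints a per-satellite summary of the bandwidth agreements stored
// in the local piecestore database: bytes transferred, transaction counts,
// and counts per action type (PUT, GET, and the audit/repair variants).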
func cmdDiag(cmd *cobra.Command, args []string) (err error) {
	diagDir, err := filepath.Abs(confDir)
	if err != nil {
		return err
	}

	// check if the directory exists
	_, err = os.Stat(diagDir)
	if err != nil {
		fmt.Println("Storagenode directory doesn't exist", diagDir)
		return err
	}

	// open the sql db
	dbpath := filepath.Join(diagDir, "storage", "piecestore.db")
	db, err := psdb.Open(context.Background(), nil, dbpath)
	if err != nil {
		fmt.Println("Storagenode database couldn't open:", dbpath)
		return err
	}

	// get all bandwidth agreement entries, already ordered
	bwAgreements, err := db.GetBandwidthAllocations()
	if err != nil {
		fmt.Println("storage node 'bandwidth_agreements' table read error:", dbpath)
		return err
	}

	// SatelliteSummary aggregates the bandwidth agreement activity seen for a single satellite
	type SatelliteSummary struct {
		TotalBytes           int64
		PutActionCount       int64
		GetActionCount       int64
		GetAuditActionCount  int64
		GetRepairActionCount int64
		PutRepairActionCount int64
		TotalTransactions    int64
		// additional attributes can be added here ...
	}

	// attributes per satellite id
	summaries := make(map[storj.NodeID]*SatelliteSummary)
	satelliteIDs := storj.NodeIDList{}

	for _, rbaVal := range bwAgreements {
		for _, rbaDataVal := range rbaVal {
			// deserializing rbad yields the payer bandwidth allocation, total, and storage node id
			rbad := &pb.RenterBandwidthAllocation_Data{}
			if err := proto.Unmarshal(rbaDataVal.Agreement, rbad); err != nil {
				return err
			}

			// deserializing pbad yields the satellite id, uplink id, max size, expiration, serial number, and action
			pbad := &pb.PayerBandwidthAllocation_Data{}
			if err := proto.Unmarshal(rbad.GetPayerAllocation().GetData(), pbad); err != nil {
				return err
			}

			summary, ok := summaries[pbad.SatelliteId]
			if !ok {
				summaries[pbad.SatelliteId] = &SatelliteSummary{}
				satelliteIDs = append(satelliteIDs, pbad.SatelliteId)
				summary = summaries[pbad.SatelliteId]
			}

			// fill the summary info
			summary.TotalBytes += rbad.GetTotal()
			summary.TotalTransactions++
			switch pbad.GetAction() {
			case pb.PayerBandwidthAllocation_PUT:
				summary.PutActionCount++
			case pb.PayerBandwidthAllocation_GET:
				summary.GetActionCount++
			case pb.PayerBandwidthAllocation_GET_AUDIT:
				summary.GetAuditActionCount++
			case pb.PayerBandwidthAllocation_GET_REPAIR:
				summary.GetRepairActionCount++
			case pb.PayerBandwidthAllocation_PUT_REPAIR:
				summary.PutRepairActionCount++
			}
		}
	}

	// initialize the table header (fields)
	const padding = 3
	w := tabwriter.NewWriter(os.Stdout, 0, 0, padding, ' ', tabwriter.AlignRight|tabwriter.Debug)
	fmt.Fprintln(w, "SatelliteID\tTotal\t# Of Transactions\tPUT Action\tGET Action\tGET (Audit) Action\tGET (Repair) Action\tPUT (Repair) Action\t")

	// populate the row fields
	sort.Sort(satelliteIDs)
	for _, satelliteID := range satelliteIDs {
		summary := summaries[satelliteID]
		fmt.Fprint(w, satelliteID, "\t", summary.TotalBytes, "\t", summary.TotalTransactions, "\t",
			summary.PutActionCount, "\t", summary.GetActionCount, "\t", summary.GetAuditActionCount,
			"\t", summary.GetRepairActionCount, "\t", summary.PutRepairActionCount, "\t\n")
	}

	// display the data
	err = w.Flush()
	return err
}
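
// dashCmd renders a simple terminal dashboard for the storage node: it dials
// the node's piecestore endpoint, checks reachability through the configured
// bootstrap node, and then redraws the node ID, uptime, connection count, and
// bandwidth/disk usage each time a new stats message arrives on the stream.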
func dashCmd(cmd *cobra.Command, args []string) (err error) {
	ctx := context.Background()

	ident, err := runCfg.Identity.Load()
	if err != nil {
		zap.S().Fatal(err)
	} else {
		zap.S().Info("Node ID: ", ident.ID)
	}

	tc := transport.NewClient(ident)
	n := &pb.Node{
		Address: &pb.NodeAddress{
			Address:   dashboardCfg.Address,
			Transport: 0,
		},
		Type: pb.NodeType_STORAGE,
	}

	lc, err := psclient.NewLiteClient(ctx, tc, n)
	if err != nil {
		return err
	}

	stream, err := lc.Dashboard(ctx)
	if err != nil {
		return err
	}

	online, err := getConnectionStatus(ctx, tc, ident)
	if err != nil {
		zap.S().Errorf("error getting connection status: %s", err.Error())
	}

	for {
		data, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}

		clr()
		heading := color.New(color.FgGreen, color.Bold)

		_, _ = heading.Printf("\nStorage Node Dashboard Stats\n")
		_, _ = heading.Printf("\n===============================\n")

		fmt.Fprintf(color.Output, "Node ID: %s\n", color.YellowString(data.GetNodeId()))

		if online {
			fmt.Fprintf(color.Output, "%s ", color.GreenString("ONLINE"))
		} else {
			fmt.Fprintf(color.Output, "%s ", color.RedString("OFFLINE"))
		}

		uptime, err := ptypes.Duration(data.GetUptime())
		if err != nil {
			color.Red(" %+v \n", err)
		} else {
			color.Yellow(" %s \n", uptime)
		}

		fmt.Fprintf(color.Output, "Node Connections: %+v\n", whiteInt(data.GetNodeConnections()))

		color.Green("\nIO\t\t\tAvailable\t\t\tUsed\n--\t\t\t---------\t\t\t----")
		stats := data.GetStats()
		if stats != nil {
			fmt.Fprintf(color.Output, "Bandwidth\t\t%+v\t\t\t%+v\n", whiteInt(stats.GetAvailableBandwidth()), whiteInt(stats.GetUsedBandwidth()))
			fmt.Fprintf(color.Output, "Disk\t\t\t%+v\t\t\t%+v\n", whiteInt(stats.GetAvailableSpace()), whiteInt(stats.GetUsedSpace()))
		} else {
			color.Yellow("Loading...")
		}
	}

	return nil
}
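
// whiteInt formats an int64 value as a white-colored string for the dashboard output.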
func whiteInt(value int64) string {
	return color.WhiteString(fmt.Sprintf("%+v", value))
}

// clr clears the screen so it can be redrawn
func clr() {
	var clear = make(map[string]func())
	clear["linux"] = func() {
		cmd := exec.Command("clear")
		cmd.Stdout = os.Stdout
		err := cmd.Run()
		if err != nil {
			fmt.Printf("Linux clear screen command returned an error %+v\n", err)
		}
	}
	clear["darwin"] = func() {
		cmd := exec.Command("clear")
		cmd.Stdout = os.Stdout
		err := cmd.Run()
		if err != nil {
			fmt.Printf("MacOS clear screen command returned an error %+v\n", err)
		}
	}
	clear["windows"] = func() {
		cmd := exec.Command("cmd", "/c", "cls")
		cmd.Stdout = os.Stdout
		err := cmd.Run()
		if err != nil {
			fmt.Printf("Windows clear screen command returned an error %+v\n", err)
		}
	}

	value, ok := clear[runtime.GOOS]
	if ok {
		value()
	} else {
		panic("Your platform is unsupported! I can't clear terminal screen :(")
	}
}
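
// getConnectionStatus asks the configured bootstrap node to ping this storage
// node at its external address and reports whether the node is reachable.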
func getConnectionStatus(ctx context.Context, tc transport.Client, id *identity.FullIdentity) (bool, error) {
	bn := &pb.Node{
		Address: &pb.NodeAddress{
			Address:   dashboardCfg.BootstrapAddr,
			Transport: 0,
		},
		Type: pb.NodeType_BOOTSTRAP,
	}

	inspector, err := newInspectorClient(ctx, tc, bn)
	if err != nil {
		return false, err
	}

	resp, err := inspector.kad.PingNode(ctx, &pb.PingNodeRequest{
		Id:      id.ID,
		Address: dashboardCfg.ExternalAddress,
	})
	if err != nil {
		zap.S().Error(err)
		return false, err
	}

	if resp.GetOk() {
		return true, err
	}

	return false, err
}
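
// newInspectorClient dials the given bootstrap node and wraps the connection
// in an Inspector exposing the Kademlia inspector client.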
func newInspectorClient(ctx context.Context, tc transport.Client, bn *pb.Node) (*Inspector, error) {
	conn, err := tc.DialNode(ctx, bn)
	if err != nil {
		return &Inspector{}, err
	}

	return &Inspector{
		kad: pb.NewKadInspectorClient(conn),
	}, nil
}
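
// main hands control to the process package, which executes the root cobra
// command defined earlier in this file.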
func main() {
	process.Exec(rootCmd)
}