captplanet (#159)
* captplanet
I kind of went overboard this weekend.
The major goal of this changeset is to provide an environment
for local development where all of the various services can
be easily run together. Developing on Storj v3 should be as
easy as running a setup command and a run command!
To do this, this changeset introduces a new tool called
captplanet, which combines the powers of the Overlay Cache,
the PointerDB, the PieceStore, Kademlia, the Minio Gateway,
etc.
Running 40 farmers and a heavy client inside the same process
forced a rethinking of the "services" that we had. To
avoid the confusion of reusing prior terms, this changeset
introduces two new types: Providers and Responsibilities.
I wanted to avoid as many merge conflicts as possible, so
I left the existing Services and code for now, but if people
like this route we can clean up the duplication.
A Responsibility is a collection of gRPC methods and
corresponding state. The following systems are examples of
Responsibilities:
* Kademlia
* OverlayCache
* PointerDB
* StatDB
* PieceStore
* etc.
A Provider is a collection of Responsibilities that
share an Identity, such as:
* The heavy client
* The farmer
* The gateway
An Identity is a public/private key pair, a node id, etc.
Farmers all need different Identities, so captplanet
needs to support running multiple concurrent Providers
with different Identities.
Each Responsibility and Provider should allow multiple copies
of itself to be configured independently, so creating
Responsibilities and Providers uses a new workflow.
To make a Responsibility, one should create a "config"
struct, such as:
```
type Config struct {
    RepairThreshold  int `help:"If redundancy falls below this number of pieces, repair is triggered" default:"30"`
    SuccessThreshold int `help:"If redundancy is above this number then no additional uploads are needed" default:"40"`
}
```
To use "config" structs, this changeset introduces another
new library called 'cfgstruct', which allows for the configuration
of arbitrary structs through flagsets, and thus through cobra and
viper.
cfgstruct relies on Go's "struct tags" feature to document
help information and default values. Config structs can be
configured via cfgstruct.Bind for binding the struct to a flagset.
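For instance, here is a minimal sketch of binding the Config struct
above to a flagset (the flag-set name and the final print are
illustrative only, not part of this changeset):
```
package main

import (
    "github.com/spf13/pflag"

    "storj.io/storj/pkg/cfgstruct"
)

type Config struct {
    RepairThreshold  int `help:"If redundancy falls below this number of pieces, repair is triggered" default:"30"`
    SuccessThreshold int `help:"If redundancy is above this number then no additional uploads are needed" default:"40"`
}

func main() {
    flags := pflag.NewFlagSet("example", pflag.ExitOnError)

    var cfg Config
    // Bind walks the struct and registers one flag per field, taking
    // the usage text from the `help` tag and the initial value from
    // the `default` tag.
    cfgstruct.Bind(flags, &cfg)

    // The bound fields now behave like ordinary pflag flags.
    flags.PrintDefaults()
}
```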
Because this configuration system makes setup and configuration
easier *in general*, additional commands are provided that allow
for easy standup of separate Providers. Please make sure to
check out:
* cmd/captplanet/farmer/main.go (a new farmer binary)
* cmd/captplanet/hc/main.go (a new heavy client binary)
* cmd/captplanet/gw/main.go (a new minio gateway binary)
Usage:
```
$ go install -v storj.io/storj/cmd/captplanet
$ captplanet setup
$ captplanet run
```
Configuration is placed by default in `~/.storj/capt/`
Other changes:
* introduces new config structs for currently existing
Responsibilities that conform to the new Responsibility
interface. Please see the `pkg/*/config.go` files for
examples.
* integrates the PointerDB API key with the rest of the global
configuration via flags, instead of through environment
variables via viper as before. (Ultimately this should also
change to use the PointerDB config struct, but this is an
okay short-term solution.)
* changes the Overlay cache to use a URL for database
configuration instead of separate redis and bolt config
settings.
* stubs out some peer identity skeleton code (but not the
meat).
* fixes the SegmentStore to use the overlay and pointerdb
clients instead of calling gRPC client code directly.
* leaves a very clear spot where we need to tie the object,
stream, and segment stores together. There's sort of a "golden
spike" opportunity to connect all the train tracks together
at the bottom of pkg/miniogw/config.go, labeled with a
bunch of TODOs.
Future stuff:
* I now prefer this design over the original
pkg/process.Service thing I had been pushing before (sorry!)
* The experience of trying to have multiple farmers
configurable concurrently led me to prefer config structs
over global flags (I finally came around) or using viper
directly. I think global flags are okay sometimes but in
general going forward we should try and get all relevant
config into config structs.
* If you all like this direction, I think we can go delete my
old Service interfaces and a bunch of flags and clean up a
bunch of stuff.
* If you don't like this direction, it's no sweat at all, and
despite how much code there is here I'm not very tied to any
of this! Considering a lot of this was written between midnight
and 6 am, it might not be any good!
* bind tests

Command line flags features and cleanup (#2068)
* change BindSetup to be an option to Bind
* add process.Bind to allow composite structures
* hack fix for noprefix flags
* use tagged version of structs
Before this PR, some flags were created by calling `cfgstruct.Bind` and having their fields create a flag. Once the flags were parsed, `viper` was used to acquire all the values from them and config files, and the fields in the struct were set through the flag interface.
This doesn't work well for slices on config structs, since the flag interface can only set strings, and for a string slice the `pflag` implementation turns out to append an entry rather than set the value.
This changes three things:
1. Only have a `Bind` call instead of `Bind` and `BindSetup`, and make `BindSetup` an option instead.
2. Add a `process.Bind` call that takes in a `*cobra.Command`, binds the struct to the command's flags, and keeps track of that struct in a global map keyed by the command.
3. Use `viper` to get the values and load them into the bound configuration structs instead of using the flags to propagate the changes.
In this way, we can support whatever rich configuration we want in the config yaml files, while still getting command-line flags when important.

// Copyright (C) 2019 Storj Labs, Inc.
// See LICENSE for copying information.

package process

import (
    "context"
    "errors"
    "flag"
    "fmt"
    "log"
    "os"
    "os/signal"
    "path/filepath"
    "strings"
    "sync"
    "syscall"
    "time"

    "github.com/spf13/cobra"
    "github.com/spf13/pflag"
    "github.com/spf13/viper"
    "github.com/zeebo/errs"
    "github.com/zeebo/structs"
    "go.uber.org/zap"
    monkit "gopkg.in/spacemonkeygo/monkit.v2"
    "gopkg.in/spacemonkeygo/monkit.v2/collect"
    "gopkg.in/spacemonkeygo/monkit.v2/present"

    "storj.io/storj/pkg/cfgstruct"
    "storj.io/storj/private/version"
)

// DefaultCfgFilename is the default filename used for storing a configuration.
const DefaultCfgFilename = "config.yaml"

var (
    mon = monkit.Package()

    commandMtx sync.Mutex
    contexts   = map[*cobra.Command]context.Context{}
    cancels    = map[*cobra.Command]context.CancelFunc{}
    configs    = map[*cobra.Command][]interface{}{}
    vipers     = map[*cobra.Command]*viper.Viper{}
)

// Bind sets flags on a command that match the configuration struct
// 'config'. It ensures that the config has all of the values loaded into it
// when the command runs.
func Bind(cmd *cobra.Command, config interface{}, opts ...cfgstruct.BindOpt) {
    commandMtx.Lock()
    defer commandMtx.Unlock()

    cfgstruct.Bind(cmd.Flags(), config, opts...)
    configs[cmd] = append(configs[cmd], config)
}
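A hedged sketch of how a caller wires a config struct to a command with
process.Bind (RunConfig, its field, and the run command are
illustrative assumptions, not code from this file):
```
package main

import (
    "github.com/spf13/cobra"

    "storj.io/storj/pkg/process"
)

// RunConfig is a hypothetical config struct for illustration.
type RunConfig struct {
    Addr string `help:"address to listen on" default:":7777"`
}

var (
    runCfg RunConfig
    runCmd = &cobra.Command{
        Use:   "run",
        Short: "run the service",
        RunE: func(cmd *cobra.Command, args []string) error {
            // By the time RunE executes, runCfg has been populated from
            // flags and from config-file values loaded through viper.
            _ = runCfg.Addr
            return nil
        },
    }
)

func init() {
    // Registers runCfg's fields as flags on runCmd and records the
    // struct so its values are loaded when the command runs.
    process.Bind(runCmd, &runCfg)
}
```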

// Exec runs a Cobra command. If a "config-dir" flag is defined it will be parsed
// and loaded using viper.
func Exec(cmd *cobra.Command) {
    ExecWithCustomConfig(cmd, LoadConfig)
}

// ExecWithCustomConfig runs a Cobra command. Custom configuration can be loaded.
func ExecWithCustomConfig(cmd *cobra.Command, loadConfig func(cmd *cobra.Command, vip *viper.Viper) error) {
    cmd.AddCommand(&cobra.Command{
        Use:         "version",
        Short:       "output the version's build information, if any",
        RunE:        cmdVersion,
        Annotations: map[string]string{"type": "setup"}})

    exe, err := os.Executable()
    if err == nil {
        cmd.Use = exe
    }

    pflag.CommandLine.AddGoFlagSet(flag.CommandLine)
    cleanup(cmd, loadConfig)
    err = cmd.Execute()
    if err != nil {
        os.Exit(1)
    }
}
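Exec is typically the whole entry point; a hedged sketch ("myservice"
and the root command are placeholders):
```
package main

import (
    "github.com/spf13/cobra"

    "storj.io/storj/pkg/process"
)

func main() {
    // Exec adds the "version" subcommand, loads configuration with
    // LoadConfig, runs the command, and exits with status 1 on error.
    rootCmd := &cobra.Command{Use: "myservice"}
    process.Exec(rootCmd)
}
```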
|
|
|
|
|
|
|
|
// Ctx returns the appropriate context.Context for ExecuteWithConfig commands
|
2019-09-19 17:37:40 +01:00
|
|
|
func Ctx(cmd *cobra.Command) (context.Context, context.CancelFunc) {
|
2019-08-05 18:01:20 +01:00
|
|
|
commandMtx.Lock()
|
|
|
|
defer commandMtx.Unlock()
|
|
|
|
|
captplanet (#159)
* captplanet
I kind of went overboard this weekend.
The major goal of this changeset is to provide an environment
for local development where all of the various services can
be easily run together. Developing on Storj v3 should be as
easy as running a setup command and a run command!
To do this, this changeset introduces a new tool called
captplanet, which combines the powers of the Overlay Cache,
the PointerDB, the PieceStore, Kademlia, the Minio Gateway,
etc.
Running 40 farmers and a heavy client inside the same process
forced a rethinking of the "services" that we had. To
avoid confusion by reusing prior terms, this changeset
introduces two new types: Providers and Responsibilities.
I wanted to avoid as many merge conflicts as possible, so
I left the existing Services and code for now, but if people
like this route we can clean up the duplication.
A Responsibility is a collection of gRPC methods and
corresponding state. The following systems are examples of
Responsibilities:
* Kademlia
* OverlayCache
* PointerDB
* StatDB
* PieceStore
* etc.
A Provider is a collection of Responsibilities that
share an Identity, such as:
* The heavy client
* The farmer
* The gateway
An Identity is a public/private key pair, a node id, etc.
Farmers all need different Identities, so captplanet
needs to support running multiple concurrent Providers
with different Identities.
Each Responsibility and Provider should allow for configuration
of multiple copies on its own so creating Responsibilities and
Providers use a new workflow.
To make a Responsibility, one should create a "config"
struct, such as:
```
type Config struct {
RepairThreshold int `help:"If redundancy falls below this number of
pieces, repair is triggered" default:"30"`
SuccessThreshold int `help:"If redundancy is above this number then
no additional uploads are needed" default:"40"`
}
```
To use "config" structs, this changeset introduces another
new library called 'cfgstruct', which allows for the configuration
of arbitrary structs through flagsets, and thus through cobra and
viper.
cfgstruct relies on Go's "struct tags" feature to document
help information and default values. Config structs can be
configured via cfgstruct.Bind for binding the struct to a flagset.
Because this configuration system makes setup and configuration
easier *in general*, additional commands are provided that allow
for easy standup of separate Providers. Please make sure to
check out:
* cmd/captplanet/farmer/main.go (a new farmer binary)
* cmd/captplanet/hc/main.go (a new heavy client binary)
* cmd/captplanet/gw/main.go (a new minio gateway binary)
Usage:
```
$ go install -v storj.io/storj/cmd/captplanet
$ captplanet setup
$ captplanet run
```
Configuration is placed by default in `~/.storj/capt/`
Other changes:
* introduces new config structs for currently existing
Responsibilities that conform to the new Responsibility
interface. Please see the `pkg/*/config.go` files for
examples.
* integrates the PointerDB API key with other global
configuration via flags, instead of through environment
variables through viper like it's been doing. (ultimately
this should also change to use the PointerDB config
struct but this is an okay shortterm solution).
* changes the Overlay cache to use a URL for database
configuration instead of separate redis and bolt config
settings.
* stubs out some peer identity skeleton code (but not the
meat).
* Fixes the SegmentStore to use the overlay client and
pointerdb clients instead of gRPC client code directly
* Leaves a very clear spot where we need to tie the object to
stream to segment store together. There's sort of a "golden
spike" opportunity to connect all the train tracks together
at the bottom of pkg/miniogw/config.go, labeled with a
bunch of TODOs.
Future stuff:
* I now prefer this design over the original
pkg/process.Service thing I had been pushing before (sorry!)
* The experience of trying to have multiple farmers
configurable concurrently led me to prefer config structs
over global flags (I finally came around) or using viper
directly. I think global flags are okay sometimes but in
general going forward we should try and get all relevant
config into config structs.
* If you all like this direction, I think we can go delete my
old Service interfaces and a bunch of flags and clean up a
bunch of stuff.
* If you don't like this direction, it's no sweat at all, and
despite how much code there is here I'm not very tied to any
of this! Considering a lot of this was written between midnight
and 6 am, it might not be any good!
* bind tests
2018-07-24 17:08:28 +01:00
|
|
|
ctx := contexts[cmd]
|
|
|
|
if ctx == nil {
|
2018-10-04 14:52:12 +01:00
|
|
|
ctx = context.Background()
|
2019-09-19 17:37:40 +01:00
|
|
|
contexts[cmd] = ctx
|
	}

	cancel := cancels[cmd]
	if cancel == nil {
		ctx, cancel = context.WithCancel(ctx)
		contexts[cmd] = ctx
		cancels[cmd] = cancel

		c := make(chan os.Signal, 1)
		signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
		go func() {
			sig := <-c
			log.Printf("Got a signal from the OS: %q", sig)
			signal.Stop(c)
			cancel()
		}()
	}

	return ctx, cancel
}
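
// Usage sketch for the function above (hedged: its exported name and signature
// are not shown in this excerpt; Ctx and doWork below are assumed names):
//
//	func runE(cmd *cobra.Command, args []string) error {
//		ctx, cancel := Ctx(cmd) // per-command context from the cache above
//		defer cancel()
//		return doWork(ctx) // ctx is canceled when SIGINT or SIGTERM arrives
//	}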

// Viper returns the appropriate *viper.Viper for the command, creating if necessary.
func Viper(cmd *cobra.Command) (*viper.Viper, error) {
	return ViperWithCustomConfig(cmd, LoadConfig)
}

// ViperWithCustomConfig returns the appropriate *viper.Viper for the command, creating if
// necessary. Custom config load logic can be defined with the "loadConfig" parameter.
func ViperWithCustomConfig(cmd *cobra.Command, loadConfig func(cmd *cobra.Command, vip *viper.Viper) error) (*viper.Viper, error) {
	commandMtx.Lock()
	defer commandMtx.Unlock()

	if vip := vipers[cmd]; vip != nil {
		return vip, nil
	}

	vip := viper.New()
	if err := vip.BindPFlags(cmd.Flags()); err != nil {
		return nil, err
	}
	vip.SetEnvPrefix("storj")
	vip.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_"))
	vip.AutomaticEnv()
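	// With the "storj" prefix and the key replacer above, a config key such as
	// server.address can also be provided through the environment as
	// STORJ_SERVER_ADDRESS ("server.address" is an illustrative key, not one
	// defined in this excerpt).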

	err := loadConfig(cmd, vip)
	if err != nil {
		return nil, err
	}

	vipers[cmd] = vip
	return vip, nil
}
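
// Usage sketch (hedged: assumes this package is imported as "process" and the
// command's flags are already bound; "server.address" is an illustrative key):
//
//	vip, err := process.Viper(cmd)
//	if err != nil {
//		return err
//	}
//	// Resolution is handled by viper: explicitly set flags win, then STORJ_*
//	// environment variables, then the config file loaded by loadConfig.
//	addr := vip.GetString("server.address")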

// LoadConfig loads configuration into *viper.Viper from the file specified with the
// "config-dir" flag.
func LoadConfig(cmd *cobra.Command, vip *viper.Viper) error {
	cfgFlag := cmd.Flags().Lookup("config-dir")
	if cfgFlag != nil && cfgFlag.Value.String() != "" {
		path := filepath.Join(os.ExpandEnv(cfgFlag.Value.String()), DefaultCfgFilename)
		if cmd.Annotations["type"] != "setup" || fileExists(path) {
			vip.SetConfigFile(path)
			if err := vip.ReadInConfig(); err != nil {
				return err
			}
		}
	}
	return nil
}
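
// Behavior sketch (hedged; DefaultCfgFilename and fileExists are referenced
// above but defined outside this excerpt): for --config-dir=$HOME/.storj,
// LoadConfig reads $HOME/.storj/<DefaultCfgFilename>. Commands annotated
// type=setup only read the file if it already exists, presumably because
// setup is what creates it in the first place.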

var traceOut = flag.String("debug.trace-out", "", "If set, a path to write a process trace SVG to")
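// Note that traceOut is registered on the standard library flag set rather
// than a command's pflag set; the flag.Lookup fallback in the propagation
// loop inside cleanup below can still fill such flags from config values.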

func cleanup(cmd *cobra.Command, loadConfig func(cmd *cobra.Command, vip *viper.Viper) error) {
	for _, ccmd := range cmd.Commands() {
		cleanup(ccmd, loadConfig)
	}

	if cmd.Run != nil {
		panic("Please use cobra's RunE instead of Run")
	}

	internalRun := cmd.RunE
	if internalRun == nil {
		return
	}
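
	// Hedged summary of the wrapper below: it replaces the command's RunE so
	// that viper configuration is loaded and decoded into the registered
	// config structs before the command's saved internalRun logic executes.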
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := context.Background()
		defer mon.TaskNamed("root")(&ctx)(&err)

		vip, err := ViperWithCustomConfig(cmd, loadConfig)
		if err != nil {
			return err
		}

		commandMtx.Lock()
		configValues := configs[cmd]
		commandMtx.Unlock()

		var (
			brokenKeys  = map[string]struct{}{}
			missingKeys = map[string]struct{}{}
			usedKeys    = map[string]struct{}{}
			allKeys     = map[string]struct{}{}
			allSettings = vip.AllSettings()
		)

		// Hacky hack: these two keys are noprefix which breaks all scoping
		if val, ok := allSettings["api-key"]; ok {
			allSettings["legacy.client.api-key"] = val
			delete(allSettings, "api-key")
		}
		if val, ok := allSettings["satellite-addr"]; ok {
			allSettings["legacy.client.satellite-addr"] = val
			delete(allSettings, "satellite-addr")
		}
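		// Illustration of the remapping above (values hypothetical): a
		// top-level `api-key: abc` in the config file is consumed as if it
		// had been written `legacy.client.api-key: abc`.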

		for _, config := range configValues {
			// Decode and add all of the resulting keys into our sets.
			res := structs.Decode(allSettings, config)
			for key := range res.Used {
				usedKeys[key] = struct{}{}
				allKeys[key] = struct{}{}
			}
			for key := range res.Missing {
				missingKeys[key] = struct{}{}
				allKeys[key] = struct{}{}
			}
			for key := range res.Broken {
				brokenKeys[key] = struct{}{}
				allKeys[key] = struct{}{}
			}
		}
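		// Hedged reading (structs.Decode itself is not shown here): usedKeys
		// holds settings some config struct consumed, missingKeys holds
		// settings no struct field matched, and brokenKeys holds settings
		// that matched a field but failed to decode.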

		// Propagate keys that are missing to flags, and remove any used keys
		// from the missing set.
		for key := range missingKeys {
			if f := cmd.Flags().Lookup(key); f != nil {
				val := vip.GetString(key)
				err := f.Value.Set(val)
				f.Changed = val != f.DefValue
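				// Presumably so later f.Changed checks treat a config-file
				// value like one passed explicitly on the command line
				// (hedged: those checks are outside this excerpt).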
				if err != nil {
					brokenKeys[key] = struct{}{}
				} else {
					usedKeys[key] = struct{}{}
				}
			} else if f := flag.Lookup(key); f != nil {
				err := f.Value.Set(vip.GetString(key))
				if err != nil {
					brokenKeys[key] = struct{}{}
				} else {
					usedKeys[key] = struct{}{}
				}
			}
		}
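		// Anything successfully propagated to a flag above is no longer
		// considered missing.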
		for key := range missingKeys {
			if _, ok := usedKeys[key]; ok {
				delete(missingKeys, key)
			}
		}

		logger, err := NewLogger()
		if err != nil {
			return err
		}

		if vip.ConfigFileUsed() != "" {
			path, err := filepath.Abs(vip.ConfigFileUsed())
			if err != nil {
				path = vip.ConfigFileUsed()
				logger.Debug("unable to resolve path", zap.Error(err))
			}

			logger.Sugar().Info("Configuration loaded from: ", path)
		}
captplanet (#159)
2018-07-24 17:08:28 +01:00
|
|
|
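// flush any buffered log entries before the process exits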
defer func() { _ = logger.Sync() }()
|
|
|
|
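// install logger as zap's global logger; the returned undo func is deferred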
defer zap.ReplaceGlobals(logger)()
|
|
|
|
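// route the standard library's log output through logger as well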
defer zap.RedirectStdLog(logger)()
|
|
|
|
|
|
|
|
// okay now that logging is working, report any missing or invalid configuration keys
|
2019-02-14 13:06:15 +00:00
|
|
|
if cmd.Annotations["type"] != "helper" {
|
Command line flags features and cleanup (#2068)
* change BindSetup to be an option to Bind
* add process.Bind to allow composite structures
* hack fix for noprefix flags
* used tagged version of structs
Before this PR, some flags were created by calling `cfgstruct.Bind`, which registered a flag for each field of the struct. Once the flags were parsed, `viper` was used to gather all the values from them and from config files, and the fields in the struct were set through the flag interface.
This doesn't work well for slice fields on config structs, since the flag interface can only set strings, and for a string slice the `pflag` implementation appends an entry rather than replacing the value.
This changes three things:
1. Only have a `Bind` call instead of `Bind` and `BindSetup`, and make `BindSetup` an option instead.
2. Add a `process.Bind` call that takes in a `*cobra.Cmd`, binds the struct to the command's flags, and keeps track of that struct in a global map keyed by the command.
3. Use `viper` to get the values and load them into the bound configuration structs instead of using the flags to propagate the changes.
In this way, we can support whatever rich configuration we want in the config yaml files, while still getting command line flags when they matter.
2019-05-29 18:56:22 +01:00
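A minimal sketch of that flow, using a hypothetical Config and command; only `process.Bind` and `process.Exec` themselves are taken from the change described above, everything else is illustrative:
```
package main

import (
	"fmt"
	"time"

	"github.com/spf13/cobra"

	"storj.io/storj/pkg/process"
)

// Config is a hypothetical config struct; real ones live in pkg/*/config.go.
type Config struct {
	Interval time.Duration `help:"how often to run" default:"1h"`
	Names    []string      `help:"names to greet" default:""`
}

func newExampleCmd() *cobra.Command {
	var conf Config
	cmd := &cobra.Command{
		Use: "example",
		RunE: func(cmd *cobra.Command, args []string) error {
			// By the time RunE executes, viper has loaded values from
			// flags and the config yaml into conf, so slice fields like
			// Names arrive fully set rather than appended-to.
			fmt.Println(conf.Interval, conf.Names)
			return nil
		},
	}
	// Binds conf's fields to cmd's flags and records &conf in the
	// global map keyed by cmd, so values can be filled in before RunE.
	process.Bind(cmd, &conf)
	return cmd
}

func main() {
	process.Exec(newExampleCmd())
}
```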
|
|
|
for key := range missingKeys {
|
2019-02-14 13:06:15 +00:00
|
|
|
logger.Sugar().Infof("Invalid configuration file key: %s", key)
|
|
|
|
}
|
2018-07-30 08:38:31 +01:00
|
|
|
}
|
Command line flags features and cleanup (#2068)
2019-05-29 18:56:22 +01:00
|
|
|
for key := range brokenKeys {
|
2018-07-30 08:38:31 +01:00
|
|
|
logger.Sugar().Infof("Invalid configuration file value for key: %s", key)
|
captplanet (#159)
2018-07-24 17:08:28 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
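// start the process's debug endpoints; failure is logged below but is not fatal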
err = initDebug(logger, monkit.Default)
|
|
|
|
if err != nil {
|
2019-09-10 08:32:53 +01:00
|
|
|
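// rebuild the error from its message alone so no stack trace ends up in the log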
withoutStack := errors.New(err.Error())
|
|
|
|
logger.Debug("failed to start debug endpoints", zap.Error(withoutStack))
|
captplanet (#159)
2018-07-24 17:08:28 +01:00
|
|
|
}
|
|
|
|
|
2019-04-19 16:34:29 +01:00
|
|
|
var workErr error
|
|
|
|
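// wrap the real command body so its context is registered in the global
// contexts map (under commandMtx) for the duration of the run, and its
// entries in contexts and cancels are cleaned up afterwards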
work := func(ctx context.Context) {
|
2019-08-05 18:01:20 +01:00
|
|
|
commandMtx.Lock()
|
2019-04-19 16:34:29 +01:00
|
|
|
contexts[cmd] = ctx
|
2019-08-05 18:01:20 +01:00
|
|
|
commandMtx.Unlock()
|
2019-04-19 16:34:29 +01:00
|
|
|
defer func() {
|
2019-08-05 18:01:20 +01:00
|
|
|
commandMtx.Lock()
|
2019-04-19 16:34:29 +01:00
|
|
|
delete(contexts, cmd)
|
2019-09-19 17:37:40 +01:00
|
|
|
delete(cancels, cmd)
|
2019-08-05 18:01:20 +01:00
|
|
|
commandMtx.Unlock()
|
2019-04-19 16:34:29 +01:00
|
|
|
}()
|
|
|
|
|
|
|
|
workErr = internalRun(cmd, args)
|
|
|
|
}
|
|
|
|
|
|
|
|
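// if a trace output path was given, run the command inside a monkit span
// collector and render the spans as JSON or SVG based on the file extension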
if *traceOut != "" {
|
|
|
|
fh, err := os.Create(*traceOut)
|
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
2019-08-05 19:28:41 +01:00
|
|
|
if strings.HasSuffix(*traceOut, ".json") {
|
|
|
|
err = present.SpansToJSON(fh, collect.CollectSpans(ctx, work))
|
|
|
|
} else {
|
|
|
|
err = present.SpansToSVG(fh, collect.CollectSpans(ctx, work))
|
|
|
|
}
|
2019-04-19 16:34:29 +01:00
|
|
|
err = errs.Combine(err, fh.Close())
|
|
|
|
if err != nil {
|
|
|
|
logger.Error("failed to write svg", zap.Error(err))
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
work(ctx)
|
|
|
|
}
|
captplanet (#159)
2018-07-24 17:08:28 +01:00
|
|
|
|
2019-04-19 16:34:29 +01:00
|
|
|
err = workErr
|
captplanet (#159)
2018-07-24 17:08:28 +01:00
|
|
|
if err != nil {
|
2019-08-16 15:53:22 +01:00
|
|
|
// logger.Fatal logs the error and then calls os.Exit(1)
|
|
|
|
logger.Fatal("Unrecoverable error", zap.Error(err))
|
captplanet (#159)
2018-07-24 17:08:28 +01:00
|
|
|
}
|
2019-08-16 15:53:22 +01:00
|
|
|
|
|
|
|
return nil
|
captplanet (#159)
2018-07-24 17:08:28 +01:00
|
|
|
}
|
|
|
|
}
|
2019-04-10 07:38:26 +01:00
|
|
|
|
|
|
|
func cmdVersion(cmd *cobra.Command, args []string) (err error) {
|
|
|
|
if version.Build.Release {
|
|
|
|
fmt.Println("Release build")
|
|
|
|
} else {
|
|
|
|
fmt.Println("Development build")
|
|
|
|
}
|
|
|
|
|
2019-10-21 11:50:59 +01:00
|
|
|
if !version.Build.Version.IsZero() {
|
2019-04-10 07:38:26 +01:00
|
|
|
fmt.Println("Version:", version.Build.Version.String())
|
|
|
|
}
|
|
|
|
if !version.Build.Timestamp.IsZero() {
|
|
|
|
fmt.Println("Build timestamp:", version.Build.Timestamp.Format(time.RFC822))
|
|
|
|
}
|
|
|
|
if version.Build.CommitHash != "" {
|
|
|
|
fmt.Println("Git commit:", version.Build.CommitHash)
|
|
|
|
}
|
|
|
|
return nil
|
|
|
|
}
|