storj/cmd/storagenode/main.go

// Copyright (C) 2019 Storj Labs, Inc.
// See LICENSE for copying information.
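
// The storagenode command is the command-line interface for operating a Storj
// storage node. It exposes setup, run, config, diag, and dashboard subcommands;
// a typical workflow (a sketch, based on the subcommands defined below) is
// "storagenode setup" to create the config files, then "storagenode run".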
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"text/tabwriter"
"github.com/spf13/cobra"
"github.com/zeebo/errs"
"go.uber.org/zap"
"storj.io/storj/internal/fpath"
"storj.io/storj/pkg/cfgstruct"
"storj.io/storj/pkg/pb"
"storj.io/storj/pkg/process"
"storj.io/storj/pkg/storj"
"storj.io/storj/storagenode"
"storj.io/storj/storagenode/storagenodedb"
)

// StorageNodeFlags defines storage node configuration
type StorageNodeFlags struct {
	EditConf        bool `default:"false" help:"open config in default editor"`
	SaveAllDefaults bool `default:"false" help:"save all default values to config.yaml file" setup:"true"`

	storagenode.Config
}

// Inspector holds the kad client for node inspection
type Inspector struct {
	kad pb.KadInspectorClient
}

var (
	rootCmd = &cobra.Command{
		Use:   "storagenode",
		Short: "StorageNode",
	}
	runCmd = &cobra.Command{
		Use:   "run",
		Short: "Run the storagenode",
		RunE:  cmdRun,
	}
	setupCmd = &cobra.Command{
		Use:         "setup",
		Short:       "Create config files",
		RunE:        cmdSetup,
		Annotations: map[string]string{"type": "setup"},
	}
	configCmd = &cobra.Command{
		Use:         "config",
		Short:       "Edit config files",
		RunE:        cmdConfig,
		Annotations: map[string]string{"type": "setup"},
	}
	diagCmd = &cobra.Command{
		Use:         "diag",
		Short:       "Diagnostic Tool support",
		RunE:        cmdDiag,
		Annotations: map[string]string{"type": "helper"},
	}
	dashboardCmd = &cobra.Command{
		Use:         "dashboard",
Short: "Display a dashbaord",
		RunE:        dashCmd,
		Annotations: map[string]string{"type": "helper"},
	}

	runCfg       StorageNodeFlags
	setupCfg     StorageNodeFlags
	diagCfg      storagenode.Config
	dashboardCfg struct {
		Address         string `default:":28967" help:"address for dashboard service"`
		ExternalAddress string `default:":28967" help:"address that your node is listening on if using a tunneling service"`
		BootstrapAddr   string `default:"bootstrap.storj.io:8888" help:"address of server the storage node was bootstrapped against"`
	}

	defaultConfDir = fpath.ApplicationDir("storj", "storagenode")
	// TODO: this path should be defined somewhere else
	defaultIdentityDir = fpath.ApplicationDir("storj", "identity", "storagenode")

	defaultDiagDir string
	confDir        string
	identityDir    string
	useColor       bool
)

const (
	defaultServerAddr = ":28967"
)
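
// init registers the storagenode subcommands on rootCmd and binds each
// command's flag set to its configuration struct via cfgstruct.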
func init() {
	confDirParam := cfgstruct.FindConfigDirParam()
	if confDirParam != "" {
		defaultConfDir = confDirParam
	}
	identityDirParam := cfgstruct.FindIdentityDirParam()
	if identityDirParam != "" {
		defaultIdentityDir = identityDirParam
	}

	rootCmd.PersistentFlags().StringVar(&confDir, "config-dir", defaultConfDir, "main directory for storagenode configuration")
	err := rootCmd.PersistentFlags().SetAnnotation("config-dir", "setup", []string{"true"})
	if err != nil {
		zap.S().Error("Failed to set 'setup' annotation for 'config-dir'")
	}
	rootCmd.PersistentFlags().StringVar(&identityDir, "identity-dir", defaultIdentityDir, "main directory for storagenode identity credentials")
	err = rootCmd.PersistentFlags().SetAnnotation("identity-dir", "setup", []string{"true"})
	if err != nil {
zap.S().Error("Failed to set 'setup' annotation for 'config-dir'")
	}
	rootCmd.PersistentFlags().BoolVar(&useColor, "color", false, "use color in user interface")

	defaultDiagDir = filepath.Join(defaultConfDir, "storage")

	rootCmd.AddCommand(runCmd)
	rootCmd.AddCommand(setupCmd)
	rootCmd.AddCommand(configCmd)
	rootCmd.AddCommand(diagCmd)
	rootCmd.AddCommand(dashboardCmd)

	cfgstruct.Bind(runCmd.Flags(), &runCfg, cfgstruct.ConfDir(defaultConfDir), cfgstruct.IdentityDir(defaultIdentityDir))
	cfgstruct.BindSetup(setupCmd.Flags(), &setupCfg, cfgstruct.ConfDir(defaultConfDir), cfgstruct.IdentityDir(defaultIdentityDir))
	cfgstruct.BindSetup(configCmd.Flags(), &setupCfg, cfgstruct.ConfDir(defaultConfDir), cfgstruct.IdentityDir(defaultIdentityDir))
	cfgstruct.Bind(diagCmd.Flags(), &diagCfg, cfgstruct.ConfDir(defaultConfDir), cfgstruct.IdentityDir(defaultIdentityDir))
	cfgstruct.Bind(dashboardCmd.Flags(), &dashboardCfg, cfgstruct.ConfDir(defaultDiagDir))
}
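
// databaseConfig builds the storagenodedb configuration from the storage node
// config: the piece storage path, the piecestore.db info file inside it, and
// the Kademlia database path.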
func databaseConfig(config storagenode.Config) storagenodedb.Config {
	return storagenodedb.Config{
		Storage:  config.Storage.Path,
		Info:     filepath.Join(config.Storage.Path, "piecestore.db"),
		Kademlia: config.Kademlia.DBPath,
	}
}
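// cmdRun loads the node identity, verifies the run configuration, opens the
// master database and creates its tables, then runs the storage node peer
// until it stops.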
func cmdRun(cmd *cobra.Command, args []string) (err error) {
log := zap.L()
identity, err := runCfg.Identity.Load()
if err != nil {
zap.S().Fatal(err)
}
if err := runCfg.Verify(log); err != nil {
log.Sugar().Error("Invalid configuration: ", err)
return err
}
ctx := process.Ctx(cmd)
if err := process.InitMetricsWithCertPath(ctx, nil, runCfg.Identity.CertPath); err != nil {
zap.S().Error("Failed to initialize telemetry batcher: ", err)
}
db, err := storagenodedb.New(databaseConfig(runCfg.Config))
if err != nil {
return errs.New("Error starting master database on storagenode: %+v", err)
}
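
// make sure the database is closed on exit and surface any close error alongside err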
defer func() {
err = errs.Combine(err, db.Close())
}()
err = db.CreateTables()
if err != nil {
return errs.New("Error creating tables for master database on storagenode: %+v", err)
}
peer, err := storagenode.New(log, identity, db, runCfg.Config)
if err != nil {
return err
}
runError := peer.Run(ctx)
closeError := peer.Close()
return errs.Combine(runError, closeError)
}
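
// cmdSetup creates the storage node configuration directory and writes a
// config.yaml with defaults (optionally including every default value), then
// opens it in an editor when requested.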
func cmdSetup(cmd *cobra.Command, args []string) (err error) {
setupDir, err := filepath.Abs(confDir)
if err != nil {
return err
}
valid, _ := fpath.IsValidSetupDir(setupDir)
if !valid {
return fmt.Errorf("storagenode configuration already exists (%v)", setupDir)
}
err = os.MkdirAll(setupDir, 0700)
if err != nil {
return err
}
overrides := map[string]interface{}{
"log.level": "info",
}
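
// fall back to the default server address unless the operator set one explicitly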
serverAddress := cmd.Flag("server.address")
if !serverAddress.Changed {
overrides[serverAddress.Name] = defaultServerAddr
}
configFile := filepath.Join(setupDir, "config.yaml")
if setupCfg.SaveAllDefaults {
err = process.SaveConfigWithAllDefaults(cmd.Flags(), configFile, overrides)
} else {
err = process.SaveConfig(cmd.Flags(), configFile, overrides)
}
if err != nil {
return err
}
if setupCfg.EditConf {
return fpath.EditFile(configFile)
}
return err
}
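
// cmdConfig opens the storage node's config.yaml in an editor, falling back
// to setup when the config file does not exist yet.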
func cmdConfig(cmd *cobra.Command, args []string) (err error) {
setupDir, err := filepath.Abs(confDir)
if err != nil {
return err
}
// run setup if we cannot access the config file
conf := filepath.Join(setupDir, "config.yaml")
if _, err := os.Stat(conf); err != nil {
return cmdSetup(cmd, args)
}
return fpath.EditFile(conf)
}
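
// cmdDiag reads the bandwidth agreements stored in the storage node's master
// database and prints a per-satellite usage summary.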
func cmdDiag(cmd *cobra.Command, args []string) (err error) {
diagDir, err := filepath.Abs(confDir)
if err != nil {
return err
}
// check if the directory exists
_, err = os.Stat(diagDir)
if err != nil {
fmt.Println("Storagenode directory doesn't exist", diagDir)
return err
}
db, err := storagenodedb.New(databaseConfig(diagCfg))
if err != nil {
return errs.New("Error starting master database on storagenode: %v", err)
}
defer func() {
err = errs.Combine(err, db.Close())
}()
// get all bandwidth agreement entries, already ordered
bwAgreements, err := db.PSDB().GetBandwidthAllocations()
if err != nil {
fmt.Printf("storage node 'bandwidth_agreements' table read error: %v\n", err)
return err
}
// SatelliteSummary aggregates bandwidth usage and transaction counts for a single satellite
type SatelliteSummary struct {
TotalBytes int64
PutActionCount int64
GetActionCount int64
GetAuditActionCount int64
GetRepairActionCount int64
PutRepairActionCount int64
TotalTransactions int64
// additional attributes go here ...
}
// per-satellite summaries, keyed by satellite ID
summaries := make(map[storj.NodeID]*SatelliteSummary)
satelliteIDs := storj.NodeIDList{}
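
// walk all stored agreements and accumulate per-satellite totals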
for _, rbaVal := range bwAgreements {
for _, rbaDataVal := range rbaVal {
rba := rbaDataVal.Agreement
pba := rba.PayerAllocation
summary, ok := summaries[pba.SatelliteId]
if !ok {
summaries[pba.SatelliteId] = &SatelliteSummary{}
satelliteIDs = append(satelliteIDs, pba.SatelliteId)
summary = summaries[pba.SatelliteId]
}
// fill the summary info
summary.TotalBytes += rba.Total
summary.TotalTransactions++
switch pba.Action {
case pb.BandwidthAction_PUT:
summary.PutActionCount++
case pb.BandwidthAction_GET:
summary.GetActionCount++
case pb.BandwidthAction_GET_AUDIT:
summary.GetAuditActionCount++
case pb.BandwidthAction_GET_REPAIR:
summary.GetRepairActionCount++
case pb.BandwidthAction_PUT_REPAIR:
summary.PutRepairActionCount++
}
}
}
// initialize the table header (fields)
const padding = 3
w := tabwriter.NewWriter(os.Stdout, 0, 0, padding, ' ', tabwriter.AlignRight|tabwriter.Debug)
fmt.Fprintln(w, "SatelliteID\tTotal\t# Of Transactions\tPUT Action\tGET Action\tGET (Audit) Action\tGET (Repair) Action\tPUT (Repair) Action\t")
// sort the satellite IDs for deterministic output, then populate the row fields
sort.Sort(satelliteIDs)
for _, satelliteID := range satelliteIDs {
summary := summaries[satelliteID]
fmt.Fprint(w, satelliteID, "\t", summary.TotalBytes, "\t", summary.TotalTransactions, "\t",
summary.PutActionCount, "\t", summary.GetActionCount, "\t", summary.GetAuditActionCount,
"\t", summary.GetRepairActionCount, "\t", summary.PutRepairActionCount, "\t\n")
}
// flush the buffered table to stdout
err = w.Flush()
return err
}
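// main runs the root storagenode command via the process package.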
func main() {
process.Exec(rootCmd)
}