storj/cmd/storagenode/main.go

// Copyright (C) 2020 Storj Labs, Inc.
// See LICENSE for copying information.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"text/tabwriter"
	"time"
"github.com/spf13/cobra"
"github.com/zeebo/errs"
"go.uber.org/zap"
"storj.io/common/fpath"
"storj.io/common/memory"
"storj.io/common/storj"
"storj.io/storj/pkg/cfgstruct"
"storj.io/storj/pkg/process"
"storj.io/storj/pkg/revocation"
"storj.io/storj/private/version"
"storj.io/storj/storagenode"
"storj.io/storj/storagenode/storagenodedb"
)

// StorageNodeFlags defines storage node configuration
type StorageNodeFlags struct {
	EditConf bool `default:"false" help:"open config in default editor"`

	storagenode.Config

	Deprecated
}
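
// The storagenode binary is a cobra command tree: rootCmd carries the
// persistent flags shared by every subcommand, and init() binds each
// subcommand's configuration struct to its flag set via process.Bind.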
var (
	rootCmd = &cobra.Command{
		Use:   "storagenode",
		Short: "StorageNode",
	}
	runCmd = &cobra.Command{
		Use:   "run",
		Short: "Run the storagenode",
		RunE:  cmdRun,
	}
	setupCmd = &cobra.Command{
		Use:         "setup",
		Short:       "Create config files",
		RunE:        cmdSetup,
		Annotations: map[string]string{"type": "setup"},
	}
	configCmd = &cobra.Command{
		Use:         "config",
		Short:       "Edit config files",
		RunE:        cmdConfig,
		Annotations: map[string]string{"type": "setup"},
	}
	diagCmd = &cobra.Command{
		Use:         "diag",
		Short:       "Diagnostic Tool support",
		RunE:        cmdDiag,
		Annotations: map[string]string{"type": "helper"},
	}
	dashboardCmd = &cobra.Command{
		Use:         "dashboard",
Short: "Display a dashboard",
RunE: cmdDashboard,
Annotations: map[string]string{"type": "helper"},
}
gracefulExitInitCmd = &cobra.Command{
Use: "exit-satellite",
Short: "Initiate graceful exit",
RunE: cmdGracefulExitInit,
Annotations: map[string]string{"type": "helper"},
}
gracefulExitStatusCmd = &cobra.Command{
Use: "exit-status",
Short: "Display graceful exit status",
RunE: cmdGracefulExitStatus,
Annotations: map[string]string{"type": "helper"},
}
runCfg StorageNodeFlags
setupCfg StorageNodeFlags
diagCfg storagenode.Config
dashboardCfg struct {
		Address string `default:"127.0.0.1:7778" help:"address for dashboard service"`
	}
	defaultDiagDir string
	confDir        string
	identityDir    string
	useColor       bool
)
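
// Default listen addresses written into config.yaml by `storagenode setup`
// when the operator does not override them on the command line.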
const (
	defaultServerAddr        = ":28967"
	defaultPrivateServerAddr = "127.0.0.1:7778"
)

func init() {
	defaultConfDir := fpath.ApplicationDir("storj", "storagenode")
	defaultIdentityDir := fpath.ApplicationDir("storj", "identity", "storagenode")
	defaultDiagDir = filepath.Join(defaultConfDir, "storage")
	cfgstruct.SetupFlag(zap.L(), rootCmd, &confDir, "config-dir", defaultConfDir, "main directory for storagenode configuration")
	cfgstruct.SetupFlag(zap.L(), rootCmd, &identityDir, "identity-dir", defaultIdentityDir, "main directory for storagenode identity credentials")
	defaults := cfgstruct.DefaultsFlag(rootCmd)
	rootCmd.PersistentFlags().BoolVar(&useColor, "color", false, "use color in user interface")
	rootCmd.AddCommand(runCmd)
	rootCmd.AddCommand(setupCmd)
	rootCmd.AddCommand(configCmd)
	rootCmd.AddCommand(diagCmd)
	rootCmd.AddCommand(dashboardCmd)
	rootCmd.AddCommand(gracefulExitInitCmd)
	rootCmd.AddCommand(gracefulExitStatusCmd)
	process.Bind(runCmd, &runCfg, defaults, cfgstruct.ConfDir(confDir), cfgstruct.IdentityDir(identityDir))
	process.Bind(setupCmd, &setupCfg, defaults, cfgstruct.ConfDir(confDir), cfgstruct.IdentityDir(identityDir), cfgstruct.SetupMode())
	process.Bind(configCmd, &setupCfg, defaults, cfgstruct.ConfDir(confDir), cfgstruct.IdentityDir(identityDir), cfgstruct.SetupMode())
	process.Bind(diagCmd, &diagCfg, defaults, cfgstruct.ConfDir(confDir), cfgstruct.IdentityDir(identityDir))
	process.Bind(dashboardCmd, &dashboardCfg, defaults, cfgstruct.ConfDir(defaultDiagDir))
	process.Bind(gracefulExitInitCmd, &diagCfg, defaults, cfgstruct.ConfDir(defaultDiagDir))
	process.Bind(gracefulExitStatusCmd, &diagCfg, defaults, cfgstruct.ConfDir(defaultDiagDir))
}
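
// databaseConfig derives the storagenodedb configuration from the node's
// storage settings: the piecestore databases are kept under the configured
// storage directory, alongside the stored pieces.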
func databaseConfig(config storagenode.Config) storagenodedb.Config {
	return storagenodedb.Config{
		Storage: config.Storage.Path,
		Info:    filepath.Join(config.Storage.Path, "piecestore.db"),
		Info2:   filepath.Join(config.Storage.Path, "info.db"),
		Pieces:  config.Storage.Path,
	}
}
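// cmdRun starts the storage node: it loads the node identity, opens the
// local databases, and runs the peer until it exits or fails.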
func cmdRun(cmd *cobra.Command, args []string) (err error) {
// inert constructors only ====
ctx, _ := process.Ctx(cmd)
log := zap.L()
runCfg.Debug.Address = *process.DebugAddrFlag
mapDeprecatedConfigs(log)
identity, err := runCfg.Identity.Load()
if err != nil {
zap.S().Fatal(err)
}
if err := runCfg.Verify(log); err != nil {
log.Sugar().Error("Invalid configuration: ", err)
return err
}
db, err := storagenodedb.New(log.Named("db"), databaseConfig(runCfg.Config))
if err != nil {
return errs.New("Error starting master database on storagenode: %+v", err)
}
defer func() {
err = errs.Combine(err, db.Close())
}()
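// Open the revocation database used by the server's TLS layer to track
// revoked peer certificates.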
revocationDB, err := revocation.NewDBFromCfg(runCfg.Server.Config)
if err != nil {
return errs.New("Error creating revocation database: %+v", err)
}
defer func() {
err = errs.Combine(err, revocationDB.Close())
}()
peer, err := storagenode.New(log, identity, db, revocationDB, runCfg.Config, version.Build)
if err != nil {
return err
}
// okay, start doing stuff ====
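// Bail out early if the version service reports that this binary is no
// longer allowed to run.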
_, err = peer.Version.Service.CheckVersion(ctx)
if err != nil {
return err
}
if err := process.InitMetricsWithCertPath(ctx, log, nil, runCfg.Identity.CertPath); err != nil {
zap.S().Warn("Failed to initialize telemetry batcher: ", err)
}
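// Ensure all storage node database tables exist and are migrated to the
// current schema.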
err = db.CreateTables(ctx)
if err != nil {
return errs.New("Error creating tables for master database on storagenode: %+v", err)
}
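// The preflight database check, when enabled, verifies the databases are
// healthy before the node starts serving.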
preflightEnabled, err := cmd.Flags().GetBool("preflight.database-check")
if err != nil {
return errs.New("Cannot retrieve preflight.database-check flag: %+v", err)
}
if preflightEnabled {
err = db.Preflight(ctx)
if err != nil {
return errs.New("Error during preflight check for storagenode databases: %+v", err)
}
}
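// Initialize the piece-usage cache; a failure here is logged but does not
// prevent startup.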
if err := peer.Storage2.CacheService.Init(ctx); err != nil {
zap.S().Error("Failed to initialize CacheService: ", err)
}
runError := peer.Run(ctx)
closeError := peer.Close()
return errs.Combine(runError, closeError)
}
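// cmdSetup creates the configuration directory and writes an initial
// config.yaml, refusing to overwrite an existing setup.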
func cmdSetup(cmd *cobra.Command, args []string) (err error) {
setupDir, err := filepath.Abs(confDir)
if err != nil {
return err
}
valid, _ := fpath.IsValidSetupDir(setupDir)
if !valid {
return fmt.Errorf("storagenode configuration already exists (%v)", setupDir)
}
err = os.MkdirAll(setupDir, 0700)
if err != nil {
return err
}
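// Only apply the default addresses when the user did not set the flags
// explicitly, so command-line values always win.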
overrides := map[string]interface{}{
"log.level": "info",
}
serverAddress := cmd.Flag("server.address")
if !serverAddress.Changed {
overrides[serverAddress.Name] = defaultServerAddr
}
serverPrivateAddress := cmd.Flag("server.private-address")
if !serverPrivateAddress.Changed {
overrides[serverPrivateAddress.Name] = defaultPrivateServerAddr
}
configFile := filepath.Join(setupDir, "config.yaml")
err = process.SaveConfig(cmd, configFile, process.SaveConfigWithOverrides(overrides))
if err != nil {
return err
}
if setupCfg.EditConf {
return fpath.EditFile(configFile)
}
return err
}
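// cmdConfig opens config.yaml in an editor, falling back to setup when no
// configuration exists yet.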
func cmdConfig(cmd *cobra.Command, args []string) (err error) {
setupDir, err := filepath.Abs(confDir)
if err != nil {
return err
}
// run setup if we can't access the config file
conf := filepath.Join(setupDir, "config.yaml")
if _, err := os.Stat(conf); err != nil {
return cmdSetup(cmd, args)
}
return fpath.EditFile(conf)
}
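// cmdDiag prints a per-satellite bandwidth summary read from the node's
// local database.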
func cmdDiag(cmd *cobra.Command, args []string) (err error) {
ctx, _ := process.Ctx(cmd)
diagDir, err := filepath.Abs(confDir)
if err != nil {
return err
}
// check if the directory exists
_, err = os.Stat(diagDir)
if err != nil {
fmt.Println("storage node directory doesn't exist", diagDir)
return err
}
db, err := storagenodedb.New(zap.L().Named("db"), databaseConfig(diagCfg))
if err != nil {
return errs.New("Error starting master database on storage node: %v", err)
}
defer func() {
err = errs.Combine(err, db.Close())
}()
summaries, err := db.Bandwidth().SummaryBySatellite(ctx, time.Time{}, time.Now())
if err != nil {
fmt.Printf("unable to get bandwidth summary: %v\n", err)
return err
}
satellites := storj.NodeIDList{}
for id := range summaries {
satellites = append(satellites, id)
}
sort.Sort(satellites)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 3, ' ', tabwriter.AlignRight|tabwriter.Debug)
defer func() { err = errs.Combine(err, w.Flush()) }()
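// One aligned row per satellite; byte counts are formatted via memory.Size.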
fmt.Fprint(w, "Satellite\tTotal\tPut\tGet\tDelete\tAudit Get\tRepair Get\tRepair Put\n")
for _, id := range satellites {
summary := summaries[id]
fmt.Fprintf(w, "%v\t%v\t%v\t%v\t%v\t%v\t%v\t%v\n",
id,
memory.Size(summary.Total()),
memory.Size(summary.Put),
memory.Size(summary.Get),
memory.Size(summary.Delete),
memory.Size(summary.GetAudit),
memory.Size(summary.GetRepair),
memory.Size(summary.PutRepair),
)
}
return nil
}
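// main hands control to the process package, which parses flags and config
// and dispatches to the command handlers above.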
func main() {
process.ExecCustomDebug(rootCmd)
}