storj/satellite/metainfo/config.go

// Copyright (C) 2019 Storj Labs, Inc.
// See LICENSE for copying information.

package metainfo
import (
"fmt"
"strconv"
"strings"
"time"
"storj.io/common/memory"
"storj.io/common/uuid"
"storj.io/storj/satellite/metabase"
"storj.io/uplink/private/eestream"
)

const (
	// BoltPointerBucket is the string representing the bucket used for `PointerEntries` in BoltDB.
	BoltPointerBucket = "pointers"
)
// RSConfig is a configuration struct that keeps details about default
// redundancy strategy information.
//
// Can be used as a flag.
type RSConfig struct {
	ErasureShareSize memory.Size // size of each erasure share
	Min              int         // minimum number of pieces required to reconstruct a segment
	Repair           int         // repair threshold
	Success          int         // success (optimal) threshold
	Total            int         // total number of pieces generated for a segment
}
// Type implements pflag.Value.
func (RSConfig) Type() string { return "metainfo.RSConfig" }
// String is required for pflag.Value.
func (rs *RSConfig) String() string {
	return fmt.Sprintf("%d/%d/%d/%d-%s",
		rs.Min,
		rs.Repair,
		rs.Success,
		rs.Total,
		rs.ErasureShareSize.String())
}
// Set sets the value from a string in the format k/m/o/n-size (min/repair/optimal/total-erasuresharesize).
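// For example, the release default "29/35/80/110-256B" yields Min=29, Repair=35,
// Success=80, Total=110 with a 256 B erasure share size.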
func (rs *RSConfig) Set(s string) error {
	// Split on dash. Expect two items. First item is RS numbers. Second item is memory.Size.
	info := strings.Split(s, "-")
	if len(info) != 2 {
		return Error.New("Invalid default RS config (expect format k/m/o/n-ShareSize, got %s)", s)
	}
	rsNumbersString := info[0]
	shareSizeString := info[1]

	// Attempt to parse "-size" part of config.
	shareSizeInt, err := memory.ParseString(shareSizeString)
	if err != nil {
		return Error.New("Invalid share size in RS config: '%s', %w", shareSizeString, err)
	}
	shareSize := memory.Size(shareSizeInt)

	// Split on forward slash. Expect exactly four positive non-decreasing integers.
	rsNumbers := strings.Split(rsNumbersString, "/")
	if len(rsNumbers) != 4 {
		return Error.New("Invalid default RS numbers (wrong size, expect 4): %s", rsNumbersString)
	}

	minValue := 1
	values := []int{}
	for _, nextValueString := range rsNumbers {
		nextValue, err := strconv.Atoi(nextValueString)
		if err != nil {
			return Error.New("Invalid default RS numbers (should all be valid integers): %s, %w", rsNumbersString, err)
		}
		if nextValue < minValue {
			return Error.New("Invalid default RS numbers (should be non-decreasing): %s", rsNumbersString)
		}
		values = append(values, nextValue)
		minValue = nextValue
	}

	rs.ErasureShareSize = shareSize
	rs.Min = values[0]
	rs.Repair = values[1]
	rs.Success = values[2]
	rs.Total = values[3]
	return nil
}
// RedundancyStrategy creates eestream.RedundancyStrategy from config values.
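//
// A minimal usage sketch (illustrative only), assuming rs holds an already-parsed RSConfig:
//
//	strategy, err := rs.RedundancyStrategy()
//	if err != nil {
//		// the configured k/m/o/n values do not form a valid redundancy scheme
//	}
//	_ = strategy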
func (rs *RSConfig) RedundancyStrategy() (eestream.RedundancyStrategy, error) {
	fec, err := eestream.NewFEC(rs.Min, rs.Total)
	if err != nil {
		return eestream.RedundancyStrategy{}, err
	}
	erasureScheme := eestream.NewRSScheme(fec, rs.ErasureShareSize.Int())
	return eestream.NewRedundancyStrategy(erasureScheme, rs.Repair, rs.Success)
}
// RateLimiterConfig is a configuration struct for endpoint rate limiting.
type RateLimiterConfig struct {
	Enabled         bool          `help:"whether rate limiting is enabled." releaseDefault:"true" devDefault:"true"`
	Rate            float64       `help:"request rate per project per second." releaseDefault:"100" devDefault:"100" testDefault:"1000"`
	CacheCapacity   int           `help:"number of projects to cache." releaseDefault:"10000" devDefault:"10" testDefault:"100"`
	CacheExpiration time.Duration `help:"how long to cache the projects limiter." releaseDefault:"10m" devDefault:"10s"`
}
// UploadLimiterConfig is a configuration struct for endpoint upload limiting.
type UploadLimiterConfig struct {
	Enabled           bool          `help:"whether rate limiting is enabled." releaseDefault:"true" devDefault:"true"`
	SingleObjectLimit time.Duration `help:"how often we can upload to the single object (the same location) per API instance" default:"1s" devDefault:"1ms"`
	CacheCapacity     int           `help:"number of object locations to cache." releaseDefault:"10000" devDefault:"10" testDefault:"100"`
}
// ProjectLimitConfig is a configuration struct for default project limits.
type ProjectLimitConfig struct {
	MaxBuckets int `help:"max bucket count for a project." default:"100" testDefault:"10"`
}
// Config is a configuration struct that contains everything needed to start a metainfo service.
type Config struct {
	DatabaseURL          string      `help:"the database connection string to use" default:"postgres://"`
	MinRemoteSegmentSize memory.Size `default:"1240" testDefault:"0" help:"minimum remote segment size"` // TODO: fix tests to work with 1024
	MaxInlineSegmentSize memory.Size `default:"4KiB" help:"maximum inline segment size"`
	// We use this default because the maximum length of an ObjectKey is 1024 (1 KiB), but an
	// EncryptedObjectKey carries 16 bytes of encryption overhead, so the overall size is 1024 + 16 * 16.
	MaxEncryptedObjectKeyLength int           `default:"4000" help:"maximum encrypted object key length"`
	MaxSegmentSize              memory.Size   `default:"64MiB" help:"maximum segment size"`
	MaxMetadataSize             memory.Size   `default:"2KiB" help:"maximum segment metadata size"`
	MaxCommitInterval           time.Duration `default:"48h" testDefault:"1h" help:"maximum time allowed to pass between creating and committing a segment"`
	MinPartSize                 memory.Size   `default:"5MiB" testDefault:"0" help:"minimum allowed part size (last part has no minimum size limit)"`
	MaxNumberOfParts            int           `default:"10000" help:"maximum number of parts object can contain"`
	Overlay                     bool          `default:"true" help:"toggle flag if overlay is enabled"`
	RS                          RSConfig      `releaseDefault:"29/35/80/110-256B" devDefault:"4/6/8/10-256B" help:"redundancy scheme configuration in the format k/m/o/n-sharesize"`

	RateLimiter   RateLimiterConfig   `help:"rate limiter configuration"`
	UploadLimiter UploadLimiterConfig `help:"object upload limiter configuration"`
	ProjectLimits ProjectLimitConfig  `help:"project limit configuration"`

	// TODO remove this flag when the server-side copy implementation is finished.
	ServerSideCopy         bool `help:"enable code for server-side copy, deprecated. please leave this to true." default:"true"`
	ServerSideCopyDisabled bool `help:"disable already enabled server-side copy. this is because once server side copy is enabled, delete code should stay changed, even if you want to disable server side copy" default:"false"`

	UseBucketLevelObjectVersioning bool `help:"enable the use of bucket level object versioning" default:"false"`
	// Flag to simplify testing by enabling the bucket-level versioning feature only for specific projects.
	UseBucketLevelObjectVersioningProjects []string `help:"list of projects which will have UseBucketLevelObjectVersioning feature flag enabled" default:"" hidden:"true"`

	// TODO remove when benchmarking is done and a decision is made.
	TestListingQuery bool `default:"false" help:"test the new query for non-recursive listing"`
	// This flag is effective only if UseBucketLevelObjectVersioning or UseBucketLevelObjectVersioningProjects is enabled.
	TestEnableBucketVersioning bool `default:"false" help:"if enabled all new created buckets will have versioning enabled (use only for testing)" hidden:"true"`
}
// Metabase constructs Metabase configuration based on Metainfo configuration with specific application name.
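//
// A minimal usage sketch (illustrative; the application name is an arbitrary example):
//
//	metabaseConfig := config.Metabase("satellite-api")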
func (c Config) Metabase(applicationName string) metabase.Config {
	return metabase.Config{
		ApplicationName:  applicationName,
		MinPartSize:      c.MinPartSize,
		MaxNumberOfParts: c.MaxNumberOfParts,
		ServerSideCopy:   c.ServerSideCopy,
	}
}
// ExtendedConfig keeps additional helper fields and methods around Config.
type ExtendedConfig struct {
	Config

	useBucketLevelObjectVersioningProjects []uuid.UUID
}
// NewExtendedConfig creates a new instance of ExtendedConfig from the given Config.
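//
// A minimal usage sketch (illustrative; projectID is a placeholder):
//
//	extended, err := NewExtendedConfig(config)
//	if err != nil {
//		// an entry in UseBucketLevelObjectVersioningProjects was not a valid UUID
//	}
//	enabled := extended.UseBucketLevelObjectVersioningByProject(projectID)
//	_ = enabled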
func NewExtendedConfig(config Config) (_ ExtendedConfig, err error) {
	extendedConfig := ExtendedConfig{Config: config}
	for _, projectIDString := range config.UseBucketLevelObjectVersioningProjects {
		projectID, err := uuid.FromString(projectIDString)
		if err != nil {
			return ExtendedConfig{}, err
		}
		extendedConfig.useBucketLevelObjectVersioningProjects = append(extendedConfig.useBucketLevelObjectVersioningProjects, projectID)
	}
	return extendedConfig, nil
}
// UseBucketLevelObjectVersioningByProject checks whether UseBucketLevelObjectVersioning should be enabled for a specific project.
func (ec ExtendedConfig) UseBucketLevelObjectVersioningByProject(projectID uuid.UUID) bool {
	// If it's globally enabled, don't look at individual projects.
	if ec.UseBucketLevelObjectVersioning {
		return true
	}
	for _, p := range ec.useBucketLevelObjectVersioningProjects {
		if p == projectID {
			return true
		}
	}
	return false
}