storj/pkg/overlay/cache.go
JT Olio 5f6607935b
captplanet (#159)
* captplanet

I kind of went overboard this weekend.

The major goal of this changeset is to provide an environment
for local development where all of the various services can
be easily run together. Developing on Storj v3 should be as
easy as running a setup command and a run command!

To do this, this changeset introduces a new tool called
captplanet, which combines the powers of the Overlay Cache,
the PointerDB, the PieceStore, Kademlia, the Minio Gateway,
etc.

Running 40 farmers and a heavy client inside the same process
forced a rethinking of the "services" that we had. To
avoid the confusion of reusing prior terms, this changeset
introduces two new types: Providers and Responsibilities.
I wanted to avoid as many merge conflicts as possible, so
I left the existing Services and code for now, but if people
like this route we can clean up the duplication.

A Responsibility is a collection of gRPC methods and
corresponding state. The following systems are examples of
Responsibilities:
 * Kademlia
 * OverlayCache
 * PointerDB
 * StatDB
 * PieceStore
 * etc.

A Provider is a collection of Responsibilities that
share an Identity, such as:
 * The heavy client
 * The farmer
 * The gateway

An Identity is a public/private key pair, a node id, etc.
Farmers all need different Identities, so captplanet
needs to support running multiple concurrent Providers
with different Identities.
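
To make those terms concrete, here is a rough sketch of how they
might map to Go types. The shape follows the description above,
but the exact names, fields, and signatures are illustrative
rather than the ones in this changeset:

```
package provider // hypothetical package name, for illustration only

import (
    "context"
    "crypto"
)

// Responsibility is a collection of gRPC methods and corresponding
// state that can be attached to a Provider.
type Responsibility interface {
    Run(ctx context.Context, server *Provider) error
}

// Identity is a public/private key pair, a node id, etc.
type Identity struct {
    Key    crypto.PrivateKey
    NodeID string
}

// Provider is a collection of Responsibilities that share an Identity.
// captplanet runs many Providers concurrently, each with its own Identity.
type Provider struct {
    identity         *Identity
    responsibilities []Responsibility
}
```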

Each Responsibility and Provider should allow multiple copies of
itself to be configured independently, so creating Responsibilities
and Providers follows a new workflow.

To make a Responsibility, one should create a "config"
struct, such as:

```
type Config struct {
  RepairThreshold  int `help:"If redundancy falls below this number of pieces, repair is triggered" default:"30"`
  SuccessThreshold int `help:"If redundancy is above this number then no additional uploads are needed" default:"40"`
}
```

To use "config" structs, this changeset introduces another
new library called 'cfgstruct', which allows for the configuration
of arbitrary structs through flagsets, and thus through cobra and
viper.

cfgstruct relies on Go's "struct tags" feature to carry help
text and default values. A config struct is bound to a flagset
via cfgstruct.Bind, as sketched below.
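
Here is a minimal sketch of what binding could look like with
cobra. It assumes cfgstruct.Bind accepts a pflag-style flagset
plus a pointer to the config struct, and that flag names are
derived from the field names (e.g. --repair-threshold); both
details are my reading of the description above rather than a
verified API:

```
package main

import (
    "fmt"

    "github.com/spf13/cobra"

    "storj.io/storj/pkg/cfgstruct"
)

// Config mirrors the example above: struct tags carry help text
// and defaults.
type Config struct {
    RepairThreshold  int `help:"If redundancy falls below this number of pieces, repair is triggered" default:"30"`
    SuccessThreshold int `help:"If redundancy is above this number then no additional uploads are needed" default:"40"`
}

func main() {
    var cfg Config
    cmd := &cobra.Command{
        Use: "example",
        RunE: func(cmd *cobra.Command, args []string) error {
            fmt.Printf("repair=%d success=%d\n", cfg.RepairThreshold, cfg.SuccessThreshold)
            return nil
        },
    }
    // Bind registers one flag per struct field on the command's
    // flagset, so values flow in from the CLI (and from viper,
    // where wired up).
    cfgstruct.Bind(cmd.Flags(), &cfg)
    _ = cmd.Execute()
}
```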

Because this configuration system makes setup and configuration
easier *in general*, additional commands are provided that allow
for easy standup of separate Providers. Please make sure to
check out:
 * cmd/captplanet/farmer/main.go (a new farmer binary)
 * cmd/captplanet/hc/main.go (a new heavy client binary)
 * cmd/captplanet/gw/main.go (a new minio gateway binary)

Usage:

```
$ go install -v storj.io/storj/cmd/captplanet
$ captplanet setup
$ captplanet run
```

Configuration is placed by default in `~/.storj/capt/`

Other changes:

 * introduces new config structs for currently existing
   Responsibilities that conform to the new Responsibility
   interface. Please see the `pkg/*/config.go` files for
   examples.

 * integrates the PointerDB API key with other global
   configuration via flags, instead of through environment
   variables via viper, as before. (Ultimately this should also
   change to use the PointerDB config struct, but this is an
   okay short-term solution.)

 * changes the Overlay cache to use a URL for database
   configuration instead of separate redis and bolt config
   settings (see the sketch after this list).

 * stubs out some peer identity skeleton code (but not the
   meat).

 * Fixes the SegmentStore to use the overlay client and
   pointerdb clients instead of gRPC client code directly.

 * Leaves a very clear spot where we need to tie the object to
   stream to segment store together. There's sort of a "golden
   spike" opportunity to connect all the train tracks together
   at the bottom of pkg/miniogw/config.go, labeled with a
   bunch of TODOs.
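
As a sketch of the URL-based overlay cache configuration
mentioned in the list above: a constructor can dispatch on the
URL scheme to pick redis or bolt. NewOverlayCacheFromURL and the
exact URL shapes are my own illustration on top of the cache
constructors in the file below, not necessarily this changeset's
API (it would also need "fmt", "net/url", and "strconv"
imported):

```
// NewOverlayCacheFromURL is a hypothetical helper illustrating
// URL-based configuration, e.g. "redis://127.0.0.1:6379?db=1&password=pw"
// or "bolt:///tmp/overlay.db".
func NewOverlayCacheFromURL(rawurl string, DHT dht.DHT) (*Cache, error) {
    u, err := url.Parse(rawurl)
    if err != nil {
        return nil, err
    }
    switch u.Scheme {
    case "redis":
        db, _ := strconv.Atoi(u.Query().Get("db"))
        return NewRedisOverlayCache(u.Host, u.Query().Get("password"), db, DHT)
    case "bolt":
        return NewBoltOverlayCache(u.Path, DHT)
    default:
        return nil, fmt.Errorf("unsupported database scheme: %q", u.Scheme)
    }
}
```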

Future stuff:

 * I now prefer this design over the original
   pkg/process.Service thing I had been pushing before (sorry!)

 * The experience of trying to have multiple farmers
   configurable concurrently led me to prefer config structs
   over global flags (I finally came around) or using viper
   directly. I think global flags are okay sometimes but in
   general going forward we should try to get all relevant
   config into config structs.

 * If you all like this direction, I think we can go delete my
   old Service interfaces and a bunch of flags and clean up a
   bunch of stuff.

 * If you don't like this direction, it's no sweat at all, and
   despite how much code there is here I'm not very tied to any
   of this! Considering a lot of this was written between midnight
   and 6 am, it might not be any good!

* bind tests
2018-07-24 10:08:28 -06:00

// Copyright (C) 2018 Storj Labs, Inc.
// See LICENSE for copying information.

package overlay

import (
    "context"

    "github.com/gogo/protobuf/proto"
    "github.com/zeebo/errs"
    "go.uber.org/zap"

    "storj.io/storj/pkg/dht"
    "storj.io/storj/pkg/kademlia"
    "storj.io/storj/protos/overlay"
    "storj.io/storj/storage"
    "storj.io/storj/storage/boltdb"
    "storj.io/storj/storage/redis"
)

// ErrNodeNotFound standardizes the error returned when a node is not found
var ErrNodeNotFound = errs.New("Node not found")

// OverlayError creates a class of errors for stack traces
var OverlayError = errs.Class("Overlay Error")

// Cache stores overlay data in a key/value store (redis or bolt)
type Cache struct {
    DB  storage.KeyValueStore
    DHT dht.DHT
}

// NewRedisOverlayCache returns a pointer to a new Cache instance with an
// initialized connection to Redis.
func NewRedisOverlayCache(address, password string, db int, DHT dht.DHT) (*Cache, error) {
    rc, err := redis.NewClient(address, password, db)
    if err != nil {
        return nil, err
    }
    return &Cache{
        DB:  rc,
        DHT: DHT,
    }, nil
}

// NewBoltOverlayCache returns a pointer to a new Cache instance with an
// initialized connection to a Bolt db.
func NewBoltOverlayCache(dbPath string, DHT dht.DHT) (*Cache, error) {
    bc, err := boltdb.NewClient(zap.L(), dbPath, boltdb.OverlayBucket)
    if err != nil {
        return nil, err
    }
    return &Cache{
        DB:  bc,
        DHT: DHT,
    }, nil
}

// Get looks up the provided key (a node ID) in the cache
func (o *Cache) Get(ctx context.Context, key string) (*overlay.NodeAddress, error) {
    b, err := o.DB.Get([]byte(key))
    if err != nil {
        return nil, err
    }
    if b.IsZero() {
        // TODO: log? return an error?
        return nil, nil
    }

    na := &overlay.NodeAddress{}
    if err := proto.Unmarshal(b, na); err != nil {
        return nil, err
    }
    return na, nil
}

// Put adds a node ID to the cache, mapped to the binary representation of
// the proto-defined NodeAddress
func (o *Cache) Put(nodeID string, value overlay.NodeAddress) error {
    data, err := proto.Marshal(&value)
    if err != nil {
        return err
    }
    return o.DB.Put([]byte(nodeID), data)
}

// Bootstrap walks the initialized network and populates the cache
func (o *Cache) Bootstrap(ctx context.Context) error {
    nodes, err := o.DHT.GetNodes(ctx, "0", 1280)
    if err != nil {
        // log and keep going: bootstrap can proceed with whatever nodes we got
        zap.L().Error("bootstrap", zap.Error(OverlayError.New("error getting nodes from DHT: %v", err)))
    }

    for _, v := range nodes {
        found, err := o.DHT.FindNode(ctx, kademlia.StringToNodeID(v.Id))
        if err != nil {
            // skip nodes we can't look up instead of caching a zero value
            zap.L().Error("bootstrap", zap.Error(ErrNodeNotFound))
            continue
        }

        addr, err := proto.Marshal(found.Address)
        if err != nil {
            return err
        }

        if err := o.DB.Put([]byte(found.Id), addr); err != nil {
            return err
        }
    }

    // called after kademlia is bootstrapped
    // needs to take RoutingTable and start to persist it into the cache
    // take bootstrap node
    // get their route table
    // loop through nodes in RT and get THEIR route table
    // keep going forever and ever
    // Other possibilities: randomly generate node IDs to ask for?
    _, err = o.DHT.GetRoutingTable(ctx)
    return err
}

// Refresh walks the network looking for new nodes and pings existing nodes
// to eliminate stale addresses
func (o *Cache) Refresh(ctx context.Context) error {
    // TODO: iterate over all nodes,
    // compare responses to find new nodes,
    // listen for responses from existing nodes,
    // mark an existing node offline for a time period if it doesn't respond,
    // and refresh it in the DHT if it does
    _, err := o.DHT.GetRoutingTable(ctx)
    if err != nil {
        return err
    }

    _, err = o.DHT.GetNodes(ctx, "0", 128)
    return err
}

// Walk iterates over buckets to traverse the network
func (o *Cache) Walk(ctx context.Context) error {
    nodes, err := o.DHT.GetNodes(ctx, "0", 128)
    if err != nil {
        return err
    }

    for _, v := range nodes {
        if _, err := o.DHT.FindNode(ctx, kademlia.StringToNodeID(v.Id)); err != nil {
            zap.L().Error("walk", zap.Error(ErrNodeNotFound))
            return err
        }
    }
    return nil
}