To avoid further name collisions, the very broadly named package is moved into
the consoleauth package, where it is mainly used anyway.
Change-Id: Ie563c9700adbf0553baca2b7b8ba4a1d9c29d144
The stable uplink library is located at storj.io/uplink.
The stable uplink C library is located at storj.io/uplink-c.
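For reference, a minimal sketch of consuming the stable library from its import path (the access grant below is a placeholder, and the error handling is illustrative):

```go
package main

import (
	"context"
	"log"

	"storj.io/uplink"
)

func main() {
	ctx := context.Background()

	// Placeholder access grant; in practice this comes from setup or an env var.
	access, err := uplink.ParseAccess("<serialized-access-grant>")
	if err != nil {
		log.Fatal(err)
	}

	project, err := uplink.OpenProject(ctx, access)
	if err != nil {
		log.Fatal(err)
	}
	defer project.Close()
}
```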
Change-Id: Ia69115d6384b668490efe7e8ab674d7e8104b4f4
This will help determine how many grpc calls are made to the
satellite.
Also remove the grpc funcs that have since been added upstream.
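Purely as an illustration of the idea (not the instrumentation added here), counting grpc calls to the satellite could be done with a unary client interceptor along these lines:

```go
package rpccount

import (
	"context"
	"sync/atomic"

	"google.golang.org/grpc"
)

// satelliteCalls counts every unary RPC issued on connections that use
// the interceptor below.
var satelliteCalls int64

// CountingInterceptor increments the counter and forwards the call.
func CountingInterceptor(
	ctx context.Context, method string, req, reply interface{},
	cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption,
) error {
	atomic.AddInt64(&satelliteCalls, 1)
	return invoker(ctx, method, req, reply, cc, opts...)
}

// Dial opens a connection with the counting interceptor installed.
func Dial(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithInsecure(),
		grpc.WithUnaryInterceptor(CountingInterceptor),
	)
}
```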
Change-Id: I91878f4fd10f9bfe601c94222c102eaaf4d35963
* debug
* traces
* cfgstruct
* process
Package `storj/private/version` will be removed as a separate change.
Change-Id: Iadc40faa782e6225513b28218952f02d9c240a9f
This moves the grpc-related tlsopts methods to private/grpctlsopts,
which allows removing the grpc dependency from tlsopts.
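A rough sketch of the kind of adapter such a package can provide (the function names are illustrative, not the actual private/grpctlsopts API): converting a plain *tls.Config into grpc credentials here keeps the grpc import out of tlsopts itself.

```go
// Package grpctlsopts bridges tlsopts and grpc so that tlsopts itself
// does not need to import grpc. (Illustrative sketch, not the real API.)
package grpctlsopts

import (
	"crypto/tls"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// DialOption wraps a TLS config as a grpc dial option.
func DialOption(tlsConfig *tls.Config) grpc.DialOption {
	return grpc.WithTransportCredentials(credentials.NewTLS(tlsConfig))
}

// ServerOption wraps a TLS config as a grpc server option.
func ServerOption(tlsConfig *tls.Config) grpc.ServerOption {
	return grpc.Creds(credentials.NewTLS(tlsConfig))
}
```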
Change-Id: I25090b82b1e7a0633417ad600f8587b0c30ace73
Now that we are trying to identify the root cause of the satellite load limitations (currently the satellite can handle at most about 400 rps for uploads, and we need this to be higher), we are using the Go diagnostic tools to collect insight into where the bottlenecks are. We currently have a debug endpoint to gather some cpu and mem data, but it could be useful to have continuous profiling. GCP Stackdriver has support for continuous profiling, so let's set that up and see if it helps us gather more data.
This PR adds support for the [GCP continuous profiler](https://cloud.google.com/profiler), which enables continuous cpu/mem profiling; the stats are sent to Stackdriver in the Google Cloud console.
To enable the continuous profiling for a storj component, do the following:
- prereq: the workload must be running in GKE and have Stackdriver Profiling IAM role permissions
- provide the config flag `debug.profilename` in the config.yaml file for the workload (e.g. the satellite api process). The profile name should be the workload name, for example "satellite-api".
- once the above config flag is provided, the profiler will be initialized, and profiling stats will automatically be sent to the GCP project where the workload is running and be viewable on the Stackdriver Profiler page in the console
The current implementation assumes the workload is running in GKE; however, if we find it useful, we can add support to enable this from anywhere. For simplicity, it is configured this way assuming the main goal is to enable it in production systems.
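For illustration, a minimal sketch of how such a profiler can be started with the cloud.google.com/go/profiler package (the `initProfiler` wrapper and its wiring to the `debug.profilename` flag are assumptions, not the actual code):

```go
package debug

import (
	"cloud.google.com/go/profiler"
)

// initProfiler starts the GCP continuous profiler when a profile name
// is configured. On GKE, project ID and credentials are picked up from
// the environment, so only the service name needs to be set.
func initProfiler(profileName string) error {
	if profileName == "" {
		return nil // profiling disabled
	}
	return profiler.Start(profiler.Config{
		Service: profileName, // e.g. "satellite-api"
	})
}
```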
Change-Id: Ibf8ebe2df7bf06fdd4951ee6a1e48854dd36ad47
This commit updates our monkit dependency to v3, which outputs
metrics in an influx style. This makes discovery much easier,
as many tools are built to consume that format.
Graphite and rothko will suffer somewhat, since the metrics are no
longer a dot-separated tree. Hopefully there will be time to update
rothko to index based on the new metric format.
It also adds an influx output for the statreceiver so that we can
write to influxdb v1 or v2 directly.
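For context, a generic sketch of what monkit v3 instrumentation looks like (not code from this change); the metrics it emits are what the statreceiver forwards in influx format:

```go
package example

import (
	"context"

	"github.com/spacemonkeygo/monkit/v3"
)

// mon is the per-package metric scope.
var mon = monkit.Package()

// DoWork records timing, call counts, and errors for this function
// under the package scope.
func DoWork(ctx context.Context) (err error) {
	defer mon.Task()(&ctx)(&err)

	// ... actual work ...
	return nil
}
```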
Change-Id: Iae9f9494a6d29cfbd1f932a5e71a891b490415ff
The Control Panel makes it possible to control different chores and services.
Currently this adds control over cycles.
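A purely hypothetical sketch of the idea (none of these names are the actual Control Panel API): each registered chore exposes its cycle, and the panel can pause or trigger it on demand.

```go
package controlpanel

// Cycle is the subset of cycle behavior the panel needs; the real
// implementation would use the project's own cycle type.
type Cycle interface {
	Pause()
	Resume()
	Trigger()
}

// Panel keeps track of controllable cycles by name.
type Panel struct {
	cycles map[string]Cycle
}

func New() *Panel {
	return &Panel{cycles: map[string]Cycle{}}
}

// Register makes a chore's cycle controllable under the given name.
func (p *Panel) Register(name string, cycle Cycle) {
	p.cycles[name] = cycle
}

// Pause pauses the named cycle, reporting whether it was found.
func (p *Panel) Pause(name string) bool {
	cycle, ok := p.cycles[name]
	if ok {
		cycle.Pause()
	}
	return ok
}
```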
Change-Id: I734f1676b2a0d883b8f5ba937e93c45ac1a9ce21
By separating out Server, Peers can directly embed the server
and provide customizations and hooks into the rest of the services.
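A schematic example of the pattern (type and field names are illustrative): with Server as its own type, a Peer can embed it and layer its own hooks on top.

```go
package peer

import "context"

// Server is the separated-out server component.
type Server struct {
	// listeners, endpoints, etc.
}

func (s *Server) Run(ctx context.Context) error { return nil }

// Peer embeds Server directly, so it exposes Server's methods while
// adding its own customizations around them.
type Peer struct {
	Server

	// hooks the peer wires into the rest of its services
	onRun []func(ctx context.Context) error
}

// Run runs the peer's hooks and then the embedded server.
func (p *Peer) Run(ctx context.Context) error {
	for _, hook := range p.onRun {
		if err := hook(ctx); err != nil {
			return err
		}
	}
	return p.Server.Run(ctx)
}
```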
Change-Id: Ic1d68740fd494d2f82c1739bd990849c561b912b
Multiple time.Now calls in a short amount of time
may return the exact same time.
This adds a time.Sleep() to ensure the internal
timestamps differ.
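A minimal sketch of the pattern in a test (illustrative only): without the sleep, both timestamps can be identical on platforms with coarse clock resolution.

```go
package example

import (
	"testing"
	"time"
)

func TestDistinctTimestamps(t *testing.T) {
	first := time.Now()

	// Ensure the next time.Now() returns a strictly later value, even on
	// systems where the clock resolution is coarse.
	time.Sleep(time.Millisecond)

	second := time.Now()
	if !second.After(first) {
		t.Fatalf("expected %v to be after %v", second, first)
	}
}
```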
Change-Id: I533b5eedfdcab13a0197976ccf4b9fa96d4c3bba
We want to make using uplink as easy as possible. That's why we want to
avoid requiring the setup or import command before normal usage if the user
specified the --access flag. If this flag is set, then the rest of the flags
should fall back to their defaults.
https://storjlabs.atlassian.net/browse/V3-3490
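A hypothetical sketch of the flag handling (the helper below is illustrative, not the actual uplink CLI code): if --access was provided, the setup/import requirement is skipped and all other flags simply keep their default values.

```go
package cmd

import "github.com/spf13/cobra"

// useAccessDefaults is a hypothetical helper: when the user passes
// --access, skip requiring a prior setup/import and let every other
// flag fall back to its default unless explicitly overridden.
func useAccessDefaults(cmd *cobra.Command) bool {
	return cmd.Flags().Changed("access")
}
```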
Change-Id: I95a7bd77a3f00b8d9981fee513e9e77aef298bca
Go will, by default, set tcp keep alives on sockets. But
the kernel does not send keep alives to sockets that have
a non-empty send queue. That can cause connections to
hang forever.
So we set TCP_USER_TIMEOUT on all of the sockets as well.
That option will close any connection that has not received
an ack for any sent data (keep alive or otherwise) in the
configured time period. This places an upper bound on the
amount of time a socket can be stuck due to a client not
acknowledging data.
See https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
for more information on what these options do and how they
interact.
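As a sketch, TCP_USER_TIMEOUT can be set on a connection on Linux using golang.org/x/sys/unix (illustrative; not necessarily how this change wires it in):

```go
//go:build linux

package tcpopts

import (
	"net"
	"time"

	"golang.org/x/sys/unix"
)

// setUserTimeout closes the connection if sent data (including keep
// alive probes) is not acknowledged within the given duration.
func setUserTimeout(conn *net.TCPConn, timeout time.Duration) error {
	raw, err := conn.SyscallConn()
	if err != nil {
		return err
	}
	var sockErr error
	ctrlErr := raw.Control(func(fd uintptr) {
		sockErr = unix.SetsockoptInt(int(fd),
			unix.IPPROTO_TCP, unix.TCP_USER_TIMEOUT,
			int(timeout/time.Millisecond))
	})
	if ctrlErr != nil {
		return ctrlErr
	}
	return sockErr
}
```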
Additionally, make sure that we close every connection coming
from the listeners by wrapping them in a type with a finalizer
that closes the connection, much like the os package does for
file handles. It tracks whether a connection was closed by a
finalizer so that we can go and look for the bug if we ever
see a non-zero count.
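A condensed sketch of the finalizer wrapping described above (names are illustrative): connections closed normally clear their finalizer, so only leaked connections increment the counter.

```go
package listenerwrap

import (
	"net"
	"runtime"
	"sync/atomic"
)

// leakedConns counts connections that had to be closed by the garbage
// collector; a non-zero value means some code path forgot to Close.
var leakedConns int64

type trackedConn struct {
	net.Conn
}

// wrapConn attaches a finalizer that closes the connection if it is
// garbage collected without having been closed explicitly.
func wrapConn(conn net.Conn) net.Conn {
	tracked := &trackedConn{Conn: conn}
	runtime.SetFinalizer(tracked, func(c *trackedConn) {
		atomic.AddInt64(&leakedConns, 1)
		_ = c.Conn.Close()
	})
	return tracked
}

// Close clears the finalizer so properly closed connections are not counted.
func (c *trackedConn) Close() error {
	runtime.SetFinalizer(c, nil)
	return c.Conn.Close()
}
```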
Change-Id: Idc6c0564224b8dc2e4c9d769e80374ed1fe8cce0