Currently we risk losing pending bandwidth rollup writes even on a clean
shutdown. This change ensures that all pending writes are actually
written to the db when shutting down the satellite.
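Not the actual satellite code, but a minimal sketch of the pattern: anything still buffered is handed to the db writer inside Close, so a clean shutdown cannot drop it. All names here (rollupsCache, pending, flush) are illustrative.

```go
package orders

import (
	"context"
	"sync"
)

// rollupsCache buffers bandwidth rollup updates before they reach the db.
type rollupsCache struct {
	mu      sync.Mutex
	pending map[string]int64 // hypothetical key -> settled bytes
	flush   func(context.Context, map[string]int64) error
}

// Close drains whatever is still buffered so a clean shutdown loses nothing.
func (c *rollupsCache) Close(ctx context.Context) error {
	c.mu.Lock()
	pending := c.pending
	c.pending = make(map[string]int64)
	c.mu.Unlock()

	if len(pending) == 0 {
		return nil
	}
	return c.flush(ctx, pending)
}
```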
Change-Id: Ideab62fa9808937d3dce9585c52405d8c8a0e703
The uplink setup command now creates the configuration directory just
before saving the configuration file, instead of at the very start of
the process. This makes setup more robust: creating the directory at the
beginning leaves a window in which it can be deleted while setup is
still running, which then causes the save to fail.
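A minimal sketch of the new ordering (saveConfig and its arguments are illustrative names, not the actual uplink code):

```go
package main

import "os"

// saveConfig creates the configuration directory immediately before writing
// the configuration file; a directory deleted earlier during setup can no
// longer make the final save fail.
func saveConfig(configDir, configPath string, data []byte) error {
	if err := os.MkdirAll(configDir, 0700); err != nil {
		return err
	}
	return os.WriteFile(configPath, data, 0600)
}
```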
Ticket https://storjlabs.atlassian.net/browse/V3-3545
Change-Id: I30db0175e23a597e9675d267b4d7e25d5d4c5119
storagenode database preflight check.
Disable preflight database check by default, and have the option to
enable it. This will allow us to enable it once it is definitely
working.
Also change the name of the config flag for preflight time sync.
Change-Id: Ie2e20f9e25dcb38794eafa7e1505e7c6ff287c99
Add preflight database check for storagenode.
Ensure that database schema matches latest test migration schema before
allowing the node to start up.
Ensure minimal read/write functionality for each storagenode database
before allowing the node to start up.
This will eliminate many unhandled audit errors we are seeing.
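A rough sketch of what such a preflight could look like, assuming the SQLite-backed storagenode databases; the table name, query, and helper are illustrative, not the actual code:

```go
package preflight

import (
	"context"
	"database/sql"
	"fmt"
)

// Check verifies one table's schema and does a throwaway write/read/drop
// before the node is allowed to start up.
func Check(ctx context.Context, db *sql.DB, table, expectedSchema string) error {
	var schema string
	err := db.QueryRowContext(ctx,
		`SELECT sql FROM sqlite_master WHERE type = 'table' AND name = ?`, table,
	).Scan(&schema)
	if err != nil {
		return fmt.Errorf("preflight: reading schema of %s: %w", table, err)
	}
	if schema != expectedSchema {
		return fmt.Errorf("preflight: schema mismatch for %s", table)
	}

	// minimal read/write functionality check using a scratch table
	if _, err := db.ExecContext(ctx, `CREATE TABLE IF NOT EXISTS preflight_scratch (v INTEGER)`); err != nil {
		return fmt.Errorf("preflight: write check failed: %w", err)
	}
	if _, err := db.ExecContext(ctx, `INSERT INTO preflight_scratch (v) VALUES (1)`); err != nil {
		return fmt.Errorf("preflight: insert check failed: %w", err)
	}
	var v int
	if err := db.QueryRowContext(ctx, `SELECT v FROM preflight_scratch LIMIT 1`).Scan(&v); err != nil {
		return fmt.Errorf("preflight: read check failed: %w", err)
	}
	_, err = db.ExecContext(ctx, `DROP TABLE preflight_scratch`)
	return err
}
```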
Change-Id: Ic0e628b04a9c35b7a8243f6a81d4683918170ba9
this commit introduces the reported_serials table. its purpose is
to allow for blind writes into it as nodes report in so that we have
minimal contention. in order to continue to accurately account for
used bandwidth, though, we cannot immediately add the settled amount.
if we did, we would have to give up on blind writes.
the table's primary key is structured precisely so that we can quickly
find expired orders and so that we maximally benefit from rocksdb
path prefix compression. we do this by rounding the expires-at time
forward to the next day, effectively giving us storagenode petnames
for free. and since there's no secondary index or foreign key
constraints, this design should use significantly less space than
the current used_serials table while also reducing contention.
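the rounding itself is the whole trick; a sketch of it (the real code may
differ in details such as the exact-midnight edge case):

```go
package orders

import "time"

// roundToNextDay rounds t forward to the start of the next UTC day, so all
// orders expiring within a day share a primary-key prefix: expired orders
// are cheap to range-scan and prefix compression gets long shared prefixes.
func roundToNextDay(t time.Time) time.Time {
	t = t.UTC()
	return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, time.UTC).AddDate(0, 0, 1)
}
```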
after inserting the orders into the table, we have a chore that
periodically consumes all of the expired orders in it and inserts
them into the existing rollups tables. this is as if we changed
the nodes to report as the order expired rather than as soon as
possible, so the belief in correctness of the refactor is higher.
since we are able to process large batches of orders (typically
a day's worth), we can use the code to maximally batch inserts into
the rollup tables to make inserts as friendly as possible to
cockroach.
Change-Id: I25d609ca2679b8331979184f16c6d46d4f74c1a6
JIRA: https://storjlabs.atlassian.net/browse/V3-3499
The `uplink share` command no longer prints the restricted API key and
the restricted encryption access.
Change-Id: Ie4ebe0b27067ee00af97c775f4e06f558b894fe2
We want to make using uplink as easy as possible. That's why we want to
avoid requiring the setup or import command before normal usage when the
user specifies the --access flag. If this flag is set, the rest of the
flags fall back to their defaults.
https://storjlabs.atlassian.net/browse/V3-3490
Change-Id: I95a7bd77a3f00b8d9981fee513e9e77aef298bca
We decided that a better name for "scope" is "access". This change
refactors the cmd part of the code but doesn't touch libuplink. For
backward compatibility, old configs with the "scope" field will still be
loaded without any issue. The old "scope" flag is no longer supported
directly on the command line.
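A hypothetical sketch of the backward-compatible loading, assuming a viper-style config reader; the actual cmd wiring may differ:

```go
package main

import "github.com/spf13/viper"

// loadAccess prefers the new "access" key but falls back to the old
// "scope" key so existing config files keep loading.
func loadAccess(v *viper.Viper) string {
	if access := v.GetString("access"); access != "" {
		return access
	}
	return v.GetString("scope")
}
```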
https://storjlabs.atlassian.net/browse/V3-3488
Change-Id: I349d6971c798380d147937c91e887edb5e9ae4aa
Remove starting-up messages from peers. We expect all of them to start;
if one doesn't, it should return an error explaining why it didn't.
The only informative message is when a service is disabled.
During initial database setup, each migration step isn't informative,
so print only a single line with the final version.
Also use shorter log scopes.
Change-Id: Ic8b61411df2eeae2a36d600a0c2fbc97a84a5b93
As discussed, we decided to rate limit how fast we iterate through the
metainfo database in the metainfo loop. This puts in place a mechanism
for rate limiting, with burst limiting available if we need it in the
future. The default is still no limit, so the behavior stays the same
as before.
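A sketch of such a mechanism using golang.org/x/time/rate; the names and the wiring are illustrative, not the actual metainfo loop code:

```go
package metainfo

import (
	"context"

	"golang.org/x/time/rate"
)

// newLoopLimiter builds the limiter: a non-positive rateLimit keeps the
// old behavior of no limiting at all.
func newLoopLimiter(rateLimit float64, burst int) *rate.Limiter {
	if rateLimit <= 0 {
		return rate.NewLimiter(rate.Inf, 1) // rate.Inf never blocks
	}
	return rate.NewLimiter(rate.Limit(rateLimit), burst)
}

// processItem waits on the limiter before handling each metainfo entry.
func processItem(ctx context.Context, limiter *rate.Limiter, handle func() error) error {
	if err := limiter.Wait(ctx); err != nil {
		return err
	}
	return handle()
}
```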
Change-Id: I950f7192962b0e49f082d2c4284e2d52b0a925c7
Fixes Least Authority Issue F:
https://storjlabs.atlassian.net/browse/V3-3409
If the --allowed-path-prefix flag is not set to the `share` command, any
command arguments will be used as allowed path prefixes.
This patch also improves the output of the `share` command to print the
state of all restrictions, so users can confirm they match their
intention.
Change-Id: Id1b4df20b182d3fe04cb2196feea090975fce8b4
flate compression with default settings plays very poorly with the race
detector, causing the test to take a significant amount of time.
Use pass-through compression to avoid the issue.
This improves the test from 2m45s to 17s.
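For reference, pass-through in Go's compress/flate is just the NoCompression level; a minimal illustration, not the actual test code:

```go
package main

import (
	"bytes"
	"compress/flate"
)

// passThrough writes data through a flate writer at NoCompression level:
// the output is stored rather than compressed, avoiding the pathological
// interaction between compression and the race detector.
func passThrough(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w, err := flate.NewWriter(&buf, flate.NoCompression)
	if err != nil {
		return nil, err
	}
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```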
Change-Id: Iadf1381c538736d48e018164697bdfd3356e24b8
This change updates the three satellite report commands that accept date
ranges to parse and treat those dates uniformly.
- End dates are now uniformly exclusive. Exclusive end dates help
operators avoid off-by-one errors on month boundaries: the operator
does not have to remember how many days are in the month and can just
run the report from the 1st (inclusive) through the 1st (exclusive).
- Fixed the date range validity check, which only failed if the start
date came after the end date; it should also have failed when the dates
were equal, since the check happens after adjusting for inclusivity.
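A sketch of the fixed check (names are illustrative, not the actual report command code):

```go
package main

import (
	"errors"
	"time"
)

// checkRange enforces the new convention: the end date is exclusive, so the
// start must be strictly before the end; equal dates describe an empty
// range and now fail the check too.
func checkRange(start, end time.Time) error {
	if !start.Before(end) {
		return errors.New("invalid date range: start must be before end")
	}
	return nil
}
```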
Change-Id: Ib2ee1c71ddb916c6e1906834d5ff0dc47d1a5801
- also updated ping chore to pick up trust changes
- fixed small typo in blueprint
- fixed flags for storj-sim
- wired up changes to testplanet
Change-Id: I02982f3a63a1b4150b82a009ee126b25ed51917d
Add randomized test cases for testing the observer processSegment method
when the observer has a time range (from and to) set and there are
objects with segments whose creation date is outside of this range.
Change-Id: Ieac82a21f278f0850b95275bfdd2e8a812cc57a5
the old code to eventually create an api key on a satellite had
some issues. namely, there were some ignored errors, swallowed
errors, incorrect returns of nil errors when there should have
been an error, and it did not handle working against an already
existing database.
this commit fixes the above issues and organizes the code into
a set of methods performing individual steps rather than one big
function. it adds retries and attempts to get existing values
instead of creating them when possible, which means that it will
work if the values already exist. additionally, it removes the
3 second sleep in favor of a bounded retry loop with a small sleep
which improves startup times.
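A sketch of the retry shape described above; getOrCreate, the attempt count, and the sleep are illustrative, not the actual satellite code:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// getOrCreate prefers fetching an existing value, otherwise creates it,
// retrying in a bounded loop with a short sleep instead of a fixed
// 3 second wait. get and create are assumed callbacks.
func getOrCreate(ctx context.Context, get, create func(context.Context) (string, error)) (string, error) {
	var lastErr error
	for attempt := 0; attempt < 10; attempt++ {
		if v, err := get(ctx); err == nil {
			return v, nil // value already exists: reuse instead of failing
		}
		v, err := create(ctx)
		if err == nil {
			return v, nil
		}
		lastErr = err

		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-time.After(100 * time.Millisecond):
		}
	}
	return "", fmt.Errorf("gave up after retries: %w", lastErr)
}
```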
Change-Id: I4de04659e5a62dd3f675fbf3c76f3311c410a03e
After changing how we execute the storagenode-updater process we lost
timestamps in the log.
The fix is to start using zap logging.
The Windows Installer is changed to register the storagenode-updater
service in a way that the Windows Service Manager passes the
--log.output flag instead of the old --log.
The old --log flag is deprecated, but not removed. We will support it
for backward compatibility. This is required as the storagenode-updater
can auto-update itself, but the Windows Service Manager of such an old
installation will continue passing the old --log flag when starting it.
Change-Id: I690dff27e01335e617aa314032ecbadc4ea8cbd5
Signed-off-by: Kaloyan Raev <kaloyan@storj.io>
Implement some unit test cases for the observer.processSegment method and fix a bug found by these tests.
A production snapshot would be more authoritative, but it is too large
to keep in the repository and run the test against in CI.
We want tests for the different cases of detecting zombie segments, and
relying on production data cannot guarantee that all of them appear.
NOTE: the tests use random values so that they don't always exercise the
same combination of segment lists and values, which keeps the
implementation from going stale against a fixed test. The trade-off is
that the tests are harder to understand, and if the implementation has a
bug we may only get intermittent CI failures.
The random tests can eventually cover cases that a static test may not,
either because nobody thought to express them or because expressing them
would require implementing a very large number of cases.
We are worried that the test implementation is complex and could contain
bugs of its own, although the same bug would have to exist in both the
implementation and the test to go unnoticed, and the two were written by
different people. Because of that, we may replace these random tests
entirely with a list of static ones.
for storj-sim to work, we need to avoid schemas in cockroach urls
so we have storj-sim create namespaced databases instead of schemas
and we have the migrate command create the database in the same way
that it would create a schema for postgres. then it works!
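A sketch of the idea, with a simplified quoting helper (the real migrate command may differ):

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// ensureDatabase creates a namespaced database on cockroach the same way
// the postgres path would create a schema. Identifier quoting via %q is
// simplified here; real code should quote more carefully.
func ensureDatabase(ctx context.Context, db *sql.DB, name string) error {
	_, err := db.ExecContext(ctx,
		fmt.Sprintf(`CREATE DATABASE IF NOT EXISTS %q`, name))
	return err
}
```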
a follow up commit will move the creation of the database/schemas
into storj-sim's setup step so that we can avoid doing these icky
creations during normal migration calls. it will also make the
pointerdb have an explicit call to migrate instead of just doing
it every time it's opened.
Change-Id: If69ef5cb96b6866b0438c761bd445afb3597ae5f
* Move the observer implementation, the type definitions related to it,
and the helper functions to their own file.
* Cluster struct type is used as a key for a ObjectsMap type.
Observer struct type has a field of ObjectsMap.
Cluster has a field for project ID.
Observer processSegment method uses its ObjectMap field for tracking
objects.
However, Observer processSegment clears the map once the projectID
diverges from the one kept in the Observer lastProjectID field, so the
projectID doesn't need to be part of the ObjectMap key.
For this reason, ObjectMap can use just the bucket name as its key, and
the Cluster struct isn't needed.
Because of such change, the ObjectMap type has been renamed to a more
descriptive name.
* Unexport the types defined specifically for this package.
* Create a constructor function for observer to encapsulate the map
allocation.
* Don't throw away the entire buckets objects map when an empty one is
needed; clear and reuse it to keep part of the allocations (see the
sketch after this list).
Encapsulate the clearing logic in a method.
* Make the analyzeProject function a method of observer.
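A sketch of the map reuse mentioned above; all names are illustrative, not the actual observer code:

```go
package main

// object is a stand-in for the per-object state the observer tracks.
type object struct{ segments uint64 }

// bucketsObjects maps bucket name -> object path -> state; after the key
// simplification the project ID is no longer part of the key.
type bucketsObjects map[string]map[string]*object

// clear empties the map in place instead of allocating a fresh one, so the
// top-level storage is reused across projects.
func (b bucketsObjects) clear() {
	for bucket := range b {
		delete(b, bucket)
	}
}
```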
* change satellite.Peer name to Core
* change to Core in testplanet
* missed a few places
* keep shared stuff in peer.go to stay consistent with storj/docs
* separate sadb migration, add version check
* update checkversion to do same validation as migration
* changes per CR
* add sa migration to storj-sim
* add different debug port in storj-sim for migration
* add wait for exit for storj-sim migration
* update sa docker entrypoint to support migration
* storj-sim satellite parts all wait for migration
* upgrade golang-migrate/migrate to v4 because of a bug
* fix go mod tidy
* rm dup api code from sa peer, update storj-sim
* fix for backwards compat tests
* use env var instead of localhost
* changes per CR
* fix env var name
* skip peer for setup
* set up satellite repair run command
* add separated repair process to storj-sim
* add repairer peer to satellite in testplanet
* move api run cmd into api.go
* add satellite run repair to entrypoint
When the contact chore starts running before the monitor service has
provided any useful capacity data, the first outgoing contact has
not-very-helpful data for the satellite. This change causes the contact
chore to wait until capacity data is available. The wait should be quite
short in all reasonable cases: even when a node starts with a lot of
stored pieces and no cached spaceUsedDB data, new data will have been
calculated and cached by the call to
`peer.Storage2.CacheService.Init(ctx)` in `storagenode.cmdRun()` before
`peer.Run(ctx)`.
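A sketch of one way such a wait can be implemented; the gate type below is illustrative, not the actual contact/monitor wiring:

```go
package contact

import (
	"context"
	"sync"
)

// capacityGate blocks the contact chore until the monitor service has
// produced capacity data at least once.
type capacityGate struct {
	once  sync.Once
	ready chan struct{}
}

func newCapacityGate() *capacityGate {
	return &capacityGate{ready: make(chan struct{})}
}

// SetReady is called by the monitor once capacity data is available.
func (g *capacityGate) SetReady() { g.once.Do(func() { close(g.ready) }) }

// Wait is called by the contact chore before its first outgoing contact.
func (g *capacityGate) Wait(ctx context.Context) error {
	select {
	case <-g.ready:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
```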
Change-Id: Ibc26d5c1fc10a23006c00bc3f13ff6cf71f8bf1d
* update lock file and add comment
* add created at and bytes transferred
* cleanup
* rename db func to GetGracefulExitNodesByTimeFrame
* fix flag
* split into two overlay functions
* := to =
* fix test
* add node not found error class
* fix overlay test
* suggested test changes
* review suggestions
* get exit status from overlay.Get()
* check rows.Err
* fix panic when ExitFinishedAt is nil
* fix comments in cmdGracefulExit
libuplink was incorrectly setting timeouts to 10 seconds still, but
should have been at least 10 minutes. the order sender was setting them
to 1 hour. we don't want timeouts in uplink-side logic as it establishes
a minimum rate on tcp streams.
instead of all of this, just use tcp keep alive. tcp keep alive packets are
sent every 15 seconds and if the peer stops responding the connection
dies. this is enabled by default with go. this will kill tcp connections
when they stop working.
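For illustration, this is the standard-library mechanism being relied on; the dialer values here are an assumption, not the exact uplink settings:

```go
package main

import (
	"net"
	"time"
)

// Go's net.Dialer enables TCP keep-alives by default; setting KeepAlive
// makes the 15 second probe period explicit. A dead peer then kills the
// connection without imposing a minimum transfer rate on the stream.
var dialer = net.Dialer{
	Timeout:   20 * time.Second, // applies to connection establishment only
	KeepAlive: 15 * time.Second, // keep-alive probe period
}

func dial(addr string) (net.Conn, error) {
	return dialer.Dial("tcp", addr)
}
```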
Change-Id: I3d7ad49f71950b3eb43044eedf4b17993116045b