JIRA: https://storjlabs.atlassian.net/browse/V3-3499
The `uplink share` command no longer prints the restricted API key and
the restricted encryption access.
Change-Id: Ie4ebe0b27067ee00af97c775f4e06f558b894fe2
We want to make using uplink as easy as possible. That's why we want to
avoid requiring the setup or import command before normal usage when the
user specifies the --access flag. If this flag is set, the remaining
flags should use their default values.
https://storjlabs.atlassian.net/browse/V3-3490
Change-Id: I95a7bd77a3f00b8d9981fee513e9e77aef298bca
We decided that a better name for "scope" is "access". This change
refactors the cmd part of the code but doesn't touch libuplink. For
backward compatibility, old configs with the "scope" field will still be
loaded without any issue. The old "scope" flag won't be supported
directly from the command line.
https://storjlabs.atlassian.net/browse/V3-3488
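A minimal sketch of the backward-compatible loading idea, assuming a
viper-style config; the key names and helper below are illustrative,
not the actual uplink code:

    package main

    import (
        "fmt"

        "github.com/spf13/viper"
    )

    // loadAccess prefers the new "access" field but falls back to the
    // deprecated "scope" field so old config files keep working.
    func loadAccess(v *viper.Viper) string {
        if access := v.GetString("access"); access != "" {
            return access
        }
        return v.GetString("scope")
    }

    func main() {
        v := viper.New()
        v.SetConfigFile("config.yaml")
        if err := v.ReadInConfig(); err != nil {
            fmt.Println("no config:", err)
            return
        }
        fmt.Println("access:", loadAccess(v))
    }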
Change-Id: I349d6971c798380d147937c91e887edb5e9ae4aa
Remove starting up messages from peers. We expect all of them to start;
if they don't, they should return an error explaining why. The only
informative message is when a service is disabled.
During initial database setup, each migration step isn't informative,
so print only a single line with the final version.
Also use shorter log scopes.
Change-Id: Ic8b61411df2eeae2a36d600a0c2fbc97a84a5b93
As discussed, we decided to rate limit how fast we iterate through the
metainfo database in the metainfo loop. This puts in place a mechanism
for rate limiting, and for burst limiting if needed in the future. The
default is still no limit, so the behavior stays the same as before.
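A minimal sketch of the rate/burst limiting mechanism, assuming
golang.org/x/time/rate; the function and parameter names are
illustrative, not the actual metainfo loop code:

    package main

    import (
        "context"

        "golang.org/x/time/rate"
    )

    // iterate walks the items while respecting an optional rate limit.
    // A rateLimit of 0 keeps the previous behavior: no limiting at all.
    func iterate(ctx context.Context, rateLimit float64, burst int, items []string, process func(string)) error {
        limiter := rate.NewLimiter(rate.Inf, 0) // default: unlimited
        if rateLimit > 0 {
            limiter = rate.NewLimiter(rate.Limit(rateLimit), burst)
        }
        for _, item := range items {
            // Wait blocks until the limiter allows the next iteration.
            if err := limiter.Wait(ctx); err != nil {
                return err
            }
            process(item)
        }
        return nil
    }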
Change-Id: I950f7192962b0e49f082d2c4284e2d52b0a925c7
Fixes Least Authority Issue F:
https://storjlabs.atlassian.net/browse/V3-3409
If the --allowed-path-prefix flag is not passed to the `share` command,
any command-line arguments are used as allowed path prefixes.
This patch also improves the output of the `share` command to print the
state of all restrictions, so users can confirm they match their
intention.
Change-Id: Id1b4df20b182d3fe04cb2196feea090975fce8b4
flate compression with default settings plays very poorly together with
the race detector, causing the test to take a significant amount of
time.
Use pass-through compression to avoid the issue.
This improves the test from 2m45s to 17s.
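A minimal sketch of the pass-through idea using the standard library's
compress/flate; the surrounding test harness is omitted and the payload
is illustrative:

    package main

    import (
        "bytes"
        "compress/flate"
        "fmt"
        "log"
    )

    func main() {
        var buf bytes.Buffer

        // flate.NoCompression emits stored (pass-through) blocks, skipping
        // the compression work that interacts badly with the race detector.
        w, err := flate.NewWriter(&buf, flate.NoCompression)
        if err != nil {
            log.Fatal(err)
        }
        if _, err := w.Write([]byte("test payload")); err != nil {
            log.Fatal(err)
        }
        if err := w.Close(); err != nil {
            log.Fatal(err)
        }

        fmt.Println("stored size:", buf.Len())
    }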
Change-Id: Iadf1381c538736d48e018164697bdfd3356e24b8
This change updates the three satellite report commands that accept date
ranges to parse and treat those dates uniformly.
- End dates are now uniformly exclusive. An exclusive end date helps
operators avoid off-by-one errors on month boundaries: the operator
does not have to remember how many days are in that month and can just
run the report from the 1st (inclusive) through the 1st of the next
month (exclusive).
- Fixed the date range validity check, which only failed if the start
date came after the end date. It should also have failed when the dates
were equal, since the check happens after adjusting for inclusivity
(see the sketch below).
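A minimal sketch of the inclusive-start/exclusive-end convention and the
tightened validity check; parseDateRange is an illustrative helper, not
the actual report command code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // parseDateRange treats the start as inclusive and the end as exclusive.
    func parseDateRange(startStr, endStr string) (start, end time.Time, err error) {
        const layout = "2006-01-02"
        if start, err = time.Parse(layout, startStr); err != nil {
            return start, end, err
        }
        if end, err = time.Parse(layout, endStr); err != nil {
            return start, end, err
        }
        // Equal dates describe an empty range, so the start must be
        // strictly before the exclusive end.
        if !start.Before(end) {
            return start, end, errors.New("start date must be before end date")
        }
        return start, end, nil
    }

    func main() {
        // Report for all of February: Feb 1st (inclusive) through
        // Mar 1st (exclusive); no need to know how many days February has.
        start, end, err := parseDateRange("2020-02-01", "2020-03-01")
        fmt.Println(start, end, err)
    }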
Change-Id: Ib2ee1c71ddb916c6e1906834d5ff0dc47d1a5801
- also updated ping chore to pick up trust changes
- fixed small typo in blueprint
- fixed flags for storj-sim
- wired up changes to testplanet
Change-Id: I02982f3a63a1b4150b82a009ee126b25ed51917d
Add randomized test cases for the observer processSegment method when
the observer has a time range (from and to) set and there are objects
with segments whose creation date falls outside of this range.
Change-Id: Ieac82a21f278f0850b95275bfdd2e8a812cc57a5
the old code to eventually create an api key on a satellite had
some issues. namely, there were some ignored errors, swallowed
errors, incorrect returns of nil errors when there should have
been an error, and it did not handle working against an already
existing database.
this commit fixes the above issues and organizes the code into
a set of methods performing individual steps rather than one big
function. it adds retries and attempts to get existing values
instead of creating them when possible, which means that it will
work if the values already exist. additionally, it removes the
3 second sleep in favor of a bounded retry loop with a small sleep
which improves startup times.
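a minimal sketch of the bounded retry idea; the attempt count and the
100ms delay are illustrative values, not the actual ones used:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // retry runs op up to maxAttempts times, sleeping briefly between
    // attempts instead of a single long fixed sleep.
    func retry(ctx context.Context, maxAttempts int, op func() error) error {
        var err error
        for i := 0; i < maxAttempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(100 * time.Millisecond):
            }
        }
        return fmt.Errorf("retries exhausted: %w", err)
    }

    func main() {
        attempts := 0
        err := retry(context.Background(), 10, func() error {
            attempts++
            if attempts < 3 {
                return errors.New("not ready yet")
            }
            return nil
        })
        fmt.Println(attempts, err)
    }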
Change-Id: I4de04659e5a62dd3f675fbf3c76f3311c410a03e
After changing how we execute the storagenode-updater process we lost
timestamps in the log.
The fix is to start using zap logging.
The Windows Installer is changed to register the storagenode-updater
service in a way that the Windows Service Manager passes the
--log.output flag instead of the old --log.
The old --log flag is deprecated, but not removed; we will keep
supporting it for backward compatibility. This is required because the
storagenode-updater can auto-update itself, while the Windows Service
Manager of an old installation will continue passing the old --log flag
when starting it.
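A minimal sketch of switching to zap so every log entry carries a
timestamp; the production preset here is an assumption, not the
updater's exact logging setup:

    package main

    import "go.uber.org/zap"

    func main() {
        // The production preset writes structured entries that include a
        // timestamp on every line.
        logger, err := zap.NewProduction()
        if err != nil {
            panic(err)
        }
        defer logger.Sync()

        logger.Info("storagenode-updater started",
            zap.String("version", "v0.0.0"))
    }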
Change-Id: I690dff27e01335e617aa314032ecbadc4ea8cbd5
Signed-off-by: Kaloyan Raev <kaloyan@storj.io>
Implement some unit test cases for the observer.processSegment method
and fix a bug found by these tests.
A production snapshot would be more representative, but it is too large
to keep in the repository and to run the tests against in the CI.
We want tests for the different cases of detecting zombie segments, and
relying on production data cannot guarantee that all of them are
covered.
NOTE: the tests use random values so we don't always exercise the same
combination of segment lists and values, which keeps the implementation
from becoming stale relative to the tests. The downside is that they are
harder to understand and the CI could fail only intermittently if the
implementation has a bug.
The random tests eventually check cases that a static test may not
cover, either because the case isn't expressed or because expressing it
would require implementing a large number of cases.
Because the test implementation is complex, we are worried it could have
bugs of its own, even though a bug would have to exist in both the
implementation and the tests to go unnoticed, and the implementation and
the tests were written by different people. For that reason, we may
entirely replace these random tests with a list of static ones.
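A minimal sketch of the randomized approach, with the seed logged so an
intermittent CI failure can be reproduced; the package and helper names
are illustrative, not the actual test code:

    package observer_test

    import (
        "math/rand"
        "testing"
        "time"
    )

    func TestProcessSegment_Randomized(t *testing.T) {
        seed := time.Now().UnixNano()
        rng := rand.New(rand.NewSource(seed))
        // Log the seed so an intermittent CI failure can be reproduced.
        t.Logf("random seed: %d", seed)

        to := time.Now()
        from := to.Add(-24 * time.Hour)

        const total = 100
        inWindow := 0
        for i := 0; i < total; i++ {
            // Creation dates fall both inside and outside [from, to).
            created := from.Add(time.Duration(rng.Int63n(int64(48*time.Hour))) - 12*time.Hour)
            if !created.Before(from) && created.Before(to) {
                inWindow++
            }
            // The real test would build a segment with this creation date,
            // feed it to processSegment and assert which ones are reported.
        }
        t.Logf("segments inside the window: %d of %d", inWindow, total)
    }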
for storj-sim to work, we need to avoid schemas in cockroach urls
so we have storj-sim create namespaced databases instead of schemas
and we have the migrate command create the database in the same way
that it would create a schema for postgres. then it works!
a follow up commit will move the creation of the database/schemas
into storj-sim's setup step so that we can avoid doing these icky
creations during normal migration calls. it will also make the
pointerdb have an explicit call to migrate instead of just doing
it every time it's opened.
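a minimal sketch of the namespacing difference; the driver detection and
helper name are illustrative, not the actual migrate command:

    package main

    import (
        "database/sql"
        "fmt"
    )

    // createNamespace creates a per-instance namespace: a database on
    // cockroach (schemas are avoided in cockroach urls) and a schema on
    // postgres.
    func createNamespace(db *sql.DB, driver, name string) error {
        var stmt string
        switch driver {
        case "cockroach":
            stmt = fmt.Sprintf(`CREATE DATABASE IF NOT EXISTS %q`, name)
        default: // postgres
            stmt = fmt.Sprintf(`CREATE SCHEMA IF NOT EXISTS %q`, name)
        }
        _, err := db.Exec(stmt)
        return err
    }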
Change-Id: If69ef5cb96b6866b0438c761bd445afb3597ae5f
* Move the observer implementation and the type definitions related with
it and helper functions to its own file.
* The Cluster struct type is used as the key for the ObjectsMap type.
The Observer struct type has an ObjectsMap field.
Cluster has a field for the project ID.
The Observer processSegment method uses its ObjectsMap field for
tracking objects.
However, Observer processSegment clears the map once the projectID
diverges from the one kept in the Observer lastProjectID field, so the
projectID doesn't need to be part of the ObjectsMap key.
For this reason, ObjectsMap can use just the bucket name as its key and
the Cluster struct isn't needed.
Because of this change, the ObjectsMap type has been renamed to a more
descriptive name.
* Make the types defined for this specific package unexported.
* Create a constructor function for observer to encapsulate the map
allocation.
* When an empty buckets objects map is needed, don't throw the existing
one away entirely; reuse part of the allocations.
Encapsulate the clearing logic into a method (see the sketch after this
list).
* Make the analyzeProject function a method of observer.
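A minimal sketch of a clearing method that reuses the existing map
allocation; the package and type names are illustrative and not the
package's actual unexported types:

    package observer

    // object tracks what has been seen for a single object path.
    type object struct {
        segments       int
        hasLastSegment bool
    }

    // bucketsObjects maps a bucket name to the objects seen in it; with
    // the Cluster key gone, the bucket name alone is enough.
    type bucketsObjects map[string]map[string]*object

    // clear empties every bucket entry but keeps the top-level map, so
    // its allocation is reused when the observer moves to the next
    // project.
    func (b bucketsObjects) clear() {
        for bucket := range b {
            delete(b, bucket)
        }
    }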