Placement can be null in the DB, so we need to adjust how this column is
scanned from the DB.
Additionally, this change sets the application name for the DB connection.
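A minimal sketch of how scanning the nullable column might look, assuming the placement is stored as a small integer; the helper below is illustrative, not the actual satellitedb code:

```go
package example

import (
	"database/sql"

	"storj.io/common/storj"
)

// scanPlacement reads a possibly-NULL placement column, treating NULL as the
// default (unconstrained) placement.
func scanPlacement(row *sql.Row) (storj.PlacementConstraint, error) {
	var placement sql.NullInt64
	if err := row.Scan(&placement); err != nil {
		return 0, err
	}
	if !placement.Valid {
		// NULL in the DB: no explicit placement was set for this row.
		return 0, nil
	}
	return storj.PlacementConstraint(placement.Int64), nil
}
```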
Change-Id: I3c7d6294f4a3e5e441160b2fd4aeafffe705ec76
We need to migrate all existing segment copies to contain the same
metadata as the original segment. So far we were not duplicating stored
pieces, but we are changing this behavior now. We will use this
tool after enabling the new way of doing server-side copies.
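Roughly, the tool has to walk every segment copy, load the original (ancestor) segment, and write its piece metadata onto the copy. A compact sketch of that shape, where all types and method names are hypothetical rather than the real metabase API:

```go
package example

import "context"

// Segment is a trimmed-down, hypothetical view of a segment record.
type Segment struct {
	StreamID         [16]byte
	Position         uint64
	AncestorStreamID [16]byte
	Pieces           []Piece
}

// Piece pairs a piece number with the node holding it.
type Piece struct {
	Number uint16
	NodeID [32]byte
}

// store is a hypothetical subset of the metabase used by the migration.
type store interface {
	ListSegmentCopies(ctx context.Context) ([]Segment, error)
	GetSegment(ctx context.Context, streamID [16]byte, position uint64) (Segment, error)
	UpdateSegmentPieces(ctx context.Context, streamID [16]byte, position uint64, pieces []Piece) error
}

// migrateSegmentCopies copies piece metadata from each original segment onto its copies.
func migrateSegmentCopies(ctx context.Context, db store) error {
	copies, err := db.ListSegmentCopies(ctx)
	if err != nil {
		return err
	}
	for _, c := range copies {
		original, err := db.GetSegment(ctx, c.AncestorStreamID, c.Position)
		if err != nil {
			return err
		}
		if err := db.UpdateSegmentPieces(ctx, c.StreamID, c.Position, original.Pieces); err != nil {
			return err
		}
	}
	return nil
}
```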
Fixes https://github.com/storj/storj/issues/5890
Change-Id: Ia9ca12486f3c527abd28949eb438d1c4c7138d55
placement.AllowedCountry is the old way to specify placement; the new approach uses a more generic (dynamic) method, which can check full node information instead of just the country code.
90% of this patch is just search and replace:
* we need to use NodeFilters instead of placement.AllowedCountry
* which means we need an initialized PlacementRules available everywhere
* which means we need to configure the placement rules
The remaining 10% is placement.go, where we introduce a new type of configuration (a lightweight expression language) to define any kind of placement without code changes.
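For illustration, the filter abstraction could look roughly like this; the interface and the country filter are a sketch, not the actual nodeselection API:

```go
package example

import "storj.io/common/storj"

// SelectedNode is a simplified view of the node information a filter can inspect.
type SelectedNode struct {
	ID          storj.NodeID
	CountryCode string
	LastNet     string
}

// NodeFilter decides whether a node may be used for a given placement.
type NodeFilter interface {
	Match(node *SelectedNode) bool
}

// CountryFilter accepts nodes located in any of the allowed countries.
type CountryFilter struct {
	Allowed map[string]bool
}

func (f CountryFilter) Match(node *SelectedNode) bool {
	return f.Allowed[node.CountryCode]
}

// NodeFilters combines filters; a node must match all of them.
type NodeFilters []NodeFilter

func (fs NodeFilters) Match(node *SelectedNode) bool {
	for _, f := range fs {
		if !f.Match(node) {
			return false
		}
	}
	return true
}
```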
Change-Id: Ie644b0b1840871b0e6bbcf80c6b50a947503d7df
All the files in uploadselection are (in fact) related to generic node selection, and are used not only for upload,
but also for download, repair, etc.
Change-Id: Ie4098318a6f8f0bbf672d432761e87047d3762ab
We use two different Node types in `overlay` and `uploadnodeselection` and convert back and forth between them.
Using the same object would allow us to use a unified node selection interface everywhere.
Change-Id: Ie71e29d60184ee0e5b4547eb54325f09c418f73c
We would like to remove the segments loop, so we need to refactor
our tools to use the ranged loop.
To simplify the change, the ranged loop is used with a single range only.
https://github.com/storj/storj/issues/5237
Change-Id: I94d96d54f9d0e37b06def4f4fc16b71c5b79baba
A chore responsible for purging data from the console DB has been
implemented. Currently, it removes old records for unverified user
accounts. We plan to extend this functionality to include expired
project member invitations in the future.
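A sketch of the general shape such a chore has; the config fields and the DB method below are hypothetical, not the actual console API:

```go
package example

import (
	"context"
	"time"

	"go.uber.org/zap"
)

// consoleDB is the hypothetical subset of the console database the chore needs.
type consoleDB interface {
	DeleteUnverifiedBefore(ctx context.Context, before time.Time) error
}

// Chore periodically purges stale records from the console DB.
type Chore struct {
	log      *zap.Logger
	db       consoleDB
	interval time.Duration // how often the chore runs
	maxAge   time.Duration // how long unverified accounts are kept
}

// Run deletes old unverified accounts on every tick until the context is cancelled.
func (chore *Chore) Run(ctx context.Context) error {
	ticker := time.NewTicker(chore.interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			before := time.Now().Add(-chore.maxAge)
			if err := chore.db.DeleteUnverifiedBefore(ctx, before); err != nil {
				chore.log.Error("failed to delete unverified accounts", zap.Error(err))
			}
		}
	}
}
```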
Resolves #5790
References #5816
Change-Id: I1f3ef62fc96c10a42a383804b3b1d2846d7813f7
The assignment `err = nil` is not used in the rest of the code;
however, it was there as a protective `err = nil` assignment.
Change-Id: Id70fb2a2e68b91e2481952d865334e603ca41188
Remove generate-missing-project-salt migration tool code and related
tests. This migration has already been run and this code is no longer
needed.
Issue https://github.com/storj/storj-private/issues/163
Change-Id: I4e36dcd95a07c5305c597113a7fd08148e100ccc
It was surprising that `satellite auditor` complained about SMTP mail settings, even though it's not supposed to send any mail.
It looks like we can remove the mail service dependency, as it's not a hard requirement for overlay.Service.
Change-Id: I29a52eeff3f967ddb2d74a09458dc0ee2f051bd7
Previously we were exposing the testing facilities via interface casting
the necessary parts; however, when things are not part of the main
satellite.DB interface they need to be manually propagated. Rather than
relying on hidden methods, let's expose things as long as they don't
create a direct dependency on the database driver.
Change-Id: I2eb7d8b60f4b64de1320c2d32581f7be267c0f57
Move global variables to be local for each test to reduce the likelihood
of unexpected bugs. Also parallelize the different db tests and clean up
unnecessary lines/checks.
Change-Id: I9dc3894d0945430908b10af5aeeba2f9246caf2a
Satellite DB tests will print a warning (WARN) into the logs if a full
table scan is detected. Tests won't fail automatically, because we
currently have multiple queries which do full table scans and fixing
them is not trivial.
We may change that behavior once we figure out how to exclude specific
queries from detection, or once all problematic queries are fixed.
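The mechanism can be approximated by running EXPLAIN on the statements issued by a test and looking for sequential scans in the plan; this is only an illustrative sketch of that idea, not the detector used in the satellite DB tests:

```go
package example

import (
	"context"
	"database/sql"
	"strings"

	"go.uber.org/zap"
)

// warnOnFullTableScan runs EXPLAIN for a query and logs a warning when the
// plan contains a sequential scan. It is best-effort: any failure of the
// check itself is ignored, so it never fails the test on its own.
func warnOnFullTableScan(ctx context.Context, log *zap.Logger, db *sql.DB, query string, args ...interface{}) {
	rows, err := db.QueryContext(ctx, "EXPLAIN "+query, args...)
	if err != nil {
		return
	}
	defer func() { _ = rows.Close() }()

	for rows.Next() {
		var line string
		if err := rows.Scan(&line); err != nil {
			return
		}
		if strings.Contains(line, "Seq Scan") {
			log.Warn("full table scan detected", zap.String("query", query))
			return
		}
	}
}
```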
https://github.com/storj/storj/issues/5471
Change-Id: Icafe782257a0d353e8bcdf6fa8a19c20b1091a0b
This tool is being removed because it has served its purpose and was blocking another removal from being verified.
Change-Id: Ie888aa7ae1b153a34210af3a5d5a3682b381ba82
Add a migration tool (and test) to update the salt column in the projects table
with the SHA-256 hash of the project ID when it is null.
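A rough sketch of what the backfill does, assuming a Postgres-style `projects(id, salt)` table; the actual tool works against the satellite DB layer rather than raw `database/sql`:

```go
package example

import (
	"context"
	"crypto/sha256"
	"database/sql"
)

// backfillProjectSalt sets salt to sha256(project id) for rows where it is NULL.
func backfillProjectSalt(ctx context.Context, db *sql.DB) error {
	rows, err := db.QueryContext(ctx, `SELECT id FROM projects WHERE salt IS NULL`)
	if err != nil {
		return err
	}
	defer func() { _ = rows.Close() }()

	var ids [][]byte
	for rows.Next() {
		var id []byte
		if err := rows.Scan(&id); err != nil {
			return err
		}
		ids = append(ids, id)
	}
	if err := rows.Err(); err != nil {
		return err
	}

	for _, id := range ids {
		salt := sha256.Sum256(id)
		if _, err := db.ExecContext(ctx,
			`UPDATE projects SET salt = $1 WHERE id = $2 AND salt IS NULL`,
			salt[:], id,
		); err != nil {
			return err
		}
	}
	return nil
}
```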
Issue https://github.com/storj/storj-private/issues/66
Change-Id: Ib8d484ac8d6ee25859064d803e2ac8fb46b45921
We were reading in a segment's stream ID and position, and assuming that
was enough for the downloader. But of course, the downloader needs
AliasPieces filled in. So now we request each segment record from the
metabase and fill in the VerifySegment records entirely.
Change-Id: If85236388eb99a65e2cb739aa976bd49ee2b2c89
This will allow us to retry some specific segments from
segments-retry.csv with particularly high counts of "retry" pieces.
Change-Id: I48fd419cc0350a3be4c9e77ce8d28871565b7f97
* use the same DB application name for satellite and metabase
* use noop orders DB implementation to avoid storing allocated bandwidth
in DB
Change-Id: I20e88c694d38240fe1a20c45719e210cfb76402c
We have to wait until the slowest node is done being tested before we
can move on to the next set of segments. Since the slowest node can be
arbitrarily slow, we'll set a timeout and treat too-slow nodes as
temporarily offline.
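The idea, sketched with a hypothetical `verify` callback: wrap each per-node check in a context deadline and fold deadline errors into the "temporarily offline" outcome.

```go
package example

import (
	"context"
	"errors"
	"time"
)

// checkNode runs a per-node verification bounded by a timeout. When the
// deadline is exceeded, the node is reported as temporarily offline instead
// of stalling the whole batch on one slow node.
func checkNode(ctx context.Context, timeout time.Duration, verify func(context.Context) error) (offline bool, err error) {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	err = verify(ctx)
	if errors.Is(err, context.DeadlineExceeded) {
		return true, nil
	}
	return false, err
}
```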
Change-Id: I80fe865dd4e8f826700430fb0140c2d3aefca381
When we are verifying pieces by downloading the first byte, if we
encounter a timeout, treat the node as if we failed to connect to it,
and log the error once instead of twice.
Change-Id: I70602d554183c98f1213f3ffb1bfec41100ea0e7
This csv file was being closed as soon as the service was created.
All subsequent writes to the closed file handle produced errors,
which were logged but otherwise ignored.
Instead, we would like the file to remain open and writable, until
the service is destroyed.
Change-Id: Ib29944d25b2f5b2d0f90fdbdcde44fea8d769321
Previously, if any pieces were still on disqualified nodes, this tool
would treat those pieces as fine (if the disqualified node is still
online) or temporarily unavailable (if the disqualified node is
offline). Instead, we should treat such pieces as lost.
This also fixes a slight problem with the code that handles a broken
alias. This is not likely to happen, but if we do see an alias that is
not in the alias map, we return an error instead of nil.
Change-Id: Ib4e2e729ef0535dd7bd9ce2f621680d9f959891c
Because it was originally intended to work on only a few pieces from
each segment at a time, and would frequently have reset its list of
online nodes, segment-verify has been taking nodes out of its
onlineNodes set and never putting them back. This means that over a long
run in Check=0 mode, we end up treating more and more nodes as offline,
permanently. This trend obfuscates the number of missing pieces that
each segment really has, because we don't check pieces on offline nodes.
This commit changes the onlineNodes set to an "offlineNodes" set, with
an expiration time on the offline status. So nodes are added to
the offlineNodes set when we see they are offline, and then we only
treat them as offline for the next 30 minutes (configurable). After that
point, we will try connecting to them again.
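A minimal sketch of such an expiring set; the names here are illustrative, not the actual segment-verify implementation:

```go
package example

import (
	"time"

	"storj.io/common/storj"
)

// offlineNodes remembers which nodes were recently seen offline and how long
// that observation should be trusted.
type offlineNodes struct {
	duration time.Duration // how long a node is treated as offline (configurable)
	seen     map[storj.NodeID]time.Time
}

func newOfflineNodes(duration time.Duration) *offlineNodes {
	return &offlineNodes{duration: duration, seen: map[storj.NodeID]time.Time{}}
}

// MarkOffline records that the node was unreachable just now.
func (o *offlineNodes) MarkOffline(id storj.NodeID, now time.Time) {
	o.seen[id] = now
}

// IsOffline reports whether the node should still be skipped; once the
// observation expires, the node is removed and will be tried again.
func (o *offlineNodes) IsOffline(id storj.NodeID, now time.Time) bool {
	at, ok := o.seen[id]
	if !ok {
		return false
	}
	if now.Sub(at) > o.duration {
		delete(o.seen, id)
		return false
	}
	return true
}
```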
Change-Id: I14f0332de25cdc6ef655f923739bcb4df71e079e
The WithExists methods previously were not writing problematic pieces to
problem-pieces.csv. With this change, they will.
Change-Id: I51eadd3d8f4299e1efa787c9266a7aacfa525eb3
When this branch is followed, `audit.OutcomeFailure` is returned, and
`MarkNotFound()` is immediately called again (in
`(*NodeVerifier).Verify()`). Calling `MarkNotFound()` twice for the same
piece is not correct.
Change-Id: I1a2764bc32ed015628fcd9353ac3307f269b4bbd
It may help to know how much faster these methods are than the
alternative (asking nodes for each piece in turn).
Change-Id: Ieb7c963f62b662f72c84a49de8a09c065c14f782
It was ok as it was, but since we want to keep a close eye on progress
while the tool is running, it will help to have results written to the
output file immediately instead of after the buffer is full or the
program exits.
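Assuming the output goes through an `encoding/csv` writer, the change amounts to flushing after each record instead of only when the buffer fills or the writer is closed; a sketch:

```go
package example

import "encoding/csv"

// writeResult appends a single row and flushes it right away, so progress is
// visible in the output file while the tool is still running.
func writeResult(w *csv.Writer, record []string) error {
	if err := w.Write(record); err != nil {
		return err
	}
	w.Flush()
	return w.Error()
}
```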
Change-Id: Ie027f05771a637afb06969ec775cd32b142b7635
When Check == 0 (check all pieces), there is nearly always a piece left
in the retry count, so most segments get logged in segments-retry.csv.
This change makes it so we require retry>5 before adding to
segments-retry.csv (only in the check==0 case).
Change-Id: Iaea523c27eb777e3c248c27c7ef5effe77ae54cf
* better error handling when the Exists method is not available on the storage node
* more optimal processing of the response from the Exists method
Change-Id: I6d61c09473e9f5ab76a4601720e8bd520767f4c2
This adds the capability to the segment-verify tool of checking all
pieces of every indicated segment.
Pieces which could not be accessed (i.e. we couldn't get a single
byte from them) are recorded in a csv file.
I haven't been able to test this in any very meaningful way, yet, but I
am comforted by the fact that the worst things it could possibly do are
(a) download pieces too many times, and (b) miss downloading some
pieces.
Change-Id: I3aba30921572c974993363eb36d0fd5b3ae97907
Provides the `segment-verify run buckets` command for verifying segments within a list of buckets.
The bucket list is a csv file with `project_id,bucket_name` rows to be checked.
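For illustration, parsing that bucket list might look like this (the real command parses project IDs into UUIDs; this sketch keeps them as strings):

```go
package example

import (
	"encoding/csv"
	"errors"
	"fmt"
	"io"
)

// bucketEntry is one row of the bucket list: which bucket in which project to check.
type bucketEntry struct {
	ProjectID  string
	BucketName string
}

// readBucketList parses the `project_id,bucket_name` CSV.
func readBucketList(r io.Reader) ([]bucketEntry, error) {
	reader := csv.NewReader(r)
	var entries []bucketEntry
	for {
		record, err := reader.Read()
		if errors.Is(err, io.EOF) {
			return entries, nil
		}
		if err != nil {
			return nil, err
		}
		if len(record) < 2 {
			return nil, fmt.Errorf("expected project_id,bucket_name, got %q", record)
		}
		entries = append(entries, bucketEntry{ProjectID: record[0], BucketName: record[1]})
	}
}
```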
https://github.com/storj/storj-private/issues/101
Change-Id: I3d25c27b56fcab4a6a1aebb6f87514d6c97de3ff
Add nodeevents.DB to satellite overlay service so we can insert node
events into the nodeevents DB.
Change-Id: I642c0ccc9941ecdb08cb22d5c8cf701959a55156
The new flag 'MultipleVersions' was not correctly passed from the metainfo
configuration to the metabase configuration. Because the configuration was
set correctly for unit tests we didn't catch it, and the issue was found
while testing on the QA satellite.
This change reduces the number of places where new metabase flags need
to be propagated from the metainfo configuration, to avoid problems with
setting new flags in the future.
Fixes https://github.com/storj/storj/issues/5274
Change-Id: I74bc122649febefd87f665be2fba628f6bfd9044
It's quite likely to hit some time limit; rather than giving up
immediately, let's retry after the throttle amount.
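A generic sketch of the retry shape; the real code decides when to retry and how long to wait based on the service's throttle, so the helper below is hypothetical:

```go
package example

import (
	"context"
	"time"
)

// withRetry retries fn after waiting for the suggested throttle delay,
// instead of giving up on the first time-limit error.
func withRetry(ctx context.Context, attempts int, throttle time.Duration, fn func(context.Context) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(ctx); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(throttle):
		}
	}
	return err
}
```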
Change-Id: I20944b058d771f5d4bfa0eea7a2c26cefcd74739
We want to send emails to SNOs. Node status changes go through the
overlay service, so it's a good place to add the mail service.
Add the mailservice.Service, satellite address, and satellite name to
overlay service. Also add feature flag --overlay.send-node-emails
Change-Id: I3bd2cb3bf22f9724954ce2374f8b651b902b3a24