This test involves a satellite with dev defaults (DistinctIP=no) being
upgraded past commit 2522ff09b6, which
means we need to run the dev-defaults-satellite-upgrade migration SQL
to avoid getting DistinctIP=yes behavior (which breaks the tests).
Change-Id: I29fb596d1ffa568dad635d98cfe9abacd3aaa48f
Only the API peer needs access to the orders DB (and rollups cache)
because it's the only place where we create orders for PUT and GET
operations. For other peers like the auditor and repairer we can use a
no-op implementation to reduce the number of dependencies they need.
Change-Id: Ic32d1879f0b97ffc4516f401898e31e95ae892e4
It was surprising that `satellite auditor` complained about SMTP mail
settings, even though it's not supposed to send any mail.
It looks like we can remove the mail service dependency, as it's not a
hard requirement for overlay.Service.
Change-Id: I29a52eeff3f967ddb2d74a09458dc0ee2f051bd7
QUIC is still configurable based on the QUIC rollout
environment variables in storj.io/common. This stops
using a method removed in:
https://review.dev.storj.io/c/storj/uplink/+/9815
Change-Id: Ibfe28cfb19e5672630970b9e2c8c6ac0c98d4822
I use the `uplink share` command, but I always fail to set the
--not-before parameter.
* Usually I try +2d when I see in the help that +2h is possible --> fail
* When that fails, I try to set an explicit date, like 2012-12-23 --> fail
This patch makes it possible to use:
* a day duration (like +3d)
* a shorter date definition (like `2023-12-12` or `2023-12-12T12:40`)
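For illustration, the parsing can be sketched like this (hypothetical
helper, not the actual uplink code; imports: fmt, strconv, strings,
time):

    // Accept day-based offsets like "+3d" (time.ParseDuration has no
    // day unit) and progressively shorter date layouts.
    func parseHumanDate(value string, now time.Time) (time.Time, error) {
        if strings.HasPrefix(value, "+") || strings.HasPrefix(value, "-") {
            if strings.HasSuffix(value, "d") {
                // Convert the day count into an hour-based offset.
                days, err := strconv.Atoi(strings.TrimSuffix(value, "d"))
                if err != nil {
                    return time.Time{}, err
                }
                return now.Add(time.Duration(days) * 24 * time.Hour), nil
            }
            d, err := time.ParseDuration(value)
            return now.Add(d), err
        }
        for _, layout := range []string{time.RFC3339, "2006-01-02T15:04", "2006-01-02"} {
            if t, err := time.Parse(layout, value); err == nil {
                return t, nil
            }
        }
        return time.Time{}, fmt.Errorf("invalid date: %q", value)
    }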
Change-Id: I2243b36f59c8929eb0473c4bb4fed19220890c71
The tests were using global variables to keep the mock state, indexed
by the satellite ID. However, the satellite IDs are deterministic, so
it's possible for two tests to end up using the same mocks.
Instead, make the mock creation not depend on the satellite ID and
require it to be configured via paymentsconfig.
This fixes TestAutoFreezeChore failure.
Change-Id: I531d3550a934fbb36cff2973be96fd43b7edc44a
This code is essentially a replacement for eestream.CalcPieceSize. To
call eestream.CalcPieceSize we need an eestream.RedundancyStrategy,
which is not trivial to get as it requires infectious.FEC. For example,
infectious.FEC creation is visible on the GE loop observer CPU profile
because we were doing this for each segment in the DB.
A new method was added to storj.Redundancy, and here we are just wiring
it up with the metabase Segment.
BenchmarkSegmentPieceSize
BenchmarkSegmentPieceSize/eestream.CalcPieceSize
BenchmarkSegmentPieceSize/eestream.CalcPieceSize-8 5822 189189 ns/op 9776 B/op 8 allocs/op
BenchmarkSegmentPieceSize/segment.PieceSize
BenchmarkSegmentPieceSize/segment.PieceSize-8 94721329 11.49 ns/op 0 B/op 0 allocs/op
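For reference, the arithmetic is roughly the following (a sketch
assuming the usual Reed-Solomon layout, not necessarily the exact
storj.Redundancy method):

    // A segment is striped into units of shareSize*requiredShares
    // bytes; each piece stores one share per stripe. No FEC state is
    // needed for this, hence zero allocations.
    func pieceSize(segmentSize int64, shareSize, requiredShares int32) int64 {
        stripeSize := int64(shareSize) * int64(requiredShares)
        stripes := (segmentSize + stripeSize - 1) / stripeSize // ceil
        return stripes * int64(shareSize)
    }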
Change-Id: I5a8b4237aedd1424c54ed0af448061a236b00295
This change removes the trailing slash from the account activation and
password recovery URLs, making them consistent with the rest. The URLs'
previous forms are still supported, however, in order to not invalidate
emails containing them.
Resolves storj/customer-issues#491
Change-Id: Ie774a87698d8e9edd1836611968fc3911c6cc56f
The peer for generating bloom filters will be able to use the ranged
loop. In addition, some cleanup was done:
* removed unused parts of the GC BF peer (identity, version control)
* added missing Close method for the ranged loop service
* added some additional tests
https://github.com/storj/storj/issues/5545
Change-Id: I9a3d85f5fffd2ebc7f2bf7ed024220117ab2be29
Previously we were exposing the testing facilities by interface casting
the necessary parts; however, when things are not part of the main
satellite.DB interface they need to be manually propagated. Rather than
relying on hidden methods, let's expose things as long as they don't
create a direct dependency on the database driver.
Change-Id: I2eb7d8b60f4b64de1320c2d32581f7be267c0f57
Users with a partner package plan should be unable to replace their
plan's coupon. This change enforces this behavior by rejecting coupon
application attempts from users who meet this criterion.
Change-Id: I6383d19f2c7fbd9e1a2826473b2f867ea8a8ea3e
listing "/" on windows was not returning files from
the root because it was adding an extra separator
unconditionally. the docs for filepath.Clean say
The returned path ends in a slash only if it represents
a root directory, such as "/" on Unix or `C:\` on Windows.
so we need to add the slash only if it doesn't already have
one to avoid the double slash problem while still ensuring
the path ends with a slash.
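A minimal sketch of the fix (illustrative; imports: path/filepath,
strings):

    // Append the separator only when the cleaned path doesn't already
    // end with one, so "/" (or `C:\`) doesn't become "//".
    func ensureTrailingSeparator(path string) string {
        cleaned := filepath.Clean(path)
        if !strings.HasSuffix(cleaned, string(filepath.Separator)) {
            cleaned += string(filepath.Separator)
        }
        return cleaned
    }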
Change-Id: I98afc1f1a06bb06035c7647ecb0da3214080162d
Move global variables to be local for each test to reduce the likelihood
of unexpected bugs. Also parallelize the different db tests and clean up
unnecessary lines/checks.
Change-Id: I9dc3894d0945430908b10af5aeeba2f9246caf2a
Satellite DB tests will log a warning (WARN) when a full table scan is
detected. Tests won't fail automatically, because we currently have
multiple queries which do a full table scan and they are not trivial
to change.
We may change that behavior once we figure out how to exclude specific
queries from detection, or once we fix all the problematic queries.
https://github.com/storj/storj/issues/5471
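As an illustration, the detection can work roughly like this (a
sketch, not the actual implementation; imports: context, database/sql,
log, strings):

    // Run EXPLAIN on the statement and warn when the plan mentions a
    // sequential/full scan.
    func warnOnFullTableScan(ctx context.Context, db *sql.DB, query string, args ...interface{}) {
        rows, err := db.QueryContext(ctx, "EXPLAIN "+query, args...)
        if err != nil {
            return // EXPLAIN may not apply to this statement
        }
        defer rows.Close()
        for rows.Next() {
            var line string
            if rows.Scan(&line) != nil {
                return
            }
            // Postgres prints "Seq Scan"; CockroachDB prints "FULL SCAN".
            if strings.Contains(line, "Seq Scan") || strings.Contains(line, "FULL SCAN") {
                log.Printf("WARN: full table scan detected: %s", query)
                return
            }
        }
    }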
Change-Id: Icafe782257a0d353e8bcdf6fa8a19c20b1091a0b
This change causes the bucket's partner info to be used rather than the
user's when calculating project usage prices. This ensures that users
who own differently-partnered buckets will be charged correctly for
usage of each bucket, according to the bucket's partner.
Related to storj/storj-private#90
Change-Id: Ieeedfcc5451e254216918dcc9f096758be6a8961
`storage.KeyValueStore` requires ordered iteration, which redis
doesn't support natively. This would require loading all the keys
into memory and then processing them, rather than iterating over them
one-by-one.
This adds a temporary `IterateUnordered` to handle the migrations
more gracefully.
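A sketch of the unordered variant over redis (illustrative, using
go-redis SCAN; the real method signature may differ):

    // SCAN yields keys in no particular order, one batch at a time, so
    // nothing has to be loaded and sorted up front.
    func iterateUnordered(ctx context.Context, rdb *redis.Client, fn func(key string) error) error {
        var cursor uint64
        for {
            keys, next, err := rdb.Scan(ctx, cursor, "*", 100).Result()
            if err != nil {
                return err
            }
            for _, key := range keys {
                if err := fn(key); err != nil {
                    return err
                }
            }
            if next == 0 {
                return nil
            }
            cursor = next
        }
    }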
Change-Id: I55b763500523077c7ab8fdfad175c32cc7788e47
This tool is being removed because it has served its purpose and was blocking another removal from being verified.
Change-Id: Ie888aa7ae1b153a34210af3a5d5a3682b381ba82
Add migration tool (and test) to update salt column in projects table
with the SHA-256 hash of the project ID when null
Issue https://github.com/storj/storj-private/issues/66
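The core of the backfill is roughly this (a sketch; the actual
migration code may differ, and projectID is assumed to be a 16-byte
UUID):

    // Set the salt to the SHA-256 hash of the project ID, but only for
    // rows where the salt is still NULL.
    salt := sha256.Sum256(projectID[:])
    _, err := db.ExecContext(ctx,
        `UPDATE projects SET salt = $1 WHERE id = $2 AND salt IS NULL`,
        salt[:], projectID[:],
    )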
Change-Id: Ib8d484ac8d6ee25859064d803e2ac8fb46b45921
Add a flag to enable/disable analytics so uplink can be run
non-interactively. Also, when run non-interactively for the first time,
it will no longer error but instead default to disabling analytics.
Part of https://github.com/storj/storj/issues/5126
Change-Id: I07ac8a040664334efcb4e2536f26c330c1751a6f
Add a docker image for uplink-cli and push it to docker hub.
We used to have this before the change to uplinkng. I'm not
sure whether the pushing works; we'll see after merge.
To test, build an image with `make uplink-image`, read the tag from the
output and run normal uplink-cli commands using
`docker run -it storjlabs/uplink:df9bbceca-uplink-docker-go1.18.8-amd64 [command]`
Part of https://github.com/storj/uplink/issues/109
Change-Id: I8a10aab2b778951ff42a22ba2f252c581eb66b65
We were reading in a segment's stream ID and position, and assuming that
was enough for the downloader. But of course, the downloader needs
AliasPieces filled in. So now we request each segment record from the
metabase and fill in the VerifySegment records entirely.
Change-Id: If85236388eb99a65e2cb739aa976bd49ee2b2c89
This will allow us to retry some specific segments from
segments-retry.csv with particularly high counts of "retry" pieces.
Change-Id: I48fd419cc0350a3be4c9e77ce8d28871565b7f97
This change allows for overriding project usage prices for a specific
partner so that users who sign up with that partner do not need their
invoices to be manually adjusted.
Relates to storj/storj-private#90
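Conceptually the override lookup is simple (sketch with illustrative
names):

    // Fall back to the default prices when there is no partner-specific
    // override configured.
    func (c Config) UsagePriceFor(partner string) ProjectUsagePrice {
        if override, ok := c.UsagePriceOverrides[partner]; ok {
            return override
        }
        return c.UsagePrice
    }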
Change-Id: Ia54a9cc7c2f8064922bbb15861f974e5dea82d5a
* use the same DB application name for satellite and metabase
* use noop orders DB implementation to avoid storing allocated bandwidth
in DB
Change-Id: I20e88c694d38240fe1a20c45719e210cfb76402c
On BSD, the storagenode-updater falls back to renaming the storagenode
binary without doing anything to restart the service.
Like the approach we have for Linux, this change finds the process ID
of the storagenode using pgrep and sends an interrupt signal to the
process.
Closes https://github.com/storj/storj/issues/5333
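The mechanism is roughly this (illustrative sketch with simplified
error handling; imports: os/exec, strconv, strings, syscall):

    // Find the storagenode's PID with pgrep and send it SIGINT so the
    // service picks up the renamed binary on restart.
    out, err := exec.Command("pgrep", "storagenode").Output()
    if err == nil {
        if pid, err := strconv.Atoi(strings.TrimSpace(string(out))); err == nil {
            _ = syscall.Kill(pid, syscall.SIGINT)
        }
    }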
Change-Id: Icced90ea3e831528804784c2170a3b8b14952e8c
We have to wait until the slowest node is done being tested before we
can move on to the next batch of segments. Since the slowest node can
be arbitrarily slow, we'll set a timeout and treat too-slow nodes as
temporarily offline.
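In outline (a sketch; names are illustrative):

    // Bound each node's check with a timeout; hitting the deadline is
    // treated like a temporarily offline node instead of blocking the
    // whole batch.
    ctx, cancel := context.WithTimeout(ctx, cfg.NodeCheckTimeout)
    defer cancel()
    if err := verifyNode(ctx, node); errors.Is(err, context.DeadlineExceeded) {
        markTemporarilyOffline(node)
    }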
Change-Id: I80fe865dd4e8f826700430fb0140c2d3aefca381
When we are verifying pieces by downloading the first byte, if we
encounter a timeout, treat the node as if we failed to connect to it,
and log the error once instead of twice.
Change-Id: I70602d554183c98f1213f3ffb1bfec41100ea0e7
This csv file was being closed as soon as the service was created.
All subsequent writes to the closed file handle produced errors,
which were logged but otherwise ignored.
Instead, we would like the file to remain open and writable until the
service is destroyed.
Change-Id: Ib29944d25b2f5b2d0f90fdbdcde44fea8d769321
The copyFile method has some safeguards to ensure that the multipart
write is aborted. This is accomplished by always calling abort on the
MultiWriteHandle when the copy is finished, whether or not there was a
failure or it was successfully committed. If the copy was committed,
then this RPC is a no-op on the metainfo server.
Regardless, the calls to abort constitute an additional RPC to the
satellite for no benefit. This is exacerbated by the fact that the code
currently ends up calling abort twice.
This change updates the libuplink-backed MultiWriteHandle implementation
to not call abort if the write is committed and vice-versa. This
eliminates the two wasteful RPC calls.
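The gist of the change (a sketch, not the exact libuplink-backed
code):

    // Track whether the write was committed so Abort becomes a local
    // no-op instead of an extra RPC to the satellite.
    func (h *writeHandle) Commit(ctx context.Context) error {
        if err := h.upload.Commit(); err != nil {
            return err
        }
        h.committed = true
        return nil
    }

    func (h *writeHandle) Abort(ctx context.Context) error {
        if h.committed {
            return nil // already committed; nothing to abort
        }
        return h.upload.Abort()
    }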
Change-Id: I13679234f6f473e9a93179e6791fb57eac512f25
Added the rangedloop to storj-sim for each satellite, to verify it
works for the metrics observer. Additionally:
* removed identity from the rangedloop peer, as we never use it
* added logs on service run
* added a loop to the service instead of an endless for loop
* moved the interval value to config
Closes: https://github.com/storj/storj/issues/5414
Change-Id: Ibc3b06071b68feda4a35b45da2bbe36e22a02fc8
Previously, if any pieces are still on disqualified nodes, this tool
would treat those pieces as fine (if the disqualified node is still
online) or temporarily unavailable (if the disqualified node is
offline). Instead, we should treat such pieces as lost.
This also fixes a slight problem with the code that handles a broken
alias. This is not likely to happen, but if we do see an alias that is
not in the alias map, we return an error instead of nil.
Change-Id: Ib4e2e729ef0535dd7bd9ce2f621680d9f959891c
Because it was originally intended to work on only a few pieces from
each segment at a time, and would frequently have reset its list of
online nodes, segment-verify has been taking nodes out of its
onlineNodes set and never putting them back. This means that over a long
run in Check=0 mode, we end up treating more and more nodes as offline,
permanently. This trend obfuscates the number of missing pieces that
each segment really has, because we don't check pieces on offline nodes.
This commit changes the onlineNodes set to an "offlineNodes" set, with
an expiration time on the offline-ness quality. So nodes are added to
the offlineNodes set when we see they are offline, and then we only
treat them as offline for the next 30 minutes (configurable). After that
point, we will try connecting to them again.
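A sketch of the expiring set (illustrative types):

    // Nodes are recorded with a deadline; once it passes they count as
    // online again and will be retried.
    type offlineNodes struct {
        ttl   time.Duration
        until map[string]time.Time // keyed by node ID
    }

    func (o *offlineNodes) MarkOffline(id string) {
        o.until[id] = time.Now().Add(o.ttl)
    }

    func (o *offlineNodes) IsOffline(id string) bool {
        deadline, ok := o.until[id]
        return ok && time.Now().Before(deadline)
    }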
Change-Id: I14f0332de25cdc6ef655f923739bcb4df71e079e
The WithExists methods previously were not writing problematic pieces to
problem-pieces.csv. With this change, they will.
Change-Id: I51eadd3d8f4299e1efa787c9266a7aacfa525eb3
When this branch is followed, `audit.OutcomeFailure` is returned, and
`MarkNotFound()` is immediately called again (in
`(*NodeVerifier).Verify()`). Calling `MarkNotFound()` twice for the same
piece is not correct.
Change-Id: I1a2764bc32ed015628fcd9353ac3307f269b4bbd
It may help to know how much faster these methods are than the
alternative (asking nodes for each piece in turn).
Change-Id: Ieb7c963f62b662f72c84a49de8a09c065c14f782
It was ok as it was, but since we want to keep a close eye on progress
while the tool is running, it will help to have results written to the
output file immediately instead of after the buffer is full or the
program exits.
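Concretely, this amounts to flushing after each record (sketch, using
encoding/csv):

    // Flush the csv writer after every record so results appear in the
    // output file immediately.
    if err := w.Write(record); err != nil {
        return err
    }
    w.Flush()
    return w.Error()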
Change-Id: Ie027f05771a637afb06969ec775cd32b142b7635
This change is similar to
https://review.dev.storj.io/c/storj/storj/+/7687 but applied when
uploading from stdin with parallelism > 1.
Currently, the parallelism from stdin scales up to 3 or 4, but not
greater than that. If we buffer the content from stdin more aggressively
the parallelism scales to higher levels and reaches the performance of
reading directly from a file.
Change-Id: I1f447686a88074882709992ee6d52dd262e220fb
When Check == 0 (check all pieces), there is nearly always a piece left
in the retry count, so most segments get logged in segments-retry.csv.
This change makes it so we require retry>5 before adding to
segments-retry.csv (only in the check==0 case).
Change-Id: Iaea523c27eb777e3c248c27c7ef5effe77ae54cf