During an update to the billing DB, there is a special-case failure that can occur if multiple updates to the table happen concurrently. In this case, the update fails silently due to the balance constraint, and the subsequent insert of a new record fails because the user already exists in the table. The solution for this case is to simply retry the insert, with some limit to prevent infinite loops.
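A rough sketch of the retry pattern described here; table, column, and function names are placeholders, not the actual billing schema:
```go
package billing

import (
	"context"
	"database/sql"
)

const maxRetries = 5

// upsertWithRetry retries the update-then-insert sequence a bounded number of
// times so that concurrent writers cannot drive it into an infinite loop.
func upsertWithRetry(ctx context.Context, db *sql.DB, userID []byte, amount int64) error {
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		res, err := db.ExecContext(ctx,
			`UPDATE user_balances SET balance = balance + $2 WHERE user_id = $1 AND balance + $2 >= 0`,
			userID, amount)
		if err == nil {
			if n, _ := res.RowsAffected(); n > 0 {
				return nil // the update took effect
			}
		}
		// Either the row doesn't exist yet or the balance constraint rejected
		// the update silently; try to insert a fresh record instead.
		_, lastErr = db.ExecContext(ctx,
			`INSERT INTO user_balances (user_id, balance) VALUES ($1, $2)`,
			userID, amount)
		if lastErr == nil {
			return nil
		}
		// The insert failed (e.g. another writer created the row first); retry.
	}
	return lastErr
}
```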
Change-Id: Ibe70fec2c386c25bd2484fe91f49a6a962357706
- Previously unused struct Endpoint.Request now defines the form
of the request body.
- Path parameters (e.g. "id" in "/delete/{id}") are defined in
the Endpoint.PathParams field.
- Endpoint.Params has been renamed to Endpoint.QueryParams to
eliminate confusion.
Change-Id: Ifef51ca2f362c33086f0e43e936d50b0fdd18aa1
Logs out all current user sessions when a password is changed through both the
forgot password and change password methods.
Change-Id: Iaf9b4969aa45441591524906af326b9dec17939f
Split out the function that deletes a batch of objects from a bucket, so
that we get metrics which give a rough indication of how long this operation
takes.
Part of https://github.com/storj/storj/issues/4957
Change-Id: I20a4ed5894217f4cd0b2f25aee297f0ecda57ab5
Don't terminate the expired objects loop or the zombie objects loop when
there is a DB error while selecting the objects to delete, because it
isn't critical and the loops will pick them up again in the next
iteration.
The exception is when the DB rows Scan method returns an error, because
that's a symptom of the passed arguments not matching the order, number,
or type of the query's columns, or of invalid data in the DB.
Likewise, don't terminate these loops if there is a DB error when
deleting the objects, because the loops will pick them up in the next
iteration.
Because we no longer return those errors (so the loop isn't terminated),
we have to log them.
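A minimal sketch of this error-handling rule with an illustrative query and names (not the actual chore code): only scan errors are returned, other DB errors are logged and left for the next iteration.
```go
package zombiedeletion

import (
	"context"
	"database/sql"

	"go.uber.org/zap"
)

func deleteExpiredObjects(ctx context.Context, log *zap.Logger, db *sql.DB) error {
	rows, err := db.QueryContext(ctx, `SELECT stream_id FROM objects WHERE expires_at < now()`)
	if err != nil {
		// Not critical: log and let the next loop iteration pick these up.
		log.Warn("selecting expired objects failed", zap.Error(err))
		return nil
	}
	defer func() { _ = rows.Close() }()

	var ids [][]byte
	for rows.Next() {
		var id []byte
		if err := rows.Scan(&id); err != nil {
			// A scan error means the arguments don't match the query's columns
			// or the DB holds invalid data, so it terminates the loop.
			return err
		}
		ids = append(ids, id)
	}
	if err := rows.Err(); err != nil {
		log.Warn("iterating expired objects failed", zap.Error(err))
		return nil
	}

	if _, err := db.ExecContext(ctx, `DELETE FROM objects WHERE expires_at < now()`); err != nil {
		// Also not critical: the next iteration will retry the deletion.
		log.Warn("deleting expired objects failed", zap.Error(err))
	}
	_ = ids // the real code deletes in batches by the selected ids
	return nil
}
```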
Change-Id: I86bcf83d619345255840ae8f3db61620f044d2af
We have enabled the new project dashboard in production. Change the
default to true so that we do not need an explicit configuration in
prod.
Change-Id: I0f93773965283e7b0682f6586685224281cbf78c
We log metainfo object operations, and it looks like the log message
convention is `Object {operation}`; however, the `Object Download`
message didn't match the actual operation, and the message that actually
represented it was `Download Object`.
This commit changes the log message for the download object operation
to follow the format of the other object operation log messages and fixes
the log message for the Get Object operation.
For finding this I executed the following command at the root of the
repository to obtain the list of lines where we log object operations.
$> ag 'log\.Info\(".*Object.*",' --no-color
satellite/metainfo/endpoint_object.go
179: endpoint.log.Info("Object Upload", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "put"), zap.String("type", "object"))
336: endpoint.log.Info("Object Download", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "get"), zap.String("type", "object"))
557: endpoint.log.Info("Download Object", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "download"), zap.String("type", "object"))
791: endpoint.log.Info("Object List", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "list"), zap.String("type", "object"))
979: endpoint.log.Info("Object Delete", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "delete"), zap.String("type", "object"))
`ag` is a command-line tool similar to `grep`
Change-Id: I9072c5967eb42c397a2c64761d843675dd4991ec
We had a lot of flaky test failures from TestAuth. The error message (WHICH IS NOT VISIBLE IN JENKINS, only in tests.json):
```
FAIL: TestAuth_Register_NameSpecialChars/Postgres (1.04s)
panic: runtime error: index out of range [0] with length 0 [recovered]
panic: runtime error: index out of range [0] with length 0
goroutine 3473 [running]:
testing.tRunner.func1.2({0x235fe40, 0xc000fe6a08})
/usr/local/go/src/testing/testing.go:1209 +0x36c
testing.tRunner.func1()
/usr/local/go/src/testing/testing.go:1212 +0x3b6
panic({0x235fe40, 0xc000fe6a08})
/usr/local/go/src/runtime/panic.go:1047 +0x266
storj.io/storj/satellite/console/consoleweb/consoleapi_test.TestAuth_Register_NameSpecialChars.func1(0xc001a281a0, 0x289d650, 0xc001a30000)
/var/lib/jenkins/workspace/storj-gerrit-verify/satellite/console/consoleweb/consoleapi/auth_test.go:773 +0x785
storj.io/storj/private/testplanet.Run.func1.1({0x289c770, 0xc0001b8008})
/var/lib/jenkins/workspace/storj-gerrit-verify/private/testplanet/run.go:67 +0x732
storj.io/storj/private/testmonkit.RunWith({0x289c770, 0xc0001b8008}, {0x28d89b0, 0xc001a281a0}, {0x1, {0x0, 0x0}, {0x0, 0x0, 0x0}}, ...)
```
The root cause:
testplanet uses a simulated mail sender which, by default, clicks all the registration links (asynchronously).
These tests create links and check the unverified users, but with bad luck the mail sender may have already clicked the link, which makes the user verified.
Change-Id: I17cd6bf4ae3e7adc223ec693976bb609370f0c44
Added string length limits for the registration partner and promo params.
The limits were added on both the client and server sides.
Issue: https://github.com/storj/storj-private/issues/44
Change-Id: Ifae04caad1775e0a8ca72ae7f9abcf0ea5fb564b
Implemented Recaptcha and Hcaptcha for login screen.
Slightly refactored registration page implementation.
Made 2 different login/registration captcha configs on server side to easily swap between captchas independently.
Issue: https://github.com/storj/storj/issues/4982
Change-Id: I362bd5db2d59010e90a22301893bc3e1d860293a
In an effort to distribute load on the reputation database, the
reputation write cache scheduled nodes to be written at a time offset by
the local nodeID. The idea was that no two repair workers would have the
same nodeID, so they would not tend to write to the same row at the same
time.
Instead, since all satellite processes share the same satellite ID
(duh), this caused _all_ workers to try and write to the same row at the
same time _always_. This was not ideal.
This change uses a random number instead of the satellite ID. The random
number is sourced from the number of nanoseconds since the Unix epoch.
As long as workers are not started at the exact same nanosecond, they
ought to get well-distributed offsets.
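Roughly the idea, sketched below; syncInterval stands in for the cache's flush interval, and the actual field and package names differ:
```go
package reputation

import "time"

// randomOffset spreads workers across the flush interval using wall-clock
// nanoseconds instead of the satellite ID, which all workers share.
func randomOffset(syncInterval time.Duration) time.Duration {
	if syncInterval <= 0 {
		return 0
	}
	return time.Duration(time.Now().UnixNano()) % syncInterval
}
```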
Change-Id: I149bdaa6ca1ee6043cfedcf1489dd9d3e3c7a163
This change makes the authentication middleware reject any requests
that are not properly authenticated to prevent them from being
passed into endpoint-specific handlers.
Change-Id: I1f6b74f68fc7354e47fb825a128bad968129f420
satellite/{payments/billing,satellitedb}: refactor billing DB
Update the billing table to use generated IDs, to include the source and status fields, and to add a metadata jsonb field that can be used for fields specific to a particular type of billing transaction. Additionally, a new table was added to keep track of user balances.
Change-Id: Ieb3a63aafd8fe21fc3386bafd43d52081b7d2838
metabasetest package utils can be used by both tests and benchmarks
if we use the TestingT interface from the require package. This change
adjusts the metabasetest.CreateObject method.
Change-Id: I3c138e2ef9873b804ab5b3402804efa409397a9f
`len(m.attached)` was used without locking.
Also, use gofumpt to make whitespace usage more consistent.
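The shape of the locking fix, illustrated with a hypothetical struct (not the actual type):
```go
package drpcwrap

import "sync"

type manager struct {
	mu       sync.Mutex
	attached map[string]struct{}
}

// attachedCount reads len(m.attached) under the mutex; reading the length of
// a map that other goroutines mutate concurrently is a data race otherwise.
func (m *manager) attachedCount() int {
	m.mu.Lock()
	defer m.mu.Unlock()
	return len(m.attached)
}
```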
Change-Id: Ifa9deedc8451f0c54e84d6ac3c2bdc1807688989
When someone tries to create an account with an email that is already
associated with a verified account, send them an email with options to
sign in, create an account on another satellite, or reset password.
Change-Id: I844144d88b7356bd7064c4840c9441347a5368b0
Adding an interval_end_time column to the accounting_rollups table
to keep the last interval_end_time for each day's storage tallies.
Updates https://github.com/storj/storj/issues/4178
Change-Id: If7a8210c5e9fe2fc9df84b137a8b6e3db2471c58
Removed segment limit validation and checks in the metainfo endpoint and accounting/projectusage,
since the feature is live and every project now has a segment limitation.
Resolves: https://github.com/storj/storj/issues/4470
Change-Id: I8cf87cbbc40ac61262f9f05e52573d3ae6410611
With pointerdb, the listing objects operation was optimized to skip
objects from prefixes for non-recursive listing. This change adopts
that optimization from the old code.
The main change is to drop the current page of results if we detect a prefix
that needs to be skipped, and to jump with the next listing query past this
prefix by setting the cursor to "some-prefix" + (byte('/')+1), which is
effectively "some-prefix0".
Benchmark:
name old time/op new time/op delta
NonRecursiveListing/Postgres/listing_no_prefix-8 960µs ±11% 257µs ±12% -73.19% (p=0.008 n=5+5)
NonRecursiveListing/Postgres/listing_with_prefix-8 945µs ±11% 671µs ±12% -28.97% (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_no_prefix-8 4.31ms ± 8% 1.19ms ± 7% -72.44% (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_with_prefix-8 4.97ms ± 8% 3.35ms ±15% -32.67% (p=0.008 n=5+5)
Fixes https://github.com/storj/team-metainfo/issues/115
Change-Id: Iafdf3600d058abbaf441f792d32a7fc17cc08696
Previously there was no realtime accounting of the storage usage
during copies. Now there is.
Closes https://github.com/storj/storj/issues/4719
Change-Id: I0d536bf551d16208116c3aceac89ed590ec473bf
When signing up, a user can opt in to having sales contact them. This
change alters the way this flag is passed to Hubspot and Segment.
Hubspot sends a form submission request to create the user, followed by
a "custom behavioral event" with some additional user info.
Segment sends an "Identify" call followed by a "create user" event.
This change moves "have_sales_contact" to the form submission for
Hubspot, and the Identify call for Segment.
This simplifies the process of applying this field to a contact/user in
Segment and Hubspot.
Change-Id: I5e6871b3926a76fb24f97fb2d835a26720275072
When a user's bandwidth/storage limits are manually set to exceed the
paid tier defaults, attempting to update their project via the satellite
UI (e.g. to change the name/description) would result in an error.
This change modifies the limit checks for updating a project to remove
this issue.
https://github.com/storj/storj/issues/4892
Change-Id: I48853a3289b0ac51587f268a18c1b25743123fcf
The piece deletion service was using the KnownReliable method from
overlaycache to get node addresses to send delete requests to.
KnownReliable was always hitting the DB because this method was
not using a cache. This change uses the new DownloadSelectionCache
to avoid direct DB calls.
The change is not perfect because DownloadSelectionCache is not as
precise as the KnownReliable method and can select a few more nodes
to which we will send delete requests, but the difference should be
small and we can improve it later.
Updates https://github.com/storj/storj/issues/4959
Change-Id: I4c3d91089a18ac35ebcb469a56536c33f76e44ea
Currently we have a significant number of tallies that need to be
deleted together. Add a limit (by default 10k) to how many will
be deleted at the same time.
Change-Id: If530383f19b4d3bb83ed5fe956610a2e52f130a1
An object copy/move is done by 2 DRPC calls. It's possible a new object was uploaded or moved to the source location between these calls. For copy, in that case the segments end up with the wrong keys. This change adds an explicit check for that by comparing the streamId supplied by the user with the streamId in the database.
Fixes https://github.com/storj/storj/issues/4930
Change-Id: Id600456ce78fb4069b93644828a0b3eb85e23e16
We need to provide the ability to see bucket attribution on the gateway side
so customers can validate whether a bucket is attributed to them. Extended the
metainfo.ListBuckets request with UserAgent.
Fixes https://github.com/storj/storj/issues/4965
Change-Id: I5624874a7faa14cda06183ad44013e9ebb385b63
Trace all the requests that the HTTP API endpoints receive.
We want to trace them with Monkit because we want to break them down by
request type and response code to see whether they succeeded or failed.
Also log them at DEBUG level with the client IP.
Change-Id: Ia7b013351c788f131e775818f27091f3014ea861
We retry a GET_REPAIR operation in one case, and one case only (as far
as I can determine): when we are trying to connect to a node using its
last known working IP and port combination rather than its supplied
hostname, and we think the operation failed the first time because of a
Dial failure.
However, logs collected from storage node operators along with logs
collected from satellites are strongly indicating that we are retrying
GET_REPAIR operations in some cases even when we succeeded in connecting
to the node the first time. This results in the node complaining loudly
about being given a duplicate order limit (as it should), whereupon the
satellite counts that as an unknown error and potentially penalizes the
node.
See discussion at
https://forum.storj.io/t/get-repair-error-used-serial-already-exists-in-store/17922/36
.
Investigation into this problem has revealed that
`!piecestore.CloseError.Has(err)` may not be the best way of determining
whether a problem occurred during Dial. In fact, it is probably
downright Wrong. Handling of errors on a stream is somewhat complicated,
but it would appear that there are several paths by which an RPC error
originating on the remote side might show up during the Close() call,
and would thus be labeled as a "CloseError".
This change creates a new error class, repairer.ErrDialFailed, with
which we will now wrap errors that _really definitely_ occurred during
a Dial call. We will use this class to determine whether or not to retry
a GET_REPAIR operation. The error will still also be wrapped with
whatever wrapper classes it used to be wrapped with, so the potential
for breakage here should be minimal.
Refs: https://github.com/storj/storj/issues/4687
Change-Id: Ifdd3deadc8258f34cf3fbc42aff393fa545794eb
Added a new email HTML template.
It is sent when a user tries to reset the password of an unknown or unverified account.
Made a couple of minor config changes.
Issue: https://github.com/storj/storj/issues/4913
Change-Id: I730f48b3478e302d1e38e1f8a27c75f66a8ba6fd
Some nodes were added to the nodes table due to a bug in quic
based storagenode contact code. This is a tool to clean up these nodes.
Delete with batch-size 1k seems to take ~400ms on local CockroachDB.
Change-Id: Ic0c1180528c27952e19c431fc9cc327292a10a5f
Add a Payments method to the DepositWallets interface.
Exposes a payments retrieval API for a particular wallet to
other systems such as the console billing API.
Change-Id: Ifcb3a35514aab50be00f6360007954980b5d8b38
Use DownloadSelectionCache to avoid querying database for every
download.
This change only addresses downloads from users. The download selection
cache is not currently used for audit and repair.
Change-Id: I96a49e121dac0b4204f97592a63131edabd73fb5
Adding go.mod files into node_modules is not sufficient, because npm install
quite often wipes them out. Similarly, when running npm install locally
it removes them, causing the git state to be dirty.
Rather than having them committed, add them after running npm install.
Change-Id: Iaf21a9c6e198dc31fe50345ec5dee85b44617176
This is just a cleanup change to unblock libuplink to reorganize types
which are aliases to storj types.
Change-Id: Id3edf13f1b0aef52d7606d545aa7a6594cf8d13f
Add go.mod to node_modules folder, that way Go compiler doesn't
need to scan the node_module directories for any Go code.
Change-Id: I747909416490c847d6b4bfa3438fea66660fcd53
It seems the tests relied on time.Now(), which might cause some
discrepancies in calculations. Use a fixed time.Now() rather than
recalculating.
As a side fix, remove the "Test" prefix from t.Run names. These are unnecessary.
Change-Id: I1de903fcf0fcf46fc8e3acf2463e17239b8e3cc6
The MinDownloadTimeout of 950ms and delay of 1s were quite close, possibly
causing flaky behavior in TestVerifierSlowDownload.
Change-Id: I4f6c1554a118b21427357642abe39986fd0af38d
Classify errors related to invalid tokens for activating user accounts
so that a 400 status code is returned rather than a 500 status code.
Don't log all the errors at "error" level, only the ones related to
internal server errors; log the rest at "debug" level because
they pollute the production satellite logs with errors that are
misleading.
Change-Id: Id2bd737edba8550ce08965b51b8bf2540bd13ca4
Previously copying an object to its ancestor location (copy of copy)
broke the object and all copies.
This is fixed by calling the existing delete method rather than a
custom one when there is an existing object at the copy destination.
The check for existing object at destination has been moved to an
earlier point in FinishCopy.
metabase.DeleteObject exposes a transaction parameter so that it can be
reused within metabase.
Closes https://github.com/storj/storj/issues/4707
Uplink test at https://review.dev.storj.io/c/storj/uplink/+/7557
Change-Id: I418fc3337fa9f30146ccc1db456af168ae41c326
- instead of closing over the outer err variable, potentially
overwriting some errors or something, declare local variables.
- double check that we got the number of rows we expected to get
and error otherwise. this prevents a possible source of inserting
bogus rows into the database.
Change-Id: I30662be2727afe0a90e4215a182fedc2648d1169
Part of the delete query caused a full table scan of segment_copies. This
slowed down the system. This change should have the same semantics but
improved performance.
Part of https://github.com/storj/storj/issues/4898
Change-Id: I4afe23df05467eafc9c91591f47a7251a0f3dd31
Read the source object and write the destination object in the same
transaction, to prevent breaking the object because it was deleted
simultaneously.
This is probably the root cause of the metainfo loop halting from
2022-06-21 onwards, where 2 objects lost their root_piece_id during
copying.
Part of https://github.com/storj/storj/issues/4930
Change-Id: I9c45d56a7bfb48ecd5f4906ee1cca42922901e90
Returns only the user's own projects when we hit the GET user endpoint.
Fix for this issue
https://github.com/storj/storj/issues/4820
Change-Id: I546268fa3e5983a72f11f998803da5455c0035b4
The satellite caches the project bandwidth in Redis when it doesn't have it
(because it was never set or the key expired); however, it doesn't perform the
check-and-set-if-not-exists in a transaction. It also uses the increase
function, which increases the value if it exists and otherwise sets it.
As a result, multiple concurrent requests to the same project may
increase the cached total by multiples of the bandwidth usage
registered in the database rather than setting it: they may all check
whether the key exists before any of them has executed the increase, so
the first one to execute it sets the value while the others
increase it, leaving Redis with a bandwidth usage value that is a
multiple of the real one and making the satellite deny
downloads if it surpasses the project limit.
This commit changes the "update" of project bandwidth usage to an "insert",
using a Redis function that only sets the value if the key doesn't
exist. This solves the increase issue without overriding a value that
may already contain updates from other download requests which aren't
yet registered in the DB.
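The gist of the fix, sketched with go-redis SetNX; the satellite's live accounting cache wraps this differently, and the key format here is made up:
```go
package live

import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
)

// insertProjectBandwidthUsage writes the value read from the DB only if the
// key is absent, so a value already initialized (and possibly incremented by
// concurrent downloads) is never overwritten.
func insertProjectBandwidthUsage(ctx context.Context, client *redis.Client, projectID string, usedFromDB int64, ttl time.Duration) error {
	return client.SetNX(ctx, "project-bandwidth:"+projectID, usedFromDB, ttl).Err()
}
```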
Change-Id: I33e2fe462930b2fdb4061fc94002bd3544476f94
This test had an effective config.Reputation.AuditCount = 0, meaning all
nodes that had _any_ positive audit results were considered vetted.
Because of that, only one node in the test setup was "new". And that
node was marked as being in GE, so could not be returned by node
selection.
The reason the tests still worked is because of the node selection rule
that says "if there are no new nodes at all, just get all reputable
nodes to satisfy the request".
This commit makes it so half of the nodes are vetted and half new, which
makes the test somewhat more interesting (and means we aren't
concentrating too much on testing details of behavior when AuditCount is
0).
Change-Id: I09157b7dc20ecaddd2a6e60cfe146e9186e3603b
This change fixes the issue where the API generator would produce
different Go code for the same API definition upon each invocation
due to the random nature of map iteration.
Change-Id: I6770a10faf06311c24f541611c25d0b2b0f8e521
To avoid a regression with old uplink versions doing object move, we need to
remove the FinishMoveObject check for key and nonce. Instead, in the
FinishMoveObject SQL, check whether metadata is nil and, if so,
don't set the key and nonce. The same UPDATE clause should return the metadata,
and if metadata != nil we should do the same validation for key and nonce
to avoid storing a broken key and nonce while doing the move.
Resolves: https://github.com/storj/team-metainfo/issues/108
Change-Id: If723dfad899e9235f53559b71ee1c7fe49deb8b8
The users.Update method in the satellitedb package takes a console.User
as an argument. It reads some of the fields on this struct and assigns
the value to dbx.User_Update_Fields. However, you cannot optionally
update only some of the fields. They all will always be updated. This means
that if you only want to update FullName, you still need to read the
user info from the DB to avoid updating the rest of the fields to zero.
This is not good because concurrent updates can overwrite each other.
This change introduces a new struct type, UpdateUserRequest, which
contains pointers for all the fields that are updated by satellite db
users.Update. Now the update method will check if a field is nil before
assigning the value to be updated in the db, so you only need to set the
field you want updated. For nullable columns, the respective field is a
double pointer. This allows us to update a column to NULL if the outer
pointer is not nil, but the inner pointer is.
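A sketch of the shape of the new request type (field names here are illustrative, not the full set):
```go
package console

// UpdateUserRequest: a nil field is left untouched, a non-nil field is
// written, and for nullable columns the outer pointer says "update this
// column" while an inner nil means "set it to NULL".
type UpdateUserRequest struct {
	FullName  *string
	Email     *string
	ShortName **string // nullable column
}

// Example: update only the full name, leaving every other column as-is.
func onlyFullName(name string) UpdateUserRequest {
	return UpdateUserRequest{FullName: &name}
}
```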
Change-Id: I27f842d283c2711e24d51dcab622e57eeb9157f1
This change integrates the session management database functionality
with the web application. Claim-based authentication has been removed
in favor of session token-based authentication.
Change-Id: I62a4f5354a3ed8ca80272814aad2448f901eab1b
This change swaps net.IP.IsPrivate usages with a custom isPrivateIP to
unbreak the build, as we want to build for Go versions earlier than 1.17.
Change-Id: I44badbb487f35e43b8b0433ad0f3b9c87af718d4
There are multiple entries in the users table with the same email
address. This is because in the past users were able to register
multiple times if the email was not verified. This is no longer
the case. If a user tries to register with an unverified email
already in the DB, we send a verification email instead of
creating another entry. However, since these old entries in the
table with duplicate emails were never cleaned up, the email
reminder chore will send out email verification reminders to them.
A single person will get a separate email for each entry in the DB
that has their email address and status = 0.
Since the multiple entries with the same email problem was solved
a while ago, just add a constraint to GetUnverifiedNeedingReminder
to only select users created after a cutoff. Once the DB is
migrated to remove the duplicate emails, we can remove the cutoff.
github issue: https://github.com/storj/storj/issues/4853
Change-Id: I07d77d43109bcacc8909df61d4fb49165a99527c
TestRollupNoDeletes is very flaky (passes locally but fails in the main branch build).
The exact reason is not clear, but stopping the loop seems to be async; the following lines may not stop the loops immediately, which is a potential problem:
```
satellitePeer.Accounting.Rollup.Loop.Pause()
satellitePeer.Accounting.Tally.Loop.Pause()
```
Fortunately these tests check only the database interfaces. Instead of testplanet.Run we can run only satellitedbtest.Run, which is faster and more predictable (no background loops).
Another potential problem: the comment claims that the default of DeleteTallies is false:
```
// In testplanet the setting config.Rollup.DeleteTallies defaults to false.
```
But it seems to be true (rollup.go):
```
DeleteTallies bool `help:"option for deleting tallies after they are rolled up" default:"true"`
```
This is also fixed in the patch (as we need to set it explicitly), but TBH it could be fixed with testplanet, too.
Change-Id: Id7ec80d5c069bed2c556f4d001c71aa23fc5af23
If project.UserAgent is set, use this for bucket.UserAgent on bucket
creation. Otherwise, set bucket attribution as before (getting UserAgent
from request headers).
Tests were updated to create the bucket with a different user, added as
a project member. Otherwise, the tests do not catch the bug.
Change-Id: I7ecf79a8eac5957eed361cbea94823190f58b776
The ApplyUpdates() method on the reputation.DB interface acts like the
similar Update() method, but allows applying the changes from
multiple audit events instead of only one.
This will be necessary for the reputation write cache, which will batch
up changes to each node's reputation in order to flush them
periodically.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I44cc47767ea2d9423166bb8fed080c8a11182041
Implemented project delete endpoint for REST API.
Added project usage status check service method to indicate if project can be deleted.
Updated project invoice status check method to indicate if project can be deleted.
Change-Id: I57dc96efb072517144252001ab5405446c9cdeb4
Prevent network enumeration by rejecting private IPs in the PingMe and
Checkin endpoints.
Closes storj/storj-private#32
Change-Id: I63f00483ff4128ebd5fa9b7b8da826a5706748c9
We don't have a metric to track how many pending objects we have in the
system. This change uses the tally objects loop to collect pending
objects per bucket, and at the end it combines all buckets' values
into a single metric for all pending objects in the system.
Change-Id: Iac7a6bfb48854f7e70127d275ea8fdd60c4eb8b7
Add the storjscan wallets implementation to the satellite. The wallets interface allows you to add and claim new wallets, as called by the API. The storjscan-specific implementation of this interface uses a wallets DB to associate the user with a wallet address, as well as a storjscan client to request and associate new wallets to the satellite.
Change-Id: I54081edb5545d4e3ee07cf1cce3d3e87cc00c4a1
Update all the NPM dependencies used by the Admin UI.
The dev dependencies correspond to the ones that are currently used by
an svelte app generated with the latest svelte-kit version. They
deprecated some configuration options and changed some svelte
directives.
The only non-dev dependency is also updated to the latest published version.
Change-Id: I5f2192cab41e00efc3239237f8dc8f3d07816b63
script-src-elem is preferred over script-src in certain scenarios.
If it's absent, then the browser always uses script-src. By adding
script-src-elem, it ended up blocking Google reCAPTCHA.
Change-Id: I9cf96e71e69054c4a034ca189db84fbe8903a59b
Updated daily project usage query to return correct allocated traffic.
If allocated egress has expired then we return settled egress.
If not then we return allocated egress - dead egress.
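The rule, sketched with illustrative names:
```go
// allocatedForDay returns the egress reported for a single day.
func allocatedForDay(allocatedExpired bool, allocated, settled, dead int64) int64 {
	if allocatedExpired {
		return settled
	}
	return allocated - dead
}
```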
Fix for this issue
https://github.com/storj/storj/issues/4563
Change-Id: Ia15a50d3bb8d8cb1106936e17dbe0f1f5a40fa87
Fixed daily usage query returning single bucket usage.
We sum up bucket usages now.
Also fixed https://github.com/storj/storj/issues/4559.
Change-Id: I2eb6299f1ef500d68150879195011b6fbb5f37ed
TRUNCATE requires table recreation, which involves an 'online schema change' with crdb.
(With psql it might be faster than DROP; that was the motivation of the original change.)
An 'online schema change' is an async operation with crdb and it's eventually very slow, therefore we try to avoid it.
This reverts commit 15bed0ed0e81d54fe4ffac9928bdf648f5e06ec6.
Change-Id: I93e1ab470962be77e3458d74c8787442c9d7bee0
Add the admin API endpoint for disabling a user's multifactor
authentication to the satellite admin UI.
Remove a couple of commented code lines too.
Change-Id: Iaee7efe7a3d4d38bdd6541311447a9726806f0f1
We have a couple of support tickets so far that require us to
disable the MFA on accounts. Since we currently have no other
way than doing a SQL War Crime, it makes sense to add it to the
admin API.
Change-Id: Ib16735c1961380b04345a3495d4eebee5fa0bc41
Currently we have a bug in which we would require that a project of
a paid tier user needs to be two months unused before we can delete it.
This change fixes it and reduces it back to the normal next billing cycle.
Change-Id: I28610b6c45c68943fd4f2621233bccc06cab28a0
An older change plumbed the full console config in as a subconfig of
the admin API configuration. This bloated the generated satellite
configuration unnecessarily while also allowing for confusion/mistakes.
Change-Id: Icf49cc1f147711e37e85f6eac1143fab8ddf1659
When deleting an object that has been copied multiple times, we looked for an ancestor_stream_id by taking the min of all copies' stream_ids.
This change simplifies that process by picking any stream_id as the new ancestor, using 'DISTINCT ON'.
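Roughly the query shape (illustrative only, not the exact satellite SQL):
```go
// Pick an arbitrary remaining copy as the new ancestor in one pass,
// instead of computing min(stream_id) over all copies.
const pickNewAncestor = `
	SELECT DISTINCT ON (ancestor_stream_id) stream_id
	FROM segment_copies
	WHERE ancestor_stream_id = $1
`
```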
Fixes https://github.com/storj/storj/issues/4745
Change-Id: Iffb519b82d2ae2ed73af48fa0e86f87384e0158f
Add the billing DB to the satellite. This DB will hold all transactions on the user's account and can be used to compute the user's current usable account balance.
Change-Id: I056416efc169e5e5e30c9f30cd8bc766b7bc8073
Implemented new service method for generating API keys.
Implemented new endpoint.
Improved multiple endpoint groups handling.
Change-Id: Iba26fbf9123707b5b4c2d5e8c5a35d507404f24a
We are not using this method and most probably we
won't need to list objects with all statuses at once.
Removing for now.
Change-Id: I7aa0468c5f635ee2fb1fe51db382595c6343dd9c
- parallel deletion of 50 objects and their 50 copies (one copy per object).
This test is skipped because it creates deadlocks that are not automatically retried on postgres.
- parallel deletion of 1 object and its 50 copies.
Fixes https://github.com/storj/storj/issues/4745
Change-Id: Id7a28251c06bb12b5edcc88721f60bf7a4bc0492
testplanet executes cockroach and postgres tests in parallel, therefore using http.DefaultClient is safe only as long as we don't modify it.
TestActivationRouting modifies it (client.CheckRedirect=...), therefore it should use a local client instead of the default one.
Problem reported by a jenkins build:
```
==================
WARNING: DATA RACE
Write at 0x000003486af0 by goroutine 143:
storj.io/storj/satellite/console/consoleweb_test.TestActivationRouting.func1()
/home/jenkins/workspace/storj-testing-experiments/satellite/console/consoleweb/server_test.go:66 +0x378
storj.io/storj/private/testplanet.Run.func1.1()
...
Previous read at 0x000003486af0 by goroutine 104:
net/http.(*Client).checkRedirect()
/usr/local/go/src/net/http/client.go:494 +0xd73
net/http.(*Client).do()
/usr/local/go/src/net/http/client.go:691 +0xd31
net/http.(*Client).Do()
/usr/local/go/src/net/http/client.go:593 +0x204
storj.io/storj/satellite/console/consoleweb_test.TestActivationRouting.func1.1()
/home/jenkins/workspace/storj-testing-experiments/satellite/console/consoleweb/server_test.go:48 +0x1e5
storj.io/storj/satellite/console/consoleweb_test.TestActivationRouting.func1()
/home/jenkins/workspace/storj-testing-experiments/satellite/console/consoleweb/server_test.go:74 +0x49d
storj.io/storj/private/testplanet.Run.func1.1()
...
```
Change-Id: I73319a5a593e067b906ec1fda70a44ca1e5a49a2
This has been a cause of some confusion, even though the fields are
labeled as being copies of config values.
Having them be under a field explicitly named "Config" makes this
clearer, plus it allows the values to be passed in simply as a copy
of the Config struct from the satellite, rather than copying the fields
individually (which can be error-prone, particularly as the AuditCount
field in UpdateRequest is apparently not the same thing as the
AuditCount field in reputation.Config).
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I386953347b71068596618616934aa28e3245cdc1
Add the storjscan wallets DB to the satellite. For now this DB is a one-to-one mapping of the user's account to a storjscan wallet that can be used by the account holder to make payments on their Storj account.
relates to https://github.com/storj/storj/issues/4347
Change-Id: I6e65b15817b90ceb75641244f9bf173c3b4228a7
The two protobuf types are identical except that one is in our common/pb
package, and the other is in internalpb. Since the type is public
already, and there is no difference in the internal one, it seems better
to use the public one for all satellite needs.
There is also another type which is essentially identical, but which is
not a protobuf type, also called "AuditHistory". It looks like we don't
ever actually need to have a separate type from the protobuf one.
This change makes us use "storj/common/pb".AuditHistory for all of our
AuditHistory needs.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: If845fde21bb31c801db6d67ffc9a146d1617b991
The logo redirects to the homepage on the login, signup, forgot password, reset
password, and activate account pages.
Change-Id: I992aeae197004d620addd8d515cae1c1ca80a778
Tests intermittently fail with an error similar to:
```
--- FAIL: TestDeletePendingObject/Cockroach (15.85s)
test.go:380:
Error Trace: test.go:380
delete_test.go:221
Error: Should be zero, but was metabase.DeleteObjectResult{
Objects: []metabase.Object{
{
ObjectStream: {ProjectID: {0x0f, 0x40, 0x70, 0x41, ...}, BucketName: "txxywyg4", ObjectKey: "\xbb+$\x17\x80\xc6\xcaC\xa3\xdb\xc3z*\xa8\xbe\xaf", Version: 1, ...},
- CreatedAt: s"2022-05-20 14:40:15.995376773 +0200 CEST",
+ CreatedAt: s"2022-05-20 14:40:21.04949 +0200 CEST",
ExpiresAt: nil,
Status: 1,
... // 9 identical fields
},
},
Segments: {{RootPieceID: {0x01, 0x00, 0x00, 0x00, ...}, Pieces: {{...}}}, {RootPieceID: {0x01, 0x00, 0x00, 0x00, ...}, Pieces: {{...}}}},
}
Test: TestDeletePendingObject/Cockroach/with_segments
--- FAIL: TestDeletePendingObject/Cockroach/with_segments (0.68s)
```
Looks like we shouldn't have an assumption that all tests can be finished in 5 seconds, especially not in a highly parallel environment.
These tests use `time.Now` at the beginning and compare the time saved in the database (usually filled by the database).
The difference shouldn't be higher than 20 seconds (before this commit: 5 seconds) which assumes that the records are saved in this timeframe...
Change-Id: Ia6f52897d13f88c6857c05d728bf8e72ab863c9b
old bucket creation flow removed
new flow added
name and passphrase split into separate views
demo bucket will not be created automatically
bucket creation progress bar added
Change-Id: I2a1d7d77c3038caaafb3c06bdb0ac5dd1ad17599
This functionality will be needed in both packages, so here we move it
into the more general reputation-code package and export it for use in
satellitedb.
This also removes the related UpdateAuditHistory() signature from the
reputation DB interface, since it doesn't have anything to do with the
db. It doesn't need to be a method, either.
Finally, this changes the test for addAudit to be a plain test function
instead of using testplanet.Run(). It didn't need a whole testplanet
setup or any databases.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I90f6a909e5404f03ad776b95cfa2f248308c57c1
Recently we applied this optimization to the metrics observer and the time
used by its method dropped from 12m to 3m for us1 (220m segments).
It looks like it makes sense to apply the same code to all observers.
Change-Id: I05898aaacbd9bcdf21babc7be9955da1db57bdf2
If TestCommitInlineSegment tests are taking a longer time,
then the zombieDeadline created at the beginning of the test
can be too far in the past. Creating a zombieDeadline for each case
should avoid flakiness.
Change-Id: Ieb011e8e470f6f1c32cf9365c8ae819317de6738
Márton found out that DROP DATABASE is rather slow on CRDB, and it makes
a significant impact when running the whole testsuite. In sum of test
times it's ~2.5h compared to ~2h. And the end-to-end ~20m to ~16m.
This adds a new flag STORJ_TEST_COCKROACH_NODROP for enabling this
behavior in the CI environment.
Fixes https://github.com/storj/dev-enablement/issues/6
Change-Id: I5a6616c32dc6596a96ba3d203f409368307d7438
We can use PieceIDDeriver in all places where we are deriving IDs from
the same ID multiple times. We have several such places: gc, segment
deletion, segment validation, order limit creation. Using it should
save some resources.
Change-Id: I24668d516c0f7cea4aec6470614067734149501d
This will let us update our reputation cache when writing through to the
db.
Since the information is already being fetched from the db and returned
to the application, the extra cpu load here should be minimal.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I2b8619f2c0d541893c7d3e7d33b1863b96775ebd
We want to remind unverified users to verify their emails:
once after 24 hours has passed and again after 5 days has passed.
Add mailservice.Service to satellite core because it is needed by the
chore for sending emails. To add the mailservice.Service to the core,
we create a helper function in satellite/peer.go to avoid duplicating
the code in both api.go and core.go. In addition to the chore, this
change adds methods to users.DB to get unverified users in need of
reminder.
Change-Id: I4e515bdf43f922788b4f965b2efb34fa32288bd1
Modify tally to calculate how we need to update segments in the live accounting
cache with the UpdateProjectSegmentUsage method. Adjust accounting.ExceedsUploadLimits
to use only the cache for segment validation; if the cache returns 0 or the key is not found, then
we shouldn't reject such a project, as it's possible that we won't have this value before the first object iteration.
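The validation rule, sketched (not the actual ExceedsUploadLimits signature):
```go
// Reject only when the cache actually has a non-zero value that exceeds the
// limit; a missing or zero value may simply mean tally hasn't run yet.
func exceedsSegmentLimit(segmentUsage int64, cacheMiss bool, segmentLimit int64) bool {
	if cacheMiss || segmentUsage == 0 {
		return false
	}
	return segmentUsage >= segmentLimit
}
```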
https://github.com/storj/storj/issues/4744
Change-Id: I32c22d7fb71236e354653ba8719e029fc71f04c7
We added nodes.disqualification_reason recently, but we didn't add a
corresponding column in the reputations table (despite having a
corresponding `disqualified` column there).
Without this change, the (very useful and informative) assignments to
updateFields.DisqualificationReason in reputations.go have no effect.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I77404902ca64b56aed72f1de76b303fe82b76aab
The existing versionCollector metrics can tell us how many times various
metainfo endpoints are called, but they don't tell us how many bytes a
client is transferring. We currently can't collect precise information
on this, but we can collect information on how much planned traffic is
requested via order limits.
The implementation as provided is intended to measure object sizes
before erasure encoding is taken into account.
Change-Id: I2f1d2a7831630e8439ecf5342e933df259151792
Doing server-side copy operation should not affect user monthly
bandwidth. This test covers that case.
Fixes https://github.com/storj/storj/issues/4717
Change-Id: I84ffab96b84851f395ea3a34d88f7dba424ec440
When a new user registers, we send a verification request to their
email. Currently, if they do not verify their email, we take no further
action. We want to send these users reminders: one after about one day
and one after about 5 days. To do this we will use this new
verification_reminders column.
It will look something like this:
```
SELECT email FROM users
WHERE status = 0
AND (
(verification_reminders = 0 AND created_at < now() - INTERVAL '1 day')
OR (verification_reminders = 1 AND created_at < now() - INTERVAL '5 days')
)
```
Change-Id: If0620e08c97e9e337c9563481d665c5bd462693b
Initially we wanted to put into the cache segment usage values retrieved
directly from the metabase, but it caused performance issues. Now we
will be collecting segment usage from the tally object loop and those
values will be put into the cache, but because we cannot get those values
on demand we shouldn't expire the cache value at all, because the objects
loop requires a substantial amount of time to be executed.
Part of solution for https://github.com/storj/storj/issues/4744
Change-Id: I3b37e23badeecebed0c95064156e85b38038bfe2
We want to send email verification reminders to users from the satellite
core, but some of the functionality required to do so exists in the
satellite console service. We could simply import the console service
into the core to achieve this, but the service requires a lot of
dependencies that would go unused just to be able to send these emails.
Instead, we break out the needed functionality into a new service which
can be imported separately by the console service and the future email
chore.
The consoleauth service creates, signs, and checks the expiration of auth
tokens.
Change-Id: I2ad794b7fd256f8af24c1a8d73a203d508069078
Adds a new configuration for hcaptcha enabled, secretkey, and sitekey.
If both reCAPTCHA and hCaptcha are configured as "enabled", reCAPTCHA
will be used.
Change-Id: I73cc6e133d8da3555e0ed8b2b377cf9eb263e6dc
Fix for this customer issue
https://github.com/storj/customer-issues/issues/34
With this change we fetch bucket usage since the bucket's creation instead of using the project's createdAt timestamp.
Change-Id: Ic0ea5d169056a5bd64ed143d13954d794da6e1d2
Things that make debugging easier.
* Added logging to automatic link clicking to make it obvious when it
fails.
* Added monitoring to oidc.
* Made dbx create calls noreturn for oauth_*
Change-Id: I37397b4e84ce5bfd82954aed9c38fdfd52595f24
This will apply an appropriate "subsystem" label to goroutines which are
part of the core, api, repairer, admin, or gc subsystems.
It will also label goroutines whose job it is to watch for slow shutdown
of lifecycle groups (there are a lot of these).
Finally, this will also label goroutines whose job it is to wait on the
toplevel errgroup of a subsystem.
Change-Id: I560b5fff4a0101300d6c9a67609c2d80d7424486
Added account locking after 3 or more failed login attempts.
Includes both password and MFA failed attempts on login.
Unlock the account on successful password reset.
Change-Id: If4899b40ab4a77d531c1f18bfe22cee2cffa72e0
Implement a buffer for inserting repair items into the queue in a batch.
Part of https://github.com/storj/storj/issues/4727
Change-Id: I718472b2f2b1f4993c3d6f15c44923776407155a
TRUNCATE is faster than DELETE when deleting all rows.
As almost every metabase test case calls TestingDeleteAll, this change
should give some slight test speed-up.
Change-Id: Ib477962b6deb93edd60d6db2f1be6ede1b4b2381
Create an error class for the "pending object error" to distinguish
it from other errors and allow returning it as a "Not Found" DRPC
status code instead of an "Internal" status code.
"Internal" errors are logged at error level in the satellite, so this was
polluting the server logs aside from returning an inappropriate status
code.
Change-Id: I10a81adfc887c030c08a228158adc8815834b23c
Respond with the appropriate HTTP status code when the analytics
trigger-event handler receives an unauthorized request.
Apart from fixing the response status code, this will stop logging these
responses at ERROR level in our satellite logs.
Example of error message found in our satellite logs:
{
"insertId": "0ljf1cfn4xroxfd6",
"jsonPayload": {
"N": "console:endpoint",
"T": "2022-05-06T13:31:35.415Z",
"errorVerbose": "unauthorized: http: named cookie not present\n\tstorj.io/storj/satellite/console.GetAuth:72\n\tstorj.io/storj/satellite/console/consoleweb/consoleapi.(*Analytics).EventTriggered:60\n\tnet/http.HandlerFunc.ServeHTTP:2047\n\tstorj.io/storj/satellite/console/consoleweb.(*Server).withAuth.func1:488\n\tnet/http.HandlerFunc.ServeHTTP:2047\n\tgithub.com/gorilla/mux.(*Router).ServeHTTP:210\n\tstorj.io/storj/satellite/console/consoleweb.(*Server).withRequest.func1:495\n\tnet/http.HandlerFunc.ServeHTTP:2047\n\tnet/http.serverHandler.ServeHTTP:2879\n\tnet/http.(*conn).serve:1930",
"L": "ERROR",
"error": "unauthorized: http: named cookie not present",
"message": "unauthorized: http: named cookie not present",
"code": 500,
"S": "storj.io/storj/satellite/console/consoleweb/consoleapi.serveCustomJSONError\n\t/go/src/storj.io/storj/satellite/console/consoleweb/consoleapi/common.go:37\nstorj.io/storj/satellite/console/consoleweb/consoleapi.serveJSONError\n\t/go/src/storj.io/storj/satellite/console/consoleweb/consoleapi/common.go:23\nstorj.io/storj/satellite/console/consoleweb/consoleapi.(*Analytics).serveJSONError\n\t/go/src/storj.io/storj/satellite/console/consoleweb/consoleapi/analytics.go:75\nstorj.io/storj/satellite/console/consoleweb/consoleapi.(*Analytics).EventTriggered\n\t/go/src/storj.io/storj/satellite/console/consoleweb/consoleapi/analytics.go:62\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2047\nstorj.io/storj/satellite/console/consoleweb.(*Server).withAuth.func1\n\t/go/src/storj.io/storj/satellite/console/consoleweb/server.go:488\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2047\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210\nstorj.io/storj/satellite/console/consoleweb.(*Server).withRequest.func1\n\t/go/src/storj.io/storj/satellite/console/consoleweb/server.go:495\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2047\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2879\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1930",
"M": "returning error to client"
},
"resource": {
"type": "k8s_container",
"labels": {
"location": "us-central1",
"pod_name": "us-central1-satellite-api-77c47f5c5-dzrpj",
"project_id": "storj-prod",
"namespace_name": "satellite",
"container_name": "satellite",
"cluster_name": "us-central1-gke-manatee"
}
},
"timestamp": "2022-05-06T13:31:35.416050390Z",
"severity": "ERROR",
"labels": {
"k8s-pod/version": "v3",
"k8s-pod/app": "us-central1-satellite-api",
"compute.googleapis.com/resource_name": "gke-us-central1-gke--terraform-202110-97ff1891-t0fv",
"k8s-pod/service": "api",
"k8s-pod/pod-template-hash": "77c47f5c5"
},
"logName": "projects/storj-prod/logs/stderr",
"receiveTimestamp": "2022-05-06T13:31:37.419991630Z"
}
Change-Id: I7cfcfb500b7878c59b1d259683c92e8963e2dc3f
Co-authored-by: Stefan Benten <mail@stefan-benten.de>
Return storage and segment totals as a single result, instead of returning only storage
while the bandwidth and segment values are filtered out. https://github.com/storj/storj/issues/4744
Change-Id: I624d67ed5205ae21ecd5a2f39775f63ed042e629
TestSegmentInExcludedCountriesRepair and TestSegmentInExcludedCountriesRepairIrreparable were using 20 storage nodes.
This change makes them use 7 by adjusting the test redundancy scheme.
Change-Id: I1a44aa8b997d6edcc9a3305fdd0dac57e4d525b5
* Added a new feature flag for the new Access Grant Flow.
* Added 3 cards to the access grant view for S3, CLI and Access Grant to replace the old header.
* Added new formatting, text and icon for the Access Grant Delete popup modal.
Added documentation.
Replaced PUT request with POST request.
Added inline param support for PATCH request.
Replaced unix timestamp handling with RFC-3339 timestamp handling.
Added 'Bearer' method requirement for Authorization header.
Change-Id: I4faa3864051dd18826c2c583ada53666d4aaec44
When an application wants to interact with resources on behalf of
an end-user, it needs to be granted access. In OAuth, this is done
when a user submits the consent screen.
Change-Id: Id838772f76999f63f5c9dbdda0995697b41c123a
Version collector previously returned errors and logged them in the
calling code. It is cleaner to log inside version collector.
Change-Id: I52cb49a1ef53f3f1f51692ddb26ec095cfd0f100
We were already able to override (or not) metadata with this method,
but to be explicit we are introducing a new option to control storing
metadata with an object. A separate option should be less error prone.
https://github.com/storj/team-metainfo/issues/105
Change-Id: I4c5bce953a633a0009b05c5ca84266ca6ceefc26
"REST API" is a more accurate descriptor of the generated API in the
console package than "account management API". The generated API is very
flexible and will allow us to implement many more endpoints outside the
scope of "account management", and "account management" is not very well
defined to begin with.
Change-Id: Ie87faeaa3c743ef4371eaf0edd2826303d592da7
Extended the user update query so the prod owner can change a user's paid tier status, bandwidth, storage and segment limits.
Change-Id: I82768afd1e50f653a50f7020310ce1e91578d746
it's a bit weird that these code definitions are in storj/storj
instead of storj/crypthopper-go, but as it stands, we should make
sure this package knows about all the codes in use.
Change-Id: I8df4666a015098e2d2e536d2f6c8ca5317a4369c
We implemented the server-side copy feature and we would like to
confirm that it is not affecting the expired deletion service.
Resolves: https://github.com/storj/storj/issues/4698
Change-Id: Ia8ca27a7ab7764a48a0c85dc7be80a58bfc83729
Last part of the backwards compatible db migration to remove the "suspended" column.
Removes the exception which removes the "suspended" column in tests from `migrate_test.go`.
Adds DB migration to remove "suspended" column from 'nodes' and 'reputations' tables.
Change-Id: I02051279f6f4181e966c567919af0e774583f165
Set the disqualification reason when reputation stats are updated on DB.Update.
Added tests for DisqualifyNode and for disqualification cases which happens during Update.
Change-Id: I00130ab5d9722422805159ad2f183c205de60f7e
When an api server is processing a graceful exit (node is connected and
getting lists of pieces to transfer), and the api server is shut down,
it was incorrectly marking all pending graceful exits as complete. The
GE then either passed or failed depending on the ratio of successfully
transferred pieces to unsuccessful pieces. In at least one case, _no_
pieces were transferred at all before the GE was marked a success.
Change-Id: I62cfab54a2296572c2e654eb460b62f772b7a60b
Attribution is attached to bucket usage, but that's more granular than
necessary for the attribution report. This change iterates over the
bucket attributions, parses the user agent, converts the first entry
to lower case, and uses that as the key to a map which holds the
attribution totals for each unique user agent.
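The aggregation described here, sketched below; the real code uses the useragent parsing package rather than naive splitting:
```go
package attribution

import "strings"

type bucketUsage struct {
	UserAgent string
	Bytes     int64
}

// aggregateByUserAgent keys the totals by the lower-cased first entry of each
// bucket's user agent, collapsing the per-bucket granularity.
func aggregateByUserAgent(rows []bucketUsage) map[string]int64 {
	totals := make(map[string]int64)
	for _, row := range rows {
		fields := strings.Fields(row.UserAgent)
		if len(fields) == 0 {
			continue
		}
		totals[strings.ToLower(fields[0])] += row.Bytes
	}
	return totals
}
```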
Change-Id: Ib2962ba0f57daa8a7298f11fcb1ac44a8bb97875
Implemented new endpoint for project update using apigen.
Implemented new service method compatible with new generated api.
Change-Id: Ic0a7e0bbf3ea942275bd927d6e30cfb7e721e9c1
Migrate free tier users to have default limits if their limits were set to 0.
They were affected by the incorrect behavior of the Update user query.
Change-Id: I4c49c8d99b12dba2b9b0ab61b2175085976dcc95
We implemented the server-side copy feature and we would like to
confirm that it is not affecting the accounting/tally service.
Resolves https://github.com/storj/storj/issues/4697
Change-Id: I3944ea52c0acc68107ec15c1911750dc7d947501
Test case to verify that server-side copy doesn't affect
GE in any negative way.
Fixes https://github.com/storj/storj/issues/4699
Change-Id: I8c385767cca61499d46d9cb8de7318c56e5d7397
Added failed_login_count and login_lockout_expiration columns to the users table to track users' failed login attempts.
We want to prevent brute forcing of user logins, so this is the first step.
Change-Id: I06b0b9f5415a1922e08cd9908893b2fd3c26bca0
Use the same query when deleting a single object or multiple objects.
I have chosen not to deduplicate the row "scan" logic because
it is less complicated code, and deduplicating would expand this change to other
parts of the codebase.
Part of https://github.com/storj/storj/issues/4700
Change-Id: I7a958c78c903b2bddd72ca217971f7e8e02a0d0c
The initial space used for pieces is calculated, not retrieved
from storage nodes, and at the end of the test we also delete
the copies that became ancestors, to verify that all data
was removed from storage nodes.
Change-Id: I9804adb9fa488dc0094a67a6e258c144977e7f5d
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage with byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees so return segment-hours and object-hours too.
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
We implemented the server-side copy feature and we would like to
confirm that it is not affecting GC.
Fixes https://github.com/storj/storj/issues/4696
Change-Id: Id391f0badf5fce51f9910f0df732d477b07fa7ac