Jenkins doesn't do a very good job of identifying what has changed.
While it has a syntax to define patterns, it compares the current build with the previous build (which, in the case of git-verify, can be on a totally different branch) instead of checking the HEAD commit.
This patch introduces shell scripts to do this better:
* It doesn't depend on Jenkins any more
* It can be executed locally
* It can detect web changes properly (see the related change as an example).
Change-Id: I9d37775e3818c08c4aa96ffb78f84d57f28a2c95
As a reminder:
* These counters are for data with high cardinality
* We have a strict upper bound for memory usage
* They can be accessed from the /top monitoring interface
Example:
```
curl 172.20.0.10:11111/top
since ~ 2022-08-09T07:45:58Z
auth_request_count project=9094cff8-104e-4956-a367-97ea134b7e06 11.000000
auth_request_buckets 1.000000
auth_request_discarded 0.000000
auth_request_count partner=00000000-0000-0000-0000-000000000000 11.000000
auth_request_buckets 1.000000
auth_request_discarded 0.000000
```
Note: discarded is 0 --> we didn't hit the memory limit.
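As an illustration of these semantics, here is a minimal, self-contained sketch of a bounded tagged counter; the boundedCounter type and its fields are hypothetical, not the actual implementation:
```go
package main

import "fmt"

// boundedCounter counts per-tag values but caps memory by refusing new
// tags once maxBuckets is reached; refused increments are counted in
// discarded, matching the count/buckets/discarded output above.
type boundedCounter struct {
	maxBuckets int
	buckets    map[string]float64
	discarded  float64
}

func (c *boundedCounter) inc(tag string) {
	if _, ok := c.buckets[tag]; !ok && len(c.buckets) >= c.maxBuckets {
		c.discarded++ // memory limit hit: drop instead of growing
		return
	}
	c.buckets[tag]++
}

func main() {
	c := &boundedCounter{maxBuckets: 1, buckets: map[string]float64{}}
	for i := 0; i < 11; i++ {
		c.inc("project=9094cff8-104e-4956-a367-97ea134b7e06")
	}
	fmt.Printf("buckets %d discarded %f\n", len(c.buckets), c.discarded)
}
```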
Change-Id: I8db09b4aa61bade55cb324b84b7fbcb8f068c179
The new dashboard currently gets stuck on loading and displays an error when
it fails to get usage data. The failure happens on satelliteDb due to a CockroachDB
transaction error caused by reading data before using AS OF SYSTEM TIME in the
same transaction.
This change reverses the order of the daily usage queries to avoid this error,
and hides the loaders on the dashboard if/when an error occurs.
see: https://github.com/storj/storj/issues/5012
Change-Id: I06b6ee434f72242f9b7d21dec7aaf39d1d622f1e
I don't know why the Go people thought this was a good idea, because
this automatic reformatting is bound to do the wrong thing sometimes,
which is very annoying. But I don't see a way to turn it off, so best to
get this change out of the way.
Change-Id: Ib5dbbca6a6f6fc944d76c9b511b8c904f796e4f3
We made an optimization for segment loop observers to avoid
heavy monkit initialization on each call. It was applied to
frequently executed methods. Unfortunately, we used the wrong monkit
method to track function times: instead of mon.Task we used
mon.Func().
https://github.com/spacemonkeygo/monkit#how-it-works
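A minimal sketch of the corrected pattern, with the Task hoisted to package level to avoid per-call initialization (the function names are illustrative):
```go
package observer

import (
	"context"

	"github.com/spacemonkeygo/monkit/v3"
)

var mon = monkit.Package()

// Constructing the Task once at package level avoids the relatively
// expensive setup that mon.Task() performs on every invocation when
// used inline in a hot path.
var processSegmentTask = mon.Task()

func processSegment(ctx context.Context) (err error) {
	// mon.Task tracks the time spent in this function between this
	// deferred call and the function returning.
	defer processSegmentTask(&ctx)(&err)

	// ... segment processing ...
	return nil
}
```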
Change-Id: I9ca454dbd828c6b43ba09ca75c341991d2fd73a8
We noticed that the system contains very old pending objects that were
never deleted. The general rule is to delete them after some period of
inactivity. It turns out that all those objects were migrated to the
metabase from the previous DB schema. During this migration we didn't set
zombie_deletion_deadline to any value.
This change takes pending objects with the zombie deletion deadline set
to nil into account during the zombie deletion process.
I also checked across all production satellites, and the youngest pending
objects with a nil zombie_deletion_deadline are from 2021, so it is safe
to delete them.
Change-Id: Ie2b6a4b4e203c1750cf8408ee281c0631b263082
Change from DEBUG level to INFO level the logs that the trace request
middleware emits, because it looks like we don't log at DEBUG level in
production Satellite API pods.
To verify that assumption, I searched the last 7 days of logs collected
by the Google Logging service for all the Satellite API pods in US1, and
it didn't show any line.
Change-Id: I620009d70d59df46d524c8cee93851bd13eceeee
During an update to the billing DB, there is a special-case failure that can occur if multiple updates to the table happen concurrently. In this case, the update would normally fail silently due to the balance constraint during the update, and the subsequent insert for a new record fails because the user already exists in the table. The solution for this case is to simply retry the insert, with some limit to prevent infinite loops.
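A minimal sketch of the retry strategy under the stated assumptions; the BillingDB interface and its method names are hypothetical, not the actual satellitedb API:
```go
package billing

import "context"

// BillingDB is a hypothetical stand-in for the real billing table access.
type BillingDB interface {
	// UpdateBalance returns false when the constrained update matched no row.
	UpdateBalance(ctx context.Context, userID string, amount int64) (bool, error)
	// InsertBalance fails if a record for the user already exists.
	InsertBalance(ctx context.Context, userID string, amount int64) error
}

const maxRetries = 5

// applyBalanceChange retries the update/insert pair a bounded number of
// times so concurrent writers cannot force an infinite loop.
func applyBalanceChange(ctx context.Context, db BillingDB, userID string, amount int64) error {
	var err error
	for i := 0; i < maxRetries; i++ {
		var updated bool
		updated, err = db.UpdateBalance(ctx, userID, amount)
		if err != nil || updated {
			return err
		}
		// No existing row was updated; try inserting a new record. A
		// concurrent insert may win the race, in which case we loop
		// and retry the update.
		if err = db.InsertBalance(ctx, userID, amount); err == nil {
			return nil
		}
	}
	return err
}
```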
Change-Id: Ibe70fec2c386c25bd2484fe91f49a6a962357706
- The previously unused Endpoint.Request field now defines the form
of the request body.
- Path parameters (e.g. "id" in "/delete/{id}") are defined in
the Endpoint.PathParams field.
- Endpoint.Params has been renamed to Endpoint.QueryParams to
eliminate confusion.
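A hypothetical sketch of what an endpoint definition might look like with the renamed fields; the field names follow the commit, while the surrounding types and the example endpoint are illustrative:
```go
package apidef

// Param and Endpoint are illustrative stand-ins for the generator's types.
type Param struct {
	Name string
	Type string
}

type Endpoint struct {
	Method      string
	Path        string
	Request     interface{} // shape of the request body (previously unused)
	PathParams  []Param     // e.g. "id" in "/delete/{id}"
	QueryParams []Param     // formerly named Params
}

// DeleteProjectRequest is a hypothetical request body type.
type DeleteProjectRequest struct {
	Reason string `json:"reason"`
}

var deleteProject = Endpoint{
	Method:      "DELETE",
	Path:        "/delete/{id}",
	Request:     DeleteProjectRequest{},
	PathParams:  []Param{{Name: "id", Type: "uuid"}},
	QueryParams: []Param{{Name: "force", Type: "bool"}},
}
```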
Change-Id: Ifef51ca2f362c33086f0e43e936d50b0fdd18aa1
Logs out all current user sessions when a password is changed through both the
forgot password and change password methods.
Change-Id: Iaf9b4969aa45441591524906af326b9dec17939f
Split out the function to delete a batch of objects from a bucket, so
that we get metrics which give a rough indication of how long this
operation takes.
Part of https://github.com/storj/storj/issues/4957
Change-Id: I20a4ed5894217f4cd0b2f25aee297f0ecda57ab5
Don't terminate the expired objects loop or the zombie objects loop when
there is a DB error while selecting the objects to delete, because it
isn't critical and the loops will pick them up again in the next
iteration.
The exception is when the DB rows scan method returns an error, because
that's a symptom that the arguments passed to the method don't match the
order, number, or type of the query's columns, or that there is invalid
data in the DB.
Likewise, don't terminate these loops if there is a DB error when
deleting the objects, because the loops will pick them up in the next
iteration.
Because we no longer return those errors (so as not to terminate the
loop), we have to log them.
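A rough sketch of this policy under the stated assumptions; the helper functions and the sentinel error are illustrative stand-ins for the real code:
```go
package cleanup

import (
	"context"
	"errors"
	"time"

	"go.uber.org/zap"
)

// errRowsScan is a hypothetical sentinel for rows.Scan failures.
var errRowsScan = errors.New("rows scan failed")

// Stand-ins for the real DB operations.
func selectObjectsForDeletion(ctx context.Context) ([]string, error) { return nil, nil }
func deleteObjects(ctx context.Context, objects []string) error      { return nil }

// runDeletionLoop keeps iterating on DB errors, logging them instead of
// returning, and only terminates on a rows-scan error, which signals a
// query/arguments mismatch or invalid data rather than a transient failure.
func runDeletionLoop(ctx context.Context, log *zap.Logger, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
		objects, err := selectObjectsForDeletion(ctx)
		if err != nil {
			if errors.Is(err, errRowsScan) {
				return err // non-transient: terminate the loop
			}
			log.Error("selecting objects failed; retrying next iteration", zap.Error(err))
			continue
		}
		if err := deleteObjects(ctx, objects); err != nil {
			log.Error("deleting objects failed; retrying next iteration", zap.Error(err))
		}
	}
}
```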
Change-Id: I86bcf83d619345255840ae8f3db61620f044d2af
We have enabled the new project dashboard in production. Change the
default to true so that we do not need an explicit configuration in
prod.
Change-Id: I0f93773965283e7b0682f6586685224281cbf78c
We log metainfo object operations, and it looks like the log message
convention is `Object {operation}`; however, the `Object Download`
message didn't match the operation it was actually attached to, and the
actual download operation was logged as `Download Object`.
This commit changes the log message for the download object operation
to follow the format of the other object operation log messages and
fixes the log message for the Get Object operation.
To find these, I executed the following command at the root of the
repository to obtain the list of lines where we log object operations.
```
$ ag 'log\.Info\(".*Object.*",' --no-color
satellite/metainfo/endpoint_object.go
179: endpoint.log.Info("Object Upload", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "put"), zap.String("type", "object"))
336: endpoint.log.Info("Object Download", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "get"), zap.String("type", "object"))
557: endpoint.log.Info("Download Object", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "download"), zap.String("type", "object"))
791: endpoint.log.Info("Object List", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "list"), zap.String("type", "object"))
979: endpoint.log.Info("Object Delete", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "delete"), zap.String("type", "object"))
```
`ag` is a command-line tool similar to `grep`
Change-Id: I9072c5967eb42c397a2c64761d843675dd4991ec
We had a lot of flaky test failures from TestAuth. The error message (WHICH IS NOT VISIBLE IN JENKINS, only in tests.json):
```
FAIL: TestAuth_Register_NameSpecialChars/Postgres (1.04s)
panic: runtime error: index out of range [0] with length 0 [recovered]
panic: runtime error: index out of range [0] with length 0
goroutine 3473 [running]:
testing.tRunner.func1.2({0x235fe40, 0xc000fe6a08})
/usr/local/go/src/testing/testing.go:1209 +0x36c
testing.tRunner.func1()
/usr/local/go/src/testing/testing.go:1212 +0x3b6
panic({0x235fe40, 0xc000fe6a08})
/usr/local/go/src/runtime/panic.go:1047 +0x266
storj.io/storj/satellite/console/consoleweb/consoleapi_test.TestAuth_Register_NameSpecialChars.func1(0xc001a281a0, 0x289d650, 0xc001a30000)
/var/lib/jenkins/workspace/storj-gerrit-verify/satellite/console/consoleweb/consoleapi/auth_test.go:773 +0x785
storj.io/storj/private/testplanet.Run.func1.1({0x289c770, 0xc0001b8008})
/var/lib/jenkins/workspace/storj-gerrit-verify/private/testplanet/run.go:67 +0x732
storj.io/storj/private/testmonkit.RunWith({0x289c770, 0xc0001b8008}, {0x28d89b0, 0xc001a281a0}, {0x1, {0x0, 0x0}, {0x0, 0x0, 0x0}}, ...)
```
The root cause:
testplanet uses a simulated mail sender which, by default, asynchronously clicks all the registration links.
These tests create links and check the unverified users, but with bad luck the mail sender may have already clicked the link, which makes the user verified.
Change-Id: I17cd6bf4ae3e7adc223ec693976bb609370f0c44
Added string length limits for the registration partner and promo params.
Limits were added on both the client and server sides.
Issue: https://github.com/storj/storj-private/issues/44
Change-Id: Ifae04caad1775e0a8ca72ae7f9abcf0ea5fb564b
Implemented reCAPTCHA and hCaptcha for the login screen.
Slightly refactored the registration page implementation.
Made two separate login/registration captcha configs on the server side so the captchas can be swapped independently.
Issue: https://github.com/storj/storj/issues/4982
Change-Id: I362bd5db2d59010e90a22301893bc3e1d860293a
In an effort to distribute load on the reputation database, the
reputation write cache scheduled nodes to be written at a time offset by
the local nodeID. The idea was that no two repair workers would have the
same nodeID, so they would not tend to write to the same row at the same
time.
Instead, since all satellite processes share the same satellite ID
(duh), this caused _all_ workers to try and write to the same row at the
same time _always_. This was not ideal.
This change uses a random number instead of the satellite ID. The random
number is sourced from the number of nanoseconds since the Unix epoch.
As long as workers are not started at the exact same nanosecond, they
ought to get well-distributed offsets.
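A minimal sketch of the new offset derivation (the function name and window parameter are illustrative):
```go
package reputation

import "time"

// startupOffset spreads workers' writes across the flush window by
// seeding the offset from process start time in nanoseconds, instead
// of the satellite ID that every worker shares.
func startupOffset(window time.Duration) time.Duration {
	return time.Duration(time.Now().UnixNano()) % window
}
```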
Change-Id: I149bdaa6ca1ee6043cfedcf1489dd9d3e3c7a163
This change makes the authentication middleware reject any requests
that are not properly authenticated to prevent them from being
passed into endpoint-specific handlers.
Change-Id: I1f6b74f68fc7354e47fb825a128bad968129f420
Update the billing table to use generated IDs, to include the source and status fields, and to add a metadata jsonb field that can be used for fields specific to a particular type of billing transaction. Additionally, a new table was added to keep track of user balances.
The metabasetest package utils can be used by both tests and benchmarks
if we use the TestingT interface from the require package. This change
adjusts the metabasetest.CreateObject method accordingly.
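A short sketch of the pattern; the signature here is illustrative, not the exact metabasetest one:
```go
package metabasetest

import (
	"context"

	"github.com/stretchr/testify/require"
)

// CreateObject takes require.TestingT, which both *testing.T and
// *testing.B satisfy, so tests and benchmarks can share this helper.
func CreateObject(ctx context.Context, t require.TestingT, name string) string {
	require.NotEmpty(t, name, "object name must not be empty")
	// ... create the object via the metabase under test ...
	return name
}
```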
Change-Id: I3c138e2ef9873b804ab5b3402804efa409397a9f
`len(m.attached)` was used without locking.
Also, use gofumpt to make whitespace usage more consistent.
Change-Id: Ifa9deedc8451f0c54e84d6ac3c2bdc1807688989
When someone tries to create an account with an email that is already
associated with a verified account, send them an email with options to
sign in, create an account on another satellite, or reset password.
Change-Id: I844144d88b7356bd7064c4840c9441347a5368b0
Adding an interval_end_time column to the accounting_rollups table
to keep the last interval_end_time for each day's storage tallies.
Updates https://github.com/storj/storj/issues/4178
Change-Id: If7a8210c5e9fe2fc9df84b137a8b6e3db2471c58
Removed segment limit validation and checks in the metainfo endpoint and accounting/projectusage,
since the feature is live and segment limitation is now always in place.
Resolves: https://github.com/storj/storj/issues/4470
Change-Id: I8cf87cbbc40ac61262f9f05e52573d3ae6410611
With pointerdb, the object listing operation was optimized to skip
objects under prefixes for non-recursive listing. This change adopts
that optimization from the old code.
The main change is to drop the current page of results if we detect a prefix
that needs to be skipped, and to jump past this prefix with the next listing
query by setting the cursor to "some-prefix" + (byte('/')+1), which is
effectively "some-prefix0".
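A small sketch of that cursor computation (a hypothetical helper, not the exact metabase code):
```go
package listing

// afterPrefix returns the smallest key that sorts after every key under
// the given prefix by incrementing its trailing delimiter byte: since
// '/'+1 == '0', "some-prefix/" becomes "some-prefix0". It assumes the
// prefix is non-empty and ends with the delimiter.
func afterPrefix(prefix string) string {
	b := []byte(prefix)
	b[len(b)-1]++
	return string(b)
}
```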
Benchmark:
```
name                                                 old time/op  new time/op  delta
NonRecursiveListing/Postgres/listing_no_prefix-8     960µs ±11%   257µs ±12%   -73.19%  (p=0.008 n=5+5)
NonRecursiveListing/Postgres/listing_with_prefix-8   945µs ±11%   671µs ±12%   -28.97%  (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_no_prefix-8    4.31ms ± 8%  1.19ms ± 7%  -72.44%  (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_with_prefix-8  4.97ms ± 8%  3.35ms ±15%  -32.67%  (p=0.008 n=5+5)
```
Fixes https://github.com/storj/team-metainfo/issues/115
Change-Id: Iafdf3600d058abbaf441f792d32a7fc17cc08696
Previously, storage usage was not tracked in real time during copies.
Now it is.
Closes https://github.com/storj/storj/issues/4719
Change-Id: I0d536bf551d16208116c3aceac89ed590ec473bf
When signing up, a user can opt in to having sales contact them. This
change alters the way this flag is passed to Hubspot and Segment.
Hubspot sends a form submission request to create the user, followed by
a "custom behavioral event" with some additional user info.
Segment sends an "Identify" call followed by a "create user" event.
This change moves "have_sales_contact" to the form submission for
Hubspot, and the Identify call for Segment.
This simplifies the process of applying this field to a contact/user in
Segment and Hubspot.
Change-Id: I5e6871b3926a76fb24f97fb2d835a26720275072
When a user's bandwidth/storage limits are manually set to exceed the
paid tier defaults, attempting to update their project via the satellite
UI (e.g. to change the name/description) would result in an error.
This change modifies the limit checks for updating a project to remove
this issue.
https://github.com/storj/storj/issues/4892
Change-Id: I48853a3289b0ac51587f268a18c1b25743123fcf
The piece deletion service was using the KnownReliable method from
overlaycache to get node addresses for sending delete requests.
KnownReliable was always hitting the DB because this method was
not using a cache. This change uses the new DownloadSelectionCache
to avoid direct DB calls.
The change is not perfect because DownloadSelectionCache is not as
precise as the KnownReliable method and can select a few more nodes
to which we will send delete requests, but the difference should be
small and we can improve it later.
Updates https://github.com/storj/storj/issues/4959
Change-Id: I4c3d91089a18ac35ebcb469a56536c33f76e44ea
Currently we have a significant number of tallies that need to be
deleted together. Add a limit (by default 10k) to how many will
be deleted at the same time.
Change-Id: If530383f19b4d3bb83ed5fe956610a2e52f130a1
An object copy/move is done by 2 DRPC calls. It's possible that a new object was uploaded or moved to the source location between these calls. For copy, in that case, the segments end up with the wrong keys. This change adds an explicit check for that by comparing the streamId supplied by the user with the streamId in the database.
Fixes https://github.com/storj/storj/issues/4930
Change-Id: Id600456ce78fb4069b93644828a0b3eb85e23e16
We need to provide the ability to see bucket attribution on the gateway side
so customers can validate whether a bucket is attributed to them. Extended the
metainfo.ListBuckets request with UserAgent.
Fixes https://github.com/storj/storj/issues/4965
Change-Id: I5624874a7faa14cda06183ad44013e9ebb385b63
Trace all the requests that the HTTP API endpoints receive.
We want to trace them with Monkit because we want to break them down by
request type and response code to see whether they succeeded or failed.
Also log them at DEBUG level together with the client IP.
Change-Id: Ia7b013351c788f131e775818f27091f3014ea861
We retry a GET_REPAIR operation in one case, and one case only (as far
as I can determine): when we are trying to connect to a node using its
last known working IP and port combination rather than its supplied
hostname, and we think the operation failed the first time because of a
Dial failure.
However, logs collected from storage node operators along with logs
collected from satellites are strongly indicating that we are retrying
GET_REPAIR operations in some cases even when we succeeded in connecting
to the node the first time. This results in the node complaining loudly
about being given a duplicate order limit (as it should), whereupon the
satellite counts that as an unknown error and potentially penalizes the
node.
See discussion at
https://forum.storj.io/t/get-repair-error-used-serial-already-exists-in-store/17922/36
Investigation into this problem has revealed that
`!piecestore.CloseError.Has(err)` may not be the best way of determining
whether a problem occurred during Dial. In fact, it is probably
downright Wrong. Handling of errors on a stream is somewhat complicated,
but it would appear that there are several paths by which an RPC error
originating on the remote side might show up during the Close() call,
and would thus be labeled as a "CloseError".
This change creates a new error class, repairer.ErrDialFailed, with
which we will now wrap errors that _really definitely_ occurred during
a Dial call. We will use this class to determine whether or not to retry
a GET_REPAIR operation. The error will still also be wrapped with
whatever wrapper classes it used to be wrapped with, so the potential
for breakage here should be minimal.
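A compact sketch of the new classification; ErrDialFailed is named by the commit, while the surrounding dial code is illustrative:
```go
package repairer

import (
	"context"

	"github.com/zeebo/errs"
)

// ErrDialFailed marks errors that definitely occurred while dialing, so
// the retry logic can key off it instead of the unreliable
// !piecestore.CloseError.Has(err) check.
var ErrDialFailed = errs.Class("dial failure")

func dialPiecestore(ctx context.Context, dial func(context.Context) error) error {
	if err := dial(ctx); err != nil {
		// Wrap only errors from the Dial call itself; wrappers applied
		// elsewhere remain intact.
		return ErrDialFailed.Wrap(err)
	}
	return nil
}

func shouldRetry(err error) bool {
	return ErrDialFailed.Has(err)
}
```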
Refs: https://github.com/storj/storj/issues/4687
Change-Id: Ifdd3deadc8258f34cf3fbc42aff393fa545794eb
Added a new email HTML template.
It is sent when a user tries to reset a password with an unknown or unverified account.
Made a couple of minor config changes.
Issue: https://github.com/storj/storj/issues/4913
Change-Id: I730f48b3478e302d1e38e1f8a27c75f66a8ba6fd
Some nodes were added to the nodes table due to a bug in the QUIC-based
storagenode contact code. This is a tool to clean up these nodes.
Delete with batch-size 1k seems to take ~400ms on local CockroachDB.
Change-Id: Ic0c1180528c27952e19c431fc9cc327292a10a5f
Add a Payments method to the DepositWallets interface.
This exposes a payments retrieval API for a particular wallet to
other systems such as the console billing API.
Change-Id: Ifcb3a35514aab50be00f6360007954980b5d8b38
Use DownloadSelectionCache to avoid querying database for every
download.
This change only addresses downloads from users. The download selection
cache is not currently used for audit and repair.
Change-Id: I96a49e121dac0b4204f97592a63131edabd73fb5
Adding go.mod into node_modules is not sufficient, because npm install
quite often wipes them out. Similarly, when running npm install locally
it will remove them, causing the git state to be dirty.
Rather than having them committed, add them after running npm install.
Change-Id: Iaf21a9c6e198dc31fe50345ec5dee85b44617176
This is just a cleanup change to unblock libuplink in reorganizing types
which are aliases to storj types.
Change-Id: Id3edf13f1b0aef52d7606d545aa7a6594cf8d13f
Add go.mod to the node_modules folder; that way the Go compiler doesn't
need to scan the node_modules directories for any Go code.
Change-Id: I747909416490c847d6b4bfa3438fea66660fcd53
It seems the tests relied on time.Now(), which might cause some
discrepancies in calculations. Use a fixed time.Now() rather than
recalculating.
As a side fix, remove the "Test" prefix from t.Run names. These are unnecessary.
Change-Id: I1de903fcf0fcf46fc8e3acf2463e17239b8e3cc6
The MinDownloadTimeout of 950ms and the delay of 1s were quite close, possibly
causing flaky behavior in TestVerifierSlowDownload.
Change-Id: I4f6c1554a118b21427357642abe39986fd0af38d
Classify errors related to invalid tokens for activating user accounts
so that a 400 status code is returned rather than a 500 status code.
Don't log all the errors at "error" level, only the ones related to
internal server errors; log the rest at "debug" level, because they
pollute the production satellite errors with errors that are
misleading.
Change-Id: Id2bd737edba8550ce08965b51b8bf2540bd13ca4
Previously, copying an object to its ancestor location (a copy of a copy)
broke the object and all copies.
This fixes that by calling the existing delete method rather than a
custom one when there is an existing object at the copy destination.
The check for an existing object at the destination has been moved to an
earlier point in FinishCopy.
metabase.DeleteObject exposes a transaction parameter so that it can be
reused within metabase.
Closes https://github.com/storj/storj/issues/4707
Uplink test at https://review.dev.storj.io/c/storj/uplink/+/7557
Change-Id: I418fc3337fa9f30146ccc1db456af168ae41c326
- instead of closing over the outer err variable, potentially
overwriting some errors, declare local variables.
- double-check that we got the number of rows we expected, and error
otherwise. This prevents a possible source of inserting bogus rows
into the database.
Change-Id: I30662be2727afe0a90e4215a182fedc2648d1169
Part of the delete query caused a full table scan of segment_copies. This
slowed down the system. This change should have the same semantics but
improved performance.
Part of https://github.com/storj/storj/issues/4898
Change-Id: I4afe23df05467eafc9c91591f47a7251a0f3dd31
Read the source object and write the destination object in the same
transaction, to prevent breaking the object because it was deleted
simultaneously.
This is probably the root cause of the metainfo loop halting from
2022-06-21 onwards, where 2 objects lost their root_piece_id during
copying.
Part of https://github.com/storj/storj/issues/4930
Change-Id: I9c45d56a7bfb48ecd5f4906ee1cca42922901e90
Return only the user's own projects when we hit the GET user endpoint.
Fix for this issue:
https://github.com/storj/storj/issues/4820
Change-Id: I546268fa3e5983a72f11f998803da5455c0035b4
The satellite caches the project bandwidth in Redis when it doesn't have it
(because it was never set or the key expired); however, it doesn't perform
the check-and-set-if-not-exists in a transaction. It also uses the increase
function, which increases the value if it exists and otherwise sets it.
As a result, multiple concurrent requests to the same project may increase
the total by multiples of the bandwidth usage registered in the database
rather than setting it: each request may check that the key doesn't exist
before any other has executed the increase, so the first one sets the value
but the others keep increasing it. Redis then holds a bandwidth usage value
that is several times the real one, making the satellite deny downloads once
it surpasses the project limit.
This commit changes the "update" of the project bandwidth usage to an
"insert", using a Redis operation that only sets the value if the key doesn't
exist. This solves the increase issue while not overriding a value that may
already contain updates from other download requests which aren't yet
registered in the DB.
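A minimal sketch of that set-if-not-exists step, assuming the go-redis client; the key naming and TTL are illustrative:
```go
package cache

import (
	"context"
	"fmt"
	"time"

	"github.com/go-redis/redis/v8"
)

// seedProjectBandwidth stores the DB-registered usage only when the key
// is absent; concurrent increments already applied in Redis are kept.
func seedProjectBandwidth(ctx context.Context, client *redis.Client, projectID string, usedBytes int64, ttl time.Duration) error {
	key := fmt.Sprintf("project-bandwidth:%s", projectID)
	// SETNX: a no-op when the key already exists.
	_, err := client.SetNX(ctx, key, usedBytes, ttl).Result()
	return err
}
```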
Change-Id: I33e2fe462930b2fdb4061fc94002bd3544476f94
This test had an effective config.Reputation.AuditCount = 0, meaning all
nodes that had _any_ positive audit results were considered vetted.
Because of that, only one node in the test setup was "new", and that
node was marked as being in GE, so it could not be returned by node
selection.
The reason the tests still worked is because of the node selection rule
that says "if there are no new nodes at all, just get all reputable
nodes to satisfy the request".
This commit makes it so half of the nodes are vetted and half new, which
makes the test somewhat more interesting (and means we aren't
concentrating too much on testing details of behavior when AuditCount is
0).
Change-Id: I09157b7dc20ecaddd2a6e60cfe146e9186e3603b
This change fixes the issue where the API generator would produce
different Go code for the same API definition upon each invocation
due to the random nature of map iteration.
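A typical fix for this, shown here as an illustrative sketch rather than the generator's exact code, is to iterate the map in sorted key order:
```go
package apigen

import "sort"

// generateAll visits the endpoints in deterministic order so the
// generated Go code is identical across invocations.
func generateAll(endpoints map[string]string, emit func(name, def string)) {
	names := make([]string, 0, len(endpoints))
	for name := range endpoints {
		names = append(names, name)
	}
	sort.Strings(names) // map iteration order is random; sorting fixes it
	for _, name := range names {
		emit(name, endpoints[name])
	}
}
```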
Change-Id: I6770a10faf06311c24f541611c25d0b2b0f8e521
To avoid a regression with old versions of uplink's object move, we need to
remove the FinishMoveObject check for key and nonce. In the SQL in
FinishMoveObject, check whether metadata is nil; if it is, don't set the
key and nonce. The same UPDATE clause should return the metadata, and if
metadata != nil we should do the same validation for key and nonce,
to avoid storing a broken key and nonce while doing a move.
Resolves: https://github.com/storj/team-metainfo/issues/108
Change-Id: If723dfad899e9235f53559b71ee1c7fe49deb8b8
The users.Update method in the satellitedb package takes a console.User
as an argument. It reads some of the fields on this struct and assigns
the value to dbx.User_Update_Fields. However, you cannot optionally
update only some of the fields. They all will always be updated. This means
that if you only want to update FullName, you still need to read the
user info from the DB to avoid updating the rest of the fields to zero.
This is not good because concurrent updates can overwrite each other.
This change introduces a new struct type, UpdateUserRequest, which
contains pointers for all the fields that are updated by satellitedb
users.Update. Now the update method checks whether a field is nil before
assigning its value to be updated in the DB, so you only need to set the
fields you want updated. For nullable columns, the respective field is a
double pointer. This allows us to update a column to NULL if the outer
pointer is not nil but the inner pointer is.
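An illustrative sketch of the double-pointer pattern (the field set here is hypothetical):
```go
package console

// UpdateUserRequest uses pointer fields so nil means "leave unchanged".
type UpdateUserRequest struct {
	FullName *string // nil: don't touch FullName

	// For a nullable column, a double pointer distinguishes three cases:
	//   nil                      -> leave the column unchanged
	//   non-nil, inner nil       -> set the column to NULL
	//   non-nil, inner non-nil   -> set the column to the inner value
	ShortName **string
}
```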
Change-Id: I27f842d283c2711e24d51dcab622e57eeb9157f1
This change integrates the session management database functionality
with the web application. Claim-based authentication has been removed
in favor of session token-based authentication.
Change-Id: I62a4f5354a3ed8ca80272814aad2448f901eab1b
This change swaps net.IP.IsPrivate usages with a custom isPrivateIP to
unbreak the build, as we want to build for Go versions earlier than 1.17.
Change-Id: I44badbb487f35e43b8b0433ad0f3b9c87af718d4
There are multiple entries in the users table with the same email
address. This is because in the past users were able to register
multiple times if the email was not verified. This is no longer
the case. If a user tries to register with an unverified email
already in the DB, we send a verification email instead of
creating another entry. However, since these old entries in the
table with duplicate emails were never cleaned up, the email
reminder chore will send out email verification reminders to them.
A single person will get one separate email per entry in the DB
with their email and where status = 0.
Since the multiple entries with the same email problem was solved
a while ago, just add a constraint to GetUnverifiedNeedingReminder
to only select users created after a cutoff. Once the DB is
migrated to remove the duplicate emails, we can remove the cutoff.
github issue: https://github.com/storj/storj/issues/4853
Change-Id: I07d77d43109bcacc8909df61d4fb49165a99527c
TestRollupNoDeletes is very flaky (it passes locally but fails in the main branch build).
The exact reason is not clear, but stopping the loop seems to be async; the following lines may not stop the loops immediately, which is a potential problem:
```
satellitePeer.Accounting.Rollup.Loop.Pause()
satellitePeer.Accounting.Tally.Loop.Pause()
```
Fortunately, these tests check only the database interfaces. Instead of testplanet.Run we can run only satellitedbtest.Run, which is faster and more predictable (no background loops).
Another potential problem: a comment claims that the default of DeleteTallies is false:
```
// In testplanet the setting config.Rollup.DeleteTallies defaults to false.
```
But it seems to be true (rollup.go):
```
DeleteTallies bool `help:"option for deleting tallies after they are rolled up" default:"true"`
```
This is also fixed in the patch (as we need to set it explicitly), but TBH it could be fixed in testplanet, too.
Change-Id: Id7ec80d5c069bed2c556f4d001c71aa23fc5af23
If project.UserAgent is set, use this for bucket.UserAgent on bucket
creation. Otherwise, set bucket attribution as before (getting UserAgent
from request headers).
Tests were updated to create the bucket with a different user, added as
a project member. Otherwise, the tests do not catch the bug.
Change-Id: I7ecf79a8eac5957eed361cbea94823190f58b776
The ApplyUpdates() method on the reputation.DB interface acts like the
similar Update() method, but can allow for applying the changes from
multiple audit events, instead of only one.
This will be necessary for the reputation write cache, which will batch
up changes to each node's reputation in order to flush them
periodically.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I44cc47767ea2d9423166bb8fed080c8a11182041
Implemented the project delete endpoint for the REST API.
Added a project usage status check service method that indicates whether a project can be deleted.
Updated the project invoice status check method to indicate whether a project can be deleted.
Change-Id: I57dc96efb072517144252001ab5405446c9cdeb4