We need to be able to update just the remote_pieces column in the DB. This is
needed at least for the repair process.
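A minimal sketch of the kind of query this enables, assuming hypothetical table and column names (segments, remote_pieces, stream_id, position) rather than the actual metabase schema:

```go
// Package and names below are illustrative only.
package metabase

import (
	"context"
	"database/sql"
)

// updateRemotePieces overwrites only the remote_pieces column of a single
// segment, leaving all other columns untouched. The table/column names and
// the encoding of the pieces value are assumptions for this sketch.
func updateRemotePieces(ctx context.Context, db *sql.DB, streamID []byte, position uint64, pieces []byte) error {
	_, err := db.ExecContext(ctx, `
		UPDATE segments
		SET remote_pieces = $1
		WHERE stream_id = $2 AND position = $3
	`, pieces, streamID, position)
	return err
}
```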
Change-Id: I20dcc9b06babfefbbf102f32b1d14946379f26c2
It was designed to detect and remove zombie segments in the PointerDB.
This tool should no longer be relevant with the MetabaseDB.
Change-Id: I112552203b1329a5a659f69a0043eb1f8dadb551
We migrated satelliteDB off of Postgres and over to CockroachDB (crdb), but contention on the injuredsegments table was far too high, so we had to roll back to Postgres for the repair queue. A couple of things contributed to this problem:
1) crdb doesn't support `FOR UPDATE SKIP LOCKED`
2) the original crdb Select query was doing 2 full table scans and not using any indexes
3) the SLC Satellite (where we were doing the migration) was running 48 repair worker processes, each of which runs up to 5 goroutines that all try to select out of the repair queue, and this was causing a ton of contention.
The changes in this PR should help to reduce that contention and improve performance on CRDB.
The changes include:
1) Use an update/set query instead of select/update to capitalize on the new `UPDATE` implicit row locking ability in CRDB.
- Details: As of CRDB v20.2.2, there is implicit row locking with update/set queries (contention reduction and performance gains are described in this blog post: https://www.cockroachlabs.com/blog/when-and-why-to-use-select-for-update-in-cockroachdb/).
2) Remove the `ORDER BY` clause since this was causing a full table scan and also prevented the use of the row locking capability.
- While long term it is very important to `ORDER BY segment_health`, the change here is only supposed to be a temporary bandaid to get us migrated over to CRDB quickly. Since segment_health has been set to infinity for some time now (re: https://review.dev.storj.io/c/storj/storj/+/3224), it seems like it might be ok to continue not making use of this in the short term. However, long term this needs to be fixed with a redesign of the repair workers, possibly in the trusted delegated repair design (https://review.dev.storj.io/c/storj/storj/+/2602), with something similar to what is recommended here on how to implement a queue on CRDB (https://dev.to/ajwerner/quick-and-easy-exactly-once-distributed-work-queues-using-serializable-transactions-jdp), or by migrating to a RabbitMQ priority queue or something similar.
This PR's improved query uses the index to avoid full scans and locks the row it is going to update, and CRDB retries for us if there are any lock errors; the pattern is sketched below.
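A rough sketch of the update/set pattern, with illustrative table and column names rather than the real injuredsegments schema or the exact query in this PR:

```go
package repairqueue

import (
	"context"
	"database/sql"
)

// claimQuery claims the next injured segment in a single UPDATE ... RETURNING
// statement. Under CRDB v20.2.2+ the UPDATE takes an implicit row lock, so
// concurrent workers do not need SELECT ... FOR UPDATE SKIP LOCKED.
// Table and column names here are illustrative only, and there is no ORDER BY,
// matching the change described above.
const claimQuery = `
	UPDATE injuredsegments
	SET attempted = now()
	WHERE path = (
		SELECT path
		FROM injuredsegments
		WHERE attempted IS NULL OR attempted < now() - interval '1 hour'
		LIMIT 1
	)
	RETURNING path, data
`

// claimNext returns one segment for a repair worker to process, or
// sql.ErrNoRows when the queue is empty.
func claimNext(ctx context.Context, db *sql.DB) (path, data []byte, err error) {
	err = db.QueryRowContext(ctx, claimQuery).Scan(&path, &data)
	return path, data, err
}
```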
Change-Id: Id29faad2186627872fbeb0f31536c4f55f860f23
We need to be able to list all buckets in the DB without knowing the project ID.
This method will be used to list buckets for the metainfo loop
implementation based on the metabase.
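A hypothetical shape for such a method, assuming a simple cursor over (project_id, bucket_name); the real metabase interface may look different:

```go
// All names here are hypothetical; the real metabase API may differ.
package metabase

import "context"

// ListAllBucketsCursor marks where the previous page ended, so listing can
// resume across all projects without knowing any project ID up front.
type ListAllBucketsCursor struct {
	ProjectID  []byte
	BucketName string
}

// BucketsDB sketches the extra method the metainfo loop would need: iterate
// every bucket, ordered by (project_id, bucket_name), page by page.
type BucketsDB interface {
	ListAllBuckets(ctx context.Context, cursor ListAllBucketsCursor, limit int) (buckets []string, more bool, err error)
}
```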
Change-Id: Iac75af0eee4f31e80a15577575a8249cbca787b2
It turns out that running a docker image build for specific
arches is not possible from amd64 (e.g. installing ca-certificates).
Change-Id: I8b8f002b7e532fb4a0c6542d5b573c294c501068
- TestBucketNameValidation
- TestBatch
- TestCommitObjectMetadataSize
- TestIDs
TestOverwriteZombieSegments is removed as not relevant to metabase.
Change-Id: I13cf5abe342089960628f185061303fd4f9d09a4
WHAT:
api keys appearance is replaced with access grants
WHY:
last step of implementing access grants
Change-Id: Ibef391849c7185fa56627b482218c76fb2d31b46
WHAT:
updated overview step of the onboarding tour. It now shows upload data methods instead of tour steps
WHY:
new onboarding tour
Change-Id: I7ffe9b2b91c2e17dd0c27e5e80a15301f6de16aa
WHAT:
first step of updating onboarding tour - adding routes
WHY:
the onboarding tour is being reworked, so adding routes will make it easier to operate over its states
Change-Id: Ide830989e39a6222e975bd2a6106b0efbb3839f9
This also removes the
TestEndpoint_DeleteObjectPieces_ObjectWithoutLastSegment test case as it
does not seem relevant to metabase.
Change-Id: I06a0ecaa8232c10c15e433517a7ba056933bf858
This resolves an issue in the Uplink CLI where files uploaded with
multipart upload were listed with size 0.
Change-Id: I80e0b11a96f87ed6a87eb5301034c08dbc09e8aa
WHAT:
search for bucket names during the create access grant flow
WHY:
make the user able to search for the buckets they need, in case the number of buckets is too large.
Change-Id: I73bcaa160c7a1f433d8f0f7213999e7e40543bbc
The test-versions test currently takes 1h 40min to run each time. By
running each installation concurrently, we hope to reduce the execution
time for the whole test.
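The general idea, sketched in Go with errgroup; the actual test harness is a shell/CI setup, so the version list and the installVersion helper below are placeholders:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// installVersion stands in for whatever work one installation needs; it is a
// placeholder for illustration only.
func installVersion(ctx context.Context, version string) error {
	fmt.Println("installing", version)
	return nil
}

func main() {
	versions := []string{"v1.17.4", "v1.18.1", "main"} // illustrative versions

	group, ctx := errgroup.WithContext(context.Background())
	for _, version := range versions {
		version := version // capture loop variable (pre-Go 1.22)
		group.Go(func() error {
			// Each installation runs in its own goroutine instead of serially,
			// so total time approaches the slowest install rather than the sum.
			return installVersion(ctx, version)
		})
	}
	if err := group.Wait(); err != nil {
		panic(err)
	}
}
```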
Change-Id: I680c7d9945e982894b11825c9075c167f754e087
We should set the client-requested maxParts to MaxListLimit if it is
greater than that value instead of returning an error.
MinIO's default value for maxParts is 10,000, while the satellite's
MaxListLimit is 1,000. If we keep returning an error, any ListParts call
with the default maxParts will fail.
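The intended behavior amounts to a clamp rather than an error; names in this sketch are illustrative:

```go
// Constant and function names are illustrative.
package parts

// maxListLimit mirrors the satellite's MaxListLimit of 1,000 parts.
const maxListLimit = 1000

// clampMaxParts caps the client-requested maxParts (MinIO defaults to 10,000)
// at the satellite's limit instead of rejecting the request.
func clampMaxParts(requested int) int {
	if requested > maxListLimit {
		return maxListLimit
	}
	return requested
}
```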
Change-Id: I06739e1d8d8f96803eba491585395da0443aec04
We are no longer planning on implementing downtime penalization using
the method described in
docs/blueprints/archive/storage-node-downtime-tracking-deprecated.md.
Now, we are implementing the design described in
docs/blueprints/storage-node-downtime-tracking-with-audits.md.
This change removes the downtime estimation chores from the satellite
core as well as the package satellite/downtime. A future change will
remove the database table.
Change-Id: I1a1d3cf9dceeba36255d25243294865b89925518