There were two changes to this package: one that modified some things and
renamed access.go to main.go, and one that reduced the binary sizes against
main.go. Somehow the latter change was included on this branch but the
former was not, which left the folder containing both main.go and access.go.
Building the package then fails with duplicate definitions.
The easiest fix is to remove access.go, which makes the folder match the
contents of the current master branch.
Change-Id: I7070a0d25a0399cef3166b8b2189427bc55587e0
Import only the necessary files from the console package so that the generated wasm code is as small as possible.
This change gets the compiled wasm code down to 8.6MB uncompressed and 2MB when compressed with `gzip --best`.
https://review.dev.storj.io/c/storj/storj/+/3396
Change-Id: Ifdd4be285810757b46bbbe43327c0d0139e5f8f7
Remove a declared variable that is set but never read nor passed to any
function, so it is unused code.
Change-Id: I8daf9d1f71d29ab39d7a80011d1b4813ada1c67d
We need to be able to update just the remote_pieces column in the DB. This is
needed at least for the repair process.
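A minimal sketch of the idea, assuming a segments table keyed by stream ID and position (the table, key columns, and helper name below are assumptions, not the actual schema or API):

```go
package metabase

import (
	"context"
	"database/sql"
)

// UpdateSegmentPieces is a hypothetical helper that lets the repair process
// replace a segment's pieces without touching any other column.
func UpdateSegmentPieces(ctx context.Context, db *sql.DB, streamID []byte, position int64, remotePieces []byte) error {
	// Only remote_pieces is written; every other column stays as-is.
	_, err := db.ExecContext(ctx,
		`UPDATE segments SET remote_pieces = $3 WHERE stream_id = $1 AND position = $2`,
		streamID, position, remotePieces,
	)
	return err
}
```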
Change-Id: I20dcc9b06babfefbbf102f32b1d14946379f26c2
It was designed to detect and remove zombie segments in the PointerDB.
This tool is no longer relevant now that we use the MetabaseDB.
Change-Id: I112552203b1329a5a659f69a0043eb1f8dadb551
We migrated satelliteDB off of Postgres and over to CockroachDB (crdb), but contention on the injuredsegments table was far too high, so we had to roll back to Postgres for the repair queue. A couple of things contributed to this problem:
1) crdb doesn't support `FOR UPDATE SKIP LOCKED`
2) the original crdb Select query was doing 2 full table scans and not using any indexes
3) the SLC Satellite (where we were doing the migration) was running 48 repair worker processes, each of which runs up to 5 goroutines that all try to select out of the repair queue, causing a ton of contention.
The changes in this PR should help to reduce that contention and improve performance on CRDB.
The changes include:
1) Use an update/set query instead of select/update to capitalize on the new `UPDATE` implicit row locking ability in CRDB.
- Details: As of CRDB v20.2.2, there is implicit row locking with update/set queries (contention reduction and performance gains are described in this blog post: https://www.cockroachlabs.com/blog/when-and-why-to-use-select-for-update-in-cockroachdb/).
2) Remove the `ORDER BY` clause since this was causing a full table scan and also prevented the use of the row locking capability.
- While long term it is very important to `ORDER BY segment_health`, the change here is only supposed to be a temporary band-aid to get us migrated over to CRDB quickly. Since segment_health has been set to infinity for some time now (re: https://review.dev.storj.io/c/storj/storj/+/3224), it seems like it might be ok to continue not making use of this for the short term. However, long term this needs to be fixed with a redesign of the repair workers, possibly in the trusted delegated repair design (https://review.dev.storj.io/c/storj/storj/+/2602), something similar to what is recommended for implementing a queue on CRDB in https://dev.to/ajwerner/quick-and-easy-exactly-once-distributed-work-queues-using-serializable-transactions-jdp, or a migration to a RabbitMQ priority queue or something similar.
This PR's improved query uses the index to avoid full scans, locks the row it is going to update, and lets CRDB retry for us if there are any lock errors.
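A sketch of the shape of such a query, assuming the injuredsegments table described above (the exact columns, interval, and claiming logic are illustrative, not the actual implementation):

```go
package repairqueue

import (
	"context"
	"database/sql"
)

// claimNext marks one injured segment as attempted and returns its data in a
// single UPDATE statement, so CRDB's implicit row locking applies and no
// separate SELECT ... FOR UPDATE is needed.
func claimNext(ctx context.Context, db *sql.DB) ([]byte, error) {
	var data []byte
	err := db.QueryRowContext(ctx, `
		UPDATE injuredsegments
		SET attempted = now()
		WHERE attempted IS NULL OR attempted < now() - interval '6 hours'
		LIMIT 1
		RETURNING data`,
	).Scan(&data)
	return data, err
}
```

Note that `UPDATE ... LIMIT` is a CockroachDB extension; plain Postgres would need a different form of this query.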
Change-Id: Id29faad2186627872fbeb0f31536c4f55f860f23
We need to be able to list all buckets in the DB without knowing the project
ID. This method will be used to list buckets for the metainfo loop
implementation based on metabase.
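A rough sketch of what such a listing could look like, assuming a bucket_metainfos table ordered by (project_id, name); the table name, cursor shape, and function name are assumptions, not the actual API:

```go
package buckets

import (
	"context"
	"database/sql"
)

// Bucket identifies a bucket by the project that owns it and its name.
type Bucket struct {
	ProjectID []byte
	Name      string
}

// ListAllBuckets walks the buckets table in (project_id, name) order with no
// project filter, returning up to limit rows after the given cursor, so the
// metainfo loop can visit every bucket.
func ListAllBuckets(ctx context.Context, db *sql.DB, afterProjectID []byte, afterName string, limit int) ([]Bucket, error) {
	rows, err := db.QueryContext(ctx, `
		SELECT project_id, name
		FROM bucket_metainfos
		WHERE (project_id, name) > ($1, $2)
		ORDER BY project_id, name
		LIMIT $3`,
		afterProjectID, afterName, limit,
	)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var buckets []Bucket
	for rows.Next() {
		var b Bucket
		if err := rows.Scan(&b.ProjectID, &b.Name); err != nil {
			return nil, err
		}
		buckets = append(buckets, b)
	}
	return buckets, rows.Err()
}
```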
Change-Id: Iac75af0eee4f31e80a15577575a8249cbca787b2
- TestBucketNameValidation
- TestBatch
- TestCommitObjectMetadataSize
- TestIDs
TestOverwriteZombieSegments is removed as not relevant to metabase.
Change-Id: I13cf5abe342089960628f185061303fd4f9d09a4
This also removes the
TestEndpoint_DeleteObjectPieces_ObjectWithoutLastSegment test case as it
does not seem relevant to metabase.
Change-Id: I06a0ecaa8232c10c15e433517a7ba056933bf858
We should set the client-requested maxParts to MaxListLimit if it is
greater than that value, instead of returning an error.
MinIO's default value for maxParts is 10,000 while the satellite's
MaxListLimit is 1,000. If we return an error, ListParts with the default
maxParts will fail.
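A minimal sketch of the clamping behaviour (the function and parameter names are illustrative):

```go
// clampMaxParts caps the client-requested maxParts at the satellite's listing
// limit instead of rejecting the request.
func clampMaxParts(maxParts, maxListLimit int) int {
	if maxParts > maxListLimit {
		return maxListLimit // e.g. MinIO's default of 10,000 becomes 1,000
	}
	return maxParts
}
```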
Change-Id: I06739e1d8d8f96803eba491585395da0443aec04
We are no longer planning on implementing downtime penalization using
the method described in
docs/blueprints/archive/storage-node-downtime-tracking-deprecated.md.
Now, we are implementing the design described in
docs/blueprints/storage-node-downtime-tracking-with-audits.md.
This change removes the downtime estimation chores from the satellite
core as well as the package satellite/downtime. A future change will
remove the database table.
Change-Id: I1a1d3cf9dceeba36255d25243294865b89925518
We have some issues with the SUBSTRING function on CockroachDB, so for now we
are removing it from the SQL query and replacing it with Go code.
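A minimal sketch of the replacement, assuming the query now returns the full value and the truncation happens in Go (the helper name is illustrative):

```go
// firstN does in Go what SUBSTRING(value FROM 1 FOR n) did in the SQL query.
func firstN(value string, n int) string {
	if len(value) <= n {
		return value
	}
	return value[:n]
}
```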
Change-Id: I5be921211067d42e7d1a4997076bcfdbed9617a1
We want to stop using the serial_numbers table in satelliteDB. One of the last places using the serial_numbers table is when storagenodes settle orders: we look up the bucket name and project ID for the serial number in the serial_numbers table.
Now that we have support to add encrypted metadata into the OrderLimit, this PR makes use of that and now attempts to read the project ID and bucket name from the encrypted orderLimit metadata instead of from the serial_numbers table. For backwards compatibility and to ensure no errors, we still fall back to the old way of getting that info from the serial_numbers table, but this fallback will be removed in the next release as long as there are no errors.
All processes that create orderLimits must have orders.encryption-keys set. The services that create orderLimits (and thus need to encrypt the order metadata) are the satellite api process, the repair process, the audit service (core process), and graceful exit (core process). Only the satellite api process decrypts the order metadata when storagenodes settle orders. This means that the same encryption key needs to be provided in the config for the satellite api process, the repair process, and the core process, like so:
orders.include-encrypted-metadata=true
orders.encryption-keys="<encryptionKeyID>=<encryptionKey>"
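A sketch of the settlement-time fallback described above; the interface, method names, and types are assumptions, not the real orders API:

```go
package orders

import "context"

// metadataResolver knows how to recover the bucket name and project ID for a settled order.
type metadataResolver interface {
	// DecryptOrderMetadata decrypts the metadata embedded in the order limit.
	DecryptOrderMetadata(ctx context.Context, encrypted []byte) (bucket string, projectID []byte, err error)
	// LookupBySerial reads the same information from the legacy serial_numbers table.
	LookupBySerial(ctx context.Context, serial []byte) (bucket string, projectID []byte, err error)
}

// resolveBucketInfo prefers the encrypted order-limit metadata and only falls
// back to the serial_numbers table when decryption is not possible.
func resolveBucketInfo(ctx context.Context, r metadataResolver, encryptedMetadata, serial []byte) (string, []byte, error) {
	if len(encryptedMetadata) > 0 {
		if bucket, projectID, err := r.DecryptOrderMetadata(ctx, encryptedMetadata); err == nil {
			return bucket, projectID, nil
		}
	}
	// Backwards-compatible path, planned for removal in a later release.
	return r.LookupBySerial(ctx, serial)
}
```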
Change-Id: Ie2c037971713d6fbf69d697bfad7f8b672eedd66
iterator
This method replaces `deleteByPrefix`, as at the moment the only use of
that method was to delete objects in a bucket.
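A rough sketch of the idea, with hypothetical iterator and callback types (none of these names are the real metabase API):

```go
package metainfo

import "context"

// ObjectKey names an object inside the bucket being emptied.
type ObjectKey []byte

// ObjectIterator yields the keys of all objects in a single bucket.
type ObjectIterator interface {
	Next(ctx context.Context) (ObjectKey, bool)
	Err() error
}

// deleteBucketObjects removes every object the iterator yields, which is what
// deleteByPrefix was effectively used for before this change.
func deleteBucketObjects(ctx context.Context, it ObjectIterator, del func(context.Context, ObjectKey) error) error {
	for {
		key, ok := it.Next(ctx)
		if !ok {
			return it.Err()
		}
		if err := del(ctx, key); err != nil {
			return err
		}
	}
}
```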
Change-Id: I5266103672003fbd64f3847f53760b1ba0016fe2
Which database is accessed, and how it internally does migrations, is an
implementation detail and does not belong in the requirements interface.
Change-Id: Ia4a6994f39470063a96a8e5f3a1bd27aa79fe5cd