The flags weren't properly loading from config.
The code assumed that every node that's online for downloading also has
data uploaded to it -- which is not true.
Change-Id: Ifd65a47b9eca5b4841231928244fab17acbde6fb
This is another change to remove monkit calls from fast methods. Those
calls are visible in CPU profiles.
Change-Id: Ib3beba0dca6a6d93c3342b0994c580f78bbdd50b
This patch addresses the following issues:
1. Running the full migration in CockroachDB is quite slow. We already have an approach for unit tests to start from the latest snapshot. This patch makes it possible to use it for integration tests.
2. Migration requires executing a separate command, which makes it hard to run the application in containerized test environments (like storj-up) or from an IDE. This patch introduces a hidden flag to run the migration (see the sketch after this list).
3. Test user creation is painful. We do it by calling GraphQL + the admin API. Providing a test user option makes the integration tests significantly simpler (especially as the projectID -> access grant can be predictable).
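A minimal sketch of the hidden-flag idea, assuming cobra-style flag registration; the flag name `run-migration` and the surrounding command are illustrative, not the actual names:
```
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use: "satellite",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Hypothetical flag: run the migration in-process before start.
			if run, _ := cmd.Flags().GetBool("run-migration"); run {
				fmt.Println("running database migration before start")
			}
			return nil
		},
	}
	cmd.Flags().Bool("run-migration", false, "run DB migration before starting")
	// Hidden: usable by test environments, absent from --help output.
	_ = cmd.Flags().MarkHidden("run-migration")
	_ = cmd.Execute()
}
```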
Change-Id: I61010728727b490ea6aac32620f2da0484966727
Monkit calls for fast methods which are executed very frequently can
slow down the whole process. This change removes monkit calls that are
not used.
See https://review.dev.storj.io/c/storj/storj/+/8498 as an example of
speed improvement after removing monkit calls.
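A sketch of the pattern, with illustrative function names: instrumentation stays on infrequent entry points and is dropped from hot helpers called per row or per segment:
```
package metabase

import (
	"context"

	"github.com/spacemonkeygo/monkit/v3"
)

var mon = monkit.Package()

// Infrequently called: the monkit task is worth its cost here.
func runLoop(ctx context.Context) (err error) {
	defer mon.Task()(&ctx)(&err)
	return nil
}

// Hot path, called once per row: the task allocation and timing show
// up in CPU profiles, so the instrumentation is removed.
func convertRow(raw []byte) []byte {
	return raw // no mon.Task() here anymore
}
```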
Change-Id: If6567d80e05b748e6393b58a5142e43013107c61
When doing a server-side copy, delete the committed version of the target location if it already exists. Pending versions are not touched. The version of the copy is set to the highest already existing version + 1.
Fixes: https://github.com/storj/storj/issues/5071
Change-Id: I1d91ac17054834b1f4f0970a9fa5d58198c58a37
Assert that listing works with our new object consistency approach.
Half of https://github.com/storj/storj/issues/4868
Change-Id: I5e92f86122b50103cec7bf6d3b2c8ed103caceec
Before and after the segments loop we collect stats about the number of
entries in the segments and objects tables. We use the number of segments
to validate loop execution, but currently we are not using the number of
objects anywhere. This change drops the SQL query that counts objects as
it's not used anywhere at the moment.
Change-Id: I25ce77758870beb0daa5c0e21084a4c633a26f15
* Disallow too large a listing limit, which would cause a lot of memory
to be consumed.
* Fix throttling logic and add a test.
* Fix read error handling; depending on the concurrency it can return
the NotFound status either in the Read or Close.
Change-Id: I778f04a5961988b2480df5c7faaa22393fc5d760
We will introduce new logic for creating new objects (BeginObject).
Instead of using a single version internally (1) we will select the first
available version during object creation. Because we need to be sure
that everything is wired up correctly, we need a feature flag to
control whether the new feature is enabled.
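A sketch of the feature flag, assuming the cfgstruct-style config used elsewhere in the satellite; the field name and tag values are illustrative:
```
package metainfo

// Config sketch: a flag guarding the new BeginObjectNextVersion path.
type Config struct {
	// When false, BeginObject keeps using the fixed internal version 1;
	// when true, the first available version is selected at creation.
	UseBeginObjectNextVersion bool `help:"use BeginObjectNextVersion when creating objects" default:"false"`
}
```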
Change-Id: If0f8496397130811f43bf9db9fdcc2b30cd2e4ca
Assert that listing pending objects works with our new object consistency
approach.
Half of https://github.com/storj/storj/issues/4868
Change-Id: Ic7bf3b20db57e64853d0464d7dc0da5441efd56f
The current metainfo.ListObjects implementation uses a metabase iterator to list objects.
In the non-recursive case, it used to retrieve all the corresponding rows and then discard the entries that did not fit the listing request.
In some edge cases (each prefix contains more than batchsize objects/sub-prefixes) this can lead to unnecessary calls to the db.
This change defines metabase.ListObjects, which aims at retrieving only prefixes (but not objects under them) and objects by modifying the SQL query.
In this version, it is not optimized on the database side. Cockroach will still have to go through all rows under a prefix, so there is still room for improvement.
metainfo.ListObjects is not currently using this method as we would like to assess its performance on the QA satellite first.
Fixes https://github.com/storj/storj/issues/5088
Change-Id: Ied3a9210871871d9d4a3096888d3e40c2dceed61
They are needed for the segment-verify tool.
Also rename some of the conversion methods to make clear
which of them have side effects.
Change-Id: Ie9a0952548e9ed5068c7a30c2fd2134b07139bca
If source and destination are the same, do nothing and return the original object.
Later, we will have to handle the case when metadata is changed, but for now it's not possible through libuplink.
An issue has been created for this (https://github.com/storj/storj/issues/5168)
Change-Id: I91cf48afeec498d3b2c219fa0d3baf2163cff384
We need a method for getting a list of segments from the metabase,
without converting the aliases and omitting all inline segments.
Change-Id: I26d919c675fc285ab03a35b327edd9b5c8bbe4b0
We plan to replace metabase.BeginObjectExactVersion usage in
metainfo.BeginObject with metabase.BeginObjectNextVersion. To make this
switch as simple as possible it would be nice to have the same results
for both methods. This change extends the return value of
BeginObjectNextVersion to the whole object struct. Tests were also
adjusted to be more like the metabase.BeginObjectExactVersion tests.
Part of https://github.com/storj/storj/issues/4871
Change-Id: I4db99d74af07e5a73757b55233e0bbdc7b99d565
Add a method similar to metabase.DeleteObjectExactVersion which deletes
the last committed object.
Closes https://github.com/storj/storj/issues/4872
Change-Id: Ia9f8c227dc59575bf8ed297886b35536097028b4
We are preparing to use object versions internally and to do
that we need to prepare different parts of the system to handle
object versions different than '1'. This change adjusts the code
responsible for server-side move and copy.
What was done:
* begin methods for move and copy now use GetObjectLastCommitted
to find the object
* results from the begin move and copy operations now contain the
version, to be able to map the object correctly with the finish operation
* begin methods put the version into the satellite stream id and
finish methods use this version as a parameter instead of a hardcoded
value
Fixes https://github.com/storj/storj/issues/4867
Change-Id: I1380911279c21e10a3fff0342793efd2e73eafad
Currently we are tracking loop time with RunOnce method
metrics but it may give inaccurate values if the loop is waiting
for an observer for a very long time. We had such a case with GC,
which is the only loop observer in the GC peer and it joins the loop
every 5 days.
Change-Id: I08546c912e00c3641488de6a5d75948fe75c8e99
Updated the metabase.UpdateObjectMetadata method to always set metadata for the last committed object.
Closes https://github.com/storj/storj/issues/4870
Change-Id: I060683e31efcaf3e2531fea143cf0567e5ff5f73
This change reverts satellite/metabase/iterator.go of 7390f389c to the
previous version (without optimization) but leaves added benchmarks in
place.
We noticed that this optimization doesn't work and actually elevates
listing times for most buckets, hence the revert until we come up with a
better idea.
Benchmarks:
name old time/op new time/op delta
NonRecursiveListing/Postgres/listing_no_prefix-8 1.30ms ± 4% 4.52ms ± 4% +246.92% (p=0.008 n=5+5)
NonRecursiveListing/Postgres/listing_with_prefix-8 3.26ms ± 3% 4.44ms ± 2% +36.19% (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_no_prefix-8 618µs ± 3% 2225µs ± 2% +259.94% (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_with_prefix-8 1.81ms ± 5% 2.60ms ± 5% +43.96% (p=0.008 n=5+5)
Updates storj/team-metainfo#115
Change-Id: I96e4e7a563b188df478f8489027dc0042469b839
This is the longest metabase test at the moment and it would be nice to
speed it up a bit to improve overall test execution time.
Change-Id: I86da8e0e593d20024b3ec778cbeab34a4613151f
Restored GetObjectLatestVersion and renamed it to GetObjectLastCommitted.
Added test cases to cover server-side copy.
Closes https://github.com/storj/storj/issues/4866
Change-Id: I343b339a60152b8fb92fda97baf80bd8fe60d631
I don't know why the go people thought this was a good idea, because
this automatic reformatting is bound to do the wrong thing sometimes,
which is very annoying. But I don't see a way to turn it off, so best to
get this change out of the way.
Change-Id: Ib5dbbca6a6f6fc944d76c9b511b8c904f796e4f3
We noticed that in the system we have undeleted, very old pending
objects. The general rule is to delete them after some inactivity. Turns
out that all those objects are objects migrated to metabase from the
previous DB schema. During this migration we didn't set
zombie_deletion_deadline to any value.
This change takes into account pending objects with zombie deletion
deadline set to nil during the zombie deletion process.
I also checked across all production satellites and the youngest pending
objects with nil zombie_deletion_deadline are from 2021 so it is safe
to delete them.
Change-Id: Ie2b6a4b4e203c1750cf8408ee281c0631b263082
Split out the function to delete a batch of objects from a bucket, so
that we get metrics which give a rough indication of how long this
operation takes.
Part of https://github.com/storj/storj/issues/4957
Change-Id: I20a4ed5894217f4cd0b2f25aee297f0ecda57ab5
Don't terminate the expired objects loop or the zombie objects loop when
there is a DB error while selecting the objects to delete, because it
isn't critical and the loops will pick them up again in the next
iteration.
The exception is if the DB rows scan method returns an error, because
that's a symptom that the arguments passed to the method don't match
the columns' order, number, or type in the query, or that there is
invalid data in the DB.
Also don't terminate these loops if there is a DB error when
deleting the objects, because the loops will pick them up in the next
iteration.
Because we now swallow those errors instead of returning them, we have
to log them.
Change-Id: I86bcf83d619345255840ae8f3db61620f044d2af
metabasetest package utils can be used by both tests and benchmarks
if we use the TestingT interface from the require package. This change
adjusts the metabasetest.CreateObject method accordingly.
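A minimal sketch of the idea (parameters trimmed for illustration): require.TestingT is implemented by both *testing.T and *testing.B, so one helper serves tests and benchmarks alike:
```
package metabasetest

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// CreateObject sketch: accepting require.TestingT instead of *testing.T
// lets benchmarks reuse the helper.
func CreateObject(t require.TestingT, numberOfSegments int) {
	require.GreaterOrEqual(t, numberOfSegments, 0)
	// ... insert the object and its segments into the metabase ...
}

func BenchmarkCreateObject(b *testing.B) {
	for i := 0; i < b.N; i++ {
		CreateObject(b, 3)
	}
}
```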
Change-Id: I3c138e2ef9873b804ab5b3402804efa409397a9f
With pointerdb the listing objects operation was optimized to skip
objects from prefixes for non-recursive listing. This change adopts
this optimization from the old code.
The main change is to drop the current page results if we detect a prefix
that needs to be skipped, and to jump with the next listing query past this
prefix by setting the cursor to "some-prefix" + (byte('/')+1), which is
effectively "some-prefix0".
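A sketch of the cursor computation (type and helper name are illustrative; production code differs in details):
```
package metabase

// ObjectKey mirrors the metabase key type for this sketch.
type ObjectKey string

// afterPrefix returns a cursor positioned right past every key under
// prefix. The prefix ends with the delimiter '/', and byte('/')+1 ==
// byte('0'), so "some-prefix/" becomes "some-prefix0".
func afterPrefix(prefix ObjectKey) ObjectKey {
	last := prefix[len(prefix)-1] + 1
	return prefix[:len(prefix)-1] + ObjectKey([]byte{last})
}
```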
Benchmark:
name old time/op new time/op delta
NonRecursiveListing/Postgres/listing_no_prefix-8 960µs ±11% 257µs ±12% -73.19% (p=0.008 n=5+5)
NonRecursiveListing/Postgres/listing_with_prefix-8 945µs ±11% 671µs ±12% -28.97% (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_no_prefix-8 4.31ms ± 8% 1.19ms ± 7% -72.44% (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_with_prefix-8 4.97ms ± 8% 3.35ms ±15% -32.67% (p=0.008 n=5+5)
Fixes https://github.com/storj/team-metainfo/issues/115
Change-Id: Iafdf3600d058abbaf441f792d32a7fc17cc08696
Previously there was no real-time accounting of storage usage
during copies. Now there is.
Closes https://github.com/storj/storj/issues/4719
Change-Id: I0d536bf551d16208116c3aceac89ed590ec473bf
An object copy/move is done by 2 DRPC calls. It's possible a new object was uploaded or moved to the source location between these calls. For copy, in that case the segments end up with the wrong keys. This change adds an explicit check for that by comparing the streamId supplied by the user with the streamId in the database.
Fixes https://github.com/storj/storj/issues/4930
Change-Id: Id600456ce78fb4069b93644828a0b3eb85e23e16
Previously copying an object to its ancestor location (copy of copy)
broke the object and all copies.
This fixes this by calling the existing delete method rather than a
custom one when there is an existing object at the copy destination.
The check for existing object at destination has been moved to an
earlier point in FinishCopy.
metabase.DeleteObject exposes a transaction parameter so that it can be
reused within metabase.
Closes https://github.com/storj/storj/issues/4707
Uplink test at https://review.dev.storj.io/c/storj/uplink/+/7557
Change-Id: I418fc3337fa9f30146ccc1db456af168ae41c326
- instead of closing over the outer err variable, potentially
overwriting some errors, declare local variables.
- double check that we got the number of rows we expected
and error otherwise. This prevents a possible source of inserting
bogus rows into the database.
Change-Id: I30662be2727afe0a90e4215a182fedc2648d1169
Part of the delete query caused a full table scan of segment_copies. This
slowed down the system. This change should have the same semantics but
improved performance.
Part of https://github.com/storj/storj/issues/4898
Change-Id: I4afe23df05467eafc9c91591f47a7251a0f3dd31
Read the source object and write the destination object in the same
transaction, to prevent breaking the object because it was deleted
simultaneously.
This is probably the root cause of the metainfo loop halting from
2022-06-21 onwards, where 2 objects lost their root_piece_id during
copying.
Part of https://github.com/storj/storj/issues/4930
Change-Id: I9c45d56a7bfb48ecd5f4906ee1cca42922901e90
To avoid a regression with old versions of uplink's object move, we need
to remove the FinishMoveObject check for key and nonce. In SQL,
FinishMoveObject now checks whether metadata is nil and, if so, doesn't
set the key and nonce. The same UPDATE clause returns the metadata, and
if metadata != nil we do the same validation for key and nonce to avoid
writing a broken key and nonce during a move.
Resolves: https://github.com/storj/team-metainfo/issues/108
Change-Id: If723dfad899e9235f53559b71ee1c7fe49deb8b8
TRUNCATE requires table recreation, which involves an 'online schema change' with crdb.
(With psql it might be faster than DELETE; that was the motivation of the original change.)
An 'online schema change' is an async operation with crdb and it's eventually very slow, therefore we try to avoid it.
This reverts commit 15bed0ed0e81d54fe4ffac9928bdf648f5e06ec6.
Change-Id: I93e1ab470962be77e3458d74c8787442c9d7bee0
When deleting an object that has been copied multiple times, we look for an ancestor_stream_id by taking the min of all copies' stream_ids.
This change simplifies this process by picking any stream_id as the new ancestor by using 'distinct on'.
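A rough sketch of the query shape (not the exact production SQL): any copy can serve as the promoted ancestor, so DISTINCT ON picks one row per ancestor instead of computing MIN:
```
package metabase

// promoteAncestors is an illustrative query, simplified from the
// production version.
const promoteAncestors = `
	SELECT DISTINCT ON (ancestor_stream_id)
		ancestor_stream_id, stream_id
	FROM segment_copies
	WHERE ancestor_stream_id = ANY($1)
`
```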
Fixes https://github.com/storj/storj/issues/4745
Change-Id: Iffb519b82d2ae2ed73af48fa0e86f87384e0158f
We are not using this method and most probably we
won't need to list objects with all statuses at once.
Removing for now.
Change-Id: I7aa0468c5f635ee2fb1fe51db382595c6343dd9c
Tests intermittently fail with an error similar to:
```
--- FAIL: TestDeletePendingObject/Cockroach (15.85s)
test.go:380:
Error Trace: test.go:380
delete_test.go:221
Error: Should be zero, but was metabase.DeleteObjectResult{
Objects: []metabase.Object{
{
ObjectStream: {ProjectID: {0x0f, 0x40, 0x70, 0x41, ...}, BucketName: "txxywyg4", ObjectKey: "\xbb+$\x17\x80\xc6\xcaC\xa3\xdb\xc3z*\xa8\xbe\xaf", Version: 1, ...},
- CreatedAt: s"2022-05-20 14:40:15.995376773 +0200 CEST",
+ CreatedAt: s"2022-05-20 14:40:21.04949 +0200 CEST",
ExpiresAt: nil,
Status: 1,
... // 9 identical fields
},
},
Segments: {{RootPieceID: {0x01, 0x00, 0x00, 0x00, ...}, Pieces: {{...}}}, {RootPieceID: {0x01, 0x00, 0x00, 0x00, ...}, Pieces: {{...}}}},
}
Test: TestDeletePendingObject/Cockroach/with_segments
--- FAIL: TestDeletePendingObject/Cockroach/with_segments (0.68s)
```
Looks like we shouldn't assume that all tests can finish in 5 seconds, especially not in a highly parallel environment.
These tests use `time.Now` at the beginning and compare it with the time saved in the database (usually filled by the database).
The difference shouldn't be higher than 20 seconds (before this commit: 5 seconds), which assumes that the records are saved in this timeframe...
Change-Id: Ia6f52897d13f88c6857c05d728bf8e72ab863c9b
If TestCommitInlineSegment tests take longer,
then the zombieDeadline created at the beginning of the test
can be too far in the past. Creating a zombieDeadline for each case
should avoid flakiness.
Change-Id: Ieb011e8e470f6f1c32cf9365c8ae819317de6738
TRUNCATE is faster than DELETE when deleting all rows.
As almost every metabase test case calls TestingDeleteAll, this change
should give some slight test speed-up.
Change-Id: Ib477962b6deb93edd60d6db2f1be6ede1b4b2381
Create an error class for the "pending object error" to distinguish
it from other errors and allow returning it as a "Not Found" DRPC
status code instead of an "Internal" status code.
"Internal" errors are logged on the satellite, so this was
polluting the server logs aside from returning an inappropriate status
code.
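A minimal sketch of the mapping, assuming zeebo/errs and the common rpcstatus package; the class name and helper are illustrative:
```
package metainfo

import (
	"github.com/zeebo/errs"

	"storj.io/common/rpc/rpcstatus"
)

// ErrPendingObject marks errors caused by operations on pending objects.
var ErrPendingObject = errs.Class("pending object")

// toRPCError maps the dedicated class to NotFound; everything else
// stays Internal (and gets logged).
func toRPCError(err error) error {
	if ErrPendingObject.Has(err) {
		return rpcstatus.Error(rpcstatus.NotFound, err.Error())
	}
	return rpcstatus.Error(rpcstatus.Internal, err.Error())
}
```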
Change-Id: I10a81adfc887c030c08a228158adc8815834b23c
We were already able to override (or not) metadata with this method
but to be explicit we are introducing a new option to control storing
metadata with an object. A separate option should be less error-prone.
https://github.com/storj/team-metainfo/issues/105
Change-Id: I4c5bce953a633a0009b05c5ca84266ca6ceefc26
Use the same query when deleting a single object or multiple.
I have chosen not to deduplicate the row "scan" logic because
it is less complicated code and this change would expand to other
parts of the codebase.
Part of https://github.com/storj/storj/issues/4700
Change-Id: I7a958c78c903b2bddd72ca217971f7e8e02a0d0c
S3 allows for overwriting an object when using server-side copy.
This change makes overwriting the destination part of the atomic server-side copy operation so that
if copy fails, the old object is still available.
All the segments of the existing destination are deleted. If this destination object is an ancestor of another object, a new ancestor is promoted.
Fixes https://github.com/storj/storj/issues/4607
Change-Id: I85d1250850bb71867586ac230c8275a0cc1b63c3
When deleting a bucket, make sure that object copies in other buckets are
promoted to new ancestor and left in a working state.
Closes https://github.com/storj/storj/issues/4591
Change-Id: I019d916cd6de5ed51dd0dd25f47c35d0ec666af6
Uplink is fixed and now we should always get both key and nonce
or both empty.
Fixes https://github.com/storj/storj/issues/4646
Change-Id: I65dca2d4d5a10787c2fecad39e301121f1ae242a
The latest CRDB version didn't work with our server-side copy deletion
query and we had to rewrite it.
Fixes https://github.com/storj/storj/issues/4655
Change-Id: I66d5c4abfc3c3f53dea8bc01ae11cd2b020e4289
The method was never used in production and it's not certain that
it will be used at all. Let's drop it and restore it if needed.
Fixes https://github.com/storj/storj/issues/4480
Change-Id: Ifd780d0096b67be7e72dff84bdcf1d957e0b48b5
So far we assumed that metadata key/nonce cannot be empty at all,
but at some point we adjusted the code to accept empty metadata/key/nonce
to save DB space.
This change adjusts how we process the nonce in
FinishMoveObject/FinishCopyObject. We can use storj.Nonce directly,
which makes the code cleaner. It also fixes an issue in FinishMoveObject
where we didn't convert the nonce correctly to []byte.
Part of the change is disabling validation for key and nonce until
uplink is adjusted. We need to change uplink to always send
both key and nonce or neither of them. Validation will be restored
as soon as the uplink change is merged.
https://github.com/storj/storj/issues/4646
Change-Id: Ia1772bc430ae591f54c6a9ae0308a4968aa30bed
We have an issue with the latest CRDB. A single query cannot modify
the same table multiple times. Now the build is blocked.
This change unblocks the build by:
* adjusting the query for inserting into the repair queue
* temporarily removing code for deletion for server-side copy
* temporarily disabling backward compatibility tests for CRDB
Change-Id: Idd9744ebd228e5dc05bdaf65cfc8f779472a975d
If B is a copy of A, and C is a copy of B, then in the segment_copies table, it should appear that C is a copy of A.
Fixes https://github.com/storj/storj/issues/4538
Change-Id: I7e6b03f7584597cf616cd1e0cd0156386771d207
In the initial server-side copy implementation, we inserted segments one by one. This PR inserts them all at once.
Fixes https://github.com/storj/storj/issues/4476
Change-Id: I776dba99be38a0eef73366e8e9287cbb794003dc
For server-side copy we adjusted one method, DeleteObjectExactVersion.
Other deletion methods won't be used directly in the code at the moment.
We will adjust the other methods later or decide if we will need them at
all.
To handle deletion of objects with copies, or just copies, correctly we
need to use the DeleteObjectExactVersion method in two places:
* removing an object before upload
* explicit object deletion
This change also changes the DeleteObjectExactVersion method to
delete pending objects, because we need this functionality to
delete an object before a new upload.
https://github.com/storj/storj/issues/4481
Change-Id: Ieff5cc95732bb70ed8cc0ecdd62e03c929857c02
We were not checking if we were provided an empty StreamID.
Furthermore, this change returns the object copy with the correct createdAt field.
Change-Id: Iefc563c34ae9d8c1e233895155c1718bf905df91
Copy object functionality should support setting new metadata for the
copy. This change adjusts the FinishCopyObject method to set new
metadata when the OverrideMetadata field is set to true.
Fixes https://github.com/storj/storj/issues/4483
Change-Id: Ica37cb57e8edae301cdc483fbda4f3ddba5d2702
Getting a copied segment by GetLatestObjectLastSegment needs to retrieve inline_data or remote_alias_pieces and other information from the original segment.
Resolves https://github.com/storj/storj/issues/4478
Change-Id: I8c7822c343b1ec3e04683f31a20f71e3097b4b4a
Return segments when creating a test object so that it can be checked
that the correct segments are remaining after a delete action.
Change-Id: Ifc245948935ba278806e887672c03abc5f2c2654
Updates metabase and metainfo to return object metadata with the
FinishCopyObject request.
https://github.com/storj/storj/issues/4474
Change-Id: I32cba5c20a943272e9b5964df1b3d6463ad212dc
We would like to disable in production those parts of the code
which are now mixed with the new server-side copy logic.
Change-Id: Iff50682bc9545207330f58dd19b5eee53d404d7f
Part of server-side copy.
For getting a copied segment GetSegmentByPosition needs to retrieve inline_data or remote_alias_pieces and other information from the original segment.
Fixes https://github.com/storj/storj/issues/4477
Change-Id: I283cbc546df6e1e2fa34a803c7ef280ecd0f65d7
Currently the metainfo/metabase DB connections are missing the proper
application_name needed to differentiate and filter queries on the DB
side for analytics.
Without it, it is very time-consuming to correlate processes and their load.
This change adds the check on DB connection init and passes the fallbacks
in all places, to catch connection strings that do not set it.
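A minimal sketch of the fallback check (helper name is illustrative): make sure every connection string carries application_name so DB-side analytics can attribute queries to a process:
```
package main

import (
	"fmt"
	"net/url"
)

// ensureApplicationName adds application_name to a connection string
// when the caller didn't set it.
func ensureApplicationName(connstr, fallback string) (string, error) {
	u, err := url.Parse(connstr)
	if err != nil {
		return "", err
	}
	q := u.Query()
	if q.Get("application_name") == "" {
		q.Set("application_name", fallback)
		u.RawQuery = q.Encode()
	}
	return u.String(), nil
}

func main() {
	s, _ := ensureApplicationName("postgres://host/db?sslmode=disable", "satellite-api")
	fmt.Println(s) // postgres://host/db?application_name=satellite-api&sslmode=disable
}
```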
Change-Id: Iea5cea8658bc63778ff89038e5c1c352bf482cfd
It can be useful to compare object copies created during unit tests.
They appear in the segment_copies table as the pair (stream_id, ancestor_stream_id).
Change-Id: Id335c3ff7084fe30346456d27e670aff329154ea
Part of server-side copy implementation.
Creates the methods BeginCopyObject and FinishCopyObject in satellite/metabase/copy_object.go.
This commit makes it possible to copy objects and their segments in metabase.
https://github.com/storj/team-metainfo/issues/79
Change-Id: Icc64afce16e5e0e15b83c43cff36797bb9bd39bc
The segment_copies table has (stream_id, ancestor_stream_id) as its primary key,
but it should have only stream_id as primary key, since a stream_id can only have one ancestor_stream_id.
https://github.com/storj/team-metainfo/issues/84
Change-Id: I0218295e86e449c9dfefa5a21319f3083bf80afd
Part of server-side copy implementation.
Adds a new table to metabase for keeping references between
copied segments and their ancestors.
Fixes https://github.com/storj/team-metainfo/issues/84
Change-Id: I436d4b533a3d951bcd752e222e3b721cee7f0844
Before this change we were returning the full DB error message.
That can be very confusing for the end user. This change translates the
error message into a more user-friendly version and also fixes the DRPC
error status code.
Fixes https://github.com/storj/team-metainfo/issues/76
Change-Id: I29b06ab4ba50a0d14db7a822a2906d95d65ab524
We don't have object versions other than 1, so at the moment
this method is not needed. Also, using GetObjectExactVersion
should be slightly more performant.
Change-Id: I78235d8ae22594cc1d6345dabcc915f41cd7797b
We enabled this chore on different satellites
for testing. It works fine, so we can enable it
by default for all configurations.
Change-Id: I987639685d8de5c7e5798adca30fe26bdac9e1d1
This new method will be used with a mechanism to limit the number
of segments per project, similar to limiting buckets or bandwidth.
This is only one of multiple changes we will do to implement this
limitation.
Change-Id: Ia516c2a006ad1d7b4431d780679be9d809848f4b
We implemented filtering of system metadata to
reduce DB load in case the user needs only a list of
objects, but we missed that the encryption configuration
is needed to decrypt the object key.
This change always includes the encryption configuration in the
list objects results.
Change-Id: Iaf7b3d8441480e3ad787f19a28f1b648584d5f7d
To resolve the problem with the inability to set metadata during MPU on
the gateway, we are adding setting metadata with BeginObject. This
change also makes metadata optional for CommitObject. We need
this functionality to not override metadata set with BeginObject in
the case when metadata is not set with CommitObject.
Another reason is that we would like to not set metadata at all if the
user didn't specify metadata. At the moment we always set some bytes
for metadata fields, e.g. an empty EncryptedMetadata field can have a key
and nonce set.
Change-Id: Ifee25b7718eb1f919119db9b698b29d8b5ebe2ec
To reduce database load, add an option to omit as many object properties
as possible when listing objects.
Change-Id: I817633801b00629a4042d1d1bd2389ee581953de
Turns out that the S3 protocol sets object metadata with the initial
request, in our case BeginObject, so we need to modify metabase
methods to accept metadata at the beginning and at the end.
BeginObject methods will always set metadata (niled or not niled).
One additional improvement for metadata fields was introduced:
fields are validated as optional if EncryptedMetadata was not set.
This change currently doesn't have any implications and it's a base for
other changes. Metadata is not set with the BeginObject metainfo endpoint
yet.
Change-Id: I1f768407bc3428500b0d30ee188257420d953001
Multipart upload limits added. The last part has no size limit.
Max number of parts: 10000, min part size: 5 MiB.
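A sketch of the limits as constants (names are illustrative), assuming the common memory package:
```
package metainfo

import "storj.io/common/memory"

// The last part is exempt from the minimum size check.
const (
	maxNumberOfParts = 10000
	minPartSize      = 5 * memory.MiB
)
```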
Change-Id: Ic2262ce25f989b34d92f662bde720d4c4d0dc93d
This is so that multipart upload allows overwriting parts. Our
consistency model is more relaxed because part overwrites
won't be atomic if they consist of multiple segments, but at
least this allows overwriting at all.
Change-Id: I21dac4c24046e05efe74e6c6fd189a02c95eb6c8
This change introduced problems with server-side move, so
let's revert it for now. The problem was found when the latest
version of storj/storj was used in uplink tests.
This reverts commit 1ef06fae99.
Change-Id: I4d4fad5d1ea04ba15ff9d7bd765f7e078e9187c2
We were using mixed types for nonce fields. Protobuf
has storj.Nonce, metabase has []byte. This change
is a refactoring to use storj.Nonce everywhere
it's possible.
Change-Id: Id54bd8481f30c721cdaf3df79206d25e7cfdab55
We were manually converting ObjectKey fields to []byte to use them with
SQL queries, but we can just implement the Value method to convert them automatically.
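A minimal sketch of the idea via database/sql's driver.Valuer:
```
package metabase

import "database/sql/driver"

// ObjectKey mirrors the metabase key type for this sketch.
type ObjectKey string

// Value implements driver.Valuer so ObjectKey can be passed directly
// as a query argument instead of converting with []byte(...) manually.
func (o ObjectKey) Value() (driver.Value, error) {
	return []byte(o), nil
}
```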
Change-Id: I6d346f4b59718e1e8ef37cd9f95e613b864b42cd
We would like to verify that zombie object/segment deletion works fine.
We need some metric for that. Figuring out the number of deleted
objects is harder, so let's leave that for later.
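A sketch of the metric, assuming monkit; the meter name is illustrative:
```
package metainfo

import "github.com/spacemonkeygo/monkit/v3"

var mon = monkit.Package()

// recordZombieSegmentsDeleted counts segments removed by the zombie
// deletion chore so the behavior can be monitored.
func recordZombieSegmentsDeleted(count int64) {
	mon.Meter("zombie_segments_deleted").Mark64(count)
}
```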
Change-Id: Ic99e2ce93256130b7c51f514824fddc009655075
Currently loops wait for the coalesce duration for TriggerWait.
Let's skip the coalesce when we trigger it manually.
Change-Id: If5bacd4e263d233f1f3ea41b989922d2ed5a48d4
Server-side move is extended with moving between buckets; for this reason
we change the bucket name for the object in the db.
Change-Id: Ie21bcccc170e6ff14dcd8053fdb86fdf6d8438a0
Not all errors from RunOnce can be retried. The context can be cancelled
with several different errors, e.g. timeout. Ensure we stop the loop
when context has errored, because none of the queries will succeed
when it has failed.
Change-Id: If3ff11f11a6f43c0d67633be1cfaf23e3e9e55f3
Second method needed to perform server-side move. It updates the
metadata key and nonce and all segments' keys and nonces.
Change-Id: Ia43b26622a13048269f0ae9e1524b345db112adb
First of two methods needed to perform server-side move. It gets the
metadata key and nonce and all segments' keys and nonces and returns
all of that to uplink.
Change-Id: Ied2c79559e77d3f63091c4d61948f2d6a2147d67
The current default is equal to the hardcoded maximum batch size
for loop listing. With this change we bump the maximum
batch size, but the current default won't change. We will be able to
increase it later without the need for a source code change.
Change-Id: I2744a87be28af4157f58ede73455682f61733bc1
We were using this method for the metainfo loop, but now the
segment loop is not using it, so we can drop it.
Change-Id: I60c5b4f86a619259906d8c2ba76e665b8715be75
At some point we moved the metabase package outside Metainfo
but we didn't do that for the satellite structure. This change
refactors only tests.
When uplink is adjusted we can remove the old entries in the
Metainfo struct.
Change-Id: I2b66ed29f539b0ec0f490cad42c72840e0351bcb
Additional test case where the user uploads 3 segments,
each within a 23h interval; when the zombie deletion process
is executed, nothing is deleted because the last segment
was uploaded less than 24h ago.
Change-Id: I7426d6fe2c7e9b88c054a01408910c986bcf8d5f
Added an options flag to define the time after which an object won't be
marked as inactive. All segments' CreatedAt times need to be below this
value for the object to be treated as inactive.
Change-Id: Ib5cffc776c6ee1b62b51eb8595438f968b42528c
This change syncs the batchSizeLimit and ListLimit constants to prevent
throwing away results returned while listing with the maximum listing
limit.
Change-Id: Ie2425542d945cb88653dcc34c079737bb32320d4
To optimize memory consumption we were consuming
segment data while processing results from the delete
query. Turns out that there is a chance the query will be
rolled back if something goes wrong while reading the
results. In such a case it's possible to delete pieces while the
object/segment is still in the DB.
This change removes piece deletion from the problematic
place. Pieces are still deleted in batches but are not
limited at the moment. To avoid memory issues the object
deletion batch was decreased.
Change-Id: Icb3667220f9c25f64b73cf71d0cf3fdc7e5107c5
This change adds a NOT NULL constraint to the created_at column in the segments table.
All occurrences of CreatedAt as a pointer are changed to the non-pointer version (metabase, segment loop, etc.).
Change-Id: I3efd476ebd1edd3327b69c9223d9edc800e1cc52
This change adds dedicated methods on metabase.Pieces to be able to add and remove pieces, and also to check for duplicates.
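A self-contained sketch of what such helpers could look like (types simplified, signatures illustrative):
```
package metabase

// Piece is simplified here; the real type references a storage node ID.
type Piece struct {
	Number      uint16
	StorageNode string
}

type Pieces []Piece

// Contains reports whether p already has a piece with the given number.
func (p Pieces) Contains(number uint16) bool {
	for _, piece := range p {
		if piece.Number == number {
			return true
		}
	}
	return false
}

// Add appends pieces, skipping duplicates by piece number.
func (p Pieces) Add(add Pieces) Pieces {
	out := append(Pieces{}, p...)
	for _, piece := range add {
		if !out.Contains(piece.Number) {
			out = append(out, piece)
		}
	}
	return out
}
```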
Change-Id: I21aaeff40c017c2ebe1cc85a864ae546754769cc
In the case when the number of deleted segments in a single
batch was a multiple of opts.DeletePiecesBatchSize, it
was possible to finish deletion of objects too early.
Change-Id: I9181d4a3c64053d9df46a11a6e0cd22153c73ee9
We made the decision to avoid satellite shutdown when the segment loop
returns an error. The loop can still return an error, but it will be logged
and we will add monitoring/alerts around that error.
Change-Id: I6aa8e284406edf644a09d6b1fe00c3155c5430c9
It turns out that some DB methods are not
passing context to lower levels even if they have
context as a parameter.
Change-Id: I82f4fc1a47c362687a91364d85e4468057e53134
When the delta is very small from the bounds, the ratio calculation
doesn't work that well. Let's allow 100 from the bounds, since that
would be expected in any case.
We won't add a configuration for it, since it's not that useful.
Change-Id: I049066a42470b825f430b7f32ebe92d544c6cc8b
Currently, pending audit finds a segment by segment location
(path). Because we want to move audit to the segment loop, where we will
have only StreamID and Position, we need to add columns for those
fields. Altering the existing table can cause issues during
migration and deployment, so the cleaner choice is to make a new table.
This change contains a migration with the new segment_pending_audit
table that will replace the pending_audits table, and adjustments
to use the new table in the code.
The pending_audits table will be dropped with the next release.
Change-Id: Id507e29c152da594bac1fd812c78d7ecf45ec51f
It's possible that a single object will have a lot of segments, and
at the moment we try to collect all pieces at once and
send information about the deletion to storage nodes. Such an
approach may lead to using an extensive amount of memory. This
change handles the problem by calling the DeletePieces
function multiple times with only part of the pieces to delete in
a single call.
Change-Id: Ie1e66cd9d86d130eb89a61cf6e23f38b8cb8859e
This adds verification of the processed count against the before and after
segment/objects table counts.
This adds a new flag:
metainfo.segment-loop.suspicious-processed-ratio: 0.03
This defaults to 3%, which at 100M segments is 3M segments.
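A sketch of the ratio check (names are illustrative): the processed count must stay within the configured ratio of the table counts taken before and after the iteration:
```
package segmentloop

import "fmt"

// verifyProcessed flags iterations whose processed count strays too far
// from the before/after table counts.
func verifyProcessed(processed, before, after int64, ratio float64) error {
	low := float64(before) * (1 - ratio)
	high := float64(after) * (1 + ratio)
	if high < low {
		low, high = high, low
	}
	if p := float64(processed); p < low || p > high {
		return fmt.Errorf("suspicious number of processed segments: %d not in [%.0f, %.0f]", processed, low, high)
	}
	return nil
}
```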
Change-Id: I5ee03e913ddc4e67e94010ced126a2a9ea51f41b
This adds verification of the processed count against the before and after
segment/objects table counts.
This adds a new flag:
metainfo.loop.suspicious-processed-ratio: 0.03
This defaults to 3%, which at 100M objects is 3M objects.
Change-Id: Ife5522ecc97bcc5a55667f36868a0f1fc8e4c561
Loops use custom structs to provide
segments while iterating/looping, so we need
to expose the new field there too.
Change-Id: I12c8f4a01afeac171bf638d278253999fa90a8cb
We added an expires_at column to the segments table and now
we need to populate this column while committing a segment.
We still need to migrate existing segments with a
separate tool.
Change-Id: Ibac8c63d97201dd98cc2cb9db385f4cb73bc3f7e
Currently we did not limit the "as of system time" for iterating over
the objects table. Using just an interval would cause problems with the
tests. That could be overcome by skipping that interval for tests
altogether; however, we should probably test those more to ensure that
GC stays working as intended.
This is safer code, however, maybe not as straightforward as it could
be.
Change-Id: I374f77783b2af42bb6da846735ceea20a7ce5e60
Currently it's difficult to gather how many objects and segments are
being inserted. Adding separate monitoring counters makes this easier.
Change-Id: I986cd82f03e99d2aa6fc76028255ee1090d1b294
Currently iterate is called in only one location, so there's no
benefit in passing these as arguments over using the receiver.
Change-Id: I433a5d8b795b1bcc1f1e9320d87b10820cf537f1
In a rare case it's possible to start the loop iteration without
observers. The most likely case is that the observer is cancelled and
the coalesce timer triggers asynchronously despite being stopped.
Nevertheless, all the observers may also exit during the iteration; in
either case it should not result in an error.
If there's a problem with the observers, then they can report their own
error as they see fit.
Change-Id: Ie423fec41e6295be05536a4b7b0b6623ffebf2fb
Satellites set their configuration values to default values using
cfgstruct, however, it turns out our tests don't test these values
at all! Instead, they have a completely separate definition system
that is easy to forget about.
As is to be expected, these values have drifted, and it appears
in a few cases testplanet is testing unreasonable values that we
won't see in production, or perhaps worse, features enabled in
production were missed and weren't enabled in testplanet.
This change makes it so all values are configured the same,
systematic way, so it's easy to see when test values are different
than dev values or release values, and it's less hard to forget
to enable features in testplanet.
In terms of reviewing, this change should be actually fairly
easy to review, considering private/testplanet/satellite.go keeps
the current config system and the new one and confirms that they
result in identical configurations, so you can be certain that
nothing was missed and the config is all correct.
You can also check the config lock to see what actual config
values changed.
Change-Id: I6715d0794887f577e21742afcf56fd2b9d12170e
We want to move some of the current metainfo loop observers to the
segment loop. This change adds a new service, similar to the metainfo
loop, but which iterates only over segments.
Change-Id: I67f7f461781723a4476e2b83377f31736d7c4870
Method IterateLoopSegments can be used to iterate over all segments in metabase without touching objects.
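A usage sketch (option and iterator shapes loosely follow the metabase API; treat the exact signatures as illustrative):
```
package segmentloop

import (
	"context"

	"storj.io/storj/satellite/metabase"
)

// countSegments iterates segments in batches without ever reading the
// owning objects.
func countSegments(ctx context.Context, db *metabase.DB) (count int64, err error) {
	err = db.IterateLoopSegments(ctx, metabase.IterateLoopSegments{BatchSize: 2500},
		func(ctx context.Context, segments metabase.LoopSegmentsIterator) error {
			var entry metabase.LoopSegmentEntry
			for segments.Next(ctx, &entry) {
				count++
			}
			return nil
		})
	return count, err
}
```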
Change-Id: I3cc0e783884b603b47ef3f8233e357aa8a391250
Currently the interface is not useful. When we need to vary the
implementation for testing purposes we can introduce a local interface
for the service/chore that needs it, rather than using the large api.
Unfortunately, this requires adding a cleanup callback for tests; there
might be a better solution to this problem.
Change-Id: I079fe4dbe297b0ae08c10081a1cea4dfbc277682
Initially metabase was developed separately and it was useful to have a
separate environment flag for tests; however, it's more convenient to
use the same one as the rest of the testsuite.
Change-Id: Ia4d79be27ce5911cbae68d57cdf0b30f63459444
Currently we were duplicating code for AS OF SYSTEM TIME in several
places. This replaces that code with a method on
dbutil.Implementation.
As a consequence it's more useful to use a shorter name for
implementation - 'impl' should be sufficiently clear in the context.
Similarly, using AsOfSystemInterval and AsOfSystemTime to distinguish
between the two modes is useful and slightly shorter without causing
confusion.
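A sketch of the method shape (simplified; the real dbutil has more cases):
```
package dbutil

import (
	"fmt"
	"time"
)

type Implementation int

const (
	Postgres Implementation = iota
	Cockroach
)

// AsOfSystemInterval returns the clause for a relative "as of system
// time"; it is CockroachDB-only, so Postgres gets an empty string.
func (impl Implementation) AsOfSystemInterval(interval time.Duration) string {
	if impl != Cockroach || interval >= 0 {
		return ""
	}
	return fmt.Sprintf(" AS OF SYSTEM TIME '%s' ", interval)
}
```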
Change-Id: Idefe55528efa758b6176591017b6572a8d443e3d
So far we were always setting ZombieDeletionDeadline to nil, and because of that the DB default was never set. This change adds a separate query for inserting an object when the deadline is not set.
The system and database time may drift. We should use database time for
absolute "as of system time" to ensure that it's not newer than the
current database time. When the "as of system time" is in the future,
then the query will fail.
Change-Id: I5423f6aaad966ca03a76b5ff805bfba932e44a51
errs.Class should not contain "error" in the name, since that causes a
lot of stutter in the error logs. As an example a log line could end up
looking like:
ERROR node stats service error: satellitedbs error: node stats database error: no rows
Whereas something like:
ERROR nodestats service: satellitedbs: nodestatsdb: no rows
Would contain all the necessary information without the stutter.
Change-Id: I7b7cb7e592ebab4bcfadc1eef11122584d2b20e0
There's a rare chance that `Stop` returns false even though the timer
hasn't fired yet. Use a non-blocking drain to remove the token.
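A minimal sketch of the non-blocking drain:
```
package sync2

import "time"

// stopAndDrain stops the timer and removes a pending token if there is
// one. Stop can return false while the token hasn't been delivered yet,
// so a blocking receive could hang; the default case avoids that.
func stopAndDrain(t *time.Timer) {
	if !t.Stop() {
		select {
		case <-t.C:
		default:
		}
	}
}
```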
Change-Id: I1ae18a197424017f0ca76656602709a029b56bfd
Initially we duplicated the code to avoid large scale changes to
the packages. Now we are past metainfo refactor we can remove the
duplication.
Change-Id: I9d0b2756cc6e2a2f4d576afa408a15273a7e1cef
We need some chores to join without triggering the loop.
For example, it's fine to run metrics only when something else is
running.
Change-Id: I9d8bd16f59c28c540c8d72971bc4e233a8660c02
Currently the tool verifies:
* validity of plain_offset
* whether plain_size is smaller than encrypted_size
Change-Id: I9ec4fb5ead3356a196392c26ca377fcdb367138e
Currently the loop handling is heavily related to the metabase rather
than metainfo.
metainfo over time has become related to the "public API" for accessing
the metabase data.
Currently updates monkit.lock, because monkit monitoring does not handle
ScopeNamed correctly. Needs a followup change to monitoring check.
Change-Id: Ie50519991d718dfb872ec9a0176a82e732c97584
metabase has become a central concept and it's more suitable for it to
be directly nested under satellite rather than being part of metainfo.
metainfo is going to be the "endpoint" logic for handling requests.
Change-Id: I53770d6761ac1e9a1283b5aa68f471b21e784198