This is so that multipart upload allows overwriting parts. Our
consistency model is more relaxed, because part overwrites
won't be atomic when they consist of multiple segments, but at
least this allows overwriting at all.
Change-Id: I21dac4c24046e05efe74e6c6fd189a02c95eb6c8
This change introduced problems with server-side move, so
let's revert it for now. The problem was found when the latest
version of storj/storj was used in uplink tests.
This reverts commit 1ef06fae99.
Change-Id: I4d4fad5d1ea04ba15ff9d7bd765f7e078e9187c2
We were using mixed types for nonce fields. Protobuf has
storj.Nonce, metabase has []byte. This change is a refactoring
to use storj.Nonce everywhere it's possible.
Change-Id: Id54bd8481f30c721cdaf3df79206d25e7cfdab55
We were manually converting ObjectKey fields to []byte to use them in
SQL queries, but we can just implement the Value method to convert
them automatically.
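A minimal sketch of the idea, assuming ObjectKey is a string-based
type (the package placement here is illustrative):

    package metabase

    import "database/sql/driver"

    // ObjectKey is the encrypted object key within a bucket.
    type ObjectKey string

    // Value implements driver.Valuer, so an ObjectKey can be passed
    // directly as a query argument without a manual []byte conversion.
    func (o ObjectKey) Value() (driver.Value, error) {
        return []byte(o), nil
    }

With this in place, the database driver converts the key on every
query that receives an ObjectKey argument.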
Change-Id: I6d346f4b59718e1e8ef37cd9f95e613b864b42cd
We would like to verify that zombie object/segment deletion works
fine, and we need some metric for that. Figuring out the number of
deleted objects is harder, so let's leave that for later.
Change-Id: Ic99e2ce93256130b7c51f514824fddc009655075
Currently loops wait for the coalesce duration even for TriggerWait.
Let's skip the coalesce when we trigger the loop manually.
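A rough sketch of the intended behavior; the Loop type and method
below are assumptions, not the actual loop implementation:

    package loop

    import "time"

    // Loop coalesces multiple observer joins into a single iteration.
    type Loop struct {
        coalesceDuration time.Duration
    }

    // wait blocks for the coalesce window so that observers joining
    // at roughly the same time share one iteration; a manual trigger
    // skips the window and starts immediately.
    func (l *Loop) wait(manual bool) {
        if manual {
            return
        }
        t := time.NewTimer(l.coalesceDuration)
        defer t.Stop()
        <-t.C
    }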
Change-Id: If5bacd4e263d233f1f3ea41b989922d2ed5a48d4
Server-side move is extended with moving between buckets; for this
reason we change the bucket name of the object in the DB.
Change-Id: Ie21bcccc170e6ff14dcd8053fdb86fdf6d8438a0
Not all errors from RunOnce can be retried. The context can be
cancelled with several different errors, e.g. a timeout. Ensure we
stop the loop when the context has errored, because none of the
queries will succeed after it has failed.
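A minimal sketch of the stop condition; the runWithRetry wrapper is
hypothetical, only the ctx.Err() check reflects the change:

    package main

    import "context"

    // runWithRetry retries runOnce on transient errors, but stops as
    // soon as the context itself has errored (cancelled or deadline
    // exceeded), since no further query can succeed at that point.
    func runWithRetry(ctx context.Context, runOnce func(context.Context) error) error {
        for {
            err := runOnce(ctx)
            if err == nil {
                return nil
            }
            if ctx.Err() != nil {
                return err
            }
            // otherwise the error may be transient; retry
        }
    }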
Change-Id: If3ff11f11a6f43c0d67633be1cfaf23e3e9e55f3
Second of the two methods needed to perform server-side move. It
updates the object's metadata key and nonce and all segments' keys
and nonces.
Change-Id: Ia43b26622a13048269f0ae9e1524b345db112adb
First of the two methods needed to perform server-side move. It gets
the object's metadata key and nonce and all segments' keys and nonces
and returns all of that to the uplink.
Change-Id: Ied2c79559e77d3f63091c4d61948f2d6a2147d67
The current default is equal to the hardcoded maximum batch size for
loop listing. With this change we bump the maximum batch size, but
the current default won't change. We will be able to increase it
later without needing a source code change.
Change-Id: I2744a87be28af4157f58ede73455682f61733bc1
We were using this method for the metainfo loop, but now that the
segment loop is not using it we can drop it.
Change-Id: I60c5b4f86a619259906d8c2ba76e665b8715be75
At some point we moved the metabase package outside Metainfo,
but we didn't do that for the satellite structure. This change
refactors only tests.
When uplink is adjusted we can remove the old entries in the
Metainfo struct.
Change-Id: I2b66ed29f539b0ec0f490cad42c72840e0351bcb
Additional test case where the user uploads 3 segments, each within
a 23h interval, and when the zombie deletion process is executed
nothing is deleted, because the last segment was uploaded less than
24h ago.
Change-Id: I7426d6fe2c7e9b88c054a01408910c986bcf8d5f
Added an options flag to define a deadline after which an object won't
be marked as inactive. All segments' CreatedAt times need to be below
this deadline to treat the object as inactive.
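A minimal sketch of the check, assuming the names below (they are
not the actual metabase API):

    package main

    import "time"

    // Segment stands in for the metabase segment; only CreatedAt
    // matters for this check.
    type Segment struct {
        CreatedAt time.Time
    }

    // isInactive reports whether an object can be treated as
    // inactive: every segment must be older than the deadline.
    func isInactive(segments []Segment, deadline time.Time) bool {
        for _, s := range segments {
            if !s.CreatedAt.Before(deadline) {
                return false
            }
        }
        return true
    }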
Change-Id: Ib5cffc776c6ee1b62b51eb8595438f968b42528c
This change syncs the batchSizeLimit and ListLimit constants to
prevent throwing away results when listing with the maximum limit.
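Conceptually (the value below is illustrative, not the real one):

    package metainfo

    // If the limits differ, a batch can fetch more rows than the
    // listing is allowed to return, and the overflow is thrown away;
    // keeping the constants in sync avoids the waste.
    const (
        ListLimit      = 1000
        batchSizeLimit = ListLimit
    )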
Change-Id: Ie2425542d945cb88653dcc34c079737bb32320d4
To optimize memory consumption we were consuming segment data
while processing the results of the delete query. It turns out that
there is a chance the query will be rolled back if something goes
wrong while reading the results. In such a case it's possible to
delete pieces while the object/segment is still in the DB.
This change removes piece deletion from the problematic place.
Pieces are still deleted in batches, but the batches are not limited
at the moment. To avoid memory issues the object deletion batch size
was decreased.
Change-Id: Icb3667220f9c25f64b73cf71d0cf3fdc7e5107c5
This change adds a NOT NULL constraint to the created_at column in the
segments table. All occurrences of CreatedAt as a pointer are changed
to the non-pointer version (metabase, segment loop, etc.).
Change-Id: I3efd476ebd1edd3327b69c9223d9edc800e1cc52
This change adds dedicated methods on metabase.Pieces to add and
remove pieces and to check for duplicates.
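A sketch of the kind of helpers added; metabase.Pieces is a slice of
Piece values, but the method names and signatures below are
assumptions:

    package metabase

    import "errors"

    // NodeID is a stand-in for the real storj.NodeID.
    type NodeID [32]byte

    // Piece identifies a remote piece held by a storage node.
    type Piece struct {
        Number      uint16
        StorageNode NodeID
    }

    // Pieces is a list of remote pieces belonging to a segment.
    type Pieces []Piece

    // Contains reports whether a piece with the given number exists.
    func (p Pieces) Contains(number uint16) bool {
        for _, piece := range p {
            if piece.Number == number {
                return true
            }
        }
        return false
    }

    // Add appends a piece, rejecting duplicate piece numbers.
    func (p Pieces) Add(piece Piece) (Pieces, error) {
        if p.Contains(piece.Number) {
            return p, errors.New("duplicate piece number")
        }
        return append(p, piece), nil
    }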
Change-Id: I21aaeff40c017c2ebe1cc85a864ae546754769cc
In the case when the number of deleted segments in a single
batch was a multiple of opts.DeletePiecesBatchSize, it was
possible to finish object deletion too early.
Change-Id: I9181d4a3c64053d9df46a11a6e0cd22153c73ee9
We made the decision to avoid satellite shutdown when the segment
loop returns an error. The loop can still return an error, but it
will be logged and we will set up monitoring/alerts around it.
Change-Id: I6aa8e284406edf644a09d6b1fe00c3155c5430c9
It turns out that some DB methods are not
passing the context to lower levels even when they
have a context parameter.
Change-Id: I82f4fc1a47c362687a91364d85e4468057e53134
When the delta from the bounds is very small, the ratio calculation
doesn't work that well. Let's allow a delta of 100 from the bounds,
since that would be expected in any case.
We won't add a configuration value for it, since it's not that useful.
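A hedged sketch of how the absolute slack might combine with the
ratio check (the function and its exact semantics are assumptions):

    package main

    // countLooksOK accepts small absolute deltas outright and falls
    // back to the relative ratio check otherwise.
    func countLooksOK(processed, expected int64, ratio float64) bool {
        delta := processed - expected
        if delta < 0 {
            delta = -delta
        }
        if delta <= 100 {
            return true // tiny differences are expected in any case
        }
        return float64(delta) <= float64(expected)*ratio
    }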
Change-Id: I049066a42470b825f430b7f32ebe92d544c6cc8b
Currently, pending audit finds the segment by segment location
(path). Because we want to move audit to the segment loop, where we
will have only StreamID and Position, we need to add columns for
those fields. Altering the existing table can cause issues during
migration and deployment, so the cleaner choice is to make a new
table.
This change contains a migration with the new segment_pending_audit
table that will replace the pending_audits table, and adjustments
to use the new table in the code.
The pending_audits table will be dropped with the next release.
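The rough shape of the new table, sketched as embedded SQL; only
stream_id and position are named above, the rest of the layout is an
assumption:

    package satellitedb

    const createSegmentPendingAudit = `
        CREATE TABLE segment_pending_audit (
            stream_id BYTEA  NOT NULL,
            position  BIGINT NOT NULL,
            PRIMARY KEY ( stream_id, position )
        )
    `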
Change-Id: Id507e29c152da594bac1fd812c78d7ecf45ec51f
It's possible that a single object will have a lot of segments, and
at the moment we are trying to collect all pieces at once and send
information about the deletion to the storage nodes. Such an
approach may lead to using an excessive amount of memory. This
change handles the problem by calling the DeletePieces function
multiple times, with only a part of the pieces to delete in a
single call.
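A minimal sketch of the batching; the helper and Piece type below
are illustrative, the real code lives in the object deletion path:

    package main

    import "context"

    // Piece stands in for a piece-deletion request for a storage node.
    type Piece struct{ NodeID, PieceID string }

    // deleteInBatches calls deletePieces with at most batchSize
    // pieces per call, so a huge object doesn't force all pieces
    // into memory and onto the wire at once.
    func deleteInBatches(ctx context.Context, pieces []Piece, batchSize int,
        deletePieces func(context.Context, []Piece) error,
    ) error {
        for len(pieces) > 0 {
            n := batchSize
            if n > len(pieces) {
                n = len(pieces)
            }
            if err := deletePieces(ctx, pieces[:n]); err != nil {
                return err
            }
            pieces = pieces[n:]
        }
        return nil
    }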
Change-Id: Ie1e66cd9d86d130eb89a61cf6e23f38b8cb8859e
This adds verification of the processed count against the before and
after segments/objects table counts.
This adds a new flag:
metainfo.segment-loop.suspicious-processed-ratio: 0.03
This defaults to 3%, which at 100M segments is 3M segments.
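A sketch of how such a check might look, assuming the processed
count is compared against table counts taken before and after the
iteration (the function is hypothetical):

    package main

    // suspicious reports whether processed deviates from the
    // before/after table counts by more than the configured ratio.
    func suspicious(processed, before, after int64, ratio float64) bool {
        low, high := before, after
        if low > high {
            low, high = high, low
        }
        margin := int64(float64(high) * ratio)
        return processed < low-margin || processed > high+margin
    }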
Change-Id: I5ee03e913ddc4e67e94010ced126a2a9ea51f41b
This adds verification of the processed count against the before and
after segments/objects table counts.
This adds a new flag:
metainfo.loop.suspicious-processed-ratio: 0.03
This defaults to 3%, which at 100M objects is 3M objects.
Change-Id: Ife5522ecc97bcc5a55667f36868a0f1fc8e4c561
Loops are using custom structs to provide
segments while iterating/looping, so we need
to expose the new field there too.
Change-Id: I12c8f4a01afeac171bf638d278253999fa90a8cb
We added an expires_at column to the segments table and now
we need to populate this column while committing a segment.
We still need to migrate existing segments with a
separate tool.
Change-Id: Ibac8c63d97201dd98cc2cb9db385f4cb73bc3f7e
Currently we did not limit the "as of system time" when iterating over
the objects table. Using just an interval would cause problems with
the tests. That could be overcome by skipping the interval in tests
altogether; however, we should probably test those paths more to
ensure that GC keeps working as intended.
This is safer code, though maybe not as straightforward as it could
be.
Change-Id: I374f77783b2af42bb6da846735ceea20a7ce5e60
Currently it's difficult to gather how many objects and segments are
being inserted. Adding separate monitoring counters makes this easier.
Change-Id: I986cd82f03e99d2aa6fc76028255ee1090d1b294
Currently iterate is called in only one location, so there's no
benefit in passing the values as arguments over using the receiver.
Change-Id: I433a5d8b795b1bcc1f1e9320d87b10820cf537f1
In a rare case it's possible to start the loop iteration without
observers. The most likely case is that the observer is cancelled and
the coalesce timer triggers asynchronously, despite being stopped.
Nevertheless, all the observers may also exit during the iteration;
in either case it should not result in an error.
If there's a problem with the observers, then they can report their
own errors as they see fit.
Change-Id: Ie423fec41e6295be05536a4b7b0b6623ffebf2fb
Satellites set their configuration values to default values using
cfgstruct, however, it turns out our tests don't test these values
at all! Instead, they have a completely separate definition system
that is easy to forget about.
As is to be expected, these values have drifted. It appears that
in a few cases testplanet is testing unreasonable values that we
won't see in production, or, perhaps worse, features enabled in
production were missed and weren't enabled in testplanet.
This change makes it so that all values are configured in the same,
systematic way, so it's easy to see when test values differ from
dev values or release values, and it's harder to forget to enable
features in testplanet.
In terms of reviewing, this change should actually be fairly
easy to review, considering private/testplanet/satellite.go keeps
both the current config system and the new one and confirms that
they result in identical configurations, so you can be certain
that nothing was missed and the config is all correct.
You can also check the config lock to see what actual config
values changed.
Change-Id: I6715d0794887f577e21742afcf56fd2b9d12170e
We want to move some of the current metainfo loop observers to the
segment loop. This change adds a new service, similar to the metainfo
loop, which iterates only over segments.
Change-Id: I67f7f461781723a4476e2b83377f31736d7c4870
The IterateLoopSegments method can be used to iterate over all
segments in the metabase without touching objects.
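A hedged usage sketch; IterateLoopSegments is the real method name,
but the option fields and iterator shape shown here are simplified
assumptions:

    package main

    import (
        "context"

        "storj.io/storj/satellite/metabase"
    )

    // countSegments walks every segment without loading objects.
    func countSegments(ctx context.Context, db *metabase.DB) (count int64, err error) {
        err = db.IterateLoopSegments(ctx, metabase.IterateLoopSegments{
            BatchSize: 2500, // hypothetical batch size
        }, func(ctx context.Context, segments metabase.LoopSegmentsIterator) error {
            var entry metabase.LoopSegmentEntry
            for segments.Next(ctx, &entry) {
                count++
            }
            return nil
        })
        return count, err
    }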
Change-Id: I3cc0e783884b603b47ef3f8233e357aa8a391250
Currently the interface is not useful. When we need to vary the
implementation for testing purposes, we can introduce a local
interface for the service/chore that needs it, rather than using the
large API.
Unfortunately, this requires adding a cleanup callback for tests;
there might be a better solution to this problem.
Change-Id: I079fe4dbe297b0ae08c10081a1cea4dfbc277682
Initially metabase was developed separately and it was useful to have
a separate environment flag for tests; however, it's more convenient
to use the same one as the rest of the test suite.
Change-Id: Ia4d79be27ce5911cbae68d57cdf0b30f63459444
We were duplicating code for AS OF SYSTEM TIME in several places.
This replaces that code with a method on dbutil.Implementation.
As a consequence, it's more useful to use a shorter name for the
implementation - 'impl' should be sufficiently clear in the context.
Similarly, using AsOfSystemInterval and AsOfSystemTime to distinguish
between the two modes is useful and slightly shorter without causing
confusion.
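A hedged sketch of the helper's shape; dbutil.Implementation is
real, but the stand-in type and method body below are assumptions:

    package dbutil

    import "time"

    // Implementation identifies the database backend.
    type Implementation int

    const (
        Postgres Implementation = iota
        Cockroach
    )

    // AsOfSystemTime returns the clause to append to a query, or an
    // empty string when the backend doesn't support it.
    func (impl Implementation) AsOfSystemTime(t time.Time) string {
        if impl != Cockroach || t.IsZero() {
            return ""
        }
        return " AS OF SYSTEM TIME '" + t.UTC().Format(time.RFC3339Nano) + "' "
    }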
Change-Id: Idefe55528efa758b6176591017b6572a8d443e3d
So far we were always setting ZombieDeletionDeadline to nil, and
because of that the DB default was never applied. This change adds a
separate query for inserting an object when the deadline is not set.
Change-Id: I3d6a16570e7c74b5304e13edad8c7adcd021340c
The system and database time may drift. We should use database time for
absolute "as of system time" to ensure that it's not newer than the
current database time. When the "as of system time" is in the future,
then the query will fail.
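A minimal sketch of the clamping, assuming the database's current
time was already fetched (e.g. with SELECT now()):

    package main

    import "time"

    // clampAsOfSystemTime makes sure the absolute timestamp is never
    // newer than the database's own clock, which would fail the query.
    func clampAsOfSystemTime(requested, dbNow time.Time) time.Time {
        if requested.After(dbNow) {
            return dbNow
        }
        return requested
    }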
Change-Id: I5423f6aaad966ca03a76b5ff805bfba932e44a51
errs.Class should not contain "error" in the name, since that causes a
lot of stutter in the error logs. As an example a log line could end up
looking like:
ERROR node stats service error: satellitedbs error: node stats database error: no rows
Whereas something like:
ERROR nodestats service: satellitedbs: nodestatsdb: no rows
Would contain all the necessary information without the stutter.
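Following the guideline, a class is declared without the "error"
suffix (errs.Class is from github.com/zeebo/errs; the package
placement here is illustrative):

    package nodestatsdb

    import "github.com/zeebo/errs"

    // Error is the error class for this package; wrapped errors read
    // as "nodestatsdb: no rows" instead of stuttering with "error".
    var Error = errs.Class("nodestatsdb")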
Change-Id: I7b7cb7e592ebab4bcfadc1eef11122584d2b20e0
There's a rare chance that `Stop` returns false but the timer's
channel doesn't contain the fired time. Use a non-blocking drain to
remove the token.
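The pattern, as a minimal sketch:

    package main

    import "time"

    // stopAndDrain stops the timer and removes a pending token
    // without blocking, covering the case where Stop returns false
    // but the token is not (or is no longer) in the channel.
    func stopAndDrain(t *time.Timer) {
        if !t.Stop() {
            select {
            case <-t.C:
            default:
            }
        }
    }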
Change-Id: I1ae18a197424017f0ca76656602709a029b56bfd
Initially we duplicated the code to avoid large-scale changes to
the packages. Now that we are past the metainfo refactor we can
remove the duplication.
Change-Id: I9d0b2756cc6e2a2f4d576afa408a15273a7e1cef
We need some chores to be able to join without triggering the loop.
For example, it's fine to run metrics only when something else is
running.
Change-Id: I9d8bd16f59c28c540c8d72971bc4e233a8660c02
Currently the tool verifies:
* validity of plain_offset
* whether plain_size is smaller than encrypted_size
Change-Id: I9ec4fb5ead3356a196392c26ca377fcdb367138e
Currently the loop handling is heavily related to the metabase rather
than metainfo.
Over time, metainfo has become the "public API" for accessing
the metabase data.
This also updates monkit.lock, because monkit monitoring does not
handle ScopeNamed correctly. It needs a follow-up change to the
monitoring check.
Change-Id: Ie50519991d718dfb872ec9a0176a82e732c97584
metabase has become a central concept and it's more suitable for it to
be directly nested under satellite rather than being part of metainfo.
metainfo is going to be the "endpoint" logic for handling requests.
Change-Id: I53770d6761ac1e9a1283b5aa68f471b21e784198