We stopped returning many errors as-is to avoid leaking our internals,
but some errors are meaningful to the client. An example of such an
error is "exceeded maximum number of parts". With this change we wrap
some important commit object errors with the new ErrFailedPrecondition
error so they can easily be returned to the uplink.
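A minimal sketch of the wrapping idea; in the real code
ErrFailedPrecondition may be an error class rather than a sentinel, and
validatePartCount is a hypothetical helper:

    package metabase

    import (
        "errors"
        "fmt"
    )

    // ErrFailedPrecondition marks commit object errors that are meaningful
    // to the client and safe to return to the uplink.
    var ErrFailedPrecondition = errors.New("failed precondition")

    func validatePartCount(parts, maxParts int) error {
        if parts > maxParts {
            return fmt.Errorf("%w: exceeded maximum number of parts: %d",
                ErrFailedPrecondition, parts)
        }
        return nil
    }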
Change-Id: Id834b78362ed1920f0c3f6f1c7d9587bfd27e36a
With pending_objects table support enabled, we missed correctly passing
the expiration time from the pending object to the committed object.
This change updates the commit query to take the expiration time into
account.
Change-Id: I67146d5b2f7f0bda02925d16275fbc59acb705bd
Previously it wasn't quite clear what was a stub version and what was an
actual version. Use a PendingVersion constant to make this distinction
clear. Also use PendingVersion = NextVersion = 0; that way it's clearer
that the version hasn't been determined yet. DefaultVersion = 1 might
imply that the object will get that version once committed, however that
entirely depends on whether use-pending-objects is used or versioning is
enabled.
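A minimal sketch of how these constants could be declared; the
underlying Version type and exact declarations are assumptions:

    package metabase

    // Version is the object version number (assumed integer type).
    type Version int64

    const (
        // NextVersion means the version will be chosen automatically.
        NextVersion = Version(0)
        // PendingVersion marks a stub version of a pending object whose
        // final version has not been determined yet.
        PendingVersion = NextVersion
        // DefaultVersion is a version an object may receive once committed,
        // depending on use-pending-objects and versioning settings.
        DefaultVersion = Version(1)
    )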
Change-Id: I21398141f97035c48c778f23b542266b834c44f1
While adding support for the pending_objects table one case was missed.
When we upload an object to a location where an old object exists, we
are not removing the old object's segments at all. The old object is
overwritten with the new object's metadata, but its segments remain
without a corresponding object. This fix removes all existing committed
objects (with their segments) before committing the new object.
https://github.com/storj/storj/issues/6255
Change-Id: Id657840edf763fd6aec8191788d819191b074fb7
With the zombie deletion chore we remove inactive pending objects from
the objects table, but now we also need to do this for the
pending_objects table.
https://github.com/storj/storj/issues/6050
Change-Id: Ia29116c103673a1d9e10c2f16654022572210a8a
Extends metabase.BucketEmpty logic to also check the pending_objects
table for any entry.
https://github.com/storj/storj/issues/6057
Change-Id: Ia26c272de24a983b308a0b692e6bd5800487eb98
While deleting a bucket we also need to delete pending objects from the
pending_objects table.
Part of https://github.com/storj/storj/issues/6048
Change-Id: Icc83eaecf8388704e0b6329c397e8028debcf672
New metabase method IteratePendingObjectsByKeyNew to iterate over
entries in the pending_objects table with the same object key. The
implementation and tests are mostly a copy of the code for
IteratePendingObjectsByKey. The main difference is that the
pending_objects table has the StreamID column as part of the primary
key instead of Version. The method will be used to support the new
table in the metainfo.ListPendingObjectStreams request.
After the full transition to the pending_objects table we should remove
the 'New' suffix from the method names.
Part of https://github.com/storj/storj/issues/6047
Change-Id: Ifc1ecbc534f8510fbd70c4ec676cf2bf8abb94cb
New metabase method IteratePendingObjects to iterate over entries in the
pending_objects table. The implementation and tests are mostly a copy of
the code we use to iterate over the objects table. The main difference
is that the pending_objects table has the StreamID column as part of the
primary key instead of Version. Also, the structure of a pending object
is smaller than the one from the objects table, but that's a detail.
The method will be used to support the new table in the
metainfo.ListObjects request.
The next step will be to port the rest of the iterator implementation to
support the pending_objects table in metainfo.ListPendingObjectStreams.
Part of https://github.com/storj/storj/issues/6047
Change-Id: Ia578182f88840539f3668d4a242953e061eace02
We delete pending objects while aborting a multipart upload, using
metainfo BeginDeleteObject to do that. This change starts using
DeletePendingObjectNew to delete the entry from the pending_objects
table when the request indicates that the object is in this table.
Part of https://github.com/storj/storj/issues/6048
Change-Id: I4478a9c13c8e3db48dc5de3087ef03d1b4c47a5c
With this change we are removing the code responsible for deleting
objects while supporting server-side copies created with references. In
practice we are restoring the delete queries we had before the
server-side copy implementation (with a small exception, see below).
From the deletion queries we are also removing the parts that returned
segment metadata, because we are no longer sending explicit delete
requests to storage nodes.
https://github.com/storj/storj/issues/5891
Change-Id: Iee4e7a9688cff27a60fb95d60dd233d996f41c85
We don't need to support segment copies with references anymore.
We migrated to copies where all metadata is copied from the original
segment to the copy.
https://github.com/storj/storj/issues/5891
Change-Id: Ic91dc21b0386ddf5c51aea45530024cd463e8ba9
With this change we are removing the code responsible for handling
bucket deletion while supporting server-side copies created with
references. In practice we are restoring the delete queries we had
before the server-side copy implementation (with a small exception, see
below).
From the deletion queries we are also removing the parts that returned
segment metadata, because we are no longer sending explicit delete
requests to storage nodes.
https://github.com/storj/storj/issues/5891
Change-Id: If866d9f3a1b01e9ebd9b49c4740a6425ba06dd43
This change adjusts CommitObject to use the `pending_objects` table to
commit an object.
The satellite stream id is used to determine whether we need to use the
`pending_objects` or `objects` table during commit.
The general goal is to support both tables until the `objects` table is
free from pending objects.
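A minimal sketch of the table-selection idea; the flag carried in the
satellite stream id (UsePendingObjectsTable) is a hypothetical name
here:

    package main

    // streamID stands in for the internal satellite stream id.
    type streamID struct {
        UsePendingObjectsTable bool // hypothetical flag
    }

    // commitTable picks which table the commit should operate on.
    func commitTable(id streamID) string {
        if id.UsePendingObjectsTable {
            return "pending_objects"
        }
        return "objects"
    }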
Part of https://github.com/storj/storj/issues/6046
Change-Id: I2ebe0cd6b446727c98c8e210d4d00504dd0dacb6
This change adjusts CommitSegment to check pending object existence in
the `pending_objects` or `objects` table.
The satellite stream id is used to determine which table we need to use.
The general goal is to support both tables until the `objects` table is
free from pending objects. Wherever needed, the code will support both
tables at once.
Part of https://github.com/storj/storj/issues/6046
Change-Id: I954444a53b4733ae6fc909420573242b02746787
This change adjusts BeginSegment to check pending object existence in
the `pending_objects` or `objects` table.
The satellite stream id is used to determine which table we need to use.
The general goal is to support both tables until the `objects` table is
free from pending objects. Wherever needed, the code will support both
tables at once.
Part of https://github.com/storj/storj/issues/6046
Change-Id: I08aaa605c23d82695fde352fdbd0a7fd11f46bb5
This change adjusts BeginObjectNextVersion to create a pending object in
the `pending_objects` or `objects` table, depending on configuration.
This is the first change toward moving pending objects out of the
objects table.
The general goal is to support both tables until the `objects` table is
free from pending objects. Wherever needed, the code will support both
tables at once.
To be able to decide whether to use the `pending_objects` or `objects`
table, we extend the satellite stream id to keep that information for
later use.
BeginObjectExactVersion will not be adjusted because at the moment it is
only used in tests.
Part of https://github.com/storj/storj/issues/6046
Change-Id: Ibf21965f63cca5e1775469994a29f1fd1261af4e
For some test queries we use a workaround to filter them out from full
table scan detection. To avoid confusion about what this is all about,
we are changing the label to be more descriptive.
Change-Id: I41a744e8faf400e3e8de7e416d8f4242f9093fce
This change adds only the schema definition of the pending_objects table
and a small amount of supporting code which will be useful for testing
later.
With this table we would like to achieve two major things:
* simplify the `objects` table before we start working on object
versioning
* gain performance by removing the need to filter `objects` results by
the `status` column, which is not indexed and which we would like to
avoid
https://github.com/storj/storj/issues/6045
Change-Id: I6097ce1c644a8a3dad13185915fe01989ad41d90
The segments loop implementation uses lots of memory to convert alias
pieces to pieces for each segment during iteration. To improve the
situation, this change reuses Pieces between batch pages. This should
significantly reduce memory usage for ranged loop executions.
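A minimal sketch of the slice-reuse idea with simplified placeholder
types (the real metabase types differ):

    package main

    type AliasPiece struct{ Number, Alias int32 }

    type Piece struct {
        Number int32
        NodeID [32]byte
    }

    // convertReusing appends converted pieces into buf, reusing its backing
    // array between calls instead of allocating a new slice per segment.
    func convertReusing(buf []Piece, aliases []AliasPiece, resolve func(int32) [32]byte) []Piece {
        buf = buf[:0] // keep capacity, drop previous contents
        for _, ap := range aliases {
            buf = append(buf, Piece{Number: ap.Number, NodeID: resolve(ap.Alias)})
        }
        return buf
    }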
Change-Id: I469188779908facb19ad85c6bb7bc3657111cc9a
Currently, we have an issue where, while counting unhealthy pieces, we
count twice a piece which is in an excluded country and is outside the
segment placement. This can cause unnecessary repair.
This change also does another step to move RepairExcludedCountryCodes
from the overlay config into the repair package.
Change-Id: I3692f6e0ddb9982af925db42be23d644aec1963f
placement.AllowedCountry is the old way to specify placement; with the
new approach we can use a more generic (dynamic) method, which can check
full node information instead of just the country code.
90% of this patch is just search and replace:
* we need to use NodeFilters instead of placement.AllowedCountry
* which means we need an initialized PlacementRules available everywhere
* which means we need to configure the placement rules
The remaining 10% is placement.go, where we introduced a new type of
configuration (a lightweight expression language) to define any kind of
placement without code changes.
Change-Id: Ie644b0b1840871b0e6bbcf80c6b50a947503d7df
..instead of using segment_copies and ancestor_stream_id, etc.
This bypasses reference counting entirely, depending on our mark+sweep+
bloomfilter garbage collection strategy to get rid of pieces once they
are no longer part of a segment.
This is only safe to do after we have stopped passing delete requests on
to storage nodes.
Refs: https://github.com/storj/storj/issues/5889
Change-Id: I37bdcffaa752f84fd85045235d6875b3526b5ecc
Currently we are using Reliable to get missing pieces for the repair
checker. The issue is that the checker now looks at more things than
just missing pieces (clumped/off-placement pieces), and using only node
IDs is not enough. We have an issue where we skip offline nodes in the
clumped and off-placement pieces checks.
Reliable was refactored to get data (e.g. country, lastNet) about all
reliable nodes. The list is split into online and offline. This data
will be cached for quick use by the repair checker. It will also be
possible to check node metadata like country code or lastNet.
We are also slowly moving the `RepairExcludedCountryCodes` config from
overlay to repair, where it makes more sense.
This is the first part of the changes.
https://github.com/storj/storj/issues/5998
Change-Id: If534342488c0e440affc2894a8fbda6507b8959d
In case of invalid characters in a bucket name, we need to cast the
bucket name to a byte array for the query argument. This change does
this for some missed cases.
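A minimal sketch of the cast (table and column names are illustrative):
passing []byte instead of a string makes the driver bind the value as
bytea even when it contains characters that are not valid UTF-8:

    package main

    import (
        "context"
        "database/sql"
    )

    func countBucketObjects(ctx context.Context, db *sql.DB, projectID []byte, bucketName string) (count int64, err error) {
        err = db.QueryRowContext(ctx, `
            SELECT count(*) FROM objects
            WHERE project_id = $1 AND bucket_name = $2
        `, projectID, []byte(bucketName)).Scan(&count) // cast instead of passing the string
        return count, err
    }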
Change-Id: I47d0d8e3c85a69bdf63de1137adcd533dcfe50a8
By mistake, AOST was added to the query in
deleteInactiveObjectsAndSegments in DeleteZombieObjects. Delete
statements do not support it. Unfortunately unit tests didn't cover this
case. This change removes AOST from the mentioned method and adds AOST
cases to the unit tests.
Change-Id: Ib7f65134290df08c490c96b7e367d12f497a3373
Because we are saving all tallies with a single SQL statement, we
finally reached the maximum message size. With this change we will call
SaveTallies multiple times, in batches.
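A minimal sketch of the batching idea with placeholder types (not the
real accounting API):

    package main

    import "context"

    type BucketTally struct{ /* fields omitted */ }

    // saveInBatches saves tallies in fixed-size chunks instead of one huge
    // SQL statement.
    func saveInBatches(ctx context.Context, tallies []BucketTally, batchSize int,
        save func(context.Context, []BucketTally) error) error {
        for len(tallies) > 0 {
            n := batchSize
            if n > len(tallies) {
                n = len(tallies)
            }
            if err := save(ctx, tallies[:n]); err != nil {
                return err
            }
            tallies = tallies[n:]
        }
        return nil
    }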
https://github.com/storj/storj/issues/5977
Change-Id: I0c7dd27779b1743ede66448fb891e65c361aa3b0
Some of the tally queries are passing the bucket name as a string
instead of bytes. If the bucket name contains some special characters,
encoding to bytea can fail. This change makes sure all parts of tally
pass the bucket name correctly.
Change-Id: I7330d546b44d86a2e4614c814580e9e5262370ed
We decided that we will stop sending explicit delete requests to nodes
and will clean up deleted data with GC instead.
https://github.com/storj/storj/issues/5888
Change-Id: I65a308cca6fb17e97e3ba85eb3212584c96a32cd
* optimize the SQL for the zombie objects deletion query by reducing
some direct selects on the segments table
* set AOST for expired/zombie object deletion (was 0 until now)
https://github.com/storj/storj/issues/5881
Change-Id: I50482151d056a86fe0e31678a463f413d410759d
This change adds code to CreateGetOrderLimits to filter out any nodes
that are not in the placement specified by the segment. Notably, it does
not change the audit or repair order limits. The list segments code had
to be changed to include getting the placement field from the database.
Change-Id: Ice3e42a327811bb20928c619a72ed94e0c1464ac
We will remove the segments loop soon, so first we need to move the
Segment definition to the rangedloop package.
https://github.com/storj/storj/issues/5237
Change-Id: Ibe6aad316ffb7073cc4de166f1f17b87aac07363
A chore responsible for purging data from the console DB has been
implemented. Currently, it removes old records for unverified user
accounts. We plan to extend this functionality to include expired
project member invitations in the future.
Resolves #5790
References #5816
Change-Id: I1f3ef62fc96c10a42a383804b3b1d2846d7813f7
The "object not found" included an additional prefix "metabase:", which
broke uplink error message detection.
Ideally, changing an internal error message shouldn't break the uplink.
Change-Id: I5ce7789cc11742d3435af1ec555bc96927f1bedc
This combines the ListStreamPositions and GetSegmentByPosition
calls with a ListSegments call that now knows how to return
only the segments within a Range, just like ListStreamPositions.
It would theoretically be possible to also include the
GetObjectLastCommitted call by having it do one of three
queries based on the incoming request range, but that would
mean duplicating the data for the object in every single
row that is returned for each segment in the range.
One gross thing that ListSegments has to do now is update the
first segment returned with the information from any ancestor
segment because GetSegmentByPosition used to do that. It only
updates the first segment so that it doesn't do O(N) database
queries. It seems difficult to have it do a single query to
update all of the segments at once. I'm not certain this change
should be merged on this basis alone.
This change has made me think a couple of things should happen:
1. Server side copy with ancestor segments strikes again
making the code less clear and potentially more buggy
or inefficient for a rare case (empirically <0.1%)
2. The download code requests individual segments from
the satellite lazily as part of its download which
requires the satellite telling it the locations of
all of the segments which requires the satellite
querying the locations of all of the segments. Instead
the download RPC could return the orders for all of
the segments for a range and the download code could
issue N download calls rather than 1 download call and
N get segment calls. I believe both sides of the code
paths would be simpler and more efficient this way.
3. In looking at the timing information for downloads when
testing this, we really need to focus on getting the
auth key and bandwidth limit verification times down.
Here's the timing I saw:
- 42ms: validate auth
- 52ms: bandwidth usage checking
- 14ms: get object info
- 26ms: get segment position info
- 26ms: getting the first segment full info
- 20ms: unaccounted for by spans
- 6ms: creating the orders
This change will remove 26ms, but there's a good 90ms
in just validation. With improved semantics hitting the
database only once and improved validation, a download
rpc taking ~30ms seems doable compared to our current
~200ms.
Change-Id: I4109dba082eaedb79e634c61dbf86efa93ab1222
By accident, the query to get the latest table stats was ordered by the
'row_count' column instead of 'create'. We need the latest stats, so we
need to order by creation time.
Change-Id: I9d0a0edda8bab59c3d96b7a15cd6502ed51633fc
This flag was in general a one-time switch to enable versions
internally. Now we can remove it, as it makes the code more complex.
Change-Id: I740b6e8fae80d5fac51d9425793b02678357490e
We want to eliminate usages of LoopSegmentEntry.Pieces, because it is
costing a lot of CPU time to look up node IDs for every piece of every
segment we read.
In this change, we are eliminating use of LoopSegmentEntry.Pieces in the
node tally observer (both the ranged loop and segments loop variants).
It is not necessary to have a fully resolved nodeID until it is time to
store totals in the database. We can use NodeAliases as the map key
instead, and resolve NodeIDs just before storing totals.
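A minimal sketch of the idea with simplified types: accumulate totals
keyed by node alias and resolve aliases to node IDs only once, right
before saving:

    package main

    type NodeAlias int32

    type NodeID [32]byte

    // totalsByNodeID resolves alias-keyed totals into node-ID-keyed totals
    // just before they are stored in the database.
    func totalsByNodeID(byAlias map[NodeAlias]int64, resolve func(NodeAlias) NodeID) map[NodeID]int64 {
        out := make(map[NodeID]int64, len(byAlias))
        for alias, total := range byAlias {
            out[resolve(alias)] += total
        }
        return out
    }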
Refs: https://github.com/storj/storj/issues/5622
Change-Id: Iec12aa393072436d7c22cc5a4ae1b63966cbcc18
Currently, to get the number of entries in the segments table we are
doing a heavy SELECT count(*) operation. For the biggest satellite it
now takes 25 min. We use this method to get stats before and after the
segments loop, so it adds almost 1h to the overall loop time.
With the current version of CRDB that we are using, this additional code
won't be used, because the global configuration for the stats refresh
rate is inaccurate for such a large table as `segments`. Soon we should
be able to upgrade CRDB, adjust the refresh rate per table, and
configure it to satisfy the defined threshold.
https://github.com/storj/storj/issues/5544
Change-Id: I05cfd9154f08894d2bc56bf716b436d1b03b87f1
The segments loop has a built-in sanity check to verify whether the
number of segments processed by the loop is roughly fine. We want to
have the same verification for the ranged loop.
https://github.com/storj/storj/issues/5544
Change-Id: Ia19edc0fb4aa8dc45993498a8e6a4eb5928485e9
This code is essentially a replacement for eestream.CalcPieceSize. To
call eestream.CalcPieceSize we need eestream.RedundancyStrategy, which
is not trivial to get as it requires infectious.FEC. For example,
infectious.FEC creation is visible in the GE loop observer CPU profile
because we were doing this for each segment in the DB.
A new method was added to storj.Redundancy and here we are just wiring
it up with the metabase Segment.
BenchmarkSegmentPieceSize
BenchmarkSegmentPieceSize/eestream.CalcPieceSize
BenchmarkSegmentPieceSize/eestream.CalcPieceSize-8 5822 189189 ns/op 9776 B/op 8 allocs/op
BenchmarkSegmentPieceSize/segment.PieceSize
BenchmarkSegmentPieceSize/segment.PieceSize-8 94721329 11.49 ns/op 0 B/op 0 allocs/op
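For reference, a hedged sketch of the piece-size arithmetic such a
method performs (not necessarily the exact storj.Redundancy
implementation): every stripe of shareSize*requiredShares bytes of
segment data contributes shareSize bytes to each piece:

    package main

    func pieceSize(segmentSize int64, shareSize, requiredShares int32) int64 {
        stripeSize := int64(shareSize) * int64(requiredShares)
        stripes := (segmentSize + stripeSize - 1) / stripeSize // round up
        return stripes * int64(shareSize)
    }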
Change-Id: I5a8b4237aedd1424c54ed0af448061a236b00295
This is the first attempt to use AliasPieces instead of Pieces with the
segments/ranged loop. So far we were always using Pieces, which are
always converted from AliasPieces for ease of use.
A side effect is that using NodeID with loop observers is heavy, e.g. we
are using maps, which behave more slowly with NodeIDs.
We are starting with the audit observer because it's easy to change; in
fact it doesn't need access to the real NodeID at all. We just need to
reference the node in some way, and that way is NodeAlias.
Results of BenchmarkRemoteSegment:
name old time/op new time/op delta
RemoteSegment/Cockroach/multiple_segments-8 1.79µs ± 6% 0.03µs ± 4% -98.29% (p=0.008 n=5+5)
name old alloc/op new alloc/op delta
RemoteSegment/Cockroach/multiple_segments-8 0.00B 0.00B ~ (all equal)
name old allocs/op new allocs/op delta
RemoteSegment/Cockroach/multiple_segments-8 0.00 0.00 ~ (all equal)
Change-Id: Ib7fc87e568a4d3a9af27b5e3b644ea68ab6db7aa
Additional elements added:
* monkit metrics for observer methods like Start/Fork/Join/Finish, to be
able to check how much time those methods take
* a few more logs, e.g. entries with the processed range
* a segmentsProcessed metric to be able to check loop progress
Change-Id: I65dd51f7f5c4bdbb4014fbf04e5b6b10bdb035ec
We have an issue where an object can appear on two different listing
pages. It's because the protobuf listing cursor doesn't include the
version, and internally we can now have versions higher than 1. On the
satellite side, version 1 was always used as the default cursor version.
As a workaround for the existing libuplink implementation, we will
always use the maximum version for the listing cursor on the satellite
side. Fixing the protobuf and libuplink implementation will happen
later.
https://github.com/storj/storj/issues/5570
Change-Id: Ibd27b174556c9d8b8bd60fab8cff7862fd11e994
The peer for generating bloom filters will be able to use the ranged
loop. In addition, some cleanup was made:
* removed unused parts of the GC BF peer (identity, version control)
* added a missing Close method for the ranged loop service
* added some additional tests
https://github.com/storj/storj/issues/5545
Change-Id: I9a3d85f5fffd2ebc7f2bf7ed024220117ab2be29
TestLoopContinuesAfterObserverError was failing because system timer
granularity measured the duration as 0.
TestDialer_DialTimeout was failing because the connection failure came
with a delay and wasn't being handled.
Change-Id: I4638c86f5d021a86c3d3529fab13cf3608f35c40
Our DB support in storj/private was updated to enable basic context
support for executing SQL queries. This change requires some small
adjustments as not all parts were working correctly.
storj/private commit with change:
4bc77107b7acfcc2f7ad65796d5dd3d7c64801e4
Change-Id: I64d7ed92788ea0920d12cecd1aa0e414720e9b9c
This is an automated check around metabase tests. It detects queries
which performed a full table scan during test execution.
After merging this and checking that it is not problematic in any way,
we will also enable it for testplanet tests.
One query was adjusted to avoid a full table scan and pass the new
check.
https://github.com/storj/storj/issues/5471
Change-Id: I2d176f0c25deda801e8a731b75954f83d18cc1ce
Implemented observer and partial; created new structures to keep mon
metrics the same way as in the segment loop.
Change-Id: I209c126096c84b94d4717332e56238266f6cd004
Add node tally ranged loop observer and partial.
Add node tally ranged observer to ranged loop peer.
Add config flag to select which loop to use for node tally.
Update satellite core to use segment/ranged loop based on a flag.
Duplicate existing node tally test but using ranged loop.
Change-Id: I6786f1a16933463fab5f79601bf438203a7a5f9e
Small cleanups of the observer stats init code:
1. Use sync.Once for race free addition to the monitoring chain
(purely defensive)
2. Set the observer durations before adding to the monitoring chain on
first use.
3. observerDurations slice does not need to be initialized to non-nil
Change-Id: I9ae8ec96debc7d52c4ee5d22762a89f21bb2e38c
When an observer errors, we still want to finish the other observers.
This change stores the error and continues the loop, skipping the
observer which errored out and setting its duration metric to -1.
When the error occurs in the process stage, it does continue the other
ranges of the same observer; it removes the observer entirely after the
process stage. Improving this further would make it more complex due to
race conditions.
Closes https://github.com/storj/storj/issues/5389
Change-Id: I528432c491d4340817d6950f1200ee2b9e703309
Add option AsOfSystemTime to segment provider to make it equivalent to
the old segment loop.
There's no comment on what it does because it's pretty complex and
makes no sense, but we can improve it later.
closes https://github.com/storj/storj/issues/5434
Change-Id: I8f09b03803e681e2fd41008c5dba67804b0f37a1
Add an observer to monitor ranged segment loop progress.
Tested by running the segment loop in storj-up and navigating to
http://<container>:11111/mon/stats and there is the entry:
rangedloop-live,scope=storj.io/storj/satellite/metabase/rangedloop numSegments=364523630000.000000
part of https://github.com/storj/storj/issues/5223
Change-Id: If3d2774d2f17f51eac86f47c6dda1fb8ad696dfe
Support interruption of the ranged segment loop through context.
Part of https://github.com/storj/storj/issues/5223
Change-Id: Iae0260e250f8ea33affed95c6592a1f42df384eb
Added rangedloop in storj-sim for each satellite, to verify it works for
the metrics observer; removed identity from the rangedloop peer as we
never use it; added logs on service run; added a loop to the service
instead of an endless for loop; moved the interval value to config.
Closes: https://github.com/storj/storj/issues/5414
Change-Id: Ibc3b06071b68feda4a35b45da2bbe36e22a02fc8
Wire up duration measurement of observers with monkit.
Tested by attaching a SleepObserver, starting the rangedloop in storj-up
and navigating to http://<container>:11111/mon/stats. It reports the
following statistic:
completed-observer-duration,observer=*rangedlooptest.SleepObserver,scope=storj.io/storj/satellite/metabase/rangedloop duration=10.000117
Change-Id: Ief131d34001dd5d3ba1d7be6f161986e1f66440d
Before we introduced object versions internally, the move operation
always failed when an object existed under the target location. But back
then we always had only the single version 1. With versions different
from 1 we need to check all existing objects under the target location.
To be backward compatible with our API, the new logic looks like this
(see the sketch below):
* if there is no object under the target location, use the source object
version as the target version
* if there are only pending objects, find the first free (highest)
version which can be used to move the object there
* if there is a committed object under the target location, reject the
move operation
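A minimal sketch of these decision rules with simplified placeholder
types (not the real metabase API):

    package main

    import "errors"

    type ObjectStatus int

    const (
        Pending ObjectStatus = iota
        Committed
    )

    type existingObject struct {
        Version int64
        Status  ObjectStatus
    }

    // targetVersion picks the version to move the source object to, or
    // rejects the move if a committed object already occupies the location.
    func targetVersion(sourceVersion int64, atTarget []existingObject) (int64, error) {
        if len(atTarget) == 0 {
            return sourceVersion, nil // no object under the target location
        }
        highest := int64(0)
        for _, obj := range atTarget {
            if obj.Status == Committed {
                return 0, errors.New("move rejected: committed object under target location")
            }
            if obj.Version > highest {
                highest = obj.Version
            }
        }
        return highest + 1, nil // only pending objects: first free version
    }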
Fixes https://github.com/storj/storj/issues/5403
Change-Id: I717f3e7c42470b406287d6ec335f6f057d3fc3b5
Track duration of all segment loop observers. Factor out functions to
reduce size.
Still need to send the measurements out via monkit.
Part of https://github.com/storj/storj/issues/5223
Change-Id: Iae0260e250f8ea33affed95c6592a1f42df384eb
We missed proper handling of object copies for the method
GetStreamPieceCountByNodeID, which is used by metabase.GetObjectIPs.
That caused some IPs to be missing when querying the IPs of a copy and
broke things like the pieces map on linksharing.
Fixes https://github.com/storj/storj/issues/5406
Change-Id: I9574776f34880788c2dc9ff78a6ae20d44fe628f
Some observers assume that they will observe all the segments for a
given stream, and that they will observe those segments in a sequential
stream over one or more iterations.
This change updates the range provider from rangedlooptest to provide
these guarantees.
The change also removes the Mock suffix from the provider/splitter types
since the package name (rangedlooptest) implies that the type is a test
double.
Change-Id: I927c409807e305787abcde57427baac22f663eaa
We have a bug in our behavior during API pods deployment. At that time
it's possible to have the multiple versions flag set to true for only
some of the pods. Because of that, it's possible to start a new object
without removing the existing/older version on BeginObject (new
behavior) and also not remove that existing/older object on
CommitObject. That can result in two committed objects with different
versions, which is a state we want to avoid.
To fix it, we are removing the multiple versions flag from CommitObject
so it always tries to delete existing objects. This way, even if we
don't remove the existing object on BeginObject, it will always be
removed while committing.
Fixes https://github.com/storj/storj/issues/5373
Change-Id: Idc334bf5cc785d2f559af96e92c3de6d82ca58ba
Add an abstraction rangedloop.SegmentProvider to fetch chunks of
segments from the metainfo database in parallel.
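A minimal sketch of what such an abstraction could look like; the real
rangedloop interfaces likely differ in names and details:

    package rangedloop

    import "context"

    type Segment struct{ /* fields omitted */ }

    // SegmentProvider hands out ranges that can be iterated in parallel.
    type SegmentProvider interface {
        CreateRanges(nRanges int, batchSize int) ([]SegmentRange, error)
    }

    // SegmentRange fetches chunks of segments belonging to one range.
    type SegmentRange interface {
        Iterate(ctx context.Context, fn func([]Segment) error) error
    }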
Part of https://github.com/storj/storj/issues/5223
Change-Id: Ife26467ea0c3be550bde0b05464ef1db62dd4d2a
Minimal implementation of the ranged (=threaded) segment loop service,
to improve performance over the existing loop.
Has tests with an in-memory segment database and an example observer.
Does not yet have: database link, observer duration tracking, suspicious
processed ratio guard, rate limiting, minimum execution interval per
observer, etc.
Part of https://github.com/storj/storj/issues/5223
Change-Id: I08ffb392c3539e380f4e7b4f1afd56c4c394668d
To be able to verify segments in a list of buckets, this change:
- adds a method ListBucketsStreamIDs to list all stream ids belonging to
a list of buckets, provided using a ListVerifyBucketList on which
Add(projectID, bucketName) is defined
- allows specifying a list of streamIDs to check in ListVerifySegments
Fixes https://github.com/storj/storj-private/issues/101
Change-Id: I72a48a0873a3056ac54ad56c0e9242364b2ae918
We tested the new upload flow (with multiple versions), which fixes an
inconsistency while uploading objects, on QA/EUN1/SLC. Now we would like
to enable it for all satellites by default. Tests required small
adjustments.
Fixes https://github.com/storj/storj/issues/5283
Change-Id: I0d53c041abebc0d182ba5a88bb1dac906c29caf0
We added an alternative way to calculate bucket tallies for accounting;
now it's tested and we will enable it by default.
CollectBucketTallies was extended to support overriding the current
time, to be able to test the handling of expired objects.
Change-Id: I738b99a33fd2e086245f92d874c1cbb806e834c0
Multipart upload requires the same UploadID to be returned from
different requests (BeginUpload, ListUploads). Otherwise the client
won't be able to find existing uploads. The main issue was that the data
needed to construct the UploadID is in System metadata, which can be
filtered out by a listing option.
This change fixes how we set Status for listed objects and forces
reading System metadata if we are reading pending objects.
Fixes https://github.com/storj/storj/issues/5298
Change-Id: I8dd5fbab4421a64dc3ed95556408ead4c829f276
Pair UUIDs to create ranges. This will be used to parallelize the
segment loop.
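A minimal sketch of pairing boundaries into [start, end) ranges; the
real implementation splits the full UUID keyspace, here simplified to
varying only the top byte:

    package main

    type UUID [16]byte

    type Range struct{ Start, End UUID }

    // createRanges splits the UUID space into n ranges by pairing
    // consecutive boundary values.
    func createRanges(n int) []Range {
        boundaries := make([]UUID, n+1)
        for i := 1; i < n; i++ {
            boundaries[i][0] = byte(i * 256 / n) // only the top byte varies here
        }
        for j := range boundaries[n] {
            boundaries[n][j] = 0xff // inclusive upper bound sentinel
        }
        ranges := make([]Range, n)
        for i := range ranges {
            ranges[i] = Range{Start: boundaries[i], End: boundaries[i+1]}
        }
        return ranges
    }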
Part of https://github.com/storj/storj/issues/5223
Change-Id: I73e2fb8a2cd379b840864449b6251b48feeb7b66
ReverifyPiece() is not currently hooked up to anything, but is planned
to take the place of audit.(*Verifier).Reverify().
ReverifyPiece() works by downloading one piece in its entirety, rather
than pulling an entire stripe across many nodes.
Change-Id: Ie2c680f4d3c3b65273a72466a3f9f55c115b0311
One of two parts to stop using the objects loop for bucket accounting;
this method collects bucket tallies from a list of bucket locations.
Part 1 of: https://github.com/storj/team-metainfo/issues/125
Change-Id: Id2d492582453e28463cddf1245622fb7f191050c