There are several different object types in a versioned table,
which will determine the exact behaviour.
The object type states are:
* Pending - the object is yet to be committed and is being uploaded.
* Committed - the object has been finished and can be read.
* DeleteMarker - indicates that the object should be treated as not
present when it is at the top of the version stack.
There are also versioning states:
* Unversioned - only one unversioned object is allowed per object key.
* Versioned - multiple objects with the same key are allowed.
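As a rough sketch (the identifiers mirror the description above and are
illustrative, not necessarily the exact metabase names), these states
could be modeled as:

    // ObjectStatus describes the lifecycle state of a single object row.
    type ObjectStatus byte

    const (
        // Pending means the object is still being uploaded and cannot be read yet.
        Pending ObjectStatus = iota + 1
        // Committed means the upload has finished and the object can be read.
        Committed
        // DeleteMarker means the object should be treated as not present
        // when it is at the top of the version stack.
        DeleteMarker
    )

    // Versioning describes how many objects may exist per object key.
    type Versioning byte

    const (
        // Unversioned allows only one unversioned object per object key.
        Unversioned Versioning = iota + 1
        // Versioned allows multiple objects with the same object key.
        Versioned
    )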
Change-Id: I65dfa781e8da253a4e5d572b799d53c351196eee
This fixes an inconsistency in the error returned by the copy and move
endpoints so that it matches other endpoints. validateAuth() already
wraps the RPC status around the error, so these endpoints shouldn't be
wrapping it again.
This also ensures that rate limit errors for FinishCopyObject and
FinishMoveObject are correctly returned as rpcstatus.ResourceExhausted
so uplink can correctly map these to uplink.ErrTooManyRequests.
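For illustration, a minimal sketch of how such a failure could be
surfaced (the rate-check helper is hypothetical; rpcstatus is
storj.io/common/rpc/rpcstatus):

    // Inside the FinishCopyObject/FinishMoveObject handlers (sketch):
    if err := endpoint.checkRate(ctx, keyInfo.ProjectID); err != nil { // hypothetical helper
        // ResourceExhausted is the code uplink maps to uplink.ErrTooManyRequests.
        return nil, rpcstatus.Error(rpcstatus.ResourceExhausted, err.Error())
    }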
Change-Id: I6bf6185b456d6774b99d56cf3d7d8f8aa2afa0e8
Previously it wasn't quite clear what was a stub version and what was an
actual version. Use a PendingVersion constant to make this distinction
clear. Also use PendingVersion = NextVersion = 0; that way it's clearer
that the version hasn't been determined yet. DefaultVersion = 1 might
imply that the object will get that version once committed, however that
entirely depends on whether use-pending-objects is used and whether
versioning is enabled.
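A sketch of the constants as described above (the actual definitions
live in the metabase package):

    // Version is the sequence number of an object with a given key.
    type Version int64

    const (
        // NextVersion means the version is not yet determined and will be
        // chosen when the object is created.
        NextVersion = Version(0)
        // PendingVersion marks a pending object whose final version has not
        // been decided yet.
        PendingVersion = Version(0)
        // DefaultVersion is the version a committed object may end up with,
        // depending on use-pending-objects and versioning.
        DefaultVersion = Version(1)
    )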
Change-Id: I21398141f97035c48c778f23b542266b834c44f1
The protobuf definition is ready to support getting a specific version of
an object, so we just need to wire the requested version into the
metainfo.GetObject endpoint.
https://github.com/storj/storj/issues/6221
Change-Id: If4568b82119a6c893788a0a86e598b05ff5951cf
If MaxObjectTTL is set in the API key, BeginObject will use it for the
object expiration time, unless an explicit ExpireAt is available in the
request.
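A minimal sketch of that precedence (field names assumed; uses the
standard time package):

    // objectExpiration picks the expiration for a new object: an explicit
    // ExpiresAt from the request wins, otherwise the API key's MaxObjectTTL.
    func objectExpiration(requestExpiresAt time.Time, maxObjectTTL *time.Duration, now time.Time) time.Time {
        if !requestExpiresAt.IsZero() {
            return requestExpiresAt
        }
        if maxObjectTTL != nil {
            return now.Add(*maxObjectTTL)
        }
        return time.Time{} // no expiration
    }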
Context: https://github.com/storj/storj/issues/6249
Change-Id: I2adf57d979a9c68eec3a787f3739d2f1dbec1f7e
This small feature gives us the ability to test the pending_objects table
without enabling it globally.
Change-Id: I802f45987ad329f94adfc0f02957c802b21d8251
The new method IteratePendingObjectsByKeyNew is used to provide results
for metainfo.ListPendingObjectStreams. This endpoint is used to list
pending objects with the same object key. In this case, to support both
tables (objects, pending_objects) we need to do one query per table and
merge the results, as sketched below.
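A rough sketch of the merge step (the entry type is simplified to what
the listing needs; for the same-key case entries are distinguished by
stream id):

    import "sort"

    type entry struct {
        StreamID string // simplified; in practice a UUID
        Version  int64
    }

    // mergeListing combines the per-table results, orders them
    // deterministically and truncates to the requested limit.
    func mergeListing(fromObjects, fromPendingObjects []entry, limit int) []entry {
        merged := append(append([]entry{}, fromObjects...), fromPendingObjects...)
        sort.Slice(merged, func(i, j int) bool {
            if merged[i].Version != merged[j].Version {
                return merged[i].Version < merged[j].Version
            }
            return merged[i].StreamID < merged[j].StreamID
        })
        if limit > 0 && len(merged) > limit {
            merged = merged[:limit]
        }
        return merged
    }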
Because the existing metainfo protobuf API is missing some fields needed
for a proper listing cursor, we are not able to make
ListPendingObjectStreams return more than a single page correctly. We
need to fix that separately.
While making this change it also turned out that the approach to merging
results from listing objects for the ListObjects method was wrong; this
change fixes that problem as well.
Handling both tables will be removed at some point and only
pending_objects will be used to look for results.
Part of https://github.com/storj/storj/issues/6047
Change-Id: I8a88a6f885ad529704e6c032f1d97926123c2909
Adjust the metainfo.ListObjects method to use IteratePendingObjects to
support the new pending_objects table. The new method will be used only
when we are listing pending objects.
Because the objects table can still contain pending objects until it is
fully migrated, results may exist in both tables, so we merge the listing
results. This also means that in some (rare?) cases we may return more
results than the specified listing limit. This situation is temporary.
Part of https://github.com/storj/storj/issues/6047
Change-Id: I06389145e5d916c532dfdbd3dcc9ef68ef70e515
We delete pending objects while aborting a multipart upload, using the
metainfo BeginDeleteObject request to do that. This change starts using
DeletePendingObjectNew to delete the entry from the pending_objects table
when the request indicates that the object is in this table.
Part of https://github.com/storj/storj/issues/6048
Change-Id: I4478a9c13c8e3db48dc5de3087ef03d1b4c47a5c
This change adjusts CommitObject to use the `pending_objects` table to
commit an object.
The satellite stream id is used to determine whether we need to use the
`pending_objects` or the `objects` table during commit.
The general goal is to support both tables until the `objects` table is
free from pending objects.
Part of https://github.com/storj/storj/issues/6046
Change-Id: I2ebe0cd6b446727c98c8e210d4d00504dd0dacb6
This change adjusts BeginObjectNextVersion to create the pending object
in the `pending_objects` or the `objects` table, depending on
configuration. This is the first change to move pending objects out of
the objects table.
The general goal is to support both tables until the `objects` table is
free from pending objects. Wherever needed, the code will support both
tables at once.
To be able to decide whether we need to use the `pending_objects` or the
`objects` table, we extend the satellite stream id to keep that
information for later use, as sketched below.
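A sketch of what the stream id carries (names assumed; the real
satellite stream id is a protobuf message):

    // The table choice made at BeginObject time travels inside the satellite
    // stream id, which the client echoes back on commit/abort/listing.
    type satStreamID struct {
        Bucket                 []byte
        EncryptedObjectKey     []byte
        UsePendingObjectsTable bool
    }

    // tableFor routes a follow-up request to the right table.
    func tableFor(id satStreamID) string {
        if id.UsePendingObjectsTable {
            return "pending_objects"
        }
        return "objects"
    }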
BeginObjectExactVersion will not be adjusted because at the moment it's
used only in tests.
Part of https://github.com/storj/storj/issues/6046
Change-Id: Ibf21965f63cca5e1775469994a29f1fd1261af4e
We are doing full bucket name validation for many requests, but we
should do this only while creating a bucket. Other requests will be
covered only by basic name length validation. Less strict validation for
other requests keeps buckets usable in case of invalid bucket names in
the DB (we have such cases from the past). See the sketch below.
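A simplified sketch of the split (function names and the character rule
are illustrative):

    import (
        "errors"
        "regexp"
    )

    // Basic check applied to every request that carries a bucket name.
    func validateBucketNameLength(bucket []byte) error {
        if len(bucket) < 3 || len(bucket) > 63 {
            return errors.New("bucket name must be at least 3 and no more than 63 characters long")
        }
        return nil
    }

    var bucketNameChars = regexp.MustCompile(`^[a-z0-9][a-z0-9.-]*[a-z0-9]$`)

    // Full check applied only when creating a bucket.
    func validateBucketNameOnCreate(bucket []byte) error {
        if err := validateBucketNameLength(bucket); err != nil {
            return err
        }
        if !bucketNameChars.MatchString(string(bucket)) {
            return errors.New("bucket name contains invalid characters")
        }
        return nil
    }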
https://github.com/storj/storj/issues/6044
Change-Id: I3a41050e3637787f788705ef15b5dc4df4d01fc6
Before this change, if a user creates a bucket with a user_agent
attributed to it, then deletes and recreates it, the row in
bucket_metainfos will not have the user_agent. This is because we skip
setting the field in bucket_metainfos if the bucket already exists in
value_attributions.
This can be problematic, as we return the bucket's user agent during the
ListBuckets operation, and the client may be expecting this value to be
populated.
This change ensures the bucket table user_agent is set when (re)creating a bucket. To avoid decreasing BeginObject performance, which also
updates attribution, a flag has been added to determine whether to
make sure the buckets table is updated: `forceBucketUpdate`.
Change-Id: Iada2f233b327b292ad9f98c73ea76a1b0113c926
..instead of using segment_copies and ancestor_stream_id, etc.
This bypasses reference counting entirely, depending on our mark+sweep+
bloomfilter garbage collection strategy to get rid of pieces once they
are no longer part of a segment.
This is only safe to do after we have stopped passing delete requests on
to storage nodes.
Refs: https://github.com/storj/storj/issues/5889
Change-Id: I37bdcffaa752f84fd85045235d6875b3526b5ecc
We decided that we will stop sending explicit delete requests to nodes
and will clean up deleted data with GC instead.
https://github.com/storj/storj/issues/5888
Change-Id: I65a308cca6fb17e97e3ba85eb3212584c96a32cd
For now we will use bucket placement to determine if we should exclude
some node IPs from metainfo.GetObjectIPs results. The bucket placement is
retrieved directly from the DB in parallel with the metabase
GetStreamPieceCountByNodeID request, as sketched below.
GetObjectIPs is not heavily used, so an additional DB request shouldn't
be a problem for now.
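A sketch of the parallel lookup (the bucket and metabase method names
come from the description above, but their exact signatures are
assumptions; errgroup is golang.org/x/sync/errgroup):

    var (
        placement   storj.PlacementConstraint
        pieceCounts map[storj.NodeID]int64
    )
    group, groupCtx := errgroup.WithContext(ctx)
    group.Go(func() (err error) {
        placement, err = endpoint.buckets.GetBucketPlacement(groupCtx, req.Bucket, keyInfo.ProjectID) // assumed helper
        return err
    })
    group.Go(func() (err error) {
        pieceCounts, err = endpoint.metabase.GetStreamPieceCountByNodeID(groupCtx,
            metabase.GetStreamPieceCountByNodeID{StreamID: streamID})
        return err
    })
    if err := group.Wait(); err != nil {
        return nil, rpcstatus.Error(rpcstatus.Internal, err.Error())
    }
    // placement is then used to filter node IPs out of the response.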
https://github.com/storj/storj/issues/5950
Change-Id: Idf58b1cfbcd1afff5f23868ba2f71ce239f42439
We would like to have the ability to limit burst uploads to a single
object (the same location). With this change we are limiting such uploads
to one per second, as sketched below.
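A minimal sketch of such a limiter (cache size and eviction details
omitted; uses golang.org/x/time/rate):

    import (
        "sync"

        "golang.org/x/time/rate"
    )

    // uploadLimiter allows at most one upload per second per object location.
    type uploadLimiter struct {
        mu       sync.Mutex
        limiters map[string]*rate.Limiter
    }

    func newUploadLimiter() *uploadLimiter {
        return &uploadLimiter{limiters: map[string]*rate.Limiter{}}
    }

    // Allow reports whether an upload to the given (project, bucket, key)
    // location may start now.
    func (u *uploadLimiter) Allow(location string) bool {
        u.mu.Lock()
        defer u.mu.Unlock()
        limiter, ok := u.limiters[location]
        if !ok {
            limiter = rate.NewLimiter(rate.Limit(1), 1) // 1 event per second, burst of 1
            u.limiters[location] = limiter
        }
        return limiter.Allow()
    }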
Change-Id: Ib9351df1017cbc07d7fc2f846c2dbdbfcd3a360c
This combines the ListStreamPositions and GetSegmentByPosition
calls with a ListSegments call that now knows how to return
only the segments within a Range, just like ListStreamPositions.
It would theoretically be possible to also include the
GetObjectLastCommitted call by having it do one of three
queries based on the incoming request range, but that would
mean duplicating the data for the object in every single
row that is returned for each segment in the range.
One gross thing that ListSegments has to do now is update the
first segment returned with the information from any ancestor
segment because GetSegmentByPosition used to do that. It only
updates the first segment so that it doesn't do O(N) database
queries. It seems difficult to have it do a single query to
update all of the segments at once. I'm not certain this change
should be merged on this basis alone.
This change has made me think a couple of things should happen:
1. Server side copy with ancestor segments strikes again
making the code less clear and potentially more buggy
or inefficient for a rare case (empirically <0.1%)
2. The download code requests individual segments from
the satellite lazily as part of its download which
requires the satellite telling it the locations of
all of the segments which requires the satellite
querying the locations of all of the segments. Instead
the download RPC could return the orders for all of
the segments for a range and the download code could
issue N download calls rather than 1 download call and
N get segment calls. I believe both sides of the code
paths would be simpler and more efficient this way.
3. In looking at the timing information for downloads when
testing this, we really need to focus on getting the
auth key and bandwidth limit verification times down.
Here's the timing I saw:
- 42ms: validate auth
- 52ms: bandwidth usage checking
- 14ms: get object info
- 26ms: get segment position info
- 26ms: getting the first segment full info
- 20ms: unaccounted for by spans
- 6ms: creating the orders
This change will remove 26ms, but there's a good 90ms
in just validation. With improved semantics hitting the
database only once and improved validation, a download
rpc taking ~30ms seems doable compared to our current
~200ms.
Change-Id: I4109dba082eaedb79e634c61dbf86efa93ab1222
This flag was in general a one-time switch to enable versions internally.
Now we can remove it, as it makes the code more complex.
Change-Id: I740b6e8fae80d5fac51d9425793b02678357490e
While working on fixing listing for committed objects, we didn't fix the
same case for pending objects. For the case where we have many pending
objects under different locations, we need to set the cursor version to
the highest value to avoid duplicates.
For the case where we have many pending objects under the same location,
we will need a separate fix.
https://github.com/storj/storj/issues/5570
Change-Id: Id5c8eb728868e8e1177fdbcf65a493142be4eaf0
We have an issue where an object can appear in two different listing
pages. That's because the protobuf listing cursor doesn't include the
version, and internally we can now have versions higher than 1. On the
satellite side, version 1 was always used as the default cursor version.
As a workaround for the existing libuplink implementation, we will always
use the maximum version for the listing cursor on the satellite side, as
sketched below.
Fixing the protobuf and libuplink implementations will happen later.
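A sketch of the workaround (the real cursor also carries the object key;
only the version substitution is shown):

    import "math"

    // Old clients cannot send a cursor version, so the satellite substitutes
    // the maximum possible version: iteration then resumes after *all*
    // versions of the cursor object, and no object shows up on two pages.
    func cursorVersion(cursorKey string) int64 {
        if cursorKey == "" {
            return 0 // empty cursor: start from the beginning
        }
        return math.MaxInt64
    }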
https://github.com/storj/storj/issues/5570
Change-Id: Ibd27b174556c9d8b8bd60fab8cff7862fd11e994
This modification introduces support for the new "desired node" field of
download segment/object.
It can be used to request more nodes than the suggested minimum, to
achieve better performance in exchange for using more bandwidth (more
parallel downloads).
Change-Id: Ia167d6979e6d70a597c85070a4ccd1c3a573e406
* storj/common
* storj/private
The latest common version requires a small refactoring of names and
types used by the metainfo code.
Change-Id: I224fe93b4751c996ba6e846be0e5677252cf830f
We have code that is used only by old uplinks and can fail at some
point, but we don't interrupt anything and only log a message about the
failure. Until now it was logged as an error, but it's nothing critical,
so we can reduce it to a warning.
In addition, the log entry was extended with more information about the
client that is using this backward-compatibility code.
Change-Id: Ie21c673ee59eb10de065cc371132f8f9505e2220
Multipart upload requires the same UploadID to be returned from
different requests (BeginUpload, ListUploads); otherwise the client won't
be able to find existing uploads. The main issue was that the data needed
to construct the UploadID is in the System metadata, which can be
filtered out by a listing option.
This change fixes how we set the Status for listed objects and forces
reading the System metadata if we are listing pending objects, as
sketched below.
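A sketch of the override (parameter names assumed):

    // includeSystemMetadata decides whether system metadata must be read:
    // the UploadID is derived from it, so listings of pending objects always
    // need it, regardless of what the client's listing options requested.
    func includeSystemMetadata(requested, listingPending bool) bool {
        if listingPending {
            return true
        }
        return requested
    }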
Fixes https://github.com/storj/storj/issues/5298
Change-Id: I8dd5fbab4421a64dc3ed95556408ead4c829f276
GetObject and GetObjectIPs are invoked by the Linksharing service to
display the shared object and its map. These two endpoints currently
require read permission.
There is a use case where an object can be shared with an access grant
that has only list permission. In such a case, the expectation is that
the linksharing service would still display the metadata of the shared
object (name, size, map), but the content would still be inaccessible.
See https://github.com/storj/gateway-mt/issues/209 for details.
This change allows GetObject and GetObjectIPs to require either read or
list permission to support the described use case, as sketched below.
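For illustration, a hedged sketch of the either/or check (the
validateAuthAny helper is an assumption; macaroon is
storj.io/common/macaroon):

    now := time.Now()
    keyInfo, err := endpoint.validateAuthAny(ctx, req.Header,
        macaroon.Action{
            Op:            macaroon.ActionRead,
            Bucket:        req.Bucket,
            EncryptedPath: req.EncryptedObjectKey,
            Time:          now,
        },
        macaroon.Action{
            Op:            macaroon.ActionList,
            Bucket:        req.Bucket,
            EncryptedPath: req.EncryptedObjectKey,
            Time:          now,
        },
    )
    if err != nil {
        return nil, err
    }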
Change-Id: I3477edc7bf8990e9848482890da047094c875d09
BeginCopyObject checks twice for write permission in the destination
bucket. One check should be enough.
Change-Id: I3d5935d34f69cd48eaaf00d0117683edfdcefc05
The threshold of piece deletions from the nodes during CommitObject
when overriding an existing object seemed to cause a race condition in
tests.
This change makes the threshold configurable so we can set it to maximum
so CommitObject waits until all pieces are removed from the nodes in the
test.
Change-Id: Idf6b52e71d0082a1cd87ad99a2edded6892d02a8
We have a new flow where the existing object is deleted not on begin
object but on commit object. Deletion on commit object was still missing
the deletion from storage nodes. This change adds that part to the code.
Fixes https://github.com/storj/storj/issues/5222
Change-Id: Ibfd34665b2a055ec6c0d6e260c1a57e8a4c62b0e
With this change we are switching the method used to begin an object
from BeginObjectExactVersion to BeginObjectNextVersion. The main
implication is that from now on it will be possible to have objects with
a version different than 1. A new object will always get the first
available version (see the sketch below).
The main reason to do this is to avoid deleting the existing object while
re-uploading an object. Now we can create multiple pending objects, but
only the last committed one will be available to the user. Any previously
committed object will be deleted. Because of that, we moved the logic to
delete the existing object from the BeginObject to the CommitObject
request.
The new logic is behind a feature flag to be able to test it well before
enabling it in production.
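A simplified sketch of the version selection (columns trimmed, schema
assumed; the aggregate SELECT always yields exactly one row, so the
insert gets max(version)+1 or 1 when no row exists yet):

    // A unique constraint on (project_id, bucket_name, object_key, version)
    // is assumed to guard against concurrent inserts picking the same version.
    _, err := db.ExecContext(ctx, `
        INSERT INTO objects (project_id, bucket_name, object_key, version, stream_id, status)
        SELECT $1, $2, $3, coalesce(max(version) + 1, 1), $4, $5
        FROM objects
        WHERE project_id = $1 AND bucket_name = $2 AND object_key = $3
    `, projectID, bucketName, objectKey, streamID, statusPending)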
Fixes https://github.com/storj/storj/issues/4871
Change-Id: I2dd9c7364fd93796a05ef607bda9c39a741e6a89
We will introduce new logic for creating new objects (BeginObject).
Instead of using a single version internally (1), we will select the
first available version during object creation. Because we need to be
sure that everything is wired up correctly, we need a feature flag to
control whether the new feature is enabled.
Change-Id: If0f8496397130811f43bf9db9fdcc2b30cd2e4ca
We are preparing to use object versions internally, and to do that we
need to prepare different parts of the system to handle object versions
different than '1'. This change adjusts the code responsible for
server-side move and copy.
What was done:
* begin methods for move and copy now use GetObjectLastCommitted to find
the object
* results from the begin move and copy operations now contain the version
to be able to map the object correctly with the finish operation
* begin methods put the version into the satellite stream id, and finish
methods use this version as a parameter instead of a hardcoded value
Fixes https://github.com/storj/storj/issues/4867
Change-Id: I1380911279c21e10a3fff0342793efd2e73eafad
Metadata validation for the CommitObject request was placed in the wrong
place. There is a case (old uplink) where the encrypted key is bundled
inside the encrypted metadata bytes, and we need to extract it before we
can validate it. This change moves metadata validation to a place where
we are sure we have the encrypted metadata and the encrypted metadata
encrypted key ready to be checked.
"Run Versions Test" is covering this case and it was failing without
this change.
Change-Id: Ib709ad901fbb3fa4865a393195b7b3f4c0d87e7a
For the object Version we are using different types in different places.
The satellite StreamID uses int32, but the metabase accepts int64. The
metabase type is the correct one, and we should align the other places
with it.
As a small addition, this change also passes the version correctly
between requests instead of using a hardcoded value.
Change-Id: I63634d73c0a48c009e4db5f203ff18b7f3218b02
Updated the metabase.UpdateObjectMetadata method to always update the
metadata of the last committed object.
Closes https://github.com/storj/storj/issues/4870
Change-Id: I060683e31efcaf3e2531fea143cf0567e5ff5f73
Restored GetObjectLatestVersion and renamed it to GetObjectLastCommitted.
Added test cases to cover server-side copy.
Closes https://github.com/storj/storj/issues/4866
Change-Id: I343b339a60152b8fb92fda97baf80bd8fe60d631