To be compatible with S3 we need to return 'Method Not Allowed' when
a delete marker is requested, and to do this libuplink needs to know
about delete markers. A delete marker will be returned only if the
object version is specified.
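A minimal sketch of the intended check; objectStatus, statusDeleteMarker,
and errMethodNotAllowed are illustrative names rather than the actual
satellite identifiers:

```go
import "errors"

type objectStatus int

// Illustrative values only, not the real metabase constants.
const (
	statusCommitted    objectStatus = 3
	statusDeleteMarker objectStatus = 5
)

var errMethodNotAllowed = errors.New("method not allowed")

// rejectDeleteMarker mirrors S3: requesting a delete marker by explicit
// version fails with Method Not Allowed rather than Not Found.
func rejectDeleteMarker(requestedVersion []byte, status objectStatus) error {
	if len(requestedVersion) > 0 && status == statusDeleteMarker {
		return errMethodNotAllowed
	}
	return nil
}
```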
Change-Id: I288da5566c74e1b4951f7cd249dbf34622b92e91
We decided that we won't use a separate table for handling pending
objects, so we need to remove the related code.
After this change we still need to remove related code from metabase.
https://github.com/storj/storj/issues/6421
Change-Id: Idaaa8ae08b80bab7114e316849994be171daf948
Additional feature flag (only for testing, sketched below) to set
versioning enabled for all newly created buckets. We need it until we
have support for enabling/disabling versioning per bucket in the
metainfo API.
In addition this change also fixes two small issues which made testing
this flag impossible:
* metabase Status list was not aligned with the protobuf definition
* object returned by the metainfo API didn't have the correct status set
in some cases
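A minimal sketch of what the flag amounts to, assuming an illustrative
field name and config tags:

```go
// Testing-only switch: mark every newly created bucket as versioned.
// The field name and tags below are illustrative, not the final config.
type Config struct {
	UseBucketLevelObjectVersioning bool `help:"enable versioning for all new buckets (testing only)" default:"false"`
}
```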
Change-Id: I0d63dff6a08efa588c8999af1e17db476943e067
The protobuf definition is ready to support downloading a specific
version of an object, so we just need to wire the requested version
into the metainfo DownloadObject endpoint.
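A hedged sketch of the wiring; the store interface, Object type, and
method shapes are simplified placeholders for the metabase calls:

```go
import "context"

// Object is a simplified placeholder for the metabase object record.
type Object struct{ Version []byte }

type objectStore interface {
	GetObjectLastCommitted(ctx context.Context, bucket, key []byte) (Object, error)
	GetObjectExactVersion(ctx context.Context, bucket, key, version []byte) (Object, error)
}

// lookupForDownload picks the lookup based on whether the request
// carries an explicit object version.
func lookupForDownload(ctx context.Context, db objectStore, bucket, key, version []byte) (Object, error) {
	if len(version) == 0 {
		return db.GetObjectLastCommitted(ctx, bucket, key)
	}
	return db.GetObjectExactVersion(ctx, bucket, key, version)
}
```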
https://github.com/storj/storj/issues/6221
Change-Id: I3ddc173beb6a6cf30d782dd65c6aa5f88f2cbd44
We need to update the metainfo GetObject endpoint to use the
ObjectVersion ([]byte) field instead of the old Version field (int32).
Change-Id: I61663ec8d9f5c731f91346a285048477fb493730
We never extended the metainfo protocol to return detailed info about
the committed object, and this change does it now. The main motivation
to do this now is the need to provide the object version after upload.
Change-Id: Ib59bdfd9485e4a0091ac02458cc63427cb7159de
The protobuf definition is ready to support deleting a specific version
of an object, so we just need to wire the requested version into the
metainfo BeginDeleteObject endpoint.
Dependencies bumped to get the latest metainfo protobuf definition.
https://github.com/storj/storj/issues/6221
Change-Id: Ifc3cc0b49d9acdf4f7e57e7184b0f2ae045f9113
There are several different object types in a versioned table,
which will determine the exact behaviour.
The object type states are:
* Pending - the object is yet to be committed and is being uploaded.
* Committed - the object has been finished and can be read.
* DeleteMarker - indicates that the object should be treated as not
present when it is at the top of the version stack.
There are also versioning states (both sets are sketched as constants
below):
* Unversioned - only one unversioned object is allowed per object key.
* Versioned - multiple objects with the same key are allowed.
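Expressed as constants, the two sets look roughly like this (numeric
values are illustrative, not the exact metabase constants):

```go
// ObjectStatus mirrors the object type states above.
type ObjectStatus byte

const (
	Pending      ObjectStatus = 1 // still being uploaded, not yet committed
	Committed    ObjectStatus = 3 // finished and readable
	DeleteMarker ObjectStatus = 5 // hides the key when at the top of the version stack
)

// VersioningState mirrors the versioning states above.
type VersioningState byte

const (
	Unversioned VersioningState = iota // at most one unversioned object per key
	Versioned                          // multiple objects may share the same key
)
```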
Change-Id: I65dfa781e8da253a4e5d572b799d53c351196eee
This fixes an inconsistency in the error returned by the copy and move
endpoints to match other endpoints. validateAuth() is already
wrapping the RPC status around the error, so this shouldn't be
doing it again.
This also ensures that rate limit errors for FinishCopyObject and
FinishMoveObject are correctly returned as rpcstatus.ResourceExhausted
so uplink can correctly map these to uplink.ErrTooManyRequests.
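A hedged sketch of the intended shape; the limiter and surrounding
endpoint code are simplified, and only the rpcstatus usage reflects this
change:

```go
import (
	"context"

	"storj.io/common/rpc/rpcstatus"
)

// checkLimits returns rate-limit failures as ResourceExhausted so that
// uplink can map them to uplink.ErrTooManyRequests. validateAuth already
// returns an rpcstatus error, so its result must not be wrapped again.
func checkLimits(ctx context.Context, limiter interface{ Allow() bool }) error {
	if !limiter.Allow() {
		return rpcstatus.Error(rpcstatus.ResourceExhausted, "too many requests")
	}
	return nil
}
```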
Change-Id: I6bf6185b456d6774b99d56cf3d7d8f8aa2afa0e8
Previously it wasn't quite clear what was a stub version and what an
actual version. Use a PendingVersion constant to make this distinction
clear.
Also use PendingVersion = NextVersion = 0; that way it's clearer that
the version hasn't been determined yet. DefaultVersion = 1 might imply
that the object will get that version once committed, however that will
entirely depend on whether use-pending-objects is used or versioning is
enabled.
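As a sketch, with a simplified Version type:

```go
type Version int64

const (
	// NextVersion means the version will be chosen automatically when
	// the row is created in the objects table.
	NextVersion = Version(0)

	// PendingVersion is used for pending objects in pending_objects;
	// the final version is not determined until commit.
	PendingVersion = NextVersion

	// DefaultVersion is the version a committed object may end up with,
	// but the actual value depends on the commit path used.
	DefaultVersion = Version(1)
)
```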
Change-Id: I21398141f97035c48c778f23b542266b834c44f1
The protobuf definition is ready to support getting a specific version
of an object, so we just need to wire the requested version into the
metainfo.GetObject endpoint.
https://github.com/storj/storj/issues/6221
Change-Id: If4568b82119a6c893788a0a86e598b05ff5951cf
If MaxObjectTTL is set in the API key, BeginObject will use it for the
object expiration time, unless an explicit ExpireAt is available in the
request.
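A hedged sketch of that decision, with simplified parameter names:

```go
import "time"

// objectExpiration picks the object expiration time: an explicit
// ExpiresAt from the request wins, otherwise the API key's MaxObjectTTL
// is applied, otherwise the object does not expire.
func objectExpiration(now, requestExpiresAt time.Time, maxObjectTTL *time.Duration) time.Time {
	if !requestExpiresAt.IsZero() {
		return requestExpiresAt
	}
	if maxObjectTTL != nil {
		return now.Add(*maxObjectTTL)
	}
	return time.Time{}
}
```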
Context: https://github.com/storj/storj/issues/6249
Change-Id: I2adf57d979a9c68eec3a787f3739d2f1dbec1f7e
This small feature will give us the ability to test the pending_objects
table without enabling it globally.
Change-Id: I802f45987ad329f94adfc0f02957c802b21d8251
The new method IteratePendingObjectsByKeyNew is used to provide results
for metainfo.ListPendingObjectStreams. This endpoint is used to list
pending objects with the same object key. In this case, to support
both tables (objects, pending_objects), we need to do one query per
table and merge the results.
Because the existing metainfo protobuf API is missing some fields needed
for a proper listing cursor, we are not able to make
ListPendingObjectStreams return more than a single page correctly. We
need to fix that separately.
While making this change it also turned out that the approach used to
merge listing results for the ListObjects method was wrong, and this
change fixes that problem as well.
Handling both tables will be removed at some point and only
pending_objects will be used to look for results.
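The merge itself is essentially a two-way merge of two key-ordered
result sets, one per table; a simplified sketch (the real code streams
results instead of materializing slices):

```go
// Entry is a simplified listing entry.
type Entry struct{ Key string }

// mergeListings merges two key-ordered result sets (one per table) into
// a single key-ordered listing.
func mergeListings(a, b []Entry) []Entry {
	merged := make([]Entry, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		if a[i].Key <= b[j].Key {
			merged = append(merged, a[i])
			i++
		} else {
			merged = append(merged, b[j])
			j++
		}
	}
	merged = append(merged, a[i:]...)
	return append(merged, b[j:]...)
}
```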
Part of https://github.com/storj/storj/issues/6047
Change-Id: I8a88a6f885ad529704e6c032f1d97926123c2909
Adjust the metainfo.ListObjects method to use IteratePendingObjects to
support the new pending_objects table. The new method will be used only
when we are listing pending objects.
Because until the objects table is free of pending objects we can have
results in both tables, we are merging the listing results. This also
means that in some (rare?) cases we may return more results than the
specified listing limit. This situation is temporary.
Part of https://github.com/storj/storj/issues/6047
Change-Id: I06389145e5d916c532dfdbd3dcc9ef68ef70e515
We are deleting pending objects while aborting a multipart upload,
using metainfo BeginDeleteObject to do that. This change starts using
DeletePendingObjectNew to delete the entry from the pending_objects
table when the request indicates that the object is in this table.
Part of https://github.com/storj/storj/issues/6048
Change-Id: I4478a9c13c8e3db48dc5de3087ef03d1b4c47a5c
This change adjusts CommitObject to use the `pending_objects` table to
commit an object.
The satellite stream id is used to determine if we need to use the
`pending_objects` or `objects` table during commit.
The general goal is to support both tables until the `objects` table is
free of pending objects.
Part of https://github.com/storj/storj/issues/6046
Change-Id: I2ebe0cd6b446727c98c8e210d4d00504dd0dacb6
This change adjusts BeginObjectNextVersion to create the pending object
in the `pending_objects` or `objects` table, depending on configuration.
This is the first change to move pending objects out of the objects
table.
The general goal is to support both tables until the `objects` table is
free of pending objects. Wherever it is needed, the code will support
both tables at once.
To be able to decide if we need to use the `pending_objects` table or
the `objects` table, we extend the satellite stream id to keep that
information for later use.
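A hedged sketch of that extension; the field name is illustrative, not
the actual protobuf field:

```go
// satStreamID sketches the extra flag carried inside the satellite
// stream id.
type satStreamID struct {
	UsePendingObjectsTable bool
}

// tableFor tells later requests (commit, abort, listing) which table
// the pending object lives in.
func tableFor(id satStreamID) string {
	if id.UsePendingObjectsTable {
		return "pending_objects"
	}
	return "objects"
}
```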
BeginObjectExactVersion will not be adjusted because at the moment it's
used only in tests.
Part of https://github.com/storj/storj/issues/6046
Change-Id: Ibf21965f63cca5e1775469994a29f1fd1261af4e
We are doing full bucket name validation for many requests, but we
should do this only while creating a bucket. Other requests will be
covered only by basic name length validation. Less strict validation
for other requests will keep buckets usable in cases where invalid
bucket names already exist in the DB (we have such cases from the past).
https://github.com/storj/storj/issues/6044
Change-Id: I3a41050e3637787f788705ef15b5dc4df4d01fc6
Before this change, if a user creates a bucket with an attributed
user_agent, then deletes and recreates it, the row in bucket_metainfos
will not have the user_agent. This is because we skip setting the field
in bucket_metainfos if the bucket already exists in value_attributions.
This can be problematic, as we return the bucket's user agent during the
ListBuckets operation, and the client may be expecting this value to be
populated.
This change ensures the bucket table user_agent is set when
(re)creating a bucket. To avoid decreasing BeginObject performance,
which also
updates attribution, a flag has been added to determine whether to
make sure the buckets table is updated: `forceBucketUpdate`.
Change-Id: Iada2f233b327b292ad9f98c73ea76a1b0113c926
..instead of using segment_copies and ancestor_stream_id, etc.
This bypasses reference counting entirely, depending on our mark+sweep+
bloomfilter garbage collection strategy to get rid of pieces once they
are no longer part of a segment.
This is only safe to do after we have stopped passing delete requests on
to storage nodes.
Refs: https://github.com/storj/storj/issues/5889
Change-Id: I37bdcffaa752f84fd85045235d6875b3526b5ecc
We decided that we will stop sending explicit delete requests to nodes
and will clean up deleted objects with GC instead.
https://github.com/storj/storj/issues/5888
Change-Id: I65a308cca6fb17e97e3ba85eb3212584c96a32cd
For now we will use bucket placement to determine if we should exclude
some node IPs from metainfo.GetObjectIPs results. The bucket placement
is retrieved directly from the DB, in parallel with the metabase
GetStreamPieceCountByNodeID request.
GetObjectIPs is not heavily used, so the additional DB request shouldn't
be a problem for now.
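A sketch of the parallel lookup using errgroup; the two function
parameters stand in for the real DB calls:

```go
import (
	"context"

	"golang.org/x/sync/errgroup"
)

// fetchObjectIPsData runs the placement lookup and the piece-count
// lookup in parallel.
func fetchObjectIPsData(ctx context.Context,
	getPlacement func(context.Context) (int, error),
	getPieceCounts func(context.Context) (map[string]int, error),
) (placement int, counts map[string]int, err error) {
	group, groupCtx := errgroup.WithContext(ctx)
	group.Go(func() error {
		var err error
		placement, err = getPlacement(groupCtx)
		return err
	})
	group.Go(func() error {
		var err error
		counts, err = getPieceCounts(groupCtx)
		return err
	})
	return placement, counts, group.Wait()
}
```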
https://github.com/storj/storj/issues/5950
Change-Id: Idf58b1cfbcd1afff5f23868ba2f71ce239f42439
We would like to have the ability to limit burst uploads to a single
object (the same location). With this change we are limiting such
uploads to one per second.
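Conceptually this is a per-object-key limiter of one event per second; a
sketch using golang.org/x/time/rate (the real implementation may differ,
e.g. it could bound the number of tracked keys):

```go
import (
	"sync"
	"time"

	"golang.org/x/time/rate"
)

// uploadLimiter allows at most one upload per second for the same
// object location. A production version would bound or expire the map.
type uploadLimiter struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
}

func newUploadLimiter() *uploadLimiter {
	return &uploadLimiter{limiters: map[string]*rate.Limiter{}}
}

func (u *uploadLimiter) Allow(objectKey string) bool {
	u.mu.Lock()
	defer u.mu.Unlock()
	limiter, ok := u.limiters[objectKey]
	if !ok {
		limiter = rate.NewLimiter(rate.Every(time.Second), 1)
		u.limiters[objectKey] = limiter
	}
	return limiter.Allow()
}
```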
Change-Id: Ib9351df1017cbc07d7fc2f846c2dbdbfcd3a360c
This combines the ListStreamPositions and GetSegmentByPosition
calls with a ListSegments call that now knows how to return
only the segments within a Range, just like ListStreamPositions.
It would theoretically be possible to also include the
GetObjectLastCommitted call by having it do one of three
queries based on the incoming request range, but that would
mean duplicating the data for the object in every single
row that is returned for each segment in the range.
One gross thing that ListSegments has to do now is update the
first segment returned with the information from any ancestor
segment because GetSegmentByPosition used to do that. It only
updates the first segment so that it doesn't do O(N) database
queries. It seems difficult to have it do a single query to
update all of the segments at once. I'm not certain this change
should be merged on this basis alone.
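A hedged sketch of that fix-up; Segment and Piece are simplified
placeholders:

```go
type Segment struct {
	AncestorStreamID []byte
	Pieces           []Piece
}

type Piece struct{ NodeID string }

// patchFirstSegment overwrites the first listed segment's pieces with
// its ancestor's pieces (the way GetSegmentByPosition used to), keeping
// this to a single extra query instead of O(N).
func patchFirstSegment(segments []Segment, ancestorPieces func(streamID []byte) ([]Piece, error)) error {
	if len(segments) == 0 || len(segments[0].AncestorStreamID) == 0 {
		return nil
	}
	pieces, err := ancestorPieces(segments[0].AncestorStreamID)
	if err != nil {
		return err
	}
	segments[0].Pieces = pieces
	return nil
}
```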
This change has made me think a couple of things should happen:
1. Server side copy with ancestor segments strikes again
making the code less clear and potentially more buggy
or inefficient for a rare case (empirically <0.1%)
2. The download code requests individual segments from
the satellite lazily as part of its download which
requires the satellite telling it the locations of
all of the segments which requires the satellite
querying the locations of all of the segments. Instead
the download RPC could return the orders for all of
the segments for a range and the download code could
issue N download calls rather than 1 download call and
N get segment calls. I believe both sides of the code
paths would be simpler and more efficient this way.
3. In looking at the timing information for downloads when
testing this, we really need to focus on getting the
auth key and bandwidth limit verification times down.
Here's the timing I saw:
- 42ms: validate auth
- 52ms: bandwidth usage checking
- 14ms: get object info
- 26ms: get segment position info
- 26ms: getting the first segment full info
- 20ms: unaccounted for by spans
- 6ms: creating the orders
This change will remove 26ms, but there's a good 90ms
in just validation. With improved semantics hitting the
database only once and improved validation, a download
rpc taking ~30ms seems doable compared to our current
~200ms.
Change-Id: I4109dba082eaedb79e634c61dbf86efa93ab1222
This flag was in general a one-time switch to enable versions
internally. Now we can remove it, as it only makes the code more
complex.
Change-Id: I740b6e8fae80d5fac51d9425793b02678357490e
While working on fixing listing for committed objects we didn't fix
the same case for pending objects. For the case where we have many
pending objects under different locations we need to set the cursor
version to the highest value to avoid duplicates.
For the case where we have many pending objects under the same location
we will need to make a separate fix.
https://github.com/storj/storj/issues/5570
Change-Id: Id5c8eb728868e8e1177fdbcf65a493142be4eaf0
We have an issue where an object can appear on two different listing
pages. It's because the protobuf listing cursor doesn't include the
version, and internally we can now have versions higher than 1. On the
satellite side version 1 was always used as the default cursor version.
As a workaround for the existing implementation of the libuplink library
we will always use the maximum version for the listing cursor on the
satellite side. Fixing the protobuf and libuplink implementation will
happen later.
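The workaround, sketched with an illustrative MaxVersion sentinel:

```go
import "math"

// MaxVersion is an illustrative sentinel for the highest possible
// object version.
const MaxVersion = int64(math.MaxInt64)

type listCursor struct {
	Key     string
	Version int64
}

// cursorFromRequest builds the satellite-side cursor: since the
// protobuf cursor carries no version, resume after the highest possible
// version of the cursor key so the same object never spans two pages.
func cursorFromRequest(key string) listCursor {
	return listCursor{Key: key, Version: MaxVersion}
}
```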
https://github.com/storj/storj/issues/5570
Change-Id: Ibd27b174556c9d8b8bd60fab8cff7862fd11e994
This modification introduces support for the new "desired nodes" field
of the download segment/object requests.
It can be used to request more nodes than the suggested minimum,
achieving better performance in exchange for using more bandwidth (more
parallel downloads).
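Conceptually, as a simplified sketch:

```go
// nodesToReturn sketches the selection count: honour the client's
// desired number when it exceeds the minimum needed to reconstruct the
// segment, capped by the number of available pieces.
func nodesToReturn(minimumRequired, desired, available int) int {
	n := minimumRequired
	if desired > n {
		n = desired
	}
	if n > available {
		n = available
	}
	return n
}
```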
Change-Id: Ia167d6979e6d70a597c85070a4ccd1c3a573e406
* storj/common
* storj/private
The latest common version requires small refactoring for names and
types used by metainfo code.
Change-Id: I224fe93b4751c996ba6e846be0e5677252cf830f
We have code that is used only by old uplinks and can fail at some
point, but we don't interrupt anything and only log a message about the
failure.
Until now it was logged as an error, but it's nothing critical, so we
can reduce it to a warning.
In addition, the log entry was extended with more information about the
client that is using this backward compatibility code.
Change-Id: Ie21c673ee59eb10de065cc371132f8f9505e2220
Multipart upload requires the same UploadID to be returned from
different requests (BeginUpload, ListUploads). Otherwise the client
won't be able to find existing uploads. The main issue was that the
data needed to construct the UploadID is in the System metadata, which
can be filtered out by a listing option.
This change fixes how we are setting the Status for listed objects and
forces reading the System metadata if we are reading pending objects.
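A hedged sketch of the option adjustment, with simplified option names:

```go
// listOptions is a simplified stand-in for the listing options.
type listOptions struct {
	Pending               bool
	IncludeSystemMetadata bool
}

// normalizeListOptions force-reads System metadata when listing pending
// objects, because the UploadID is derived from it.
func normalizeListOptions(opts listOptions) listOptions {
	if opts.Pending {
		opts.IncludeSystemMetadata = true
	}
	return opts
}
```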
Fixes https://github.com/storj/storj/issues/5298
Change-Id: I8dd5fbab4421a64dc3ed95556408ead4c829f276
GetObject and GetObjectIPs are invoked by the Linksharing service to
display the shared object and its map. These two endpoints currently
require read permission.
There is a use case where an object can be shared with an access grant
that has only list permission. In such a case, the expectation is that
the linksharing service would still display the metadata of the shared
object (name, size, map), but the content would still be inaccessible.
See https://github.com/storj/gateway-mt/issues/209 for details.
This change allows GetObject and GetObjectIPs to require either read or
list permission to support the described use case.
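A hedged sketch of an either-or permission check; the Action values and
the validation helper are simplified placeholders, not the actual
macaroon API:

```go
type Action string

const (
	ActionRead Action = "read"
	ActionList Action = "list"
)

// validateAny succeeds if the API key permits any of the given actions,
// returning the first failure otherwise.
func validateAny(validate func(Action) error, actions ...Action) error {
	var firstErr error
	for _, action := range actions {
		err := validate(action)
		if err == nil {
			return nil
		}
		if firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}
```

GetObject and GetObjectIPs would then call something like
validateAny(validate, ActionRead, ActionList).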
Change-Id: I3477edc7bf8990e9848482890da047094c875d09