lrucache now uses the time2 package, so we can test expiration
without using time.Sleep.
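A minimal sketch of such a test, assuming an injectable clock (the clock and
constructor names below are hypothetical, not the actual time2 API):

    import (
        "testing"
        "time"
    )

    // fakeClock is a hand-rolled clock we can advance manually in tests.
    type fakeClock struct{ now time.Time }

    func (c *fakeClock) Now() time.Time          { return c.now }
    func (c *fakeClock) Advance(d time.Duration) { c.now = c.now.Add(d) }

    func TestExpiration(t *testing.T) {
        clock := &fakeClock{now: time.Now()}
        // newCacheWithClock is a hypothetical constructor that reads time from
        // the injected clock instead of the wall clock.
        cache := newCacheWithClock(50*time.Millisecond, clock)
        cache.Add("key", "value")

        // Advance the fake clock past the expiration instead of time.Sleep.
        clock.Advance(100 * time.Millisecond)

        if _, ok := cache.Get("key"); ok {
            t.Fatal("expected the entry to be expired")
        }
    }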
https://github.com/storj/storj/issues/5788
Change-Id: I48f2693c3db78fcf4e30e618bb3304be3625100c
We would like the ability to limit burst uploads to a single
object (the same location). With this change we limit such uploads to
one per second.
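A rough sketch of the idea using golang.org/x/time/rate (the keying and cache
here are illustrative, not the exact satellite code):

    import "golang.org/x/time/rate"

    // limiters keeps one limiter per object location (project + bucket + key).
    var limiters = map[string]*rate.Limiter{}

    func allowUpload(location string) bool {
        lim, ok := limiters[location]
        if !ok {
            // one upload per second to the same location, burst of 1
            lim = rate.NewLimiter(rate.Limit(1), 1)
            limiters[location] = lim
        }
        return lim.Allow()
    }

In practice the limiters would live in an expiring cache (like the lrucache
package) rather than a plain map.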
Change-Id: Ib9351df1017cbc07d7fc2f846c2dbdbfcd3a360c
The blobstore implementation is entirely related to the storagenode, so its
rightful place is together with the storagenode implementation.
Fixes https://github.com/storj/storj/issues/5754
Change-Id: Ie6637b0262cf37af6c3e558556c7604d9dc3613d
This combines the ListStreamPositions and GetSegmentByPosition
calls with a ListSegments call that now knows how to return
only the segments within a Range, just like ListStreamPositions.
It would theoretically be possible to also include the
GetObjectLastCommitted call by having it do one of three
queries based on the incoming request range, but that would
mean duplicating the data for the object in every single
row that is returned for each segment in the range.
One gross thing that ListSegments has to do now is update the
first segment returned with the information from any ancestor
segment because GetSegmentByPosition used to do that. It only
updates the first segment so that it doesn't do O(N) database
queries. It seems difficult to have it do a single query to
update all of the segments at once. I'm not certain this change
should be merged on this basis alone.
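Roughly, the combined call now looks something like this (names and fields are
illustrative, not the exact metabase API):

    // Before: ListStreamPositions for the range, then GetSegmentByPosition per
    // position. After: a single range-aware listing.
    result, err := db.ListSegments(ctx, metabase.ListSegments{
        StreamID: object.StreamID,
        Range:    streamRange, // the same range type ListStreamPositions accepts
        Limit:    limit,
    })
    if err != nil {
        return err
    }
    // The first returned segment is patched with its ancestor's piece
    // information (server-side copies) so the caller can start downloading
    // without an extra per-segment query.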
This change has made me think a couple of things should happen:
1. Server side copy with ancestor segments strikes again
making the code less clear and potentially more buggy
or inefficient for a rare case (empirically <0.1%)
2. The download code requests individual segments from
the satellite lazily as part of its download which
requires the satellite telling it the locations of
all of the segments which requires the satellite
querying the locations of all of the segments. Instead
the download RPC could return the orders for all of
the segments for a range and the download code could
issue N download calls rather than 1 download call and
N get segment calls. I believe both sides of the code
paths would be simpler and more efficient this way.
3. In looking at the timing information for downloads when
testing this, we really need to focus on getting the
auth key and bandwidth limit verification times down.
Here's the timing I saw:
- 42ms: validate auth
- 52ms: bandwidth usage checking
- 14ms: get object info
- 26ms: get segment position info
- 26ms: getting the first segment full info
- 20ms: unaccounted for by spans
- 6ms: creating the orders
This change will remove 26ms, but there's a good 90ms
in just validation. With improved semantics hitting the
database only once and improved validation, a download
rpc taking ~30ms seems doable compared to our current
~200ms.
Change-Id: I4109dba082eaedb79e634c61dbf86efa93ab1222
This flag was in general a one-time switch to enable versions internally.
Now we can remove it, as it only makes the code more complex.
Change-Id: I740b6e8fae80d5fac51d9425793b02678357490e
Metainfo needs to know the rate and burst limits to be able to limit
user requests. We have a cache for per-project limiters, but to create a
limiter instance we need to know those limits. So far we were making a
direct DB call to get the rate/burst limits for a project, which generates
lots of DB requests; the values can easily be cached, especially since we
already have a project limit cache.
This change extends the project limit cache with the rate/burst limits and
uses it when creating the project limiter instance for metainfo.
Because the data kept in the project limit cache is quite small, this change
also slightly bumps the default capacity of the cache.
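Illustratively, the cached entry now carries the rate and burst limits next to
the existing limits (field names are approximate):

    type ProjectLimits struct {
        Usage      *int64 // storage limit
        Bandwidth  *int64 // bandwidth limit
        RateLimit  *int   // requests per second
        BurstLimit *int   // allowed burst size
    }

so metainfo can build the per-project limiter from the cache instead of
hitting the DB on every request.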
Fixes https://github.com/storj/storj/issues/5663
Change-Id: Icb42ec1632bfa0c9f74857b559083dcbd054d071
This code is essentially a replacement for eestream.CalcPieceSize. To call
eestream.CalcPieceSize we need an eestream.RedundancyStrategy, which is not
trivial to get as it requires an infectious.FEC. For example, infectious.FEC
creation is visible on the GE loop observer CPU profile because we were
doing this for each segment in the DB.
A new method was added to storj.Redundancy and here we are just wiring it
up with the metabase Segment.
BenchmarkSegmentPieceSize
BenchmarkSegmentPieceSize/eestream.CalcPieceSize
BenchmarkSegmentPieceSize/eestream.CalcPieceSize-8 5822 189189 ns/op 9776 B/op 8 allocs/op
BenchmarkSegmentPieceSize/segment.PieceSize
BenchmarkSegmentPieceSize/segment.PieceSize-8 94721329 11.49 ns/op 0 B/op 0 allocs/op
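The wiring is roughly this (a sketch, assuming the new method on
storj.Redundancy is exposed as PieceSize):

    // PieceSize returns the size each remote piece needs to have for this
    // segment, without constructing an eestream.RedundancyStrategy.
    func (s Segment) PieceSize() int64 {
        return s.Redundancy.PieceSize(int64(s.EncryptedSize))
    }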
Change-Id: I5a8b4237aedd1424c54ed0af448061a236b00295
While working on the fix for listing committed objects we didn't fix
the same case for pending objects. For the case where we have many
pending objects under different locations we need to set the cursor
version to the highest value to avoid duplicates.
The case where we have many pending objects under the same location
will need a separate fix.
https://github.com/storj/storj/issues/5570
Change-Id: Id5c8eb728868e8e1177fdbcf65a493142be4eaf0
We have an issue where an object can appear on two different listing pages.
This is because the protobuf listing cursor doesn't include the version, and
internally we can now have versions higher than 1. On the satellite side
version 1 was always used as the default cursor version.
As a workaround for the existing libuplink implementation, we will
always use the maximum version for the listing cursor on the satellite side.
Fixing the protobuf and libuplink implementations will happen later.
https://github.com/storj/storj/issues/5570
Change-Id: Ibd27b174556c9d8b8bd60fab8cff7862fd11e994
This modification introduces support for the new "desired node" field of download segment/object.
It can be used to request more nodes than the suggested minimum, trading extra bandwidth for better performance (more parallel downloads).
Change-Id: Ia167d6979e6d70a597c85070a4ccd1c3a573e406
TestLoopContinuesAfterObserverError was failing because the system clock
granularity measured the duration as 0.
TestDialer_DialTimeout was failing because the connection failure came with
a delay and wasn't being handled.
Change-Id: I4638c86f5d021a86c3d3529fab13cf3608f35c40
ListUploads returns an incorrect UploadID if Expires was set in
BeginUpload. The DB truncates the expiration date to microsecond precision,
so we need to do the same in code.
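Concretely, that means truncating the expiration before deriving the
UploadID (where exactly this happens in metainfo is illustrative):

    // The DB stores timestamps with microsecond precision, so truncate before
    // deriving the UploadID to keep BeginUpload and ListUploads consistent.
    expiresAt = expiresAt.Truncate(time.Microsecond)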
Change-Id: Iee0cf45cb705342f6bb9a2f745acca91cce6ff52
Affected packages: admin, attribution, console, metainfo, satellitedb, web, payments
This change removes the satellite/rewards package and its related usages.
It removes references to APIKeyInfo/PartnerID, Project/PartnerID
and User/PartnerID.
Issue: https://github.com/storj/storj/issues/5432
Change-Id: Ieaa352ee848db45e94f85556febdbcf1444d8c3e
Our DB support in storj/private was updated to enable basic context
support for executing SQL queries. This change requires some small
adjustments as not all parts were working correctly.
storj/private commit with change:
4bc77107b7acfcc2f7ad65796d5dd3d7c64801e4
Change-Id: I64d7ed92788ea0920d12cecd1aa0e414720e9b9c
Before we introduced object versions internally, the move operation
always failed when an object existed under the target location. But back
then we only ever had the single version 1. With versions other than 1
we need to check all existing objects under the target location.
To be backward compatible with our API, the new logic (sketched below)
looks like this:
* if there is no object under the target location, use the source object
version as the target version
* if there are only pending objects, find the first free (highest) version
that can be used to move the object there
* if there is a committed object under the target location, reject the move
operation
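A sketch of that selection (helper and error names are hypothetical):

    switch {
    case len(objectsAtTarget) == 0:
        // nothing under the target location: keep the source object's version
        targetVersion = sourceObject.Version
    case allPending(objectsAtTarget):
        // only pending objects: pick the first free (highest) version
        targetVersion = highestVersion(objectsAtTarget) + 1
    default:
        // a committed object already exists under the target location
        return ErrObjectAlreadyExists
    }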
Fixes https://github.com/storj/storj/issues/5403
Change-Id: I717f3e7c42470b406287d6ec335f6f057d3fc3b5
We missed proper handling of object copies in the method
GetStreamPieceCountByNodeID, which is used by metabase.GetObjectIPs.
That caused some IPs to be missing when querying the IPs of a copy and
broke things like the pieces map on linksharing.
Fixes https://github.com/storj/storj/issues/5406
Change-Id: I9574776f34880788c2dc9ff78a6ae20d44fe628f
* storj/common
* storj/private
The latest common version requires a small refactoring of names and types
used by the metainfo code.
Change-Id: I224fe93b4751c996ba6e846be0e5677252cf830f
We tested the new upload flow (with multiple versions), which fixes an
inconsistency while uploading objects, on QA/EUN1/SLC. Now we would like to
enable it for all satellites by default. Tests required small adjustments.
Fixes https://github.com/storj/storj/issues/5283
Change-Id: I0d53c041abebc0d182ba5a88bb1dac906c29caf0
We have code that is used only by old uplinks and can fail at some point,
but we don't interrupt anything and only log a message about the failure.
Until now it was logged as an error, but it's nothing critical, so we can
reduce it to a warning.
In addition, the log entry was extended with more information about the
client that is using this backward-compatibility code.
Change-Id: Ie21c673ee59eb10de065cc371132f8f9505e2220
Multipart upload requires the same UploadID to be returned from
different requests (BeginUpload, ListUploads); otherwise the client won't
be able to find existing uploads. The main issue was that the data needed
to construct the UploadID is in the System metadata, which can be filtered
out by a listing option.
This change fixes how we set the Status for listed objects and
forces reading System metadata if we are listing pending objects.
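Conceptually the listing ends up doing something like this (field names are
illustrative):

    // Pending objects need their System metadata to reconstruct the UploadID,
    // even if the caller's listing options filtered it out.
    if opts.Pending {
        opts.IncludeSystemMetadata = true
    }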
Fixes https://github.com/storj/storj/issues/5298
Change-Id: I8dd5fbab4421a64dc3ed95556408ead4c829f276
Libuplink is using some aliases to the storj package that we will
move directly into libuplink and remove from common/storj.
To keep the code compilable we need to fix the places where we
use the aliased types directly, so that libuplink can be updated.
Change-Id: I7222a927af3b41e214d1c9204917f3ebce4727ce
GetObject and GetObjectIPs are invoked by the Linksharing service to
display the shared object and its map. These two endpoints currently
require read permission.
There is a use case where an object can be shared with an access grant
that has only list permission. In such a case, the expectation is that
the linksharing service would still display the metadata of the shared
object (name, size, map), but the content would still be inaccessible.
See https://github.com/storj/gateway-mt/issues/209 for details.
This change allows GetObject and GetObjectIPs to require either read or
list permission to support the described use case.
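A sketch of the relaxed check (the helper name is illustrative and the action
fields are trimmed):

    // Allow the request if the access grant has either read or list permission.
    err := endpoint.validateAuthAny(ctx, header,
        macaroon.Action{Op: macaroon.ActionRead, Bucket: bucket, EncryptedPath: key, Time: now},
        macaroon.Action{Op: macaroon.ActionList, Bucket: bucket, EncryptedPath: key, Time: now},
    )
    if err != nil {
        return nil, err
    }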
Change-Id: I3477edc7bf8990e9848482890da047094c875d09
The new flag 'MultipleVersions' was not correctly passed from the metainfo
configuration to the metabase configuration. Because the configuration was
set correctly for unit tests we didn't catch it, and the issue was found
while testing on the QA satellite.
This change reduces the number of places where new metabase flags need
to be propagated from the metainfo configuration, to avoid problems with
setting new flags in the future.
Fixes https://github.com/storj/storj/issues/5274
Change-Id: I74bc122649febefd87f665be2fba628f6bfd9044
BeginCopyObject checks twice for write permission in the destination
bucket. One check should be enough.
Change-Id: I3d5935d34f69cd48eaaf00d0117683edfdcefc05
We have had multiple experiments so far to collect high-cardinality data (mainly in aggregated form).
1. we have a `/top` endpoint which aggregates events with an upper bound
2. we use the same API (eventstat) to publish S3 gateway-mt agents to influxdb
This patch starts to replace these APIs with jtolio/eventkit. Instead of aggregating, all events can be sent to a collector host where we can do aggregation and/or persist the data.
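With eventkit the instrumented code only emits tagged events and the collector
takes care of aggregation; roughly (the event and tag names are made up for
illustration):

    import "github.com/jtolio/eventkit"

    var ek = eventkit.Package()

    func trackDownload(userAgent string, bytes int64) {
        ek.Event("download",
            eventkit.String("user-agent", userAgent),
            eventkit.Int64("bytes", bytes),
        )
    }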
Change-Id: Id6df4882b51d2dbd2be9401ee4199d14f3ff7186
The threshold of piece deletions from the nodes during CommitObject
when overwriting an existing object seemed to cause a race condition in
tests.
This change makes the threshold configurable so we can set it to the
maximum, making CommitObject wait until all pieces are removed from the
nodes in the test.
Change-Id: Idf6b52e71d0082a1cd87ad99a2edded6892d02a8
We have a new flow where the existing object is deleted on commit
object instead of on begin object. The commit object deletion is still
missing the deletion from storage nodes. This change adds that part
to the code.
Fixes https://github.com/storj/storj/issues/5222
Change-Id: Ibfd34665b2a055ec6c0d6e260c1a57e8a4c62b0e
With this change we are switching the method used to begin an object
from BeginObjectExactVersion to BeginObjectNextVersion. The main
implication is that from now on it will be possible to have objects
with a version other than 1. A new object will always get the first
available version.
The main reason to do this is to avoid deleting the existing object
while reuploading an object. Now we can create multiple pending objects,
but only the last committed one will be available to the user. Any
previously committed object will be deleted. Because of that we moved
the logic that deletes the existing object from the BeginObject to the
CommitObject request.
The new logic is behind a feature flag so we can test it well before
enabling it in production.
Fixes https://github.com/storj/storj/issues/4871
Change-Id: I2dd9c7364fd93796a05ef607bda9c39a741e6a89
Add a new project DB method, GetSalt, to get the project salt. If the
salt column is empty, return the SHA-256 hash of the project ID. This
new method is used in the metainfo endpoint ProjectInfo to return the
project salt to the client. This is backwards compatible because the
salt column is not populated yet, so the updated endpoint will do the
same thing as the current endpoint.
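A sketch of the fallback behavior (the DB read helper is hypothetical):

    import (
        "context"
        "crypto/sha256"

        "storj.io/common/uuid"
    )

    func (p *projects) GetSalt(ctx context.Context, id uuid.UUID) ([]byte, error) {
        salt, err := p.getSaltColumn(ctx, id) // hypothetical DB read
        if err != nil {
            return nil, err
        }
        if len(salt) == 0 {
            // backwards compatible: fall back to the SHA-256 hash of the project ID
            hash := sha256.Sum256(id[:])
            return hash[:], nil
        }
        return salt, nil
    }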
Change-Id: I7eba376c865e10995a5a916302feca7cd7c7efa2
We will introduce new logic for creating new objects (BeginObject).
Instead of always using the single internal version 1, we will select the
first available version during object creation. Because we need to be sure
that everything is wired up correctly, we need a feature flag to control
whether the new feature is enabled.
Change-Id: If0f8496397130811f43bf9db9fdcc2b30cd2e4ca
Implement a new service to read retain filters from a bucket and
send them out to storagenodes.
This allows the retain filters to be generated by a separate command on
a backup of the database.
Parallelism (setting ConcurrentSends) and end-to-end garbage collection
tests will be restored in a subsequent commit.
Solves https://github.com/storj/team-metainfo/issues/121
Change-Id: Iaf8a33fbf6987676cc3cf74a18a8078916fe673d
The main issue with those tests was the case where all objects were
uploaded at once (the "some nodes down" and "all nodes down" cases).
Because all objects had the same name, each new upload was overwriting
the existing object. Because of that, instead of having several objects
for the test to delete explicitly, we had just 1. The tests were not
failing because overwriting the existing object was deleting it, but
that is not what these tests are supposed to exercise.
Change-Id: I602116f00be66589c7c0e68fe28c25e5c03e6b5d
We plan to replace the metabase.BeginObjectExactVersion usage in
metainfo.BeginObject with metabase.BeginObjectNextVersion. To make this
switch as simple as possible, it would be nice to have the same results
from both methods. This change extends the return value of
BeginObjectNextVersion to the whole object struct. Tests were also
adjusted to be more like the metabase.BeginObjectExactVersion tests.
Part of https://github.com/storj/storj/issues/4871
Change-Id: I4db99d74af07e5a73757b55233e0bbdc7b99d565