Restore GetObjectLatestVersion and rename it to GetObjectLastCommitted
Add test cases to cover server-side copy
Closes https://github.com/storj/storj/issues/4866
Change-Id: I343b339a60152b8fb92fda97baf80bd8fe60d631
As a reminder:
* These counters are for high-cardinality data
* We have a strong upper bound on memory usage
* They can be accessed via the /top monitoring interface
Example:
```
curl 172.20.0.10:11111/top
since ~ 2022-08-09T07:45:58Z
auth_request_count project=9094cff8-104e-4956-a367-97ea134b7e06 11.000000
auth_request_buckets 1.000000
auth_request_discarded 0.000000
auth_request_count partner=00000000-0000-0000-0000-000000000000 11.000000
auth_request_buckets 1.000000
auth_request_discarded 0.000000
```
Note: discarded 0 --> we didn't hit the memory limit.
Change-Id: I8db09b4aa61bade55cb324b84b7fbcb8f068c179
We log metainfo object operations and the log message convention appears
to be `Object {operation}`; however, the `Object Download` message didn't
match the actual operation, and the message that represented the actual
download operation was `Download Object`.
This commit changes the log message for the download object operation to
follow the format of the other object operation log messages and fixes
the log message for the Get Object operation.
To find this I executed the following command at the root of the
repository to obtain the list of lines where we log object operations:
$> ag 'log\.Info\(".*Object.*",' --no-color
satellite/metainfo/endpoint_object.go
179: endpoint.log.Info("Object Upload", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "put"), zap.String("type", "object"))
336: endpoint.log.Info("Object Download", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "get"), zap.String("type", "object"))
557: endpoint.log.Info("Download Object", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "download"), zap.String("type", "object"))
791: endpoint.log.Info("Object List", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "list"), zap.String("type", "object"))
979: endpoint.log.Info("Object Delete", zap.Stringer("Project ID", keyInfo.ProjectID), zap.String("operation", "delete"), zap.String("type", "object"))
`ag` is a command-line tool similar to `grep`
Change-Id: I9072c5967eb42c397a2c64761d843675dd4991ec
Removed segment limit validation and checks in the metainfo endpoint and
accounting/projectusage, since the feature is live and segment limitation
is now always enabled.
Resolves: https://github.com/storj/storj/issues/4470
Change-Id: I8cf87cbbc40ac61262f9f05e52573d3ae6410611
Previously, storage usage was not tracked in real time during copies.
Now it is.
Closes https://github.com/storj/storj/issues/4719
Change-Id: I0d536bf551d16208116c3aceac89ed590ec473bf
The piece deletion service was using the KnownReliable method from
overlaycache to get node addresses to send delete requests to.
KnownReliable was always hitting the DB because this method was
not using a cache. This change uses the new DownloadSelectionCache
to avoid direct DB calls.
The change is not perfect because DownloadSelectionCache is not as
precise as the KnownReliable method and can select a few more nodes
to which we will send delete requests, but the difference should be
small and we can improve it later.
Updates https://github.com/storj/storj/issues/4959
Change-Id: I4c3d91089a18ac35ebcb469a56536c33f76e44ea
We need to provide the ability to see bucket attribution on the gateway side
so customers can validate whether a bucket is attributed to them. Extended
the metainfo.ListBuckets request with UserAgent.
Fixes https://github.com/storj/storj/issues/4965
Change-Id: I5624874a7faa14cda06183ad44013e9ebb385b63
This is just a cleanup change to unblock libuplink from reorganizing
types which are aliases to storj types.
Change-Id: Id3edf13f1b0aef52d7606d545aa7a6594cf8d13f
This change integrates the session management database functionality
with the web application. Claim-based authentication has been removed
in favor of session token-based authentication.
Change-Id: I62a4f5354a3ed8ca80272814aad2448f901eab1b
If project.UserAgent is set, use this for bucket.UserAgent on bucket
creation. Otherwise, set bucket attribution as before (getting UserAgent
from request headers).
Tests were updated to create the bucket with a different user, added as
a project member. Otherwise, the tests do not catch the bug.
Change-Id: I7ecf79a8eac5957eed361cbea94823190f58b776
Add tests for:
- parallel deletion of 50 objects and their 50 copies (one copy per
object); this test is skipped because it creates deadlocks that are not
automatically retried on postgres
- parallel deletion of 1 object and its 50 copies
Fixes https://github.com/storj/storj/issues/4745
Change-Id: Id7a28251c06bb12b5edcc88721f60bf7a4bc0492
We can use PieceIDDeriver in all places where we are deriving IDs from
the same root ID multiple times. We have several such places: GC, segment
deletion, segment validation, order limit creation. Using it should
save some resources.
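A minimal sketch of the intended usage, assuming the PieceIDDeriver API
from storj.io/common/storj (node list and loop are illustrative):
```
package main

import (
	"fmt"

	"storj.io/common/storj"
	"storj.io/common/testrand"
)

func main() {
	rootPieceID := testrand.PieceID()
	nodes := []storj.NodeID{testrand.NodeID(), testrand.NodeID()}

	// Before: each Derive call recomputes the shared state from the root ID.
	for i, nodeID := range nodes {
		_ = rootPieceID.Derive(nodeID, int32(i))
	}

	// After: the deriver precomputes the shared state once and reuses it
	// for every node, saving some work per derived ID.
	deriver := rootPieceID.Deriver()
	for i, nodeID := range nodes {
		fmt.Println(deriver.Derive(nodeID, int32(i)))
	}
}
```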
Change-Id: I24668d516c0f7cea4aec6470614067734149501d
The existing versionCollector metrics can tell us how many times various
metainfo endpoints are called, but they don't tell us how many bytes a
client is transferring. We currently can't collect precise information
on this, but we can collect information on how much planned traffic is
requested via order limits.
The implementation as provided is intended to measure object sizes
before erasure encoding is taken into account.
Change-Id: I2f1d2a7831630e8439ecf5342e933df259151792
Create an error class for the "pending object" error to distinguish it
from other errors, allowing us to return it as a "Not Found" DRPC
status code instead of an "Internal" status code.
"Internal" errors are logged on the satellite, so this was polluting
the server logs in addition to returning an inappropriate status code.
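A minimal sketch of the approach (class and function names here are
illustrative, using github.com/zeebo/errs and
storj.io/common/rpc/rpcstatus):
```
package metainfo

import (
	"github.com/zeebo/errs"

	"storj.io/common/rpc/rpcstatus"
)

// ErrPendingObject is a dedicated class so the error can be distinguished
// from generic internal errors.
var ErrPendingObject = errs.Class("pending object")

// convertError maps the pending object error to NotFound; everything else
// stays Internal (and gets logged on the satellite).
func convertError(err error) error {
	if err == nil {
		return nil
	}
	if ErrPendingObject.Has(err) {
		return rpcstatus.Error(rpcstatus.NotFound, err.Error())
	}
	return rpcstatus.Error(rpcstatus.Internal, err.Error())
}
```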
Change-Id: I10a81adfc887c030c08a228158adc8815834b23c
Version collector previously returned errors and logged them in the
calling code. It is cleaner to log inside version collector.
Change-Id: I52cb49a1ef53f3f1f51692ddb26ec095cfd0f100
We were already able to override (or not) metadata with this method,
but to be explicit we are introducing a new option to control storing
metadata with an object. A separate option should be less error prone.
https://github.com/storj/team-metainfo/issues/105
Change-Id: I4c5bce953a633a0009b05c5ca84266ca6ceefc26
We implemented the server-side copy feature and we would like to
confirm that it is not affecting the expired deletion service.
Resolves: https://github.com/storj/storj/issues/4698
Change-Id: Ia8ca27a7ab7764a48a0c85dc7be80a58bfc83729
Initial space used for pieces is calculated, not retrieved
from storage nodes, and at the end of the test we also delete
copies that became ancestors to verify that all data
was removed from the storage nodes.
Change-Id: I9804adb9fa488dc0094a67a6e258c144977e7f5d
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage in byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees, so return segment-hours and object-hours too.
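For illustration, with made-up numbers: an object holding 100 bytes
across 3 hourly tally rows used to yield (100+100+100)/3 = 100 (average
bytes per hour); without the division it yields 100+100+100 = 300
byte-hours, which matches how usage is charged.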
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
This change has two purposes. The first is to avoid a DB call in case
the source and destination buckets are the same.
The second is to return the bucket not found error in the correct order.
If the source and destination buckets are different, we will first check
the source and later the destination. Currently we would first get an
error about the destination bucket not existing.
With this change we also stop putting the bucket placement
into the satellite stream id, but it's not needed, as we don't use
this value with the finish move/copy object methods.
Change-Id: I0f7b3ba604d53c722e8fa4d7a37843a69d02bebd
So far we assumed that the metadata key/nonce cannot be empty at all,
but at some point we adjusted the code to accept an empty metadata
key/nonce to save DB space.
This change adjusts how we process the nonce in
FinishMoveObject/FinishCopyObject. We can use storj.Nonce directly,
which makes the code cleaner. It also fixes an issue in FinishMoveObject
where we didn't convert the nonce correctly to []byte.
Part of the change is disabling validation for key and nonce until
uplink is adjusted. We need to change uplink to always send
both key and nonce or neither of them. Validation will be restored
as soon as the uplink change is merged.
https://github.com/storj/storj/issues/4646
Change-Id: Ia1772bc430ae591f54c6a9ae0308a4968aa30bed
Add uplink-php and nextcloud as user agents. The sending of these
user agents was added in recent releases of these clients.
Change-Id: Ia2732ade1d9e5cf8d4e41fe246faec3feaa58c25
Uplink has some types aliased from the storj/common repo. It's like
that for easier type replacement if we decide to use a custom type
instead of an alias. Because in storj/storj we are not using the
aliases, it's impossible to do the refactoring on the uplink side.
This change cleans up this situation.
Change-Id: I20c8e31b9a821983483af1c67b2e7bb91397fd9d
Chronograph statistics indicate that much of our Gateway-MT traffic may
originate from rclone and is also recorded in metrics as rclone traffic.
This makes it difficult to understand what our users are doing. This
solution makes it clear which products are actually being used, likely
without increasing the cardinality of our metrics by more than one.
Change-Id: I5d5e2af3715fa0864f69f1145fd78caf7e4a4224
For server-side copy we adjusted one method, DeleteObjectExactVersion.
Other deletion methods won't be used directly in the code at the moment.
We will adjust the other methods later or decide if we will need them at
all.
To handle deletion of objects with copies, or just copies, correctly, we
need to use the DeleteObjectExactVersion method in two places:
* removing an object before upload
* explicit object deletion
This change also makes the DeleteObjectExactVersion method delete
pending objects, because we need this functionality to delete an object
before a new upload.
https://github.com/storj/storj/issues/4481
Change-Id: Ieff5cc95732bb70ed8cc0ecdd62e03c929857c02
Copy object functionality should support setting new metadata for the
copy. This change adjusts the FinishCopyObject method to set new
metadata when the OverrideMetadata field is set to true.
Fixes https://github.com/storj/storj/issues/4483
Change-Id: Ica37cb57e8edae301cdc483fbda4f3ddba5d2702
Updates metadata and metainfo to return object metadata with
FinishCopyObject request.
https://github.com/storj/storj/issues/4474
Change-Id: I32cba5c20a943272e9b5964df1b3d6463ad212dc
We would like to disable in production those parts of the code
which are now mixed with the new server-side copy logic.
Change-Id: Iff50682bc9545207330f58dd19b5eee53d404d7f
Refactor tests to reuse a testplanet instance between some of the
tests. This should decrease the resources needed to run those tests.
Change-Id: I98f3041ec23085d3903b19acd339904973319ec1
A user-agent string can contain multiple "products", in the case of
Gateway-MT at least this includes the HTTP client's full user agent.
This means that "other" is often logged even when we know the Storj
product, and sometimes logged more than once per call to "collect".
This makes sure that "other" is only logged if a product isn't
identified, and only logged once.
Change-Id: I8536f7eb32877e36fec97dab7b8d477ccb10f92e
transfer-sh will be set as of https://github.com/dutchcoders/transfer.sh/pull/467.
filezilla needs to be verified and duplicati is set per info from @TopperDEL
comet and orbiter are added as preparation.
Change-Id: I44d730a7b3ba1969068e48c2477b478831799cd1
Before this change we were returning the full DB error message.
That can be very confusing for the end user. This change translates
the error message into a more user-friendly version and also fixes
the DRPC error status code.
Fixes https://github.com/storj/team-metainfo/issues/76
Change-Id: I29b06ab4ba50a0d14db7a822a2906d95d65ab524
We don't have any object version other than 1, so at the moment
this method is not needed. Also, using GetObjectExactVersion
should be slightly more performant.
Change-Id: I78235d8ae22594cc1d6345dabcc915f41cd7797b
We already split the main code base; now we need to split the tests
to reflect the new file structure (bucket/object/segment/other).
Fixes https://github.com/storj/team-metainfo/issues/12
Change-Id: Ica1054c4fc7df764483b03f204b4beba094df8e1
This is part of a change to split the metainfo endpoint into smaller
files. It will be grouped by bucket/object/segment/other requests.
Tests will be split in a separate set of changes.
Updates https://github.com/storj/team-metainfo/issues/12
Change-Id: I6c691e4d0e192fe3ad7974d2d0ab5ced0d272f3c
This is part of a change to split the metainfo endpoint into smaller
files. It will be grouped by bucket/object/segment/other requests.
Tests will be split in a separate set of changes.
Change-Id: I9b097dcc8fa889f985b7f4ef5f8f435a1ff0ef95
This is part of a change to split the metainfo endpoint into smaller
files. It will be grouped by bucket/object/segment/other requests.
Tests will be split in a separate set of changes.
Change-Id: I5128c84e06c82777fe71460bf5f9a6e26e52a243
Currently the rate limit is kept per satellite API endpoint.
Since we run 9+ API endpoints in production, we do not need
a per-endpoint limit of 1000, since the intention was to allow 1000
total. This change reduces the effective limit, given 9 instances,
down to 900, which should be close enough.
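For illustration (assuming the per-instance limit becomes 100):
9 instances x 100 requests/second gives an effective total of
900 requests/second, close to the intended 1000.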
Change-Id: Ia579149ccc3a12e8febe0cfd5586b8a39de40f55
We were returning plain non-RPC errors in two cases.
This change adds logging and returns a correct RPC error.
Change-Id: I581ceb17dcdc00921dfa3c1057015c3b4d04308d
We need to combine the accounting.Service methods ExceedsStorageUsage
and ExceedsSegmentUsage to run the checks concurrently.
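A minimal sketch of running the two checks concurrently using
golang.org/x/sync/errgroup (the real method signatures differ; they are
simplified to closures here):
```
package accounting

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// checkLimits runs the storage and segment checks in parallel and returns
// the first error; the other check is canceled via ctx if one fails.
func checkLimits(ctx context.Context, exceedsStorage, exceedsSegment func(context.Context) error) error {
	group, ctx := errgroup.WithContext(ctx)
	group.Go(func() error { return exceedsStorage(ctx) })
	group.Go(func() error { return exceedsSegment(ctx) })
	return group.Wait()
}
```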
Resolves https://github.com/storj/team-metainfo/issues/73
Change-Id: I47831bca92457f16cfda789da89dbd460738ac97
Refactoring to do a few things:
* move simple validation before validations with DB calls
* combine validation check/update for storage and segment
limits together
Change-Id: I6c2431ba236d4e388791d2e2d01ca7e0dd4439fc
Updating the project segment usage entry in Redis is enabled only if
segment validation is enabled via a config value, so no incompatible
changes will be introduced during deployment.
Change-Id: I1288cb9ff0a8a00f095dc94e20d2f14393e9a613
We want to know usage statistics for our main tools
like uplink-cli or rclone. Initially we will collect
only usage stats without relation to a specific operation,
e.g. download or upload.
Change-Id: I203b1a6c07ae014e710368f77163f13fdf10763c
Comment out tests that contain fields that need to be renamed on the
uplink side without breaking compatibility.
After the rename, the tests will be moved back from comments.
Change-Id: I3bc4aff6ae7f6711ade956ac389f0d7e1a1ab91a
Comment out tests that contain fields that need to be renamed on the
uplink side without breaking compatibility.
After the rename, the tests will be moved back from comments.
Change-Id: I439783c62678c32805a85aa52bef1d2b767543a1
We want to be able to limit the number of segments per project for users.
To limit this we need to check the limit value associated with the
project and the number of segments already used in BeginMoveObject and
BeginMoveSegment, and increment the cached segment usage after each
CommitSegment call.
Resolves https://github.com/storj/team-metainfo/issues/1
Change-Id: I6290e67c095a174b9d101c4521802d9bfe0453b8
At some point we missed adding the metadata key to the list objects
response. Because of that, uplink takes the key from pb.StreamMeta.
We need to clean it up.
Tests will be added on the uplink side.
https://github.com/storj/uplink/issues/71
Change-Id: I3328e2f1b86bca15aeaf89f8e59cdca3c8e97742
For backward compatibility we are overriding pb.StreamMeta
values returned as encrypted metadata. It turns out that we
should do it not when the target values are missing but when
the values to override exist.
This was causing problems after a move operation. Details can be
found here: https://github.com/storj/uplink/issues/70
A backward compatibility test will be added to the storj/uplink
testsuite.
Change-Id: I72e7a01226b1dd62902cb0d6ebb1ff91a4693005
Previously, only valid partner IDs could be used for bucket-level value attribution. Now that any useragent byte slice can be used, we should allow empty useragent strings to be stored rather than throwing an error or leaving the bucket with no attribution.
Change-Id: I7043f835588dab1c401a27e31afd74b6b5a3e44b
Currently, detecting whether a NotFound error is for an object
or a bucket is very fragile on the uplink side. We cannot
change these messages for now.
Change-Id: I6ec4d98116477812f031134e4f1c9e73bdce8b27
Check if the context is done at the beginning of download object and
download segment, and return right away if that's the case.
Check if Redis returned an error due to context cancellation so that we
don't log the error.
Change some Redis error log messages to reflect whether the error
happened while downloading an object or a segment.
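A minimal sketch of both checks (the Redis call is a stand-in; real
names and wiring differ):
```
package download

import (
	"context"
	"errors"

	"go.uber.org/zap"
)

// trackBandwidth illustrates the pattern: bail out early when the context
// is already done, and don't log Redis failures caused by cancellation.
func trackBandwidth(ctx context.Context, log *zap.Logger, redisCall func(context.Context) error) error {
	if err := ctx.Err(); err != nil {
		return err // client went away; nothing useful to do or log
	}
	if err := redisCall(ctx); err != nil {
		if !errors.Is(err, context.Canceled) {
			log.Error("error while downloading object/segment", zap.Error(err))
		}
		return err
	}
	return nil
}
```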
Change-Id: I8ed8ff9ff7bb170b560f41356ea06820ce6c4e12
The main motivation is to wrap the bucket DB and metainfo DB, so we
could check if a bucket is empty before applying geofencing config.
Change-Id: I8bac21555e01d51a663fb557bc1acfc8106bc2e1
Add the project ID to error log messages about not being able to
retrieve and track storage or bandwidth usage.
Some error logs related to these errors already contained the project ID
but others didn't; adding it to the ones that didn't makes them more
consistent and provides more info, which may be helpful for
troubleshooting.
Change-Id: Ia9fc707a7f3aff0867645bb941badc199c2bf832
Don't log as an Error when users make requests that cannot be fulfilled
because they exceed their storage or bandwidth limits.
Those are not system errors; the service handles them correctly,
and showing them in the logs as "ERROR" is misleading.
Change-Id: Iac642b7e8ba92840bb943192ad0694b5f4930258
To resolve the problem with the inability to set metadata during
multipart upload on the gateway, we are adding the ability to set
metadata with BeginObject. This change also makes metadata optional for
CommitObject. We need this functionality to not override metadata set
with BeginObject in the case when metadata is not set with CommitObject.
Another reason is that we would like to not set metadata at all if the
user didn't specify any. At the moment we always set some bytes
for the metadata fields, e.g. an empty EncryptedMetadata field can have
key and nonce set.
Change-Id: Ifee25b7718eb1f919119db9b698b29d8b5ebe2ec
To reduce database load, add an option to omit as many object properties
as possible when listing objects.
Change-Id: I817633801b00629a4042d1d1bd2389ee581953de
Removed PathCipher and DefaultSegmentSize from CreateBucketParams
since they are no longer used and break integration on the uplink side.
Change-Id: I1393a7f1f436940731aa59edd693043336383290
The UserAgent should be stored as is, with the exception of removing the trailing version from any libuplink user agents
Change-Id: If17ef2fc4b59480a3477300f2585a07d64cc2bf4
Currently slower storagenodes can slow down the deletion queue.
To make piece deletion faster, reduce the maximum time spent in
either dialing or piece deletion requests.
With this change:
* dial timeout is 3s
* request timeout is 15s
* fail threshold is set to 10min
Similarly, we'll mark a storage node as failed when the timeout occurs.
The timeout usually indicates that the storagenode is overwhelmed.
Garbage collection will ensure that the pieces get deleted eventually.
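A sketch of how the timeouts could be applied per attempt (wiring and
names are hypothetical):
```
package piecedeletion

import (
	"context"
	"time"
)

const (
	dialTimeout    = 3 * time.Second
	requestTimeout = 15 * time.Second
)

// deleteFromNode bounds dialing and the delete request separately, so one
// overwhelmed storagenode cannot hold the queue for long.
func deleteFromNode(ctx context.Context, dial, request func(context.Context) error) error {
	dialCtx, cancel := context.WithTimeout(ctx, dialTimeout)
	defer cancel()
	if err := dial(dialCtx); err != nil {
		return err // timeouts count toward the 10min fail threshold
	}
	reqCtx, cancel := context.WithTimeout(ctx, requestTimeout)
	defer cancel()
	return request(reqCtx)
}
```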
Change-Id: Iec5de699f5917905f5807140e2c3252088c6399b
We needed Redundancy inside the satellite StreamID when uplink was defining the RS values. Now it can be removed.
Change-Id: Id37187493eaa00cf29cb0262a050d71add3deb96
We should improve the way we handle metabase errors in the
metainfo endpoint.
https://storjlabs.atlassian.net/browse/PG-316
Change-Id: I1da6f333546cabf34d6eb1de8e94a3ef455d75d5
Multipart upload limits added. The last part has no size limit.
Max number of parts: 10000, min part size: 5 MiB.
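A minimal sketch of the validation implied by these limits (names and
placement are illustrative):
```
package metainfo

import (
	"github.com/zeebo/errs"

	"storj.io/common/memory"
)

const (
	maxNumberOfParts = 10000
	minPartSize      = 5 * memory.MiB
)

// validatePart enforces the limits above; the last part is exempt from
// the minimum size check.
func validatePart(partNumber int32, size memory.Size, lastPart bool) error {
	if partNumber >= maxNumberOfParts {
		return errs.New("exceeded maximum number of parts: %d", maxNumberOfParts)
	}
	if !lastPart && size < minPartSize {
		return errs.New("part %d is below the minimum size of %s", partNumber, minPartSize)
	}
	return nil
}
```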
Change-Id: Ic2262ce25f989b34d92f662bde720d4c4d0dc93d
The speedup is achieved by reducing the number of testplanet instances
for tests, without changing the main test logic.
Change-Id: Ic3849485d37b8ca55c013a45b7191dce65b88b04
Uplink needs only part of the columns we are reading from the DB.
To improve performance we should read only those that are
really needed.
Change-Id: Ib39259318169c46afe5fa4c6ce2184da82e960c8
This change introduced problems with server-side move, so
let's revert it for now. The problem was found when the latest
version of storj/storj was used in uplink tests.
This reverts commit 1ef06fae99.
Change-Id: I4d4fad5d1ea04ba15ff9d7bd765f7e078e9187c2
We were using mixed types for nonce fields. Protobuf
has storj.Nonce, metabase has []byte. This change
is a refactoring to use storj.Nonce everywhere possible.
Change-Id: Id54bd8481f30c721cdaf3df79206d25e7cfdab55
We are not using the benchmark results for anything; they are mostly
there to ensure that we don't break the benchmarks. So we can disable
CockroachDB for them.
Similarly add short versions of other tests.
Also try to precompile test/benchmark code.
Change-Id: I60b501789f70c289af68c37a052778dc75ae2b69
It's safer to create a new connection pool for piece deletion
only if the dialer has no existing pool assigned.
Change-Id: I26661683ab7c0198587905478057c01c8f533a7e
We should be using object naming instead of path.
This is one place where we can easily change it.
To regenerate the protobuf I had to remove gogo.proto.
Most probably it was conflicting with gogo.proto
from common/pb.
Change-Id: Ia5972f77994765c8f26bf1c3dc8205d2eadd70fa
We want to enable the connection pool for piece deletion to avoid
doing multiple SSL handshakes to storage nodes during a massive
deletion process.
Change-Id: Ic917e4eda304ee16a286926ef046fe9e38bf38ca
This PR utilizes the new burst limit column from the projects table to
allow control over the requests-per-second limit and the token bucket
size. When no burst limit is explicitly set, the rate limit is applied
to both, so we don't limit how quickly requests can be made in a second.
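A sketch of the intended semantics using golang.org/x/time/rate (the
satellite's actual limiter wiring differs; the helper name is
hypothetical):
```
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

// newProjectLimiter: when burstLimit is nil, burst defaults to the rate,
// so a full second's worth of requests may arrive at once.
func newProjectLimiter(rateLimit float64, burstLimit *int) *rate.Limiter {
	burst := int(rateLimit)
	if burstLimit != nil {
		burst = *burstLimit // explicit value from the projects table column
	}
	return rate.NewLimiter(rate.Limit(rateLimit), burst)
}

func main() {
	limiter := newProjectLimiter(100, nil)
	fmt.Println(limiter.Limit(), limiter.Burst()) // 100 100
}
```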
Change-Id: I883235c60c5d6416aeadd1c80ed2ebd193aa4d9f
Server-side move is extended with moving between buckets; for this
reason we change the bucket name for the object in the DB.
Change-Id: Ie21bcccc170e6ff14dcd8053fdb86fdf6d8438a0
Some processing inside storagenodes is async compared to uplink upload
and download, hence we need to explicitly wait for storagenodes to
finish their pending work before flushing orders to the satellite.
Hopefully this fixes TestAttributionReport flakiness.
Change-Id: I77c651ab6471ae094b5c21d1ab3860c96cb0d039
The second of two methods needed to perform server-side move. It updates
the metadata key and nonce and all segments' keys and nonces.
Change-Id: Ia43b26622a13048269f0ae9e1524b345db112adb
The first of two methods needed to perform server-side move. It gets
the metadata key and nonce and all segments' keys and nonces and returns
all of that to uplink.
Change-Id: Ied2c79559e77d3f63091c4d61948f2d6a2147d67
Currently, requests that were successfully passed through the metainfo
endpoints rate-limiter might still fail in the middle of the
corresponding response. The problem is that we perform rate-limiting a
second time, which means other requests would influence whether the
current (already rate-checked) request will fail. This also has other
unintended effects, like responding with rpcstatus.PermissionDenied for
requests that were successfully rate-checked and did not lack
permissions but were rate-checked again in the middle of
(*Endpoint).BeginObject. This situation has been happening on the
gateway side and might affect other uplink clients. This change, where
appropriate, swaps subsequent validateAuth with validateAuthN that
performs rate-limiting once.
Change-Id: I6fc26dedb8c442dd20acaab5942f751279020b08
At some point we moved the metabase package outside Metainfo,
but we didn't do that for the satellite structure. This change
refactors only tests.
When uplink is adjusted we can remove the old entries in the
Metainfo struct.
Change-Id: I2b66ed29f539b0ec0f490cad42c72840e0351bcb
Two small cleanups:
* merge the private commitObject, commitSegment, and
makeInlineSegment with their public versions. We were
using them when pb.Pointer was still in use.
* remove the unused CreatePath method
Change-Id: Ib18b07473d91259335dab874559ef52412ab813d
We're seeing BeginDeleteObject in metaclient returning
`object not found: metabase: no rows deleted` in the Gateway-MT mint
tests. There's a client check for rpcStatus.NotFound, but the metabase
endpoint isn't wrapping the DB error as a DRPC error.
Here's the chain:
gateway.AbortMultipartUpload()
project.AbortUpload()
metainfoClient.BeginDeleteObject() <- understands DRPC errors
endpoint.DeletePendingObject() <- where this code is
db.DeletePendingObject() <- returns error
Change-Id: I93991de76487426df0a807b0d1e69fc975196a1a
We need a way to delete a whole part. This is especially
needed for the uplink multipart API to do cleanup after
an aborted or failed part upload.
A test will be added when the uplink part is merged.
Change-Id: I9ba69a49e1adcdce0f42dd3a76f938fcf931155a
Added an includeMetadata parameter which represents whether metadata
should be included in the response (true by default). In the case of a
new uplink version, ObjectIncludes will be used instead.
Change-Id: I2f8d3b4cc354cd655f8093bbbebe0e3c2ae14e6f
Bucket tally calculation will be removed from the metaloop and will
use the metabase objects iterator directly.
At the moment only the bucket tally needs objects, so it makes no
sense to implement a separate objects loop.
Change-Id: Iee60059fc8b9a1bf64d01cafe9659b69b0e27eb1
We added an expires_at column to the segments table and now
we need to populate this column while committing a segment.
We still need to migrate existing segments with a
separate tool.
Change-Id: Ibac8c63d97201dd98cc2cb9db385f4cb73bc3f7e
Satellites set their configuration values to default values using
cfgstruct; however, it turns out our tests don't test these values
at all! Instead, they have a completely separate definition system
that is easy to forget about.
As is to be expected, these values have drifted, and it appears
in a few cases testplanet is testing unreasonable values that we
won't see in production, or perhaps worse, features enabled in
production were missed and weren't enabled in testplanet.
This change makes it so all values are configured in the same,
systematic way, so it's easy to see when test values differ from
dev or release values, and it's harder to forget to enable
features in testplanet.
In terms of reviewing, this change should be actually fairly
easy to review, considering private/testplanet/satellite.go keeps
the current config system and the new one and confirms that they
result in identical configurations, so you can be certain that
nothing was missed and the config is all correct.
You can also check the config lock to see what actual config
values changed.
Change-Id: I6715d0794887f577e21742afcf56fd2b9d12170e
We want to move some of the current metainfo loop observers to the
segment loop. This change adds a new service, similar to the metainfo
loop, but iterating only over segments.
Change-Id: I67f7f461781723a4476e2b83377f31736d7c4870
Previously the object range was not used for calculating the order
limit. This meant that even if you were downloading only a small range,
it would account bandwidth based on the full segment.
This doesn't fully address the accounting, since lazy segment
downloads do not send their requested range or requested limit.
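For illustration, with made-up numbers: downloading the first 1 MiB of a
64 MiB segment previously produced an order limit sized for the full
64 MiB; with the range taken into account, the order limit corresponds
to roughly the requested 1 MiB.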
Change-Id: Ic811e570c889be87bac4293547d6537a255078da
Currently the interface is not useful. When we need to vary the
implementation for testing purposes we can introduce a local interface
for the service/chore that needs it, rather than using the large api.
Unfortunately, this requires adding a cleanup callback for tests, there
might be a better solution to this problem.
Change-Id: I079fe4dbe297b0ae08c10081a1cea4dfbc277682
The system and database time may drift. We should use database time for
absolute "as of system time" to ensure that it's not newer than the
current database time. When the "as of system time" is in the future,
then the query will fail.
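A sketch of deriving the timestamp from the database clock (CockroachDB
syntax; the helper name and placement are hypothetical):
```
package dbtime

import (
	"context"
	"database/sql"
	"fmt"
	"time"
)

// asOfSystemTime builds an AS OF SYSTEM TIME clause from the database's
// own clock, so the timestamp can never be newer than the database time.
func asOfSystemTime(ctx context.Context, db *sql.DB, offset time.Duration) (string, error) {
	var dbNow time.Time
	if err := db.QueryRowContext(ctx, `SELECT now()`).Scan(&dbNow); err != nil {
		return "", err
	}
	asOf := dbNow.Add(-offset) // stay safely in the database's past
	return fmt.Sprintf("AS OF SYSTEM TIME '%s'", asOf.UTC().Format(time.RFC3339Nano)), nil
}
```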
Change-Id: I5423f6aaad966ca03a76b5ff805bfba932e44a51
errs.Class should not contain "error" in the name, since that causes a
lot of stutter in the error logs. As an example a log line could end up
looking like:
ERROR node stats service error: satellitedbs error: node stats database error: no rows
Whereas something like:
ERROR nodestats service: satellitedbs: nodestatsdb: no rows
Would contain all the necessary information without the stutter.
Change-Id: I7b7cb7e592ebab4bcfadc1eef11122584d2b20e0
Initially there were pkg and private packages; however, for all
practical purposes there's no significant difference between them. It's
clearer to
have a single private package - and when we do get a specific
abstraction that needs to be reused, we can move it to storj.io/common
or storj.io/private.
Change-Id: Ibc2036e67f312f5d63cb4a97f5a92e38ae413aa5
"cache" is a really common variable and type name, and we have already
used a package name alias for it in multiple places.
Change-Id: I6435785b7549b541d533de59ec94557b9bd11e04
Initially we duplicated the code to avoid large-scale changes to
the packages. Now that we are past the metainfo refactor, we can
remove the duplication.
Change-Id: I9d0b2756cc6e2a2f4d576afa408a15273a7e1cef
Currently the loop handling is heavily related to the metabase rather
than metainfo.
metainfo has over time become the "public API" for accessing
the metabase data.
This also updates monkit.lock, because monkit monitoring does not handle
ScopeNamed correctly; a followup change to the monitoring check is
needed.
Change-Id: Ie50519991d718dfb872ec9a0176a82e732c97584
metabase has become a central concept and it's more suitable for it to
be directly nested under satellite rather than being part of metainfo.
metainfo is going to be the "endpoint" logic for handling requests.
Change-Id: I53770d6761ac1e9a1283b5aa68f471b21e784198
The cursor was not being used in the batch deletion.
The stream ID was not being used while deleting, which could in rare
circumstances delete a newly uploaded object.
Use the stream ID in deletion, rather than passing that information from
one query to another.
Change-Id: I03271c6e72747e345dfb0bb70989f29e835efd8e
Check that the bloom filter creation date is earlier than the
metainfo loop system time used for db scanning.
Change-Id: Ib0f47c124f5651deae0fd7e7996abcdcaac98fb4
During the metainfo refactor we disabled some validation, as it was designed to validate pointers. Now part of this validation is restored. This is the first part.
Change-Id: I6132f922fe23d60118bbccfdb77fd93c3c81afed
Document the fields that migrated objects are missing; it's easy to
forget that they might not exist.
Avoid downloading the segment if we're not sure whether it's the
correct one. We'll later improve the code with a heuristic to make a
best guess as to which segment to download.
Change-Id: I12395c17bbf0edf25e0d00c8d072fce6085e303b
We recently added a created_at column to the segments table.
Old segments need to get this value from the objects table.
This tool will iterate over all objects and update the corresponding
segments if the created_at column is not set.
Change-Id: Ib5aedc384637e739ee9af84454af0639e2559416