We need to keep an in-memory cache to avoid lookups into the aliases
table.
This adds the in-memory state of the cache.
Change-Id: Ief2b9bb19e10b46839b9208472dfc3035eb49af3
This is the first step in supporting node aliases. It adds a table
that automatically assigns aliases to nodes inserted into it.
Change-Id: Ibdf40097c3c1e5b371500203f8db203505a48adc
This ensures the caveats are unique even when they contain the same
permissions, so they result in unique macaroons. This is important to
ensure revocation doesn't impact more macaroons than intended.
Change-Id: I6354edd0119f2d85eaf580f2d1926a3de9151b88
We need this method to fix repairing pending objects. In another PR, it
will replace the GetObjectLatestVersion + GetSegmentByPosition calls
that are currently executed.
Change-Id: I4c5c2ab604edf898452b6fd21b86d4d3f970ce79
Comparing the result from a subquery with the "IN" operator instead of
"=" makes a huge difference in the execution time of the SQL query on
CockroachDB.
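A hedged illustration of the query shape change; the actual query and
columns are not shown here, so these strings are illustrative only:

```go
// Illustrative only: the real query differs, but the change is swapping
// "=" for "IN" when comparing against a subquery result.
const (
	slowQuery = `DELETE FROM segments WHERE stream_id =  (SELECT stream_id FROM objects WHERE /* ... */)`
	fastQuery = `DELETE FROM segments WHERE stream_id IN (SELECT stream_id FROM objects WHERE /* ... */)`
)
```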
Change-Id: I76e8f75a7bc95951667345d1ed9bd60f9aef3edb
We want to have a single uplink branch for the standard and
multipart-upload satellite, but some tests are using helper methods
from multipart. This change adds the methods used by the uplink tests.
Change-Id: I82352ed56674ff7e8743b58061ba594018e78e3b
We are checking if the satStreamID was created in the last 48 hours.
If it is older, we treat it as expired and fail to unmarshal it.
Since the satStreamID is also the Upload ID for multipart uploads, this
means that all calls fail for multipart uploads older than 48 hours.
Even aborting old multipart uploads is not possible.
To resolve this issue, we should stop checking satStreamID for
expiration.
Change-Id: Ieaf53ed3cd800cdd08843676c2d9490b007d962e
Parts that have segment index gaps should be treated similarly to
multipart objects, because direct calculation of the segment position
does not work.
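A minimal sketch of why direct calculation fails, assuming a
metabase-style part/index position packed into a uint64 (names
illustrative):

```go
// A segment position addressed by part number and index within the part.
type SegmentPosition struct {
	Part  uint32
	Index uint32
}

// Encode packs the position into a single, ordered uint64.
func (p SegmentPosition) Encode() uint64 {
	return uint64(p.Part)<<32 | uint64(p.Index)
}

// With gaps in the indexes, the Nth segment is not simply
// SegmentPosition{Part: 0, Index: N}; it has to be found by listing,
// just like for multipart objects.
```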
Change-Id: I2717eac36f085b5100f3d600fcf0ce056202a9eb
CreateGetOrderLimits is not used anymore because we have
CreateGetOrderLimits2. We need to remove the old method and fix the
name of the second.
Change-Id: I59148b8d28fc9dbab7d452c884319125a02745d1
In some cases we need to set the encryption parameters later, with the
CommitObject method. This change makes Encryption optional with the
BeginObject* methods and mandatory with CommitObject if not set
earlier.
Change-Id: I812c9b0e8fc213ca32d4758e0e68227e0e9bdd32
In the past we stored a fixed segment size with StreamInfo, encrypted
in the metadata. The value was the unencrypted size of the segment,
not the encrypted one.
Change-Id: Id6b18440c674223eabbb152b1636c83e1ab6462c
On upload we need to override both the pending and the committed
object. This change adjusts DeleteObjectAllVersions to delete both.
Change-Id: Ib66c2af207c618119f7bf0de7fa9d3e5145d8641
* Deduplicate the NodeID list prior to fetching IPs (see the sketch
below).
* Use NodeSelectionCache for fetching reliable IPs.
* Return the number of segments, reliable pieces, and all pieces.
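A sketch of the deduplication step, with NodeID standing in for
storj.NodeID:

```go
type NodeID [32]byte // stand-in for storj.NodeID

// dedupe removes duplicate node IDs in place before the cache lookup.
func dedupe(ids []NodeID) []NodeID {
	seen := make(map[NodeID]struct{}, len(ids))
	unique := ids[:0]
	for _, id := range ids {
		if _, ok := seen[id]; ok {
			continue
		}
		seen[id] = struct{}{}
		unique = append(unique, id)
	}
	return unique
}
```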
Change-Id: I13e679caab275488b4037624b840a4068dad9589
To have resilient multi-region satellites, we cannot stop processing
upload/download client requests when Redis isn't responding properly.
These changes avoid stopping the processing of client requests when we
cannot check whether the client exceeds its storage or bandwidth
limits, or cannot update its used storage/bandwidth, because Redis is
not responding successfully or the satellite database returns an
error.
Change-Id: Ia7f12c07fc9ffdfad0e7ff052ff3fd81eca0f0e3
The non-multipart uplink implementation always tries to download an
object by downloading the last segment first (PartNumber=0, Index=-1),
but this approach won't work with multipart objects. We need to reject
such old-style requests with a reasonable message.
Change-Id: I9221e019933565a8d25136bdfef3e054320bac3d
Avoid using the project UUID string representation, because it uses
more bandwidth.
This reduces the encrypted metadata size from 118 -> 97 bytes.
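The saving comes from the string form of a UUID being 36 bytes versus
16 raw bytes; a minimal illustration:

```go
package main

import "fmt"

func main() {
	var id [16]byte // raw UUID bytes
	s := fmt.Sprintf("%x-%x-%x-%x-%x", id[0:4], id[4:6], id[6:8], id[8:10], id[10:16])
	fmt.Println(len(id), len(s)) // 16 vs 36
}
```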
Change-Id: Ic53a81b83acc065f24f28cd404f9c0b1fe592594
The total_plain_size and total_encrypted_size columns in the objects
table were set as INT4, which limits the size of committed objects to
just 2 GiB.
This patch migrates the DB to change the type of these fields to INT8.
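A hedged sketch of the statements involved; the real change runs
through the metabase migration framework, so the wrapping here is
illustrative:

```go
// Illustrative migration step widening the size columns.
var widenSizeColumns = []string{
	`ALTER TABLE objects ALTER COLUMN total_plain_size TYPE INT8;`,
	`ALTER TABLE objects ALTER COLUMN total_encrypted_size TYPE INT8;`,
}
```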
Change-Id: Iad7e7b44a652e6c5b8e17b80588637bb48390fe6
IterateObjectsAllVersionsWithStatus
We need a different implementation for IterateObjectsAllVersions
because we want to iterate over all objects without specifying the
object status.
The existing method gets a new name, but the implementation details
are not changed.
Change-Id: I01b987996772fa7f8fd73da9910d52db2d1aa0d7
This fixes issues with passing observers between iteration methods.
It's not the best implementation, but I think we will need to optimize
it soon one way or another.
Change-Id: I574599bfd10822d84e2d2f1800bcd88e176a76ea
We need to be able to update just the remote_pieces column in the DB.
This is needed at least for the repair process.
Change-Id: I20dcc9b06babfefbbf102f32b1d14946379f26c2
It was designed to detect and remove zombie segments in the PointerDB.
This tool is no longer relevant with the MetabaseDB.
Change-Id: I112552203b1329a5a659f69a0043eb1f8dadb551
We need to be able to list all buckets in the DB without knowing the
project ID. This method will be used to list buckets for the metainfo
loop implementation based on metabase.
Change-Id: Iac75af0eee4f31e80a15577575a8249cbca787b2
- TestBucketNameValidation
- TestBatch
- TestCommitObjectMetadataSize
- TestIDs
TestOverwriteZombieSegments is removed as not relevant to metabase.
Change-Id: I13cf5abe342089960628f185061303fd4f9d09a4
This also removes the
TestEndpoint_DeleteObjectPieces_ObjectWithoutLastSegment test case as it
does not seem relevant to metabase.
Change-Id: I06a0ecaa8232c10c15e433517a7ba056933bf858
We should set the client-requested maxParts to MaxListLimit if it is
greater than that value, instead of returning an error.
MinIO's default value for maxParts is 10,000, while the satellite's
MaxListLimit is 1,000. If we returned an error, ListParts with the
default maxParts would always fail.
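A minimal sketch of the clamping, assuming a MaxListLimit constant in
metabase:

```go
// Clamp instead of rejecting: MinIO sends maxParts=10000 by default.
if maxParts > metabase.MaxListLimit { // assumed constant, 1,000 on the satellite
	maxParts = metabase.MaxListLimit
}
```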
Change-Id: I06739e1d8d8f96803eba491585395da0443aec04
We have some issues with the SUBSTRING function on CockroachDB, so for
now we are removing it from the SQL query and replacing it with Go
code.
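A sketch of the replacement, assuming the query now returns the full
key and the trimming happens in Go:

```go
// Instead of SUBSTRING(object_key FROM ...) in SQL, fetch the full key
// and trim the prefix after scanning the row.
key := strings.TrimPrefix(fullKey, prefix)
```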
Change-Id: I5be921211067d42e7d1a4997076bcfdbed9617a1
iterator
This method replaces `deleteByPrefix`, as at the moment the only
function of that method was to delete objects in a bucket.
Change-Id: I5266103672003fbd64f3847f53760b1ba0016fe2
Otherwise, if left to default version 0, the iterator will include the
cursor item in the result, which fails some tests.
Change-Id: I85103a36852477f371ec46c673a82c2e129978b7
The old iterator returns object keys without prefixes; this helps to
reduce the bandwidth from the database. The endpoint also doesn't send
the prefixes.
Change-Id: I77d85dae671ee3a16abe75db14e19674e80abaf4
to metabase
* EncryptedMetainfoEncryptedKey added to CommitSegment and
UpdateMetadata request
* EncryptedMetainfoEncryptedKey returned with GetObject response and all
delete responses
* EncryptedMetainfoEncryptedKey returned with object iterator results
Change-Id: I917541ab5f3e1863bc8f238d17a15fbf72a23025
This change adjusts the metainfo endpoint to use metabase for
uploading and downloading remote objects. Inline segments will be
added later.
Change-Id: I109d45bf644cd48096c47361043ebd8dfeaea0f3
While resolving conflicts with `master` I missed this change which is
needed e.g. to run storj-sim.
Change-Id: I56a548ed92b978510526c26c81af03051acfde2f
Make metainfo.RSConfig a valid pflag config value. This allows us to
configure the RSConfig as a string like k/m/o/n-shareSize, which makes
having multiple supported RS schemes easier in the future.
RS-related config values that are no longer needed have been removed
(MinTotalThreshold, MaxTotalThreshold, MaxBufferMem, Verify).
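A hedged sketch of parsing the k/m/o/n-shareSize form; the real
RSConfig likely parses the share size as a memory size, so the plain
integer here is illustrative:

```go
import (
	"fmt"
	"strconv"
	"strings"
)

// parseRS parses "k/m/o/n-shareSize", e.g. "29/35/80/110-256".
func parseRS(s string) (k, m, o, n, shareSize int, err error) {
	halves := strings.SplitN(s, "-", 2)
	if len(halves) != 2 {
		return 0, 0, 0, 0, 0, fmt.Errorf("expected k/m/o/n-shareSize, got %q", s)
	}
	if _, err = fmt.Sscanf(halves[0], "%d/%d/%d/%d", &k, &m, &o, &n); err != nil {
		return 0, 0, 0, 0, 0, err
	}
	shareSize, err = strconv.Atoi(halves[1])
	return k, m, o, n, shareSize, err
}
```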
Change-Id: I0178ae467dcf4375c504e7202f31443d627c15e1
After moving SatStreamID and SatSegmentID from common, I missed
changing some methods in the metainfo endpoint. This change fixes that.
Change-Id: I34e121fce47371ee4cfd92cce03809520b68859f
After moving SatStreamID and SatSegmentID from common, I missed
changing some methods in the metainfo endpoint. This change fixes that.
Change-Id: I3344623dc7acfa73db6c20cd3212301e74335857
We have some types that are only valid for satellite usage. Such types
are SatStreamID and SatSegmentID. This change moves those types to
storj/storj and adds basic infrastructure for generating code.
Change-Id: I1e643844f947ce06b13e51ff16b7e671267cea64
Some metainfo endpoint methods are not used, but we still have
implementations there. This change removes the unused code and returns
an unimplemented error for those methods.
Change-Id: I74e75e0caff76a4f5d119ee989b687b4e9d6e6f9
This change removes the unused 'createRequests' struct. As far as I
remember, it was used to help validate the old metainfo
beginObject/commitObject flow.
Change-Id: I0f139b9934196d73f26eafa347ba5605722f3a55
As part of the Metainfo Refactoring, we need to make the Metainfo Loop
work with both the current PointerDB and the new Metabase. Thus, the
Metainfo Loop should pass more specific Object and Segment types to
the Observer interface instead of pb.Pointer.
After this change, there are still a couple of use cases that require
access to the pb.Pointer (hence we have it as a field in the
metainfo.Segment type):
1. Expired Deletion Service
2. Repair Service
It would require additional refactoring in these two services before
we are able to clean this up.
Change-Id: Ib3eb6b7507ed89d5ba745ffbb6b37524ef10ed9f
To avoid further name collisions, the very broadly named package is
moved into the consoleauth package, where it's also mainly used.
Change-Id: Ie563c9700adbf0553baca2b7b8ba4a1d9c29d144
Jira: https://storjlabs.atlassian.net/browse/PG-69
There are a number of segments with piece_hashes_verified = false in
their metadata on the US-Central-1, Europe-West-1, and Asia-East-1
satellites. Most probably, this happened due to a bug we had in the
past. We want to verify them before executing the main migration to
metabase. This would simplify the main migration to metabase with one
less issue to think about.
Change-Id: I8831af1a254c560d45bb87d7104e49abd8242236
Currently we have no way to actually set one
of the following limits to 0 (meaning not usable):
- maxBuckets
- usageLimit
- bandwidthLimit
With the fields nullable,
NULL corresponds to the global default,
0 now actually means 0, and
a set value determines a custom limit.
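The lookup then becomes a three-way distinction; a minimal sketch:

```go
// NULL → global default; 0 → actually zero (not usable); >0 → custom limit.
func effectiveLimit(custom *int64, globalDefault int64) int64 {
	if custom == nil {
		return globalDefault
	}
	return *custom
}
```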
Change-Id: I92bb77529dcbd0881ae8368921be9d246eb0919e
Jira: https://storjlabs.atlassian.net/browse/PG-67
There are a number of old-style objects (without the number of segments
in their metadata) on US-Central-1, Europe-West-1, and Asia-East-1
satellites. We want to migrate their metadata to contain the number of
segments before executing the main migration to metabase. This would
simplify the main migration to metabase with one less issue to think
about.
Change-Id: I42497ae0375b5eb972aab08c700048b9a93bb18f
objectdeletion.ObjectIdentifier with metabase.ObjectLocation
Another change to use metabase.ObjectLocation across satellite codebase
to avoid duplication and provide better type safety.
Change-Id: I82cb52b94a9107ed3144255a6ef4ad9f3fc1ca63
This PR updates `uplink rb --force` command to use the new libuplink API
`DeleteBucketWithObjects`.
It also updates `DeleteBucket` endpoint to return a specific error
message when a given bucket has concurrent writes while being deleted.
Change-Id: Ic9593d55b0c27b26cd8966dd1bc8cd1e02a6666e
This PR fixes a deadlock that can happen when the number of piece
deletion requests differs from the distinct node count of those
requests. The success threshold should be based on the number of nodes
instead of the number of requests.
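A sketch of the fix, with illustrative names (NodeID stands in for
storj.NodeID; req.Node and newSuccessThreshold are assumed):

```go
// Base the success threshold on distinct nodes, not raw request count.
distinct := make(map[NodeID]struct{})
for _, req := range requests {
	distinct[req.Node] = struct{}{}
}
threshold := newSuccessThreshold(len(distinct)) // previously len(requests)
```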
Change-Id: I83073a22eb1e111be1e27641cebcefecdc16afcb
This change forces the test of GetObjectIPs to use multiple remote
segments (earlier versions of the test were accidentally using inline
segments). This change also revealed a small bug in the for loop code,
which is fixed.
Change-Id: Ic486b079d221952ba13553acf0ca41a8873f3f21
satellite.DB.Console().Projects().GetAll database query
can be replaced with planet.Uplinks[0].Projects[0].ID
Change-Id: I73b82b91afb2dde7b690917345b798f9d81f6831
Currently there is confusion between the responsibilities of
metainfo.Endpoint, metainfo.Service, and PointerDB.
Separating the database "service" and its types into a separate
package allows us to disentangle them.
This gives us responsibilities:
1. metainfo.Endpoint - translates requests and permissions
2. metainfo.Service - handles requests and coordinates with
objectdeletion, piecedeletion, metabase
3. metabase.Service - communication with the database interface and invariants
Currently metabase will contain the types necessary to coordinate
information.
Change-Id: If8c992b4b9d9e70a56bbd8a378a5af6b1a2ec34e
TestMaxOutBuckets is one of our slower tests (50-90s).
This change seems to make it 2-12s.
It reduces the number of buckets that need to be created.
It also removes unnecessary storage nodes.
Change-Id: I1012fc6e9258b2f7674b16da4e8b418741c93eea
This PR changes DeleteBucket to be able to delete all objects within a
bucket if `DeleteAll` is set in `BucketDeleteRequest`.
It also changes `DeleteBucket` API to treat `ErrBucketNotFound` as a
successful delete operation instead of returning an error back to the
client.
Change-Id: I3a22c16224c7894f2d0c2a40ba1ae8717fa1005f
This change accomplishes multiple things:
1. Instead of having a max in flight time, which means
we effectively have a minimum bandwidth for uploads
and downloads, we keep track of what windows have
active requests happening in them.
2. We don't double check when we save the order to see if it
is too old: by then, it's too late. A malicious uplink
could just submit orders outside of the grace window and
receive all the data, but the node would just not commit
it, so the uplink gets free traffic. Because the endpoints
also check for the order being too old, this would be a
very tight race that depends on knowledge of the node system
clock, but best to not have the race exist. Instead, we piggy
back off of the in flight tracking and do the check when
we start to handle the order, and commit at the end.
3. Change the functions that send orders and list unsent
orders to accept a time at which that operation is
happening. This way, in tests, we can pretend we're
listing or sending far into the future after the windows
are available to send, rather than exposing test functions
to modify internal state about the grace period to get
the desired effect. This brings tests closer to actual
usage in production.
4. Change the calculation for whether an order is allowed to be
enqueued due to the grace period to just look at the order
creation time, rather than some computation involving the window
it will be in. In this way, you can easily answer the question
"will this order be accepted?" by asking "is it older than X?"
where X is the grace period (see the sketch after this list).
5. Increases the frequency we check to send up orders to once
every 5 minutes instead of once every hour because we already
have hour-long buffering due to the windows. This decreases
the maximum latency that an order will be reported back to
the satellite by 55 minutes.
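Point 4 reduces admission to a pure function of order age; a minimal
sketch, with the hour-long window derivation shown only as an
assumption:

```go
// An order may be enqueued iff it is younger than the grace period.
func canEnqueue(now, orderCreated time.Time, grace time.Duration) bool {
	return now.Sub(orderCreated) < grace
}

// Assumption: orders are grouped into hour-long windows derived from
// their creation time.
func windowOf(orderCreated time.Time) time.Time {
	return orderCreated.Truncate(time.Hour)
}
```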
Change-Id: Ie08b90d139d45ee89b82347e191a2f8db1b88036
services
This PR adds a limiter on the number of concurrent object deletions
that can be handled, so we don't run out of memory.
Change-Id: Id2ce368af6f86845fcdfd34cb2f5e460efe9b272
* Add all new orders to the orders filestore instead of the database.
* Submit orders from the filestore to the new satellite SettleWindow
endpoint.
The orders filestore will eventually replace the orders DB completely.
For now, we will still be checking the orders DB and submitting those
orders if they exist. In a later release, we will completely remove the
orders DB, but we need both the DB and the filestore for the
transitional period.
Change-Id: Iac8780fd5ab770296181bbd313e1d335f072d4dc
This change requires less work from the user of the piecedeletion
service by moving the overlay database call into the package.
Change-Id: I14a150ab71fe885780e7a7a74db006a779507ae5
This adds the unimplemented GetObjectIPs method to metainfo endpoint so
we can import new common protobuf definitions.
Change-Id: I154f26baccb6bb3c66de3eb25611930545c9754b
Small extension to the test case where another partner
uploads/downloads to/from the same bucket as the partner which created
the bucket.
Change-Id: Ib674fe5f95f868b71341e30aba5e2440847738f4
Use the new objectdeletion package for deleting pointers.
In the best case scenario, it will make one database call to fetch
information about the number of segments, and another request to
delete and fetch information about other segments.
This PR also changes our object deletion API to return no error when
an object is not found and instead consider such an operation a
success. This behavior is aligned with the S3 API and makes the code
less complex.
Change-Id: I280c56e8b5d815a8c4dafe8227689467e899775a
Improve our delete logic to require fewer database requests.
This PR creates a new objectdeletion package
Change-Id: I0500173bb9b8c771accb350f076329ede6dbb42c
request
We are no longer using `BeginDeleteSegment` or `ListSegments`, so we
can avoid generating a StreamID as a result of `BeginDeleteObject`.
The StreamID from `BeginDeleteObject` is also not used on the Uplink
side.
Change-Id: I3b068deab17068459849b5cf05811cad4b8a9034
This runs each benchmark for one iteration to ensure that they are
valid. Unfortunately, it does not give any useful metrics as output.
Change-Id: I68940398c8dd849aed656bd12656f48d5df10128
When a request comes in on the satellite api and we validate the
macaroon, we now also check if any of the macaroon's tails have been
revoked.
Change-Id: I80ce4312602baf431cfa1b1285f79bed88bb4497
Every now and then we see the repair error, "piece to add already exists".
With these new logs we should be able to verify if it is due to a change in
the pointer. These logs are only temporary.
Change-Id: I029390cc4816668707546df14ed2cfe7ca192b0b
Also distinguish the purpose for selecting nodes to avoid potential
confusion about what should allow caching and what shouldn't.
Change-Id: Iee2451c1f10d0f1c81feb1641507400d89918d61
Before the deleter would close its done channel once, so if additional
tests shared a storagenode, even if not in parallel, the later waits
would not work properly. This fixes that problem.
Change-Id: I7dcacf6699cef7c2c2948ba0f4369ef520601bf5
A delay of 100ms could happen due to other things happening on the
test server. Increase the time to 1s.
Change-Id: I2c7c21f966101771633d73e84cf9850d28089e71
When running testplanet tests, mark storagenode peer PieceDeleter as in
testing mode so that you don't have to do it on each test.
Change-Id: I2592e02c63f8bcc9152ecf436bac4e798b08bccf
In cases like the segment reaper script connecting to the metainfodb,
we don't want a db migration to happen automatically when we call
metainfo.NewStore. This adds a MigrateToLatest method for postgreskv
and cockroachkv, and calls MigrateToLatest in the places where NewStore
used to create tables.
Change-Id: I682d0f26d609af0601dfdb32a24866cdf5d32a7e
There was a race in the test code for piece deleter, which made it
possible to broadcast on the condition variable before anyone was
waiting. This change fixes that and has Wait take a context so it times
out with the context.
Change-Id: Ia4f77a7b7d2287d5ab1d7ba541caeb1ba036dba3
To improve delete performance, we want to process deletes asynchronously
once the message has been received from the satellite. This change makes
it so that storagenodes will send the delete request to a piece Deleter,
which will process a "best-effort" delete asynchronously and return a
success message to the satellite.
There is a configurable number of max delete workers and a max delete
queue size.
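A minimal sketch of the shape, under the assumption of a bounded
channel as the queue; all types and names here are illustrative:

```go
type DeleteRequest struct {
	PieceIDs [][]byte
}

type Deleter struct {
	queue chan DeleteRequest // capacity = max delete queue size
}

// Enqueue is non-blocking: when the queue is full the request is
// dropped, since deletes are best-effort.
func (d *Deleter) Enqueue(req DeleteRequest) bool {
	select {
	case d.queue <- req:
		return true
	default:
		return false
	}
}

// worker runs in each of the max-delete-workers goroutines.
func (d *Deleter) worker() {
	for req := range d.queue {
		deletePiece(req) // errors are logged, not returned to the satellite
	}
}

func deletePiece(req DeleteRequest) { /* remove pieces from disk */ }
```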
Change-Id: I016b68031f9065a9b09224f161b6783e18cf21e5
Currently it was possible that PopAll returns 1010 items, then makes
one RPC call with 1000 items, then an RPC call with 10 items.
Meanwhile, 500 new items have been added to the queue.
This change ensures that we pull items from the queue early and try to
make RPC batches as large as possible.
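A sketch of the drain loop, with a hypothetical PopUpTo method on the
queue and a hypothetical batch-settling client:

```go
// Keep draining so each RPC batch is as full as possible; items added
// mid-drain end up in a full batch rather than a tiny trailing one.
for {
	jobs := queue.PopUpTo(maxBatchSize) // hypothetical method
	if len(jobs) == 0 {
		break
	}
	if err := client.SettleBatch(ctx, jobs); err != nil { // hypothetical client
		return err
	}
}
```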
Change-Id: I1a30dde9164c2ff7b90c906a9544593c4f1cf0e9
* Delete expired segments in expired segments service using metainfo
loop
* Add test to verify expired segments service deletes expired segments
* Ignore expired segments in checker observer
* Modify checker tests to verify that expired segments are ignored
* Ignore expired segments in segment repairer and drop from repair queue
* Add repair test to verify that a segment that expires after being
added to the repair queue is ignored and dropped from the repair queue
Change-Id: Ib2b0934db525fef58325583d2a7ca859b88ea60d
Replace most of the old libuplink usages in testplanet. 100% migration
will be possible when we are able to implement UploadWithClientConfig
with the new libuplink.
Change-Id: I432d7d4917c7b67d46a058abd0a2a6a13f565ac4
Automatically attach attribution information to bucket during
BeginObject or CreateBucket when the UserAgent is set.
Change-Id: I405cb26c5a2f7394b30e3f2cf5d2214c8781eb8b
This adds support for observers to join the loop together. This
ensures that when multiple observers join, they will be part of the
same loop iteration.
Change-Id: Ie887d4cedfb074b65c782690a2c09c1704f56dfe
During testing it's possible to get into a scenario where all nodes
are offline and the list of requests is empty.
Change-Id: I271c0ca2c72009244df13e8bc1441fcd5f3da9e0
Currently ListV2 loads the whole data set into memory, even when not
all of it is being used, using up more memory than needed.
Change-Id: I5846d979344729b447c108a6cc9f4227229ec981
potential encryption overhead.
This is the same approach we have for validating remote segment size.
https://storjlabs.atlassian.net/browse/USR-619
Change-Id: I2597ee734313a3068fd986001680bbedbf1bed2a
size with BeginObject
Such a solution would add one round trip to the satellite during
upload, so for now we are reverting this until we have a solution for
it.
Change-Id: Ic2d826448ab7b0318cd6922df05deee9167cf2f0
BeginObject response
We want to control the inline segment size and segment size on the
satellite side. We need to return this information to the uplink, like
we do with the redundancy scheme.
Change-Id: If04b0a45a2757a01c0cc046432c115f475e9323c
all permissions
Without read and list permissions, BeginObjectDelete won't return an
error if one occurs. This was breaking batch processing because of the
assumption that, without an error, the response will always be
non-nil.
https://storjlabs.atlassian.net/browse/SM-590
Change-Id: I0fc9539e429110a660eb28725b266d5e4771d198
uuid.UUID implements driver.Value so it can be directly used as a
scannable result.
Replace uses of dbutil.BytesToUUID with uuid.FromBytes.
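The before/after at a call site, sketched:

```go
// before: scan into bytes, then convert
//   var b []byte
//   _ = row.Scan(&b)
//   id, err := dbutil.BytesToUUID(b)

// after: uuid.UUID satisfies the sql/driver interfaces, so it scans directly
var id uuid.UUID
err := row.Scan(&id)
```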
Change-Id: I51a670185ceb3cc2199d5aa2b76bc3fc191ca8fe
Instead of providing the database from outside to testplanet, create
it inside and then allow wrapping and modifying it. This is more
convenient to use.
Change-Id: I9b8f69e6e0a19ff984b4e2bfe927c9100c77bc6c
This implements a service for pointer verification. This makes the
logic slightly clearer, because it's not part of metainfo.
It also adds a peer identity cache, which reduces database calls and
peer identity decoding.
Change-Id: I45da40460d579c6f5fd74c69bccea215157aafda
step 1 in https://review.dev.storj.io/c/storj/uplink/+/1236
Now the old libuplink uses the temporary DeleteBucketReturnDeleted and
DeleteObjectReturnDeleted methods. This way, in the next step, we will
be able to change the DeleteBucket and DeleteObject methods to return
the deleted bucket/object.
Change-Id: I2e638be1960bca6ce1456c92849fcdd6d93e5252
Initial change for checking bucket existence on the satellite side for
requests like BeginObject and ListObjects. This is a simple
implementation that just checks the bucket in the DB, but it should be
improved in the future to avoid DB calls as much as possible.
Part of https://storjlabs.atlassian.net/browse/USR-365
Change-Id: I9076acddc44d7dbfa7612a1c24a007de01621583
This adds a piece deletion handler that has debouncing for failed
dials and batches multiple jobs into a single request.
Change-Id: If64021bebb2faae7f3e6bdcceef705aed41e7d7b
To handle concurrent deletion requests we need to combine them into a
single request.
To implement this we introduce a few concurrency ideas (sketched after
this list):
* Combiner, which takes a node id and a Job and handles combining
multiple requests to a single batch.
* Job, which represents deleting of multiple piece ids with a
notification mechanism to the caller.
* Queue, which provides communication from Combiner to Handler.
It can limit the number of requests per work queue.
* Handler, which takes an active Queue and processes it until it has
consumed all the jobs.
It can provide limits to handling concurrency.
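The conceptual shape of the pieces, with illustrative signatures that
are not the exact ones in the package:

```go
import "context"

type NodeID [32]byte // stand-in for storj.NodeID

// Job is a batch of piece IDs to delete plus a completion callback.
type Job struct {
	Pieces  [][]byte        // piece IDs, illustrative type
	Resolve func(err error) // notification mechanism back to the caller
}

// Queue carries jobs from the Combiner to the Handler and can cap the
// number of requests per work queue.
type Queue interface {
	TryPush(job Job) bool  // false once the per-queue limit is reached
	PopAll() ([]Job, bool) // drained by the Handler until empty
}

// Handler processes an active queue until it has consumed all jobs.
type Handler interface {
	Handle(ctx context.Context, node NodeID, queue Queue)
}
```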
Change-Id: I3299325534abad4bae66969ffa16c6ed95d5574f
The Metainfo method validateAuth checks things like the API key, user
permissions, and rate limits, but in the end all errors were returned
as rpcstatus.Unauthenticated.
Old Metainfo is not touched to avoid backward compatibility issues.
Change-Id: I78eb276210fc50151da58a5c84e13ecd0961da29
The new API has a limited number of options to configure at the
moment. We should remove the unused flags from the Uplink CLI and add
them back if needed in the future.
Change-Id: Icf3f3dadd43cb61a3b408b02d0762aef34425dbf
uplinks
The new libuplink does not store encryption values with the bucket,
but old uplinks use those values for configuration. If the bucket was
created with the new libuplink, we will send back the satellite
defaults.
Change-Id: Ie1bf3682847e07b302270b4c4bf1a7219f4bf011
On the satellite, remove all references to the free_bandwidth column
in the nodes table.
On the storage node, remove references to AllocatedBandwidth and
MinimumBandwidth and mark them as deprecated.
Protobuf message, NodeCapacity, is left intact for backwards compatibility.
Once this is released to all satellites, we can drop the column from the DB.
Change-Id: I2ff6c6537fc9008a0c5588e951afea58ede85838
This change is a special case for batch processing. If in a batch
request CommitSegment and CommitObject are one after another, we can
execute these requests as one. This will avoid the current logic where
we save the pointer for CommitSegment and later delete this pointer
and save it once again under the last segment path for CommitObject.
This change should handle the issue we have in older uplinks with an
incorrect order of storing pointers.
Change-Id: I86514c95df169e6fbc91b52e5117472cae70cb8b
We want to return back to the user as quickly as possible, but also
keep deleting the remaining pieces on the storage nodes.
Change-Id: I04e9e7a80b17a8c474c841cceae02bb21d2e796f
Rationale: if GC kills the satellite, it would be nice to make it
through a repair checker sweep first.
Change-Id: Id56171dc8e13940cfb6481e36a910bad077a01ed
Trace the calls to the DeletePiecesService.DeletePieces method and add
metrics for statistics about the rate at which specific storage nodes
are dialed and the time spent dialing them.
These statistics will help us find out if we should implement
connection queues to storage nodes to reduce the deletion time, in
case we see that we're spending too much time dialing frequent storage
nodes.
Ticket: https://storjlabs.atlassian.net/browse/SM-85
Change-Id: I9601676c3a8ad96c73c93833145929e4817755e2
Fixes https://storjlabs.atlassian.net/browse/USER-240
- Adds UnsynchronizedPut method to metainfo service that overwrites any
existing pointer under the same path
- Uses UnsynchronizedPut in the metainfo endpoint for committing the
segments
Change-Id: Icb43f31ea33f14066ca9dfdcf226eb3079b90948
Currently SNs report their free disk space once per hour. If a node
becomes full, it has to wait until the next contact cycle begins to
report it, all the while receiving and failing upload requests. By
increasing the minimum required disk space, we can give the storage
nodes more time to report their space before they completely fill up.
This change goes hand-in-hand with another change we want to
implement: triggering a capacity report on the SN immediately upon
falling below the threshold.
Change-Id: I12f778286c6c3f582438b0e2949765ac43325e27
Add a monkit metric for when the rate limit is hit.
Logs a warning with the projectID.
https://storjlabs.atlassian.net/browse/SM-165
Change-Id: I352dc40006021990d1bc66a999f62bbf8deb54db
This commit updates our monkit dependency to the v3 version, where it
outputs in an influx style. This makes discovery much easier, as many
tools are built to look at it this way.
Graphite and Rothko will suffer some due to no longer being a tree
based on dots. Hopefully time will exist to update Rothko to index
based on the new metric format.
It also adds an influx output for the statreceiver so that we can
write to InfluxDB v1 or v2 directly.
Change-Id: Iae9f9494a6d29cfbd1f932a5e71a891b490415ff
Allow the rate limit project cache to expire so we can make
project-level rate limit changes without restarting the satellite
process.
The satellite now keeps RS values for the uplink, but old uplinks were
using default bucket settings. Because of that, we need to override
the bucket settings with the satellite settings to avoid breaking
older uplinks.
Change-Id: Ia1068db70e4adbf741c5e81d27d9e39799049c22
This reverts commit 8772867855.
For uplink versions v0.25.0 through v0.30.7, there's a bug with
multi-segment upload, where the last segment is inline, caused by this
commit.
Change-Id: If375e186b23265586caf08991c25980e99f3cc1a
This change adds object deletion to the BeginObject request (before
upload). Now that the satellite controls deletion, we can move the
deletion before upload to the satellite. This change improves two
things:
* no need for an additional request to delete the object before upload
(needs one more change to storj/uplink)
* fixes an issue with the lack of permission to upload if a caveat
allows only writing (e.g. disallows deletes but allows writes)
https://storjlabs.atlassian.net/browse/V3-3362
Change-Id: Ic453146298cdd302df290c532123731a3f99e38e
Change DeleteObjectPieces to delete the segments' pointers of an
object in reverse order.
Last segment: L
N: total number of segments
Deleting in reverse order means: L, then N-1 down to 0
Deleting in reverse order makes BeginDeleteObject usable for deleting
partially uploaded objects that were interrupted (e.g. upload
cancellation).
With this change, the uplink upload cancellation can be changed to use
BeginDeleteObject to clean up already uploaded segments without having
to retrieve orders and dial every single node which stored a piece.
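A sketch of the deletion order, with hypothetical path helpers and an
assumed deletePointer function:

```go
// Delete the special last-segment pointer first, then the indexed
// segments from the highest index down to 0.
if err := deletePointer(ctx, lastSegmentPath(object)); err != nil {
	return err
}
for i := numSegments - 2; i >= 0; i-- {
	if err := deletePointer(ctx, segmentPath(object, i)); err != nil {
		return err
	}
}
```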
Ticket: https://storjlabs.atlassian.net/browse/V3-3525
Change-Id: Ieca6fd3801c4b71671811cb5f08a99d5146928a6
A few variables were not renamed to the new standard piecesTotal and
piecesContentSize, so it was unclear which value was being used. These
have been updated, and some comments made more thorough.
Change-Id: I363bad4dec2a8e5c54d22c3c4cd85fc3d2b3096c
This change updates the storagenode piecestore apis to expose access to
the full piece size stored on disk. Previously we only had access to
(and only kept a cache of) the content size used for all pieces. This
was inaccurate when reporting the amount of disk space used by nodes.
We now have access to the total content size, as well as the total disk
usage, of all pieces. The pieces cache also keeps a cache of the total
piece size along with the content size.
Change-Id: I4fffe7e1257e04c46021a2e37c5adc6fe69bee55
With this change, the RS configuration will be set on the satellite.
The uplink will get the RS values with the BeginObject request and
will use them. For backward compatibility, and to avoid a super large
change, the redundancy scheme stored with the bucket is not touched.
This can be done in the future.
Change-Id: Ia5f76fc10c37e2c44e4f7b8754f28eafe1f97eff
Limits how many times metainfo APIs can be called per second by
project ID. If the limit is exceeded, the API will return
Unauthorized/Too Many Requests.
The limit per second and the size of the limiter cache per project are
configurable, as well as whether the limiter is enabled.
Tests added/updated for the new rate_limit field in the projects table.
Tests added for exceeding limits and disabling the limiter.
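A minimal per-project limiter sketch using golang.org/x/time/rate; the
real implementation sits behind a configurable, size-bounded cache, so
the sync.Map and string key here are assumptions:

```go
package ratelimit

import (
	"sync"

	"golang.org/x/time/rate"
)

var limiters sync.Map // projectID (string) -> *rate.Limiter

// Allow reports whether the project may make another metainfo call now.
func Allow(projectID string, perSecond rate.Limit, burst int) bool {
	l, _ := limiters.LoadOrStore(projectID, rate.NewLimiter(perSecond, burst))
	return l.(*rate.Limiter).Allow()
}
```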
Change-Id: Ic8ad102de3b690a475809d4f684156d5715f20fa
This change is a special case for batch processing. If in a batch
request CommitSegment and CommitObject are one after another, we can
execute these requests as one. This will avoid the current logic where
we save the pointer for CommitSegment and later delete this pointer
and save it once again under the last segment path for CommitObject.
Change-Id: If170e78c8410f5ba5916cbff6a29b9221db9ce2e