ListSegments loads all the segment data into memory; with inline segments
and large objects this can add up to a lot of data.
Change-Id: I037738f0e70b810ecbea7d83b00ea7ca9eb90c7a
The IterateDatabase method was used only by the zombie segment reaper, which has been removed for the multipart implementation.
Change-Id: I93e1294236612d6d82b2ab57053bb84e653f72b4
Iterate over streams/segments rather than loading all of them into
memory. This reduces the memory overhead of the metainfo loop.
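A rough sketch of the callback-style iteration this moves toward; the names, fields, and batching scheme below are illustrative, not the exact metabase API:

    package metaloop

    // loopSegmentEntry holds only the fields a loop observer needs
    // (illustrative, not the real metabase struct).
    type loopSegmentEntry struct {
        StreamID string
        Position uint64
    }

    // iterateSegments fetches one batch at a time and hands each entry to fn,
    // so memory stays bounded by batchSize instead of the total segment count.
    // batchSize must be > 0.
    func iterateSegments(fetch func(offset, limit int) []loopSegmentEntry, batchSize int, fn func(loopSegmentEntry) error) error {
        for offset := 0; ; offset += batchSize {
            batch := fetch(offset, batchSize)
            for _, entry := range batch {
                if err := fn(entry); err != nil {
                    return err
                }
            }
            if len(batch) < batchSize {
                return nil
            }
        }
    }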
Change-Id: I9e98ab98f0d5f6e80668677269b62d6549526e57
The metainfo loop needs only some of the Segment fields. Removing the unused ones will reduce memory consumption during the loop.
Change-Id: I4af8baab58f7de8ddf5e142380180bb70b1b442d
This method will be used only by the metainfo loop, so we need to customize the query to consume less memory.
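For illustration only: the idea is to select just the columns the loop consumes and page through results, rather than reading every segment column at once. The column names and paging scheme here are assumptions, not the real query:

    package metabase

    // loopSegmentsQuery is an illustrative query; the actual column set and
    // pagination used by the metainfo loop may differ.
    const loopSegmentsQuery = `
        SELECT stream_id, position, root_piece_id
        FROM segments
        WHERE (stream_id, position) > ($1, $2)
        ORDER BY stream_id, position
        LIMIT $3
    `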
Change-Id: Iaa97392f483c5df5609d501b3847b80eb1ea2583
We want to read from the DB only those fields that are used by the metainfo loop, so we need to remove most of the fields from LoopObjectEntry.
Change-Id: I14ecae288f631dc0ff54f4c560ce43b736eccdcf
Currently our metabase assumption is that it may contain arbitrary
bucket names, and the endpoint applies the naming constraints as it sees fit.
However, by passing bucket_name as TEXT, pg and crdb automatically try to
convert it to []byte, which may or may not work as intended... or in
some cases not work at all.
Cast all bucket name arguments to []byte to make it work.
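A hedged sketch of the cast, with a simplified query and column names (not the actual metabase code):

    package metabase

    import (
        "context"
        "database/sql"
    )

    // deleteBucketObjects illustrates the argument handling only: the bucket
    // name is passed to the driver as []byte instead of relying on pg/crdb to
    // convert a TEXT parameter.
    func deleteBucketObjects(ctx context.Context, db *sql.DB, projectID []byte, bucketName string) error {
        _, err := db.ExecContext(ctx,
            `DELETE FROM objects WHERE project_id = $1 AND bucket_name = $2`,
            projectID, []byte(bucketName),
        )
        return err
    }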
Change-Id: I44650f5c873010997398bb0163d7f56ff6d9b5cf
We want a custom loop iterator to avoid reading all object fields and thereby reduce memory consumption. This first step just renames the existing iterator to IterateLoopObjects.
Change-Id: I8878ff21a49ba224db2d497cc8f9076e75c7609e
Currently the old encrypted keys may not match the path component
encoding. Change the iterator such that the prefixes handle arbitrary
byte sequences.
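One standard way to make a prefix scan work for arbitrary byte sequences is to compute an exclusive upper bound by incrementing the last byte that is not 0xFF; a sketch of that technique, not necessarily the exact approach taken here:

    package metabase

    // prefixLimit returns the smallest byte string greater than every key
    // that starts with prefix, or nil if no such bound exists (all 0xFF).
    func prefixLimit(prefix []byte) []byte {
        for i := len(prefix) - 1; i >= 0; i-- {
            if prefix[i] != 0xFF {
                limit := append([]byte{}, prefix[:i+1]...)
                limit[i]++
                return limit
            }
        }
        return nil
    }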
Change-Id: I0a50049f4ef9887e1c4df6f9692f967a054430eb
The new metainfo loop can have memory issues when a single batch contains an object with many segments. This change limits the number of batched segments to a defined limit. The solution is not perfect: a single object with an extremely large segment count can still exceed the limit by a lot. We need to prepare a safer solution soon.
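A rough sketch of the batching cap, with hypothetical names; as noted above, a single object with a huge segment count can still push one batch past the limit:

    package metaloop

    // object pairs a stream with its segment count (illustrative).
    type object struct {
        StreamID     string
        SegmentCount int
    }

    // nextBatch takes objects until the summed segment count reaches limit.
    // It always takes at least one object, so one object with an extreme
    // segment count can still exceed the limit.
    func nextBatch(objects []object, limit int) (batch, rest []object) {
        total := 0
        for i, obj := range objects {
            if i > 0 && total+obj.SegmentCount > limit {
                return objects[:i], objects[i:]
            }
            total += obj.SegmentCount
        }
        return objects, nil
    }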
Change-Id: Iefcf466d5bac76513d4219b1a9d99adc361c54ae
It looks like we cannot use the root piece ID as an indicator of whether a segment is inline, as we have cases on the SLC satellite where inline segments have a root piece ID set. Checking the pieces should be more reliable.
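A minimal sketch of the check this suggests; the type and field names are illustrative, not the exact metabase types:

    package metabase

    // remotePiece and segment carry just the fields relevant to the check.
    type remotePiece struct {
        Number int
        Node   string
    }

    type segment struct {
        RootPieceID []byte
        Pieces      []remotePiece
    }

    // isInline treats a segment with no remote pieces as inline, instead of
    // relying on an unset root piece ID, which turned out not to be reliable.
    func isInline(s segment) bool {
        return len(s.Pieces) == 0
    }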
Change-Id: I2377ff88861390342273f5e71871373eaf462615
Segments are now read in batches: for each batch of objects
we read all the segments for those objects.
Change-Id: Idaf19bbe4d4b095065d59399dd326e22c57499a6
The subquery for DELETE FROM objects returns a stream_id field for filtering. Unfortunately stream_id is not indexed. This change removes the subquery from the CockroachDB delete bucket query.
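A heavily simplified before/after sketch of the idea; the real queries, tables, and batching are more involved:

    package metabase

    // Before (simplified): the delete filters on stream_id values returned by
    // a subquery, but stream_id is not an indexed column on objects.
    const deleteBucketObjectsWithSubquery = `
        DELETE FROM objects
        WHERE stream_id IN (
            SELECT stream_id FROM objects
            WHERE project_id = $1 AND bucket_name = $2
        )
    `

    // After (simplified): delete directly by the indexed bucket prefix,
    // without the stream_id subquery.
    const deleteBucketObjectsDirect = `
        DELETE FROM objects
        WHERE project_id = $1 AND bucket_name = $2
    `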
Change-Id: If1abe21668c593e6d4bdc3ba8cdbad26c09d234e
When listing pending objects with prefix, the prefix should be prepended
to the EncryptedPath in satStreamID. Otherwise, listing multipart
uploads may display a different UploadID than expected.
Change-Id: I27e9f9af9348783e053ad123121b6ddd051739e4
We need to keep empty inline segments, as we did with pointerDB, because otherwise old uplinks won't be able to download such files after uploading them. To reduce the number of empty inline segments, the uplink side needs to skip the empty last inline segment for multipart uploads.
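A sketch of the uplink-side skip described above, with hypothetical names (not the actual uplink code):

    package uplink

    // pendingSegment describes the data the uplink is about to commit for
    // the last segment of an upload (illustrative).
    type pendingSegment struct {
        Inline bool
        Data   []byte
    }

    // shouldSkipLastInline reports whether the uplink can avoid creating an
    // empty trailing inline segment for a multipart upload. Non-multipart
    // uploads keep the empty inline segment so old uplinks can still
    // download the object.
    func shouldSkipLastInline(multipart bool, last pendingSegment) bool {
        return multipart && last.Inline && len(last.Data) == 0
    }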
Change-Id: Ice86c805babba1ad17149754cbd6b3f4fd652722
ListAllBuckets could skip buckets when the total number of buckets
exceeds the list limit. Replace listing buckets with iterating directly
over the objects table.
Change-Id: I43da2fdf51e83915a7854b782f0e9ec32c373018
Until now we were using a single RS per object, but it turns out that we
need to be able to support an RS per segment, and we need to give the uplink this information while downloading.
In addition, we use the RedundancySchemePerSegment flag on the GetObject request to detect whether
we should try to get the RS from the segment for this request's response.
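A rough sketch of that fallback logic; the type names are illustrative, not the actual protobuf definitions:

    package metainfo

    // redundancyScheme is a stand-in for the RS parameters (illustrative).
    type redundancyScheme struct {
        Required, Repair, Optimal, Total int
    }

    // chooseRS picks the redundancy scheme to return for a download: when the
    // uplink set the RedundancySchemePerSegment flag and the segment carries
    // its own RS, prefer the per-segment value, otherwise fall back to the
    // per-object one.
    func chooseRS(perSegmentRequested bool, segmentRS, objectRS *redundancyScheme) *redundancyScheme {
        if perSegmentRequested && segmentRS != nil {
            return segmentRS
        }
        return objectRS
    }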
Change-Id: I209dad324496ff59b521b11d2343da61dcdbe7f5
When there are concurrent refreshes of the cache and the entries are
missing, this can cause multiple database calls, even though
only one is needed.
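One common way to collapse concurrent refreshes into a single database call is golang.org/x/sync/singleflight; a hedged sketch of that approach, not necessarily what this change does:

    package cache

    import (
        "context"

        "golang.org/x/sync/singleflight"
    )

    type nodeCache struct {
        group singleflight.Group
        fetch func(ctx context.Context, key string) (interface{}, error)
    }

    // get ensures that concurrent callers asking for the same missing key
    // share one database call instead of each issuing their own.
    func (c *nodeCache) get(ctx context.Context, key string) (interface{}, error) {
        value, err, _ := c.group.Do(key, func() (interface{}, error) {
            return c.fetch(ctx, key)
        })
        return value, err
    }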
Change-Id: I1ae7a124bbdd1570473cf3a032d375d2f25a8426
Adds tests to BeginObjectNextVersion and BeginObjectExactVersion
to check the behavior when an older or a newer committed version
exists.
The current behavior is that everything is committed.
Change-Id: Ia8facbe0dc038a5d214e4e56da3c8e4df2f18900
Old uplinks send some additional information inside marshaled protobufs, and we need to extract things like encryption parameters from them. Newer uplinks pass this information directly in the request.
Change-Id: I0b575e68c3ed98481247fe38344e7d61cbd542ba
This adds AliasPieces run length encoding. On average it should bring our
pieces encoding to:
repair=50, optimal=85, total=90: 152.0 bytes
repair=16, optimal=37, total=50: 65.4 bytes
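A hedged sketch of run-length encoding over piece numbers, meant only to illustrate the idea; the actual AliasPieces wire format is more compact and differs in detail:

    package metabase

    // aliasPiece maps a piece number to a node alias (illustrative).
    type aliasPiece struct {
        Number int32
        Alias  int32
    }

    // pieceRun is one run of consecutive piece numbers; only the first number
    // and the aliases need to be stored, which is where the size win comes from.
    type pieceRun struct {
        FirstNumber int32
        Aliases     []int32
    }

    // runLengthEncode groups consecutive piece numbers into runs so a piece
    // number is not repeated per entry. Assumes pieces is sorted by Number.
    func runLengthEncode(pieces []aliasPiece) []pieceRun {
        var runs []pieceRun
        for _, p := range pieces {
            n := len(runs)
            if n > 0 && runs[n-1].FirstNumber+int32(len(runs[n-1].Aliases)) == p.Number {
                runs[n-1].Aliases = append(runs[n-1].Aliases, p.Alias)
                continue
            }
            runs = append(runs, pieceRun{FirstNumber: p.Number, Aliases: []int32{p.Alias}})
        }
        return runs
    }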
Change-Id: I391a9183164828f05383a3cde9ab0e4549c2d440
We will be adding a cache for nodes, so using completely random nodes wouldn't
show the actual performance.
Change-Id: I94f18283712812f05f7795efd3c7cf57499fa52c
We need to keep an in-memory cache to avoid lookups into the aliases table.
This adds the in-memory state of the cache.
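A minimal sketch of such in-memory state, assuming simplified types (not the actual implementation):

    package metabase

    import "sync"

    // nodeAliasMap keeps the bidirectional mapping between node IDs and their
    // small integer aliases, so lookups avoid hitting the aliases table.
    type nodeAliasMap struct {
        mu      sync.RWMutex
        byNode  map[string]int32
        byAlias map[int32]string
    }

    func newNodeAliasMap() *nodeAliasMap {
        return &nodeAliasMap{
            byNode:  map[string]int32{},
            byAlias: map[int32]string{},
        }
    }

    // add records a node -> alias pair in both directions.
    func (m *nodeAliasMap) add(nodeID string, alias int32) {
        m.mu.Lock()
        defer m.mu.Unlock()
        m.byNode[nodeID] = alias
        m.byAlias[alias] = nodeID
    }

    // alias returns the alias for a node ID, if it is cached.
    func (m *nodeAliasMap) alias(nodeID string) (int32, bool) {
        m.mu.RLock()
        defer m.mu.RUnlock()
        alias, ok := m.byNode[nodeID]
        return alias, ok
    }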
Change-Id: Ief2b9bb19e10b46839b9208472dfc3035eb49af3
This is the first step in supporting node aliases. It adds a table
that automatically assigns aliases to nodes inserted into the table.
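A sketch of such a table, using a generic sequence-backed column to hand out aliases; the actual schema and names are assumptions, not the real migration:

    package metabase

    // Illustrative schema: inserting a node id assigns it the next alias
    // automatically via the serial column.
    const createNodeAliasesTable = `
        CREATE TABLE node_aliases (
            node_id    BYTEA NOT NULL UNIQUE,
            node_alias SERIAL NOT NULL PRIMARY KEY
        )
    `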
Change-Id: Ibdf40097c3c1e5b371500203f8db203505a48adc
This ensures the caveats are unique even when they contain the same
permissions, and will therefore result in unique macaroons. This is important so that
revocation doesn't impact more macaroons than intended.
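One way to make otherwise-identical caveats unique is to attach a random nonce before serializing; the sketch below assumes that approach and is not the exact macaroon code:

    package macaroon

    import "crypto/rand"

    // caveat is a simplified stand-in for the real caveat structure.
    type caveat struct {
        AllowedPaths [][]byte
        Nonce        []byte
    }

    // withNonce fills in a random nonce so that two caveats with identical
    // permissions still serialize to different bytes, and therefore produce
    // different macaroons that can be revoked independently.
    func withNonce(c caveat) (caveat, error) {
        nonce := make([]byte, 16)
        if _, err := rand.Read(nonce); err != nil {
            return caveat{}, err
        }
        c.Nonce = nonce
        return c, nil
    }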
Change-Id: I6354edd0119f2d85eaf580f2d1926a3de9151b88
We need this method to fix repairing pending objects. In another PR, it
will replace the GetObjectLatestVersion + GetSegmentByPosition calls
that are currently executed.
Change-Id: I4c5c2ab604edf898452b6fd21b86d4d3f970ce79