If a non-nil value is read from the created_at column of the segments
table, it will be set to the CreatedAt field of SegmentListItem.
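A minimal sketch of the idea, with hypothetical scanning code (the actual
satellitedb query and field names may differ):

    import (
        "database/sql"
        "time"
    )

    // SegmentListItem is a simplified stand-in for the real metabase type.
    type SegmentListItem struct {
        StreamID  []byte
        CreatedAt time.Time
    }

    // scanSegmentItem reads created_at into a *time.Time and copies it
    // into CreatedAt only when it is non-nil, so rows from before the
    // column existed keep a zero CreatedAt.
    func scanSegmentItem(rows *sql.Rows) (SegmentListItem, error) {
        var item SegmentListItem
        var createdAt *time.Time
        if err := rows.Scan(&item.StreamID, &createdAt); err != nil {
            return SegmentListItem{}, err
        }
        if createdAt != nil {
            item.CreatedAt = *createdAt
        }
        return item, nil
    }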
Change-Id: I02691d8e11fad12c1b0e4c443bdebb568016ffe3
The created_at column is first added without a default value to avoid
setting the current time on existing segments.
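The gist, as an illustrative snippet (the real migration SQL may differ):

    // Illustrative only: with no DEFAULT clause, existing segment rows
    // keep created_at = NULL instead of being stamped with the time the
    // migration happened to run.
    const addCreatedAtColumn = `
        ALTER TABLE segments ADD COLUMN created_at TIMESTAMP WITH TIME ZONE
    `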
Change-Id: Ic2fe3da238422e2949e6f3016fbac04eb89ba037
Migration step 148 will cause errors because we missed some
references to the columns being dropped. Removing the step
altogether causes problems with backwards compatibility tests
because the change already exists in the latest release tag.
To circumvent this, we change v148 to an empty migration.
Add methods FindTable and RemoveColumn in private/dbutil/dbschema
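To illustrate the new helpers with self-contained stand-ins (the real
types in private/dbutil/dbschema are richer, and these signatures are
assumptions):

    package main

    import "fmt"

    // Minimal stand-ins for dbschema's schema model.
    type Column struct{ Name string }

    type Table struct {
        Name    string
        Columns []*Column
    }

    type Schema struct{ Tables []*Table }

    // FindTable returns the named table, or nil if it does not exist.
    func (s *Schema) FindTable(name string) *Table {
        for _, table := range s.Tables {
            if table.Name == name {
                return table
            }
        }
        return nil
    }

    // RemoveColumn deletes the named column from the table, if present.
    func (t *Table) RemoveColumn(name string) {
        for i, column := range t.Columns {
            if column.Name == name {
                t.Columns = append(t.Columns[:i], t.Columns[i+1:]...)
                return
            }
        }
    }

    func main() {
        // e.g. adjust an expected schema in a compatibility test so it
        // matches a database where v148 no longer drops the column.
        schema := &Schema{Tables: []*Table{{
            Name:    "nodes",
            Columns: []*Column{{Name: "id"}, {Name: "dropped_col"}},
        }}}
        if table := schema.FindTable("nodes"); table != nil {
            table.RemoveColumn("dropped_col")
        }
        fmt.Println(len(schema.Tables[0].Columns)) // 1
    }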
Change-Id: Ia527e95b88a88c5dc82800928ce6f8cfb879e334
Old pointerdbs can contain keys with just a project ID, segment index, and bucket, without an object key. We need to ignore such keys.
Change-Id: I80a466a94e317a229da236fe6bc9e762e6f7ced6
WHAT: markup and logic for set/update node name
WHY: to have the ability to change the node name for future purposes like filtering or grouping
Change-Id: I44c679df43d34355efc4c6cff260faa1c3ff480e
Move an interface & types used only for testing into a private
subpackage whose name clearly identifies it as being for testing purposes.
Change-Id: I646cf3b6f0a3b518a6f9a125998dc5a02df02db6
In order to prevent traffic amplification attacks, QUIC allows the
application to perform address validation during connection
establishment.
(https://tools.ietf.org/id/draft-ietf-quic-transport-34.html#name-address-validation)
However, this adds another round trip during connection establishment.
In the Storj network, clients are authenticated before the server starts
sending significant amounts of data to them. We believe the amplification
risk from turning off address validation in QUIC will be negligible, and
doing so gives us a significant performance boost during connection
establishment.
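For illustration, quic-go versions contemporary with draft-34 exposed this
through the AcceptToken hook; a sketch, treating the exact config surface
as an assumption:

    import (
        "net"

        quic "github.com/lucas-clemente/quic-go"
    )

    // serverQUICConfig accepts every client, token or not, which skips
    // the Retry-based address validation and saves one round trip. This
    // trades away amplification protection, which we consider acceptable
    // because clients authenticate before bulk data flows.
    func serverQUICConfig() *quic.Config {
        return &quic.Config{
            AcceptToken: func(clientAddr net.Addr, token *quic.Token) bool {
                return true
            },
        }
    }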
Change-Id: I7f9a0ca5ca770b715d08b1e8ce3022fbb2b85d42
We can have objects with missing segments. Such objects will be removed by the segment reaper, but they can exist between its executions. We should not interrupt the migration; instead we collect such objects so the migrated DB can be cleaned up later.
Change-Id: I5cc4a66395c1773a6430f34cc25a0f2449133f80
ListSegments loads all the segment data into memory; with inline
segments and large objects, this can add up to a lot of data.
Change-Id: I037738f0e70b810ecbea7d83b00ea7ca9eb90c7a
WHAT:
remove all the api keys related code
WHY:
it became redundant after the access grants implementation
Change-Id: I36344d478d8d7524e3994ea2076491be4add1aa3
The IterateDatabase method was used by the zombie segment reaper, which is removed for the multipart implementation.
Change-Id: I93e1294236612d6d82b2ab57053bb84e653f72b4
Iterate over streams/segments rather than loading all of them into
memory. This reduces the memory overhead of metainfo loop.
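The shape of the change, sketched with illustrative names rather than the
metainfo loop's real API: replace a slice-returning list with a callback
iterator that scans fixed-size batches.

    import "context"

    // Segment is a simplified stand-in for the looped-over type.
    type Segment struct {
        StreamID [16]byte
        Position uint64
    }

    // fetchFunc returns up to limit segments that sort after the given one.
    type fetchFunc func(ctx context.Context, after Segment, limit int) ([]Segment, error)

    // IterateSegments hands segments to fn one batch at a time, so memory
    // use is bounded by the batch size instead of the table size.
    func IterateSegments(ctx context.Context, fetch fetchFunc, fn func(Segment) error) error {
        const batchSize = 1000
        var after Segment
        for {
            batch, err := fetch(ctx, after, batchSize)
            if err != nil {
                return err
            }
            for _, segment := range batch {
                if err := fn(segment); err != nil {
                    return err
                }
            }
            if len(batch) < batchSize {
                return nil
            }
            after = batch[len(batch)-1]
        }
    }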
Change-Id: I9e98ab98f0d5f6e80668677269b62d6549526e57
CockroachDB has problems completing huge transactions before
timeouts, so do smaller transactions.
Because we can have partial recordings of payments, which are not
unique, we have to read and check whether a payment already exists
before writing. This is not concurrency safe.
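A sketch of the read-then-write pattern with an illustrative table layout
(the real schema and queries differ):

    import (
        "context"
        "database/sql"
    )

    // Payment is a simplified stand-in for a recorded payment row.
    type Payment struct {
        NodeID []byte
        Period string
        Amount int64
    }

    // insertIfMissing checks for an identical row before inserting. As
    // noted above, it is not concurrency safe: two writers can both see
    // "missing" and insert a duplicate.
    func insertIfMissing(ctx context.Context, db *sql.DB, p Payment) error {
        var exists bool
        err := db.QueryRowContext(ctx,
            `SELECT EXISTS(
                SELECT 1 FROM storagenode_payments
                WHERE node_id = $1 AND period = $2 AND amount = $3)`,
            p.NodeID, p.Period, p.Amount,
        ).Scan(&exists)
        if err != nil || exists {
            return err
        }
        _, err = db.ExecContext(ctx,
            `INSERT INTO storagenode_payments (node_id, period, amount)
             VALUES ($1, $2, $3)`,
            p.NodeID, p.Period, p.Amount)
        return err
    }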
Change-Id: Ia7d59499a43ce6d70cb2a23754edbdd1b643ef1a
Previously, if a node was not found in containment, it was
given the status 'skipped'.
We later try to delete skipped nodes from containment.
To fix this, add a new status called 'remove' to differentiate
nodes which should be skipped from nodes which should be deleted.
Change-Id: Ic09e62dc9723c89d0c9f968ce68c039114a9d74e
- toggle logic to trigger visual tab change
- toggling tabs renders appropriate fields
- error handling for professional account inputs
- successfully create professional user account
The goal is to be able to target users with specific, professionally focused communication.
Change-Id: Iffbeb712dac24ea1a83bb374740e0c1cd2100663
For the metainfo loop we need only some of the Segment fields. By removing the rest we will reduce memory consumption during the loop.
Change-Id: I4af8baab58f7de8ddf5e142380180bb70b1b442d
The previous default FlushBatchSize of 10000 was causing a major
slowdown in SELECT and INSERT statements on bucket_bandwidth_rollups.
We saw on the saltlake satellite that a FlushBatchSize of 1000 helped
reduce contention and query latency.
Change-Id: Ib95e73482219bc5aedc11925b1849fa5999774ba
This method will be used only with the metainfo loop, and we need to customize the query to consume less memory.
Change-Id: Iaa97392f483c5df5609d501b3847b80eb1ea2583
We want to read from the DB only those fields that are used by the metainfo loop, so we need to remove most of the fields from LoopObjectEntry.
Change-Id: I14ecae288f631dc0ff54f4c560ce43b736eccdcf
Currently our metabase assumption is that it may contain arbitrary
bucket names, and the endpoint applies the naming constraints as it sees
fit. However, when passing bucket_name as TEXT, pg and crdb automatically
try to convert it to []byte, which may or may not work as intended... or
in some cases not work at all.
Cast all bucket name arguments to []byte to make it work.
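The fix in miniature, with an illustrative query (the real statements
differ); the point is the []byte(...) cast on the bucket name argument,
so pg/crdb receive bytea directly instead of TEXT they must coerce:

    import (
        "context"
        "database/sql"
    )

    func listObjects(ctx context.Context, db *sql.DB, projectID []byte, bucketName string) (*sql.Rows, error) {
        return db.QueryContext(ctx,
            `SELECT object_key FROM objects
             WHERE project_id = $1 AND bucket_name = $2`,
            projectID, []byte(bucketName))
    }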
Change-Id: I44650f5c873010997398bb0163d7f56ff6d9b5cf
We want a custom loop iterator to avoid reading all object fields, reducing memory consumption. As a first step, this just renames the existing iterator to IterateLoopObjects.
Change-Id: I8878ff21a49ba224db2d497cc8f9076e75c7609e
The new 'consistency ge-cleanup-orphaned-data' CLI command deleted
orphaned transfer queue items, but not entries in the
graceful_exit_progress table. This will delete orphaned entries
from the exit progress table too.
Change-Id: I5f927aac1f258490678deaf179be92ccfe10fcd8
Currently the old encrypted keys may not match the path component
encoding. Change the iterator such that the prefixes handle arbitrary
byte sequences.
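One standard way to make prefix iteration byte-safe, shown as a sketch
(the actual helper may differ): derive the exclusive upper bound by
incrementing the last non-0xFF byte of the prefix.

    // prefixLimit returns the smallest key strictly greater than every
    // key beginning with prefix. It works for arbitrary byte sequences,
    // not just valid path components.
    func prefixLimit(prefix []byte) []byte {
        limit := make([]byte, len(prefix))
        copy(limit, prefix)
        for i := len(limit) - 1; i >= 0; i-- {
            if limit[i] != 0xFF {
                limit[i]++
                return limit[:i+1]
            }
        }
        return nil // prefix is all 0xFF bytes: no finite upper bound
    }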
Change-Id: I0a50049f4ef9887e1c4df6f9692f967a054430eb
TestStorageNodeApi is failing due to very slight differences in
float values. This change rounds these values to 3 decimal places before
comparing them.
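One way to do that comparison, as a sketch (the test's actual helper may
differ):

    import "math"

    // roundTo3 rounds v to 3 decimal places, so values differing only in
    // floating-point noise compare equal, e.g.
    // require.Equal(t, roundTo3(expected), roundTo3(actual)).
    func roundTo3(v float64) float64 {
        return math.Round(v*1000) / 1000
    }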
Change-Id: Ic7fae3a5e0a0a942c03d982bfa7b19357f2e3d2e
The new metainfo loop can have memory issues when a single batch contains an object with many segments. This change limits the number of batched segments to a defined limit. The solution is not perfect: a single object with an extremely large segment count can still exceed the limit by a lot. We need to prepare a safer solution soon.
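The batching idea in sketch form, with illustrative types; note how a
single oversized object still overshoots the limit, which is exactly the
imperfection described above.

    // Object is a simplified stand-in carrying only a segment count.
    type Object struct {
        StreamID     [16]byte
        SegmentCount int
    }

    // batchBySegments groups objects so each batch's total segment count
    // stays at or under limit where possible. A single object with more
    // than limit segments still forms an oversized batch.
    func batchBySegments(objects []Object, limit int) [][]Object {
        var batches [][]Object
        var batch []Object
        total := 0
        for _, object := range objects {
            if len(batch) > 0 && total+object.SegmentCount > limit {
                batches = append(batches, batch)
                batch, total = nil, 0
            }
            batch = append(batch, object)
            total += object.SegmentCount
        }
        if len(batch) > 0 {
            batches = append(batches, batch)
        }
        return batches
    }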
Change-Id: Iefcf466d5bac76513d4219b1a9d99adc361c54ae