We would like to log the node IDs and last contact successes of nodes
disqualified (DQd) in this manner. We would also like to avoid returning
an unbounded list of items from the DB. Therefore we change the query to
select a limited number of nodes that meet the DQ conditions and iterate
until 0 rows are returned. Each column of the query is already indexed.
Change-Id: Iaec2d9b56e7202b7c2028ba21750d40c8dd506ee
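A minimal sketch of the batched loop, assuming hypothetical table and
column names for the DQ condition:

    package main

    import (
        "context"
        "database/sql"
        "log"
        "time"
    )

    // disqualifyInBatches disqualifies a limited number of nodes per pass,
    // logging each node ID and last contact success, and stops once a pass
    // returns 0 rows.
    func disqualifyInBatches(ctx context.Context, db *sql.DB, cutoff time.Time, batchSize int) error {
        for {
            rows, err := db.QueryContext(ctx, `
                UPDATE nodes SET disqualified = now()
                WHERE id IN (
                    SELECT id FROM nodes
                    WHERE last_contact_success < $1 AND disqualified IS NULL
                    LIMIT $2
                )
                RETURNING id, last_contact_success`, cutoff, batchSize)
            if err != nil {
                return err
            }
            count := 0
            for rows.Next() {
                var id []byte
                var lastContact time.Time
                if err := rows.Scan(&id, &lastContact); err != nil {
                    rows.Close()
                    return err
                }
                log.Printf("disqualified node %x, last contact success: %s", id, lastContact)
                count++
            }
            err = rows.Err()
            rows.Close()
            if err != nil {
                return err
            }
            if count == 0 {
                return nil // no more nodes meet the DQ conditions
            }
        }
    }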
We were deleting expired objects by directly executing a delete query.
With this change, we first select the objects to be deleted and then
delete them (as recommended by CockroachDB for deletions that filter on
a non-indexed column).
Change-Id: Ied150fbdc7031a343a74e0b9dab316598188ef66
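A sketch of the select-then-delete pattern with hypothetical table and
column names; the select finds the victims, the delete then operates on
the indexed primary key:

    package main

    import (
        "context"
        "database/sql"
        "time"

        "github.com/lib/pq"
    )

    func deleteExpiredObjects(ctx context.Context, db *sql.DB, now time.Time, batchSize int) error {
        for {
            // Select the keys of expired objects (expires_at is not indexed).
            rows, err := db.QueryContext(ctx,
                `SELECT stream_id FROM objects WHERE expires_at < $1 LIMIT $2`,
                now, batchSize)
            if err != nil {
                return err
            }
            var ids [][]byte
            for rows.Next() {
                var id []byte
                if err := rows.Scan(&id); err != nil {
                    rows.Close()
                    return err
                }
                ids = append(ids, id)
            }
            err = rows.Err()
            rows.Close()
            if err != nil {
                return err
            }
            if len(ids) == 0 {
                return nil
            }
            // Delete by primary key, which the database can do efficiently.
            if _, err := db.ExecContext(ctx,
                `DELETE FROM objects WHERE stream_id = ANY($1)`,
                pq.ByteaArray(ids)); err != nil {
                return err
            }
        }
    }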
At some point we might try to change the original segment RS values and set Pieces according to the new values. This change adds a NewRedundancy parameter to the UpdateSegmentPieces method to make that possible. As part of this change, NewPieces are validated against NewRedundancy.
Change-Id: I8ea531c9060b5cd283d3bf4f6e4c320099dd5576
The coupon_codes table will allow administrators to create new promo
codes associated with coupon information (amount, duration, etc.).
A user will be able to enter a promo code (aka coupon code) in order to
apply a new coupon to their account. The coupon in the coupons table is
linked to the template defined in the coupon_codes table.
Change-Id: I50e49fa92afbc6aa9d01d8a895c069efb59e472b
WHAT:
enter passphrase step for users who have already created a passphrase
WHY:
to let users proceed to the upload step
Change-Id: I084aec5b863981978cf190f99ee95154fbed9aab
Among other conditions, nodes fail audits by returning incorrect
data and by reaching the max reverify count, but we weren't logging
these events. This commit adds the missing logs.
Change-Id: I80749a7e95e8cb97bc8dd7dac1e523e223114b7f
Update the Redis dependency to the latest major production version.
This version accepts a context parameter in all the network methods,
which allows us to pass it through them.
Change-Id: I34121b2ec3c2728602115c724933ad24c9e6e4fd
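For illustration, assuming the upgrade is to go-redis v8 (the
context-aware major version):

    package main

    import (
        "context"
        "fmt"

        "github.com/go-redis/redis/v8"
    )

    func main() {
        ctx := context.Background()
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

        // Every network method now takes the context, so cancellation and
        // deadlines propagate down to the connection.
        if err := rdb.Set(ctx, "key", "value", 0).Err(); err != nil {
            panic(err)
        }
        val, err := rdb.Get(ctx, "key").Result()
        if err != nil {
            panic(err)
        }
        fmt.Println(val)
    }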
WHAT:
beta satellite top banner's copy is changed to include support/feedback URLs
WHY:
so that users of our beta satellite can report feedback somewhere
Change-Id: Ibc349c8b3354b577275fcf1d2b75bfdd267729d9
At the moment we are trying to optimize deletion queries, but it's hard to verify deletion performance. Until we are sure that the queries are good, we will just log errors instead of shutting down the whole satellite core.
Change-Id: I5625251d4518c35f0d46d6bf37b2f3ea7950675e
If a non-nil value is read from the created_at column of the segments
table, it will be set to the CreatedAt field of SegmentListItem.
Change-Id: I02691d8e11fad12c1b0e4c443bdebb568016ffe3
The created_at column is first added without a default value to avoid
setting the current time on existing segments.
Change-Id: Ic2fe3da238422e2949e6f3016fbac04eb89ba037
Migration step 148 will cause errors because we missed some
references to the columns being dropped. Removing the step
altogether causes problems with backwards compatibility tests
because the change already exists in the latest release tag.
To circumvent this, we change v148 to an empty migration.
Add the methods FindTable and RemoveColumn to private/dbutil/dbschema.
Change-Id: Ia527e95b88a88c5dc82800928ce6f8cfb879e334
Move a specific interface & types used for testing into a private
subpackage with a name that clearly identifies it as being for testing
purposes.
Change-Id: I646cf3b6f0a3b518a6f9a125998dc5a02df02db6
ListSegments loads all the segment data into memory; with inline
segments and large objects, this can add up to a lot of data.
Change-Id: I037738f0e70b810ecbea7d83b00ea7ca9eb90c7a
The IterateDatabase method was used by the zombie segment reaper, which was removed for the multipart implementation.
Change-Id: I93e1294236612d6d82b2ab57053bb84e653f72b4
Iterate over streams/segments rather than loading all of them into
memory. This reduces the memory overhead of the metainfo loop.
Change-Id: I9e98ab98f0d5f6e80668677269b62d6549526e57
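A sketch of the callback-style shape this implies, with hypothetical
type and method names; entries are visited one at a time, so memory use
stays constant regardless of how many segments exist:

    package main

    import "context"

    // LoopSegmentEntry holds only the fields the loop needs (hypothetical).
    type LoopSegmentEntry struct {
        StreamID [16]byte
        Position uint64
    }

    // SegmentIterator yields one entry per call instead of a full slice.
    type SegmentIterator interface {
        Next(ctx context.Context, item *LoopSegmentEntry) bool
    }

    // IterateSegments visits each segment without accumulating them.
    func IterateSegments(ctx context.Context, it SegmentIterator, fn func(LoopSegmentEntry) error) error {
        var entry LoopSegmentEntry
        for it.Next(ctx, &entry) {
            if err := fn(entry); err != nil {
                return err
            }
        }
        return nil
    }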
CockroachDB is having problems completing huge transactions before they
time out, so do smaller transactions.
Because we can have partial recording of payments, which are not unique,
we have to read and check whether a record already exists before
writing. This is not concurrency safe.
Change-Id: Ia7d59499a43ce6d70cb2a23754edbdd1b643ef1a
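A sketch of the read-then-write step with a hypothetical payments table;
as noted above, this is not concurrency safe:

    package main

    import (
        "context"
        "database/sql"
    )

    func recordPayment(ctx context.Context, tx *sql.Tx, period string, nodeID []byte, amount int64) error {
        var exists bool
        err := tx.QueryRowContext(ctx,
            `SELECT EXISTS(
                SELECT 1 FROM storagenode_payments
                WHERE period = $1 AND node_id = $2 AND amount = $3
            )`, period, nodeID, amount).Scan(&exists)
        if err != nil {
            return err
        }
        if exists {
            return nil // already recorded by a previous partial run
        }
        _, err = tx.ExecContext(ctx,
            `INSERT INTO storagenode_payments (period, node_id, amount)
             VALUES ($1, $2, $3)`,
            period, nodeID, amount)
        return err
    }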
Previously, if a node was not found in containment, it was given the
status 'skipped'.
We later try to delete skipped nodes from containment.
To fix this, add a new status called 'remove' to differentiate nodes
which should be skipped from nodes which should be deleted.
Change-Id: Ic09e62dc9723c89d0c9f968ce68c039114a9d74e
For the metainfo loop we need only some of the Segment fields. By removing the rest we will reduce memory consumption during the loop.
Change-Id: I4af8baab58f7de8ddf5e142380180bb70b1b442d
The previous default FlushBatchSize of 10000 was causing major
slowdowns in select and insert statements on bucket_bandwidth_rollups.
We saw on the saltlake satellite that a FlushBatchSize of 1000 helped
reduce contention and query latency.
Change-Id: Ib95e73482219bc5aedc11925b1849fa5999774ba
This method will be used only by the metainfo loop, and we need to customize the query to consume less memory.
Change-Id: Iaa97392f483c5df5609d501b3847b80eb1ea2583
We want to read from the DB only those fields that are used by the metainfo loop, so we need to remove most of the fields from LoopObjectEntry.
Change-Id: I14ecae288f631dc0ff54f4c560ce43b736eccdcf
Currently our metabase assumption is that it may contain arbitrary
bucket names, and the endpoint applies the naming constraints as it sees
fit. However, by passing bucket_name as TEXT, PostgreSQL and CockroachDB
automatically try to convert it to []byte, which may or may not work as
intended... or in some cases not work at all.
Cast all bucket name arguments to []byte to make it work.
Change-Id: I44650f5c873010997398bb0163d7f56ff6d9b5cf
We want to have a custom loop iterator to avoid reading all object fields, to reduce memory consumption. This is the first step, which just renames the existing iterator to IterateLoopObjects.
Change-Id: I8878ff21a49ba224db2d497cc8f9076e75c7609e
The new 'consistency ge-cleanup-orphaned-data' CLI command deleted
orphaned transfer queue items, but not entries in the
graceful_exit_progress table. This change deletes orphaned entries
from the exit progress table too.
Change-Id: I5f927aac1f258490678deaf179be92ccfe10fcd8
Currently the old encrypted keys may not match the path component
encoding. Change the iterator such that the prefixes handle arbitrary
byte sequences.
Change-Id: I0a50049f4ef9887e1c4df6f9692f967a054430eb
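One common way to bound such an iteration, shown as a sketch (not
necessarily the exact change): derive an exclusive upper bound for a
prefix by incrementing its last non-0xFF byte, which works for arbitrary
byte sequences:

    package main

    import "bytes"

    // prefixLimit returns the smallest key greater than every key starting
    // with prefix, or nil if no such key exists (prefix is all 0xFF bytes).
    func prefixLimit(prefix []byte) []byte {
        for i := len(prefix) - 1; i >= 0; i-- {
            if prefix[i] != 0xFF {
                limit := make([]byte, i+1)
                copy(limit, prefix[:i+1])
                limit[i]++
                return limit
            }
        }
        return nil
    }

    // inRange reports whether key falls in [prefix, prefixLimit(prefix)).
    func inRange(key, prefix []byte) bool {
        limit := prefixLimit(prefix)
        return bytes.Compare(key, prefix) >= 0 &&
            (limit == nil || bytes.Compare(key, limit) < 0)
    }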
The new metainfo loop can have memory issues when a single batch contains an object with many segments. This change limits the number of batched segments to a defined limit. The solution is not perfect: a single object with an extremely large segment count can still exceed the limit by a lot. We need to prepare a safer solution soon.
Change-Id: Iefcf466d5bac76513d4219b1a9d99adc361c54ae
It turns out that many methods generated with dbx were not used at all.
Let's remove them.
As a next step we can think about dropping tables like:
* user_credit
* offer
Change-Id: Id6cda81a701348db2a6b8b26daa22ae9c4f87cb4
It looks like we cannot use the root piece ID as an indicator of whether a segment is inline, as we have a case on the SLC satellite where an inline segment has a root piece ID set. Pieces should be a better thing to check.
Change-Id: I2377ff88861390342273f5e71871373eaf462615
WHAT:
config flag to indicate whether the satellite is in beta
WHY:
to avoid using hardcoded satellite names which may cause issues
Change-Id: If92eb7417c340bf343a9a91e2f6b11f0349020c5
Segments are not read in batches; for each batch of objects
we read all the segments for those objects.
Change-Id: Idaf19bbe4d4b095065d59399dd326e22c57499a6
Pregenerate the database schema we should use for most tests.
Currently, Cockroach is slow with regard to migrations, and it's
better if they happen in as few transactions as possible.
This reduces test time from ~21min to ~15min.
Change-Id: Ife8117053e6b9ecf3c93fe63677edf15d4d7c254
The subquery for DELETE FROM objects returns a stream_id field for filtering. Unfortunately, stream_id is not indexed. This change removes the subquery from the CockroachDB delete bucket query.
Change-Id: If1abe21668c593e6d4bdc3ba8cdbad26c09d234e
Using a random piece number may generate the exact same key or piece
number. Instead, use a fixed piece number and key.
Change-Id: I54b7bc1a6698149bf99608dd46501ea963cec084
TestProjectUsage_ResetLimitsFirstDayOfNextMonth seems to be flaky.
It doesn't seem to reproduce easily; however, the test should flush
orders from storage nodes to satellites, so that the orders chore
flushing on the satellite makes sense.
Change-Id: I586d9d8ce10b5f6320e79324f6128b4f8c2cac5f
Testing interfaces is slightly clearer when it's in the package needing
the database rather than each individual implementation.
Change-Id: I10334c214a205f7e510b939b4359a2214c4e060a
When listing pending objects with prefix, the prefix should be prepended
to the EncryptedPath in satStreamID. Otherwise, listing multipart
uploads may display a different UploadID than expected.
Change-Id: I27e9f9af9348783e053ad123121b6ddd051739e4
We need to keep empty inline segments, as we did with pointerDB, because otherwise old uplinks won't be able to download a file after uploading it. To reduce the number of empty inline segments on the uplink side, we need to implement skipping empty last inline segments for multipart uploads.
Change-Id: Ice86c805babba1ad17149754cbd6b3f4fd652722
ListAllBuckets could skip buckets when the total number of buckets
exceeds the list limit. Replace listing buckets with looping directly
over the objects table.
Change-Id: I43da2fdf51e83915a7854b782f0e9ec32c373018
We have multipart objects, so we may get multiple inline segment
sequences or no segments at all for an object.
Change-Id: Ie46ee777a2db8f18f7154e3443bb9e07ecb170f7
We have multipart objects, so we may get multiple inline segment
sequences or no segments at all for an object.
Change-Id: Ic19150efe2ca2b1c60ddd5d30b1317922221c221
Until now we were using a single RS per object, but it turns out that we
need to be able to support an RS per segment. We need to give the uplink
this information while downloading.
In addition, we use the RedundancySchemePerSegment flag on the GetObject
request to detect whether we should try to get the RS from the segment
for this request's response.
Change-Id: I209dad324496ff59b521b11d2343da61dcdbe7f5
Until now we were using a single RS per object, but it turns out that we
need to be able to support an RS per segment. We need to give the uplink
this information while downloading.
Change-Id: I6565b7c08962b3a1429f6079e7c2023a0a7c8b72
When there are concurrent refreshes of the cache and the entries are
missing, we could end up making multiple database calls, even though
only one is needed.
Change-Id: I1ae7a124bbdd1570473cf3a032d375d2f25a8426
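One common way to deduplicate such concurrent refreshes (a sketch, not
necessarily the approach taken here) is golang.org/x/sync/singleflight,
with a hypothetical fetchFromDB:

    package main

    import (
        "context"

        "golang.org/x/sync/singleflight"
    )

    type Cache struct {
        group singleflight.Group
        // underlying storage omitted
    }

    func (c *Cache) Get(ctx context.Context, key string) (interface{}, error) {
        v, err, _ := c.group.Do(key, func() (interface{}, error) {
            // Only one concurrent caller per key executes this; the rest
            // block and receive the same result.
            return fetchFromDB(ctx, key)
        })
        return v, err
    }

    func fetchFromDB(ctx context.Context, key string) (interface{}, error) {
        return "value for " + key, nil // hypothetical database lookup
    }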
Adds tests to BeginObjectNextVersion and BeginObjectExactVersion
to check the behavior when an older or a newer committed version
exists.
The current behavior is: everything is committed.
Change-Id: Ia8facbe0dc038a5d214e4e56da3c8e4df2f18900
This enables the transfer of pieces from an on-going multipart upload.
Tests are also modified to take into account pending multipart uploads.
See https://storjlabs.atlassian.net/browse/PG-161
Change-Id: I35d433c44dd6e618667e5e8f9f998ef867b9f1ad
Old uplinks send some additional information inside marshaled protobufs, and we need to extract things like encryption parameters from them. Newer uplinks pass this information directly in the request.
Change-Id: I0b575e68c3ed98481247fe38344e7d61cbd542ba
This adds run-length encoding for AliasPieces. On average it should
make our pieces encoding:
repair=50,optimal=85,total=90 152.0 bytes
repair=16,optimal=37,total=50 65.4 bytes
Change-Id: I391a9183164828f05383a3cde9ab0e4549c2d440
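A simplified sketch of the underlying idea, with hypothetical types:
encode the gap between consecutive piece numbers and the alias as
varints, so dense piece lists shrink to roughly a byte or two per entry.
The real AliasPieces encoding may differ in detail:

    package main

    import "encoding/binary"

    // AliasPiece pairs a piece number with a compact node alias.
    type AliasPiece struct {
        Number uint32
        Alias  uint32
    }

    // encode writes (gap, alias) varint pairs; consecutive piece numbers
    // produce one-byte gaps, which is where the average size saving comes
    // from.
    func encode(pieces []AliasPiece) []byte {
        var buf []byte
        var tmp [binary.MaxVarintLen64]byte
        prev := uint32(0)
        for _, p := range pieces {
            n := binary.PutUvarint(tmp[:], uint64(p.Number-prev))
            buf = append(buf, tmp[:n]...)
            n = binary.PutUvarint(tmp[:], uint64(p.Alias))
            buf = append(buf, tmp[:n]...)
            prev = p.Number
        }
        return buf
    }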
WHAT:
people who sign up on US2 are not redirected to the verification page. From now on we have to set the verify URL to make the redirect happen
WHY:
user experience
Change-Id: I96c51a2c4f9cb6376cbfea639675b32918b58bee
This PR removes all back-end related referral program code, including
the marketing portal.
We will have a separate PR for the front-end code and the database
migration to drop the `offers` and `usercredits` tables.
Change-Id: If59f952cddfe0558a7dc03a0eac7cc1081517f88
We will add a cache for nodes, so using completely random nodes wouldn't
show the actual performance.
Change-Id: I94f18283712812f05f7795efd3c7cf57499fa52c
We need to keep an in-memory cache to avoid lookups into the aliases
table. This adds the in-memory state of the cache.
Change-Id: Ief2b9bb19e10b46839b9208472dfc3035eb49af3
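A sketch of the two-way in-memory mapping such a cache needs, with
simplified type names; lookups then avoid the aliases table entirely:

    package main

    type NodeID [32]byte
    type NodeAlias int32

    type NodeAliasMap struct {
        node  map[NodeAlias]NodeID
        alias map[NodeID]NodeAlias
    }

    func NewNodeAliasMap(entries map[NodeID]NodeAlias) *NodeAliasMap {
        m := &NodeAliasMap{
            node:  make(map[NodeAlias]NodeID, len(entries)),
            alias: make(map[NodeID]NodeAlias, len(entries)),
        }
        for id, a := range entries {
            m.node[a] = id
            m.alias[id] = a
        }
        return m
    }

    func (m *NodeAliasMap) Alias(id NodeID) (NodeAlias, bool) {
        a, ok := m.alias[id]
        return a, ok
    }

    func (m *NodeAliasMap) Node(a NodeAlias) (NodeID, bool) {
        id, ok := m.node[a]
        return id, ok
    }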
This is the first step in supporting node aliases. It adds a table
that automatically assigns aliases to nodes inserted into the table.
Change-Id: Ibdf40097c3c1e5b371500203f8db203505a48adc
This ensures the caveats are unique even when they contain the same
permissions and will result in unique macaroons. This is important to
ensure revocation doesn't impact more macaroons than intended.
Change-Id: I6354edd0119f2d85eaf580f2d1926a3de9151b88
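A sketch of the idea with illustrative names: attach a random nonce to
each caveat, so two caveats with identical permissions still serialize
differently and therefore produce distinct macaroons:

    package main

    import "crypto/rand"

    // Caveat restricts a macaroon; identical permission sets can occur.
    type Caveat struct {
        AllowedPaths [][]byte
        Nonce        []byte // randomness making the serialized caveat unique
    }

    // WithNonce fills in a fresh random nonce before the caveat is
    // serialized and appended to the macaroon.
    func WithNonce(c Caveat) (Caveat, error) {
        c.Nonce = make([]byte, 8)
        if _, err := rand.Read(c.Nonce); err != nil {
            return Caveat{}, err
        }
        return c, nil
    }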
Remove the orders Settlement endpoint because it isn't used and it was
already always returning an error.
Change-Id: I81486fbe7044a1444182173bc0693698ee7cfe7e
These changes are independently tracked on
https://github.com/storj/storj/tree/jt/migration-reorder
The point of this is to make the distributed column
migration, needed for SNO invoice generation, the very
next one, so we can release it as a point release.
Change-Id: I26e1c03629c4f079b9ad12485e2b71a715d82b3b
allow disabling tcp/quic
In order to have more control over a server so that we can
simulate connection failures in `testplanet`, this PR changes
quic.Listener to accept an existing UDPConn instead of relying on the
quic-go library to create the UDPConn.
This PR also adds two flags on the `server.Config` struct to allow
enabling/disabling the tcp/tls listener and the quic listener. By
default, both listeners are enabled.
- `DisableTCPTLS`: internal flag, disables tcp/tls listener.
- `DisableQUIC`: hidden flag, disables quic listener.
Making `DisableQUIC` a hidden flag gives storagenode operators the
ability to disable quic traffic in case their setup can't work with
udp traffic.
Change-Id: I853b12435d988b9c41ad9b873fd57480d792e378
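A sketch of how the two flags could gate listener creation; the flag
names come from the description above, the rest (types and construction)
is illustrative:

    package main

    import "net"

    type Config struct {
        DisableTCPTLS bool // internal flag: skip the tcp/tls listener
        DisableQUIC   bool // hidden flag: skip the quic listener
    }

    type Server struct {
        tcpListener net.Listener
        udpConn     *net.UDPConn // handed to quic.Listener, not created by quic-go
    }

    func New(cfg Config, addr string) (*Server, error) {
        s := &Server{}
        if !cfg.DisableTCPTLS {
            l, err := net.Listen("tcp", addr)
            if err != nil {
                return nil, err
            }
            s.tcpListener = l
        }
        if !cfg.DisableQUIC {
            udpAddr, err := net.ResolveUDPAddr("udp", addr)
            if err != nil {
                return nil, err
            }
            conn, err := net.ListenUDP("udp", udpAddr)
            if err != nil {
                return nil, err
            }
            s.udpConn = conn // quic.Listener wraps this existing UDPConn
        }
        return s, nil
    }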
This changes a satellite error to a local encryption error, in line
with the upcoming permissions changes where we only include keys for
the paths that are allowed.
Change-Id: I7aa37cfbaee31a1e54afe0423b283b9f41d9345f
Limit the bucket name lookup to the date range of the calling methods, since we only need distinct bucket names for that time period.
Add a new index and remove an index specific to project ID, since it is no longer needed.
Change-Id: Ic07bbfb1c32280e0c0e39f8da020b284e1e5d974
It's impossible to time this check correctly. The segment may expire
just as we upload the repaired pieces to new storage nodes.
They will reject the pieces as expired and the repair will fail.
Also, we penalize storage nodes with an audit failure only if they fail
piece hash verification, i.e. return incorrect data, but not if they
have already deleted the piece.
So it would be best if the repair service did not care about object
expiration at all. That is the responsibility of another service.
Removing this check will also simplify migrating this code
correctly to the metabase.
Change-Id: I09f7b372ae2602daee919a8a73cd0475fb263cd2
Fix a copy-paste problem that made the Graceful Exit test flaky.
The test uses a time created at the beginning of the test to avoid
nondeterministic time differences caused by variations in DB query
response times. However, some parts of the test were using the current
time rather than this base time, so they have been addressed.
Change-Id: I4786f06209e041269875c07798a44c2850478438
We need this method to fix repairing pending objects. In another PR, it
will replace the GetObjectLatestVersion + GetSegmentByPosition calls
that are currently executed.
Change-Id: I4c5c2ab604edf898452b6fd21b86d4d3f970ce79
Delete satellite order methods and DB tables which aren't used anymore
after the refactoring of the orders to store bucket information in the
orders' encrypted metadata.
There are also configuration parameters and a satellite chore that
aren't needed anymore after the orders refactoring.
Change-Id: Ida3682b95921df70792284b42c96d2508bf8ca9c
Add a command to the satellite for cleaning up the Graceful Exit (a.k.a.
GE) transfer queue items of nodes that have exited.
The commit adds a couple of new methods to the GE satellite DB, and
their corresponding tests, for performing the operations of the new
command.
Change-Id: I29a572a59689d63b24990ac13c52e76d65aaa917
Using Redash, I manually checked that the only time the sum of the
payments does not match the paid column is 2020-12, and when it does
not match, there are no payments.
Change-Id: I71ce0571de7e38e21548d7d6757b25abc3bfa781
The rollup archiver chore moves bucket bandwidth rollups and
storagenode rollups that are older than a given duration
to two new archive tables.
Change-Id: I1626a3742ad4271bc744fbcefa6355a29d49c6a5
This index is obsolete and duplicates a similar (project_id, name)
index on the same table.
Moreover, it might confuse CockroachDB as to which of the two indexes
to use, which may affect DB performance.
Change-Id: If8d1df8347714942cea9dca82864ba5f4973bed3
Comparing the result from a subquery with the "IN" operator instead of
"=" makes a huge difference in the execution time of the SQL query on
CockroachDB.
Change-Id: I76e8f75a7bc95951667345d1ed9bd60f9aef3edb
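An illustrative before/after with a hypothetical query shape; only the
comparison operator differs:

    package main

    const (
        // Before: comparing the subquery result with "=".
        deleteWithEquals = `DELETE FROM segments
            WHERE stream_id = (SELECT stream_id FROM objects_to_delete)`
        // After: "IN" executes much faster on CockroachDB for this shape.
        deleteWithIn = `DELETE FROM segments
            WHERE stream_id IN (SELECT stream_id FROM objects_to_delete)`
    )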
When we observed the value for the total piece sizes stored in the
network, we were doing it after converting them to byte-hours, rather
than using the actual piece sizes. This fixes that issue.
Change-Id: I1564d21b519f70eb59f298d97dbd777baf127723
We want to have a single uplink branch for the standard and multipart-upload satellites, but some tests are using helper methods from multipart. This change adds the methods used by the uplink tests.
Change-Id: I82352ed56674ff7e8743b58061ba594018e78e3b
We are checking whether the satStreamID was created in the last 48
hours. If it is older, we treat it as expired and fail to unmarshal it.
Since the satStreamID is also the Upload ID for multipart uploads, this
means that all calls fail for multipart uploads older than 48 hours.
Even aborting old multipart uploads is not possible.
To resolve this issue, we should stop checking satStreamID for
expiration.
Change-Id: Ieaf53ed3cd800cdd08843676c2d9490b007d962e
Parts that have segment index gaps should be treated similarly to
multipart objects, because direct calculation of the segment does
not work.
Change-Id: I2717eac36f085b5100f3d600fcf0ce056202a9eb
CreateGetOrderLimits is not used anymore because we have CreateGetOrderLimits2. We need to remove the old method and fix the name of the second.
Change-Id: I59148b8d28fc9dbab7d452c884319125a02745d1