The coupon_codes table will allow administrators to create new promo
codes associated with coupon information (amount, duration, etc.).
A user will be able to enter a promo code (a.k.a. coupon code) in order
to apply a new coupon to their account. Each coupon in the coupons
table is linked to the template defined in the coupon_codes table.
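A hedged sketch of the data model this describes; everything beyond the
names mentioned above (coupon_codes, coupons, amount, duration) is an
assumption:

    // Hypothetical shape of the new table; the real columns may differ.
    const createCouponCodes = `
        CREATE TABLE coupon_codes (
            id bytea NOT NULL,
            name text NOT NULL,        -- the promo code a user enters
            amount bigint NOT NULL,    -- coupon amount
            duration bigint NOT NULL,  -- coupon duration
            created_at timestamp with time zone NOT NULL,
            PRIMARY KEY ( id ),
            UNIQUE ( name )
        );
        -- each coupon created from a code references its template
        ALTER TABLE coupons ADD COLUMN coupon_code_name text;`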
Change-Id: I50e49fa92afbc6aa9d01d8a895c069efb59e472b
Migration step 148 will cause errors because we missed some
references to the columns being dropped. Removing the step
altogether causes problems with backwards compatibility tests
because the change already exists in the latest release tag.
To circumvent this, we change v148 to an empty migration.
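For illustration, an empty step might look like the sketch below,
assuming the Step/SQL types from storj's private/migrate package (the
exact field set is an assumption):

    // v148 keeps its version number but runs no SQL, so release tags
    // that already contain step 148 stay compatible.
    &migrate.Step{
        DB:          db,
        Description: "empty migration to skip the problematic column drops",
        Version:     148,
        Action:      migrate.SQL{}, // no statements
    },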
Add methods FindTable and RemoveColumn in private/dbutil/dbschema
Change-Id: Ia527e95b88a88c5dc82800928ce6f8cfb879e334
CockroachDB is having problems completing huge transactions before
timing out, so do smaller transactions.
Because we can have partial recording of payments, which are not
unique, we have to read and check whether a payment already exists
before writing it. This is not concurrency safe.
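A minimal sketch of that read-then-write, with hypothetical names; note
the unguarded window between the existence check and the insert:

    // createIfMissing is NOT concurrency safe: a concurrent writer can
    // insert the same payment between Exists and Insert.
    func createIfMissing(ctx context.Context, db PaymentsDB, p Payment) error {
        exists, err := db.Exists(ctx, p.NodeID, p.Period)
        if err != nil {
            return err
        }
        if exists {
            return nil
        }
        return db.Insert(ctx, p)
    }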
Change-Id: Ia7d59499a43ce6d70cb2a23754edbdd1b643ef1a
The new 'consistency ge-cleanup-orphaned-data' CLI command deleted
orphaned transfer queue items, but not entries in the
graceful_exit_progress table. This change deletes orphaned entries
from the exit progress table too.
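Hedged illustration of the added cleanup; the exact orphan condition is
an assumption:

    // Hypothetical query: remove progress rows for nodes that already exited.
    const deleteOrphanedProgress = `
        DELETE FROM graceful_exit_progress
        WHERE node_id IN (
            SELECT id FROM nodes WHERE exit_finished_at IS NOT NULL
        );`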
Change-Id: I5f927aac1f258490678deaf179be92ccfe10fcd8
It turns out that many methods generated with dbx were not used at all.
Let's remove them.
As a next step we can think about dropping tables like:
* user_credit
* offer
Change-Id: Id6cda81a701348db2a6b8b26daa22ae9c4f87cb4
Pregenerate the database schema we should use for most tests.
Currently, CockroachDB is slow with regard to migration, and it's
better if it happens in as few transactions as possible.
This reduces test time from ~21min to ~15min.
Change-Id: Ife8117053e6b9ecf3c93fe63677edf15d4d7c254
Testing interfaces are slightly clearer when they live in the package
needing the database rather than in each individual implementation.
Change-Id: I10334c214a205f7e510b939b4359a2214c4e060a
This PR removes all back-end related referral program code, including
the marketing portal.
We will have a separate PR for the front-end code and a database
migration to drop the `offers` and `usercredits` tables.
Change-Id: If59f952cddfe0558a7dc03a0eac7cc1081517f88
These changes are independently tracked on
https://github.com/storj/storj/tree/jt/migration-reorder
The point of this is to make the distributed column
migration, needed for SNO invoice generation, the very
next one, so we can release it as a point release.
Change-Id: I26e1c03629c4f079b9ad12485e2b71a715d82b3b
Limit the bucket name lookup to the date range of the calling methods, since we only need distinct bucket names for that time period.
Adds a new index and removes an index specific to project ID, since it is no longer needed.
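A hedged sketch of the narrowed lookup; table and column names here are
hypothetical:

    // Only distinct bucket names within the caller's period are needed.
    const distinctBucketNames = `
        SELECT DISTINCT bucket_name
        FROM bucket_bandwidth_rollups
        WHERE project_id = $1
          AND interval_start >= $2
          AND interval_start < $3;`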
Change-Id: Ic07bbfb1c32280e0c0e39f8da020b284e1e5d974
Fix a copy-paste problem that made the Graceful Exit test flaky.
The test uses a time captured at the beginning of the test to avoid
nondeterministic time differences caused by the variation in DB query
response times; however, some parts of the test were using the current
time rather than this base time, so they have been addressed.
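The pattern, as a minimal sketch (variable names hypothetical):

    // Capture one base time at the start of the test and derive every
    // timestamp from it; calling time.Now() mid-test lets DB
    // response-time variation leak into the assertions.
    baseTime := time.Now()
    failedAt := baseTime.Add(-time.Hour) // not time.Now().Add(-time.Hour)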
Change-Id: I4786f06209e041269875c07798a44c2850478438
Delete satellite order methods and DB tables which aren't used anymore
after the orders refactoring that stores bucket information in the
orders' encrypted metadata.
Also remove configuration parameters and a satellite chore that aren't
needed anymore after that refactoring.
Change-Id: Ida3682b95921df70792284b42c96d2508bf8ca9c
Add a command to the satellite for cleaning up the Graceful Exit
(a.k.a. GE) transfer queue items of nodes that have exited.
The commit adds a couple of new methods, with corresponding tests, to
the GE satellite DB for performing the operations of the new command.
Change-Id: I29a572a59689d63b24990ac13c52e76d65aaa917
Using Redash, I manually checked that the only period where the sum of
the payments does not match the paid column is 2020-12, and when it
does not match there are no payments.
Change-Id: I71ce0571de7e38e21548d7d6757b25abc3bfa781
The rollup archiver chore moves bucket bandwidth rollups and
storagenode rollups that are older than a given duration
to two new archive tables.
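One archiving pass might look like the hedged sketch below; the rollup
table name comes from the existing schema, while the archive table name
and query shape are assumptions:

    // Copy rollups older than the cutoff into the archive table, then
    // delete them from the hot table (bucket rollups shown; storagenode
    // rollups work the same way).
    const archiveBucketRollups = `
        INSERT INTO bucket_bandwidth_rollup_archives
            SELECT * FROM bucket_bandwidth_rollups WHERE interval_start < $1;
        DELETE FROM bucket_bandwidth_rollups WHERE interval_start < $1;`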
Change-Id: I1626a3742ad4271bc744fbcefa6355a29d49c6a5
This index is obsolete and duplicates a similar (project_id, name)
index on the same table.
Moreover, it might confuse CockroachDB about which of the two indexes
to use, which might affect DB performance.
Change-Id: If8d1df8347714942cea9dca82864ba5f4973bed3
When we observed the value for the total piece sizes stored in the
network, we were doing it after converting them to byte-hours, rather
than using the actual piece sizes. This fixes that issue.
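In effect the fix changes the order of operations, roughly as sketched
here (names hypothetical):

    // The network-size metric wants the raw piece size; byte-hours are
    // only meaningful for the tally itself.
    byteHours := pieceSize * int64(hours)
    tally.Add(byteHours)        // accounting in byte-hours, as before
    monStats.Observe(pieceSize) // observe the actual piece size, not byteHours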
Change-Id: I1564d21b519f70eb59f298d97dbd777baf127723
Add ProjectsCursor type for pagination
Add PageCount, CurrentPage, and TotalCount to ProjectsPage
This allows us to mimic the logic of GetBucketTotals and the
implementation of BucketUsages in graphql for the new ProjectsByOwnerID
functionality.
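Sketch of the new types; the fields named in this message are real,
anything else is an assumption:

    // ProjectsCursor holds pagination parameters for a projects listing.
    type ProjectsCursor struct {
        Limit int
        Page  int
    }

    // ProjectsPage is a single page of projects plus the paging
    // metadata needed to mimic GetBucketTotals/BucketUsages.
    type ProjectsPage struct {
        Projects    []Project
        PageCount   int
        CurrentPage int
        TotalCount  int64
    }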
Change-Id: I4e1613859085db65971b44fcacd9813d9ddad8eb
Full scope:
private/testplanet,satellite/{overlay,satellitedb}
Description:
In most cases, downtime tracking with audits will eventually lead
to DQ for nodes that are unresponsive. However, if a stray node has no
pieces, it will not be audited and will thus never be disqualified.
This chore will check for nodes that have not successfully been
contacted in some set time and DQ them.
There are new flags for toggling DQ of stray nodes, the interval for
running the chore, and how long nodes can go without contact.
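The flags, as a hedged config sketch; names and defaults are
assumptions:

    // Config for the stray-node chore (hypothetical names/defaults).
    type Config struct {
        EnableDQ                  bool          `help:"whether to DQ stray nodes" default:"false"`
        Interval                  time.Duration `help:"how often the chore runs" default:"168h"`
        MaxDurationWithoutContact time.Duration `help:"how long a node may go without contact before DQ" default:"720h"`
    }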
Change-Id: Ic9d41fdbf214736798925e728245180fb3c55615
This is the first step in the removal of uptime columns on the
nodes table. These columns are no longer used:
uptime_success_count
total_uptime_count
uptime_reputation_alpha
uptime_reputation_beta
In order to avoid breaking backwards compatibility, we need to
remove all references to these columns before removing the columns
themselves from the database. However, since uptime_success_count
and total_uptime_count are NOT NULL, we can't remove them from
the insert statements in the overlay. So we can't remove the columns
because of the references, and we can't remove the references because
the columns can't be null. What a pickle. To remedy this, we will set a
default on the columns. Then we should be able to remove them from the
insert statements.
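Hedged sketch of the defaulting step, following the private/migrate
shape used elsewhere in the satellite (version number elided):

    &migrate.Step{
        DB:          db,
        Description: "add defaults to uptime columns so inserts can stop referencing them",
        Version:     0, // placeholder: the next migration version
        Action: migrate.SQL{
            `ALTER TABLE nodes ALTER COLUMN uptime_success_count SET DEFAULT 0;`,
            `ALTER TABLE nodes ALTER COLUMN total_uptime_count SET DEFAULT 0;`,
        },
    },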
Change-Id: I75f6c56fb7897835bbf29869f86f39de1d9dd345
GetSuccessfulNodeNotCheckedInSince and GetOfflineNodesLimited are overlay methods
which were only used by the previous downtime tracking system, which has been
removed. These methods should also be removed.
Change-Id: Idb829d742e1f987e095604423fff656fe581183e
* Separate audit history interface into its own file in the overlay
package
* Add overlay.AuditHistory struct so that internalpb.AuditHistory is
only used from within the database layer (see the sketch after this
list)
* Add overlay.GetAuditHistory function for features that will require
access to detailed audit history information
* Do not return full audit history from UpdateAuditHistory; callers of
that function only need to know the online score and whether a full
tracking period has been completed
* Move audit history tests out of satellite/satellitedb, since they are
independent of database implementation
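Hedged sketch of the resulting overlay surface; beyond the names listed
above, fields and signatures are assumptions:

    // AuditHistory is the overlay-level view, decoupled from internalpb.
    type AuditHistory struct {
        Score   float64
        Windows []AuditWindow // per-window online/total counts
    }

    // UpdateAuditHistory returns only what callers need; GetAuditHistory
    // exposes the detailed history for features that require it.
    type DB interface {
        GetAuditHistory(ctx context.Context, nodeID storj.NodeID) (*AuditHistory, error)
        UpdateAuditHistory(ctx context.Context, nodeID storj.NodeID, online bool) (onlineScore float64, trackingPeriodFull bool, err error)
    }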
Change-Id: I35b0c4ac23bbaabd80624f8a9631c3cb1a1f33bd
Now that the deprecated downtime tracking service is removed
(3fc76f4ffe), we can safely remove
the nodes_offline_times table.
Change-Id: Ia7c6efe32ba104dff5a830af5f2beee3337eefe5
Nodes which are offline_suspended will no longer be considered for new
uploads. The current threshold that enters a node into offline
suspension is 0.6. Disqualification for offline suspension is still
disabled.
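Hedged illustration of the selection change; the real query has more
criteria:

    // Exclude offline-suspended nodes from new-upload selection.
    const selectNewUploadNodes = `
        SELECT id, address
        FROM nodes
        WHERE disqualified IS NULL
          AND offline_suspended IS NULL
          -- plus the existing selection criteria
    `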
Change-Id: I0da9abf47167dd5bf6bb21e0bc2186e003e38d1a
Query the nodes table using AS OF SYSTEM TIME '-10s' (by default) when on CRDB, to alleviate contention on the nodes table and minimize CRDB retries. Queries for standard uploads are already cached, and node lookups for graceful exit uploads have retry logic, so it isn't necessary for the nodes returned to be current.
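The clause in question, as a hedged illustration (AS OF SYSTEM TIME is
CockroachDB's follower-read syntax; the query shape here is an
assumption):

    // Reads data up to 10s stale, avoiding contention with concurrent
    // writes to the nodes table and reducing CRDB transaction retries.
    const selectNodes = `
        SELECT id, address
        FROM nodes AS OF SYSTEM TIME '-10s'
        WHERE id = ANY($1);`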