NewContainment will replace Containment later in this commit chain, but
for now it is not yet being used.
NewContainment will allow a node to be contained for multiple pending
reverify jobs at a time. It is implemented by way of the reverify queue.
Refs: https://github.com/storj/storj/issues/5231
Change-Id: I126eda0b3dfc4710a88fe4a5f41780618ec19101
We have a bug where, if the number of buckets in the system is a
multiple of the batch size (2500), the loop that iterates over
all buckets can run indefinitely.
Fixes https://github.com/storj/storj/issues/5374
Change-Id: Idd4d97c638db83e46528acb9abf223c98ad46223
It helps for the (*reverifyQueue).Insert() method to be idempotent (it
does not make sense for the same node to be under containment for the
same piece multiple times). This change allows for that, by adding an
`ON CONFLICT DO NOTHING` clause to the database query.
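A minimal sketch of the idempotent insert (table and column names here are assumptions, not the exact schema):
```
-- inserting the same (node_id, stream_id, position) twice is a no-op,
-- so a node can't be queued twice for the same piece
INSERT INTO reverification_audits ( node_id, stream_id, position, inserted_at )
VALUES ( $1, $2, $3, current_timestamp )
ON CONFLICT DO NOTHING;
```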
Refs: https://github.com/storj/storj/issues/5231
Change-Id: Id2839ee185d5396c0bc2f84ffad610df9786f6c7
Adding a new worker comparable to Verifier, called Reverifier; as the
name suggests, it will be used for reverifications, whereas Verifier
will be used for verifications.
This allows distinct logging from the two classes, plus we can add some
configuration that is specific to the Reverifier.
There is a slight modification to GetNextJob that goes along with this.
This should have no impact on operational concerns.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: Ie60d2d833bc5db8660bb463dd93c764bb40fc49c
Previously, the node events chore would select based on the earliest
created_at. However, if that batch failed for some reason, it would still
be the next item to select. If there was a consistent error, the chore
would be stuck retrying the same batch over and over. Now instead
GetNextBatch orders by `last_attempted ASC NULLS FIRST, created_at ASC`.
If a batch fails during Notify, last_attempted is updated so we can move
on to a new batch if one exists.
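The selection might look roughly like this (a sketch; the real batching and column list differ):
```
-- never-attempted rows (NULL last_attempted) sort first, then the
-- least recently retried, then the oldest
SELECT id, email, event FROM node_events
ORDER BY last_attempted ASC NULLS FIRST, created_at ASC
LIMIT $1;
```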
Change-Id: Ia8458e05ac358d85b2f2c6d690f3d607d631be61
SetNodeContained() will change the contained flag in the nodes table,
which will affect whether nodes are selected for new uploads. This flag
_should_ correlate with whether or not a given node has any entries in
the reverification queue. However, the reverification queue is intended
to be 'safely partitionable' from the nodes table, so we can't enforce
that characteristic transactionally. But this is ok; there are no dire
consequences if they are out of sync.
We will be adding a chore that updates the contained flag based on the
contents of the reverification queue periodically, if something fails
to set it directly when appropriate.
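A sketch of the flag updates, assuming the contained column is a nullable timestamp set on containment and cleared on release:
```
-- contain: record when the node entered containment, if not already set
UPDATE nodes SET contained = current_timestamp WHERE id = $1 AND contained IS NULL;
-- un-contain: clear the flag once no reverification jobs remain
UPDATE nodes SET contained = NULL WHERE id = $1;
```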
Refs: https://github.com/storj/storj/issues/5231
Change-Id: I26460d8718dee63fd55d00a44568b2065fc8fe30
GetByNodeID will allow querying the reverification queue to see if there
are any pending jobs for a given node ID, and thus whether that node
ID should be contained or not.
Some parameters on the other methods of the ReverifyQueue interface have
been changed to accept pointers; this was done ahead of the rest of the
changes for the reverification queue to better match the signatures of
the methods that these will replace once ReverifyQueue is actually being
used (meaning fewer changes to tests).
Refs: https://github.com/storj/storj/issues/5251
Change-Id: Ic38ce6d2c650702b69f1c7244a224f00a34893a1
The audit chore will be pushing a large number of segments to be
audited, and the db might choke on that large insert when under load.
This change divides the insert up into batches, which can be sized
however is optimal for the backing database. It also arranges for
segments to be inserted in the order of the primary key, which helps
performance on some systems.
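Sketched, each batch becomes one multi-row insert whose rows are pre-sorted by the primary key (names are illustrative):
```
-- rows in each batch are pre-sorted by (stream_id, position), the
-- primary key, so the insert walks the index in order; a batch holds
-- up to the configured batch size of rows
INSERT INTO verification_audits ( stream_id, position, inserted_at )
VALUES
    ( $1, $2, current_timestamp ),
    ( $3, $4, current_timestamp );
```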
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I941f580f690d681b80c86faf4abca2995e37135d
As part of the effort of splitting out the auditor workers to their own
process, we are transitioning the communication between the auditor
chore and the verification workers to a queue implemented in the
database, rather than the sequence of in-memory queues we used to use.
This logical database is safely partitionable from the rest of
satelliteDB.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I6cd31ac5265423271fbafe6127a86172c5cb53dc
`read one` is the wrong method when trying to select one row when there
are multiple; it returns a TooManyRows error. `read first` is the correct
method.
Change-Id: Ic6c92795486892ac041befd118b6945314bffeaa
Add LastOfflineEmail to overlay.NodeDossier. This is the last time a
node got an offline email. Add two new overlay db methods,
GetOfflineNodesForEmail and UpdateLastOfflineEmail. Edit the db method
UpdateCheckIn to nullify last_offline_email if the node is up.
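A rough sketch of the two sides of this (column names are assumptions):
```
-- GetOfflineNodesForEmail: offline long enough, not emailed too recently
SELECT id, email FROM nodes
WHERE last_contact_success < $1
  AND (last_offline_email IS NULL OR last_offline_email < $2)
LIMIT $3;

-- UpdateCheckIn: a successful check-in clears the marker
UPDATE nodes SET last_offline_email = NULL WHERE id = $1;
```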
Change-Id: I1ee60e7d98dd1b68348a57f9a4fb77c6c9895d6d
This change modifies the method responsible for returning project
usage summaries such that the end date of the given time period
is excluded to prevent overlap.
Change-Id: If06155efff5c6fce3865f5f6e4344873abe3e432
When a node checks in and its version is below the minimum, insert a
BelowMinVersion event into node events
Change-Id: I0e437ac34496778369515cbc40c15676da8b27ae
We want to be able to exclude contained nodes from nodes selection. For this, we add a 'contained' column to the nodes table to track the containment status.
Fixes https://github.com/storj/storj/issues/5231
Change-Id: Id78e645f172145adcb8664646e8ebf14e218b57b
Since the auditor will be moving to a different process from the
metainfo loop, we need a way of communicating which segments have
been chosen for audit. This queue will be that communication, for now.
Contrast this with the queue for _re_verifications in commit 9c67f62f.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I9a269c7ef21e6c5e9c6e5e1f3db298fe159a8a79
* Mark the node events table as "safely partitionable", meaning that it
is not and will not be queried relationally along with other tables. This way,
we can safely use this table in Postgres rather than CockroachDB,
where most of our other satellite tables are running.
* Add a dbx-generated delete function to the node events table, to allow
us to easily delete entries created before a provided time. This
allows us to keep the table clean, since there is no need to persist
entries after emails have been sent.
Change-Id: I25e8a5c4092fe49dcfa6c8bb73f2043646bb611f
Since the number of objects is growing and looping through all of them
is starting to take a lot of time, we are switching to a SQL query that
processes tallies in chunks per bucket. This is the second part of the fix
for the issue.
Closes https://github.com/storj/team-metainfo/issues/125
Change-Id: Ia26bcac0a7e2c6503df9ebbf4817a636841d3284
Implement node events DB with Insert and GetLatestByEmailAndEvent. Get
was changed to GetLatestByEmailAndEvent so we can verify items are being
inserted into the table without needing the ID, which is not available
to us in the tests.
Change-Id: I4abe63631c44774cd7e795fbab0cbab4d801db4c
Change node_events schema to use an id column as primary key rather than
node id because there can be multiple events per node id.
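The resulting shape is roughly this (a sketch, not the generated dbx schema):
```
CREATE TABLE node_events (
    id bytea NOT NULL,          -- random id; many events may share a node_id
    node_id bytea NOT NULL,
    email text NOT NULL,
    event integer NOT NULL,
    created_at timestamptz NOT NULL DEFAULT current_timestamp,
    PRIMARY KEY ( id )
);
```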
Change-Id: I518d8ef9ea658764876483e282a4058d3c4910f4
Add new table for node events. We can use this to notify node
operators of certain node events. Further, we can squash events for
multiple nodes with the same email into a single notification.
Change-Id: Icea6dd939df8fe4a98806bd79c014e21d239c43e
This table will be used as a queue for pieces that need to be reverified
(a regular audit timed out on the owning node, so now that node is
contained and we need to validate the piece before un-containing it).
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I5dcd26b6adced8674cbd81884c1543a61ea9d4c8
This change increments users' failed_login_count in the database layer to avoid a potential data race.
It also updates the login_lockout_expiration in the same operation.
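Doing the increment inside the database keeps it atomic; a minimal sketch:
```
-- the counter is bumped in the database itself, so two concurrent
-- failed logins can't both read N and write back N+1
UPDATE users
SET failed_login_count = failed_login_count + 1,
    login_lockout_expiration = $2
WHERE id = $1;
```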
see: https://github.com/storj/storj/issues/4986
Change-Id: I74624f1bee31667b269cb205d74d16e79daabcb6
After you create a brand new cluster (with storj-up, for example), project usage fails during the first 5 minutes.
The problem is the use of `AS OF SYSTEM TIME`, which points to a time when the master database didn't exist.
In this specific case the "database not found" error can be ignored to avoid such messages. (If the database is really missing, we will have problems much earlier, e.g. at login.)
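For context, the usage queries read from a slightly stale snapshot, something like this (table and interval are illustrative):
```
-- on a brand new cluster, 'now minus 5 minutes' predates the database
-- itself, so CRDB reports the database as not found
SELECT SUM(settled) FROM bucket_bandwidth_rollups
AS OF SYSTEM TIME '-5m'
WHERE project_id = $1;
```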
Change-Id: I51ee78994d91fc2a14b56646402faaaa8154c934
This patch addresses the following issues:
1. Running the full migration in CockroachDB is quite slow. We already have an approach for unit tests to start from the latest snapshot. This patch makes it possible to use it for integration tests.
2. Migration requires executing a separate command, which makes it hard to run the application in containerized test environments (like storj-up) or from an IDE. This patch introduces a hidden flag to run the migration.
3. Test user creation is painful. We do it by calling GraphQL + the admin API. Providing an option to create a test user makes the integration tests significantly simpler (especially as the projectID -> access grant can be predictable).
Change-Id: I61010728727b490ea6aac32620f2da0484966727
Add new project db method, GetSalt, to get project salt. If salt
column is empty, return the sha-256 hash of the project ID. This
new method is used in metainfo endpoint ProjectInfo to return the
project salt to the client. This is backwards compatible because
the salt column is not populated yet. The updated endpoint will
do the same thing as the current endpoint.
Change-Id: I7eba376c865e10995a5a916302feca7cd7c7efa2
Piece counts in the DB are stored as int64, and we would like to align
bloom filter processing with this type.
Change-Id: Iaec767e609a40d802077ae057520541805a7c44f
Remove pkg satellite/payments/monetary as it moved to storj.io/common.
Update all code pkg references from monetary to common/currency.
Change-Id: If2519f4c80cf315a9299e6521a6b9bbc6c399156
This change adds the following endpoints:
- projects/apikeys/{id}: returns a paged list of API keys for the
project specified by the given ID
- apikeys/delete/{id}: deletes the API key specified by the given ID
Additionally, the API Go code generator has been given the ability to
process unsigned integer parameters.
Change-Id: I5ff24e012da24a3f06bea1ebb62bae6ff62f951a
Add the user's current wallet balance to the endpoints for claiming and listing storjscan wallets. Also prevent a user with a claimed wallet address from claiming a new wallet.
Change-Id: I0dbf1303699f924d05c8c52359038dc5ef6c42a1
Adds a USDollarsMicro currency to the billing DB, which supports fractions of a cent with decimal places for better billing accuracy.
Change-Id: Id07dfae104d94e27c7b22ab8f5781010e16c4c8e
Removed batch insert of payments, since batches do not guarantee order. The order of payments sent to the payments DB is important because the billing chore requests new payments based on the last received payment. If the last payment inserted is not the last payment received, duplicate payments will be inserted into the billing table.
Change-Id: Ic3335c89fa8031f7bc16f417ca23ed83301ef8f6
This concludes the saga that began with commit c053bdbd, migrating away
from using gob-encoded big.Float values in the database to using
integers with an implied number of decimal places.
Nothing is using or accessing or expecting the _gob columns to exist
anymore. The transition code and the migration chore are gone. All that
remains is to drop the columns.
Change-Id: I9b15ee52f7781510a6dc91cf7c54f7f9022b1210
Sessions now expire after a much shorter amount of time, requiring
clients to issue API requests for session extension. This is handled
behind the scenes as the user interacts with the page, but once session
expiration is imminent, a modal appears which informs the user of their
inactivity and presents them with the choice of logging out or preserving
their session.
Change-Id: I68008d45859c814a835d65d882ad5ad2199d618e
This change tracks signup captcha scores in the signup_captcha column in the users table.
It slightly modifies the captcha verify method to return both the score and success.
see: https://github.com/storj/storj/issues/5067
Change-Id: I7b3993e44958cfcf179806c7df19d6887fe3eda9
Use the provided ConstraintViolation method of the pgutil package rather than importing jackc pgxerrcode directly.
Change-Id: I4e86713000b3f5f0aadd54beee8ee239f0c8df8e
Following the changes made to fix the storage usage graph
on the storagenode dashboard, we added a new
interval_end_time column to the accounting_rollups table.
We noticed a two-day delay in the graph; it turns out the sub-query
was wrong due to the conflicting interval_end_time column
in both tables, so we have to state explicitly which table's
column we are referring to.
Also, default the interval_end_time in the accounting
rollups to the start_time if the interval_end_time is null;
this default will be removed once we backfill the column and alter
it to be a non-nullable column.
Updates https://github.com/storj/storj/issues/4178
Change-Id: Iff32b261d07b6ee219d2b6b6542377f0c54633a1
Database migration tests are rather slow, reduce them to
only the last 10 migrations, which should be sufficient.
Change-Id: Ib9d964fe6ec86ddeeef26c66b6ea9207b7868855
The signup_captcha column will hold the captcha scores of new users.
see: https://github.com/storj/storj/issues/5067
Change-Id: Ia322af29a3b5b019b417843272506a3dbd1397e4
This is in response to community feedback that our existing reputation
calculation is too likely to disqualify storage nodes unfairly with
extreme swings up and down.
For details and analysis, please see the data_loss_vs_dq_chance_sim.py
tool, the "tuning reputation further.ipynb" Jupyter notebook in the
storj/datascience repository, and the discussion at
https://forum.storj.io/t/tuning-audit-scoring/14084
In brief: changing the lambda and initial-alpha parameters in this way
causes the swings in reputation to be smaller and less likely to put a
node past the disqualification threshold unfairly.
Note: this change will cause a one-time reset of all (non-disqualified)
node reputations, because the new initial alpha value of 1000 is
dramatically different, and the disqualification threshold is going to
be much higher.
Change-Id: Id6dc4ba8fde1be3db4255b72282207bab5491ca3
Adding an index to the timestamp field of the billing transaction table to improve query performance. This should prevent having to do a full table scan when we query for the last billing transaction of a particular source and/or type.
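The index itself is a one-liner along these lines (the index name is illustrative):
```
-- lookups filtered by source/type and ordered by timestamp can use
-- the index instead of scanning the whole table
CREATE INDEX billing_transactions_timestamp_index
    ON billing_transactions ( timestamp );
```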
Change-Id: I581f09494cc8662a12efba4302022a07121ba309
Adding an index to the wallet address field of the storjscan wallet table to improve query performance. This should prevent having to do a full table scan when we query for one or more wallet addresses for a given user in our queries.
Change-Id: Ic1b5d06c2258489e5464d186fed5270172f8cba5
The new dashboard currently gets stuck on loading and displays an error when
it fails to get usage data. Failure happens on satelliteDb due to a cockroach transaction error
caused by reading data before using AS OF SYSTEM TIME in the same transaction.
This change reverses the order of the daily usage queries to avoid this
error, and hides the loaders on the dashboard if/when an error occurs.
see: https://github.com/storj/storj/issues/5012
Change-Id: I06b6ee434f72242f9b7d21dec7aaf39d1d622f1e
I don't know why the go people thought this was a good idea, because
this automatic reformatting is bound to do the wrong thing sometimes,
which is very annoying. But I don't see a way to turn it off, so best to
get this change out of the way.
Change-Id: Ib5dbbca6a6f6fc944d76c9b511b8c904f796e4f3
During an update to the billing DB, there is a special-case failure that can occur if multiple updates to the table happen concurrently. In this case, the update would normally fail silently due to the balance constraint during update, and the subsequent insert for a new record fails because the user already exists in the table. The solution for this case is to simply retry the insert, with some limit to prevent infinite loops.
Change-Id: Ibe70fec2c386c25bd2484fe91f49a6a962357706
satellite/{payments/billing,satellitedb}: refactor billing DB
Update the billing table to use generated IDs, to include the source and status fields, and to add a metadata jsonb field that can be used for fields specific to a particular type of billing transaction. Additionally, a new table was added to keep track of user balances.
Change-Id: Ieb3a63aafd8fe21fc3386bafd43d52081b7d2838
Adding an interval_end_time column to the accounting_rollups table
to keep the last interval_end_time for each day's storage tallies.
Updates https://github.com/storj/storj/issues/4178
Change-Id: If7a8210c5e9fe2fc9df84b137a8b6e3db2471c58
Currently we have a significant number of tallies that need to be
deleted together. Add a limit (by default 10k) to how many will
be deleted at the same time.
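A sketch of a limited delete that works on both Postgres and CockroachDB (table and key columns assumed):
```
-- delete at most $2 old tallies per call; the caller repeats until done
DELETE FROM bucket_storage_tallies
WHERE (bucket_name, project_id, interval_start) IN (
    SELECT bucket_name, project_id, interval_start
    FROM bucket_storage_tallies
    WHERE interval_start < $1
    LIMIT $2
);
```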
Change-Id: If530383f19b4d3bb83ed5fe956610a2e52f130a1
Some nodes were added to the nodes table due to a bug in the quic-based
storagenode contact code. This is a tool to clean up these nodes.
Deleting with batch size 1k seems to take ~400ms on local CockroachDB.
Change-Id: Ic0c1180528c27952e19c431fc9cc327292a10a5f
Add a Payments method to the payments DepositWallets interface.
Exposes a payments retrieval API for a particular wallet to
other systems, such as the console billing API.
Change-Id: Ifcb3a35514aab50be00f6360007954980b5d8b38
The users.Update method in the satellitedb package takes a console.User
as an argument. It reads some of the fields on this struct and assigns
the value to dbx.User_Update_Fields. However, you cannot optionally
update only some of the fields. They all will always be updated. This means
that if you only want to update FullName, you still need to read the
user info from the DB to avoid updating the rest of the fields to zero.
This is not good because concurrent updates can overwrite each other.
This change introduces a new struct type, UpdateUserRequest, which
contains pointers for all the fields that are updated by satellite db
users.Update. Now the update method will check if a field is nil before
assigning the value to be updated in the db, so you only need to set the
field you want updated. For nullable columns, the respective field is a
double pointer. This allows us to update a column to NULL if the outer
pointer is not nil, but the inner pointer is.
Change-Id: I27f842d283c2711e24d51dcab622e57eeb9157f1
There are multiple entries in the users table with the same email
address. This is because in the past users were able to register
multiple times if the email was not verified. This is no longer
the case. If a user tries to register with an unverified email
already in the DB, we send a verification email instead of
creating another entry. However, since these old entries in the
table with duplicate emails were never cleaned up, the email
reminder chore will send out email verification reminders to them.
A single person will get a separate email for each entry in the DB
that has their email and status = 0.
Since the multiple entries with the same email problem was solved
a while ago, just add a constraint to GetUnverifiedNeedingReminder
to only select users created after a cutoff. Once the DB is
migrated to remove the duplicate emails, we can remove the cutoff.
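A sketch of the constrained selection (reminder conditions simplified):
```
SELECT id, email FROM users
WHERE status = 0
  AND created_at > $1  -- cutoff: skip old rows that may have duplicate emails
  AND (
      (verification_reminders = 0 AND created_at < now() - INTERVAL '1 day')
      OR (verification_reminders = 1 AND created_at < now() - INTERVAL '5 days')
  );
```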
github issue: https://github.com/storj/storj/issues/4853
Change-Id: I07d77d43109bcacc8909df61d4fb49165a99527c
The ApplyUpdates() method on the reputation.DB interface acts like the
similar Update() method, but can allow for applying the changes from
multiple audit events, instead of only one.
This will be necessary for the reputation write cache, which will batch
up changes to each node's reputation in order to flush them
periodically.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I44cc47767ea2d9423166bb8fed080c8a11182041
Updated the daily project usage query to return the correct allocated traffic.
If the allocated egress has expired, then we return the settled egress;
if not, then we return allocated egress minus dead egress.
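In query form, the per-day selection is roughly (a sketch; column names per the daily rollups table):
```
SELECT interval_day,
       CASE WHEN interval_day < $1  -- allocated egress already expired
            THEN egress_settled
            ELSE egress_allocated - egress_dead
       END AS egress
FROM project_bandwidth_daily_rollups
WHERE project_id = $2;
```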
Fix for this issue:
https://github.com/storj/storj/issues/4563
Change-Id: Ia15a50d3bb8d8cb1106936e17dbe0f1f5a40fa87
Fixed daily usage query returning single bucket usage.
We sum up bucket usages now.
Also fixed https://github.com/storj/storj/issues/4559.
Change-Id: I2eb6299f1ef500d68150879195011b6fbb5f37ed
Add a billing DB to the satellite. This DB will hold all transactions on the user's account and can be used to compute the user's current usable account balance.
Change-Id: I056416efc169e5e5e30c9f30cd8bc766b7bc8073
This has been a cause of some confusion, even though the fields are
labeled as being copies of config values.
Having them be under a field explicitly named "Config" makes this
clearer, plus, allows the values to be passed in simply as a copy
of the Config struct from the satellite, rather than copying the fields
individually (which can be error-prone, particularly as the AuditCount
field in UpdateRequest is apparently not the same thing as the
AuditCount field in reputation.Config).
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I386953347b71068596618616934aa28e3245cdc1
Add a storjscan wallets DB to the satellite. For now this DB is a one-to-one mapping of the user's account to a storjscan wallet that can be used by the account holder to make payments on their Storj account.
relates to https://github.com/storj/storj/issues/4347
Change-Id: I6e65b15817b90ceb75641244f9bf173c3b4228a7
The two protobuf types are identical except that one is in our common/pb
package, and the other is in internalpb. Since the type is public
already, and there is no difference in the internal one, it seems better
to use the public one for all satellite needs.
There is also another type which is essentially identical, but which is
not a protobuf type, also called "AuditHistory". It looks like we don't
ever actually need to have a separate type from the protobuf one.
This change makes us use "storj/common/pb".AuditHistory for all of our
AuditHistory needs.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: If845fde21bb31c801db6d67ffc9a146d1617b991
This functionality will be needed in both packages, so here we move it
into the more general reputation-code package and export it for use in
satellitedb.
This also removes the related UpdateAuditHistory() signature from the
reputation DB interface, since it doesn't have anything to do with the
db. It doesn't need to be a method, either.
Finally, this changes the test for addAudit to be a plain test function
instead of using testplanet.Run(). It didn't need a whole testplanet
setup or any databases.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I90f6a909e5404f03ad776b95cfa2f248308c57c1
Márton found out that DROP DATABASE is rather slow on CRDB, and it makes
a significant impact when running the whole testsuite: summed test time
goes from ~2.5h to ~2h, and the end-to-end suite from ~20m to ~16m.
This adds a new flag STORJ_TEST_COCKROACH_NODROP for enabling this
behavior in the CI environment.
Fixes https://github.com/storj/dev-enablement/issues/6
Change-Id: I5a6616c32dc6596a96ba3d203f409368307d7438
This will let us update our reputation cache when writing through to the
db.
Since the information is already being fetched from the db and returned
to the application, the extra cpu load here should be minimal.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I2b8619f2c0d541893c7d3e7d33b1863b96775ebd
We want to remind unverified users to verify their emails:
once after 24 hours have passed and again after 5 days have passed.
Add mailservice.Service to satellite core because it is needed by the
chore for sending emails. To add the mailservice.Service to the core,
we create a helper function in satellite/peer.go to avoid duplicating
the code in both api.go and core.go. In addition to the chore, this
change adds methods to users.DB to get unverified users in need of
reminder.
Change-Id: I4e515bdf43f922788b4f965b2efb34fa32288bd1
We added nodes.disqualification_reason recently, but we didn't add a
corresponding column in the reputations table (despite having a
corresponding `disqualified` column there).
Without this change, the (very useful and informative) assignments to
updateFields.DisqualificationReason in reputations.go have no effect.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I77404902ca64b56aed72f1de76b303fe82b76aab
When a new user registers, we send a verification request to their
email. Currently, if they do not verify their email, we take no further
action. We want to send these users reminders: one after about one day
and one after about 5 days. To do this we will use this new
verification_reminders column.
It will look something like this:
```
SELECT email FROM users
WHERE status = 0
AND (
    (verification_reminders = 0 AND created_at < now() - INTERVAL '1 day')
    OR (verification_reminders = 1 AND created_at < now() - INTERVAL '5 days')
)
```
Change-Id: If0620e08c97e9e337c9563481d665c5bd462693b
Fix for this customer issue:
https://github.com/storj/customer-issues/issues/34
With this change we fetch bucket usage since the bucket's creation instead of using the project's createdAt timestamp.
Change-Id: Ic0ea5d169056a5bd64ed143d13954d794da6e1d2
Things that make debugging easier.
* Added logging to automatic link clicking to make it obvious when it
fails.
* Added monitoring to oidc.
* Made dbx create calls noreturn for oauth_*
Change-Id: I37397b4e84ce5bfd82954aed9c38fdfd52595f24
We were already able to override (or not) metadata with this method,
but to be explicit we are introducing a new option to control storing
metadata with an object. A separate option should be less error-prone.
https://github.com/storj/team-metainfo/issues/105
Change-Id: I4c5bce953a633a0009b05c5ca84266ca6ceefc26
Last part of the backwards-compatible db migration to remove the "suspended" column.
Removes the exception in `migrate_test.go` which strips the "suspended" column in tests.
Adds a DB migration to remove the "suspended" column from the 'nodes' and 'reputations' tables.
Change-Id: I02051279f6f4181e966c567919af0e774583f165
Set the disqualification reason when reputation stats are updated on DB.Update.
Added tests for DisqualifyNode and for disqualification cases which happen during Update.
Change-Id: I00130ab5d9722422805159ad2f183c205de60f7e
Attribution is attached to bucket usage, but that's more granular than
necessary for the attribution report. This change iterates over the
bucket attributions, parses the user agent, converts the first entry
to lower case, and uses that as the key to a map which holds the
attribution totals for each unique user agent.
Change-Id: Ib2962ba0f57daa8a7298f11fcb1ac44a8bb97875
Migrate free tier users to have default limits if their limits were set to 0.
They were affected by the incorrect behavior of the Update user query.
Change-Id: I4c49c8d99b12dba2b9b0ab61b2175085976dcc95
Added failed_login_count and login_lockout_expiration columns to the users table to track users' failed login attempts.
We want to prevent brute-forcing of user login, so this is the first step.
Change-Id: I06b0b9f5415a1922e08cd9908893b2fd3c26bca0
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage with byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees so return segment-hours and object-hours too.
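Sketched, the aggregation drops the division and returns hour-weighted sums (names are illustrative; the real query joins the attribution tables):
```
-- summing hourly tallies yields byte-hours (and object-/segment-hours)
-- directly, matching how usage is charged
SELECT user_agent,
       SUM(total_bytes)   AS byte_hours,
       SUM(object_count)  AS object_hours,
       SUM(segment_count) AS segment_hours
FROM bucket_storage_tallies
GROUP BY user_agent;
```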
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
This sets the corresponding _numeric columns to be NOT NULL (it has been
verified manually that there are no more NULL _numeric values on any
known satellites, and it should be impossible with current code to get
new NULL values in the _numeric columns).
We can't drop the _gob columns immediately, as there will still be code
running that expects them, but once this version is deployed we can
finally drop them and be totally done with this crazy 5-step migration.
Change-Id: I518302528d972090d56b3eedc815656610ac8e73
We are in the process of creating an api to allow users to manage their
accounts programmatically. We would like to use api keys for
authorization. We were originally going to create an entirely new table
for these api keys, but seeing as we already have 2 other tables for
keys/tokens, api_keys and oauth_tokens, we thought it might be better to
use one of these. We're using oauth_tokens.
We create a new oidc.OAuthTokenKind for account management api keys:
KindAccountManagementTokenV0. We made the key versioned because we
likely want to improve the implementation in the future, but we want to
get something functional out the door ASAP because the account management
api feature is highly desired.
Add a new method to oidc.OAuthTokens interface for revoking v0 account
management api keys, RevokeAccountManagementTokenV0. Add update method
to dbx implementation to allow updating the expiration. We will revoke
these keys by setting the expiration to 0 so they are expired.
Change-Id: Ideb8ae04b23aa55d5825b064b5e43e32eadc1fba
We have an issue with the latest CRDB: a single query cannot modify
the same table multiple times, and the build is now blocked.
This change unblocks the build by:
* adjusting the query for inserting into the repair queue
* temporarily removing the code for deletion for server-side copy
* temporarily disabling backwards compatibility tests for CRDB
Change-Id: Idd9744ebd228e5dc05bdaf65cfc8f779472a975d
Remove the redundant suspension timestamp column from the nodes and reputations tables.
The suspended timestamp was moved to unknown_audit_suspended, and the suspended column
is no longer used, so there is no point in keeping both.
Change-Id: Ieea3f12141b33ec9efe7594f4c9dbc7e10675b0e
When performing re-authorizations for OAuth, we need to pull up an
APIKey using its project id and name. This change also updates the
APIKeyInfo struct to return the head value associated with an API
key.
Change-Id: I4b40f7f13fb9b58a1927dd283b42a39015ea550e
Update the user to the default paid tier project limit, which is currently 3 projects, when the user upgrades to a paid account.
Change-Id: I95b19d62cebc7d878b716355f2ebcaf0b51ca3f7
For nodes in excluded areas, we don't necessarily want to remove them
from the pointer, but we do want to increase the number of pieces in the
segment in case those excluded area nodes go down. To do that, we
increase the number of pieces repaired by the number of pieces in
excluded areas.
Change-Id: I0424f1bcd7e93f33eb3eeeec79dbada3b3ea1f3a
We decided that we want the segment limit for paying users to be high
enough that we don't have to change it too often.
Fixes https://github.com/storj/storj/issues/4590
Change-Id: Ic1c38bf3e2fcc000548ff4c7e7004647b39fbecf
Added a new projectaccounting query to get a project's single bucket usage rollup.
Added a new service method to call the new query.
Added an implementation for the IsAuthenticated method, which is used by the new generated API.
Change-Id: I7cde5656d489953b6c7d109f236362eb465fa64a
Add a RepairExcludedCountryCodes config flag to overlay for providing a list of country codes whose nodes should be excluded from target repair selection.
Mark segments with fewer than repairThreshold pieces in countries not in RepairExcludedCountryCodes as unhealthy.
With this change, the repair process is not affected. The segment will be removed from the repair queue by the repairer.
Another change will handle the logic at the repairer level.
Fixes https://github.com/storj/team-metainfo/issues/95
Change-Id: I9231b32de117a116488de055a3e94efcabb46e81
Remove special characters from the test database name to improve
compatibility with external tools like the Cockroach web gui.
Change-Id: I3a6a368a5051e3064e7279b14eec4f2d4ff3c435
All code on known satellites at this moment in time should know how to
populate and use the new numeric columns on the
stripecoinpayments_tx_conversion_rates and coinpayments_transactions
tables in the satellite db. However, there are still gob-encoded
big.Float values in the database from before these columns existed. To
get rid of those values, so that we can excise the gob-decoding code
from the relevant sections, however, we need something to read the gob
bytestrings and convert them to numeric values, a few at a time, until
they're all gone.
To accomplish that, this change adds two chores to be run in the
satellite core process: one for the coinpayments_transactions table, and
one for the stripecoinpayments_tx_conversion_rates table. They should
run relatively infrequently, so that we do not impose any undue load on
processing resources or the db.
Both of these chores work without using explicit sql transactions, but
should still be concurrent-safe, since they work by way of
compare-and-swap type operations.
If the satellite core process needs to be restarted, both of these
chores will start scanning for migrateable rows from the beginning of
the id space again. This is not ideal, but shouldn't be a problem (as
far as I can tell, there are only a few thousand rows at most in either
of these tables on any production satellite).
Change-Id: I733b7cd96760d506a1cf52735f598c6c3aa19735
For a thorough explanation of the overall transition, see the message on
commit c053bdbd70.
This change will rename the columns containing gob-encoded big.Floats
and add new columns which will contain the equivalent data in a more
sql-friendly format.
The change should *not* break already-running satellite processes,
because all functionality touching these tables has already been taught
to work with these new columns if it sees any "undefined column" errors.
Change-Id: I229324376533e383c5d05064b8aedad149cf825b
This change adds some more checks to the deletion process for projects and
users, since we ran into a race condition during invoicing, where projects
have been deleted before the invoicing was finished, leading to missing
references.
This PR changes the logic to block user deletion if we are in exactly that period,
while also allowing the deletion of projects/users on free tier during the month.
Change-Id: Ic0735205e6633762fb7e3c2fa13e744cdfa5ec32
Finished implementing queries for both bandwidth and storage using pgx.Batch.
Fixed CSP styling issue.
Change-Id: I5f9e10abe8096be3115b4e1f6ed3b13f1e7232df
We have here in fact two migrations. The first is for existing users:
we need to check whether each is a paying user (paid_tier) and set 1M
for them and 150K for the others.
The second migration sets limits for projects depending on the owner.
If the owner is a paying user (paid_tier), then the project should have
a 1M limit; otherwise it should be 150K. To make the migration faster,
the projects table's segment_limit was initially set to 1M by default.
With the migration we select all paying users and set the 150K limit
for all projects whose owners are not in the paying users set.
Initially we had a concern whether that query would be quick enough to
be executed during deployment, but after investigation the CRDB team
confirmed that this should take seconds for our DBs.
Fixes https://github.com/storj/team-metainfo/issues/70
Change-Id: I8be06e9f949b68b993e043cc15525e8483bf49ea
Implemented endpoint and query to get bandwidth chart data for new project dashboard.
Connected backend with frontend.
Storage chart data is mocked right now.
Change-Id: Ib24d28614dc74bcc31b81ee3b8aa68b9898fa87b
A few months ago we removed all references to the contained
column in nodes and reputations
bb21551a9c
and
56fe636123
But we never did the migration to drop the columns.
This commit will finally do that.
Change-Id: I82aa2f257b1fb14a2f1c4c4a1589f80895360ae4
inconsistency
The original design had a flaw which can potentially cause a discrepancy
in a node's reputation status between the reputations table and the nodes table.
If a failure (network issue, db failure, satellite failure, etc.)
happens between the update to the reputations table and the update to the
nodes table, the data can get out of sync.
This PR tries to fix the above issue by passing the node's reputation from
the beginning of an audit/repair (this data is from the nodes table) through
to the next update in the reputation service. If the updated reputation status
from the service differs from the existing node status, the service will try
to update the nodes table. In the case of a failure, the service will be able
to try to update the nodes table again, since it can still see the discrepancy
in the data. This will allow both tables to be in sync eventually.
Change-Id: Ic22130b4503a594b7177237b18f7e68305c2f122
Batching of the order submissions can lead to combining the allocated
traffic totals for two completely different time windows, resulting
in incorrect customer accounting. This change will group the batched
order submissions by projectID as well as time window, leading to
distinct updates of a buckets bandwidth rollup based on the hour
window in which the order was created.
Change-Id: Ifb4d67923eec8a533b9758379914f17ff7abea32
We want to issue a reminder to users when they don't verify their email within 24hrs of registering. This change only adds a column to the users table.
Change-Id: I92e2baeabf179338ffec01574d4752c0ccdba88b
All the limits we have for projects also have parent limits stored
with the user data. A newly created project first takes its limits from
the owner's (user) limits.
This change extends the users table with a project_segment_limit
column and adds functionality to get and set the value of this
column.
Change-Id: Iff5e36c62b517652390b649fc05992475916ecff
Exposes functionality to get and update the project segment
limit. It will be used to limit the number of segments per project
while uploading objects.
Change-Id: I971d48eebb4e7db8b01535c3091829e73437f48d
We want to set a maximum number of segments per
project. This change only adds the column to the projects table.
A default value of 1M is set to make the later migration easier,
as we need to set 1M for paid tier users and 140K for free
tier users.
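The schema change itself is tiny; a sketch (the real change is a versioned migration):
```
-- 1M default, per the migration plan above
ALTER TABLE projects
    ADD COLUMN segment_limit bigint DEFAULT 1000000;
```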
Change-Id: I8e83712e08c5bd91dfa59f652d17e45c14240a36
Added new query to get project object and segment count.
Added appropriate object and segment count view for new project dashboard.
Change-Id: I69a2e55442f318c51dc365c0c578b964f2f06c7f
This reverts commit 2c0a360a14.
Avoid big transactions. We'll do it outside of the migration pipeline.
Change-Id: Iade810d81bb2453c9e351149cb84662b207ee527
Value attribution codes were converted into UUIDs and stored in the users, projects, api_keys, bucket_metainfos, and value_attributions tables in the partner_id column. This migration will lookup the appropriate partner name associated with each of these UUIDs, and store the partner name directly in the user_agent column within each table. If an error occurs during the partner ID to partner name conversion, the partner ID value will be migrated to user_agent.
A note on the migration test data, postgres.v182.sql:
With one exception, all preexisting rows in the relevant tables had a NULL partner_id. Therefore, we needed to insert new rows with partner_id set under the OLD DATA section in order to test that the migration works. For each affected table, we insert one row with a valid partner ID which has a corresponding partner name, and one row with a partner ID which would return an error during the conversion to the partner name.
Change-Id: Iad977d72df0ce95a0c5ca80a065c4276ec1f2354
The main motivation is to wrap the bucket DB and metainfo DB, so we
could check if a bucket is empty before applying geofencing config.
Change-Id: I8bac21555e01d51a663fb557bc1acfc8106bc2e1
To allow for changing limits for new users, while leaving existing users limits as they are, we must store the project limits for each user. We currently store the limit for the number of projects a user can create in the user DB table. This change would also store the project bandwidth and storage limits in the same table.
Change-Id: If8d79b39de020b969f3445ef2fcc370e51d706c6
Alter the satellite DB's nodes table to add a `disqualification_reason` int column
which contains a disqualification type enum indicating why the node was disqualified.
Change-Id: Ia514557018ca27e1984216dc5004346d59869d16
We had a bug in the stray nodes chore where nodes who had not been seen
in several months were not being DQd. We figured out that this was
happening because we were using two queries: The first to grab
nodes where last_contact_success < some cutoff, the second to DQ them
unless last_contact_success == '0001-01-01 00:00:00+00'. The problem
is that if all of the nodes returned from the first query had
last_contact_success of '0001-01-01 00:00:00+00', we would pass them to
the second query which would not DQ them. This would result in the stray
nodes DQ loop ending since we found a number of nodes to DQ less than the
limit.
The fix: add the "WHERE last_contact_success != '0001-01-01
00:00:00+00'::timestamptz" to the selection query.
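The selection with the fix applied looks roughly like this (simplified; the real query has more conditions):
```
SELECT id FROM nodes
WHERE last_contact_success < $1  -- cutoff
  -- nodes that never had a successful contact are excluded up front
  AND last_contact_success != '0001-01-01 00:00:00+00'::timestamptz
  AND disqualified IS NULL
LIMIT $2;
```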
Change-Id: I4e60de90b68d8745d641b4467c2b23e0e56f7dff
Even though we want to start charging segment fee instead of object fee,
it's hard for users to understand what a segment is. This PR adds the
object count back in the UI alongside with segment count to help address
the issue.
Change-Id: I92eb42c769d350eba68a72443deffec5c278359c
table and drop not null constraint on objects column
Since we want to move from charging our customers by object count to
segment count, this PR prepares the database to be able to record segment count
instead of object count for the satellite's billing system.
Change-Id: Ie91ef354e78d24a268bc1cdc4327c182f733321e
listPendingTransitionShim is a temporary transition shim intended to
make existing API processes keep working when a future DB schema change
is executed. For more explanation, see the message on commit c053bdbd70.
However, the shim has a small bug: it is missing the ORDER BY clause
that appears in the original ListPending method. This transition shim
code won't ever run until we make the DB schema change, so this bug
hasn't hurt anything yet; it's just important that we fix it before the
DB schema change happens.
Change-Id: I5953651583ee236500c2c07141dfc9d690a95118
This update is to set up users being able to register with a promo code added to their account in place of the free tier coupon.
Change-Id: I7badf87937b12664f145520b6dcc4b26fe750407
We don't need the column uses_segment_transfer_queue in graceful_exit_progress,
as all exiting nodes now use graceful_exit_segment_transfer_queue and
the table graceful_exit_transfer_queue has been dropped.
Change-Id: I4b7c087433f04138cf09bcf8ad3d8de2c185502a
Multipart upload limits added. Last part has no size limit.
Max number of parts: 10000, min part size: 5 MiB
Change-Id: Ic2262ce25f989b34d92f662bde720d4c4d0dc93d
The union query for both reputable and new nodes didn't work properly. The
top-level query is required to have an `AS OF SYSTEM TIME` clause as
well.
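The shape of the fix, sketched with simplified columns (the real node selection query is more involved):
```
SELECT id, address FROM (
    (SELECT id, address FROM nodes WHERE vetted_at IS NOT NULL)  -- reputable
    UNION ALL
    (SELECT id, address FROM nodes WHERE vetted_at IS NULL)      -- new
) AS all_nodes
AS OF SYSTEM TIME '-10s';  -- required on the top-level statement too
```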
Change-Id: I8ee6dd5b700c2b1ed2aa562962bfa72be7eec30a
Uplink needs only some of the columns we are reading from the DB.
To improve performance we should read only those that are
really needed.
Change-Id: Ib39259318169c46afe5fa4c6ce2184da82e960c8
We don't use this column for anything. If you want to know if a node is
contained, you can check the pending_audits table.
Change-Id: I5671722a5fc6e1749d3a49e187a56556000ff941
We don't use this column for anything. If you want to know if a node is
contained, you can check the pending_audits table.
Change-Id: I8da1d8e01a2dcaff63c5067a7927b5451424ad04
Removes database tables and functionality related to our custom
coupon implementation because it has been superseded by the Stripe
coupon and promo code system. Requires implementations of the
payments Invoices interface to return coupon usages along with
invoices.
Change-Id: Iac52d2ff64afca8cc4dbb2d1f20e6ad4b39ddfde
Cockroach doesn't like concurrent database creation, add a limiter to
avoid overloading the database.
Also limit the cockroach testing to the last 10 migrations.
Change-Id: Ie73c516ac2d362b251f605049e51eb25888432bd
Populate the egress_dead column to take into account allocated bandwidth that can be removed because orders have been sent by the storage nodes. The bandwidth not used in these orders can be allocated again.
Change-Id: I78c333a03945cd7330aec052edd3562ec671118e
Drop table graceful_exit_transfer_queue which is not used anymore (replaced by graceful_exit_segment_transfer_queue).
Change-Id: Ie254fe9a54fb0784e350a439ce7a9bc99a3a58b5
Migration tests are very heavy on database schema changes, which may
cause delays and retries. Separate out the migration tests and ensure
that they do not run concurrently on the same database.
Change-Id: I35b17525f18fd923546ce1fcc12d805c95073b6b
Why: big.Float is not an ideal type for dealing with monetary amounts,
because no matter how high the precision, some non-integer decimal
values can not be represented exactly in base-2 floating point. Also,
storing gob-encoded big.Float values in the database makes it very hard
to use those values in meaningful queries, making it difficult to do
any sort of analysis on billing.
Now that we have amounts represented using monetary.Amount, we can
simply store them in the database using integers (as given by the
.BaseUnits() method on monetary.Amount).
We should move toward storing the currency along with any monetary
amount, wherever we are storing amounts, because satellites might want
to deal with currencies other than STORJ and USD. Even better, it
becomes much clearer what currency each monetary value is _supposed_ to
be in (I had to dig through code to find that out for our current
monetary columns).
Deployment
----------
Getting rid of the big.Float columns will take multiple deployment
steps. There does not seem to be any way to make the change in a way
that lets existing queries continue to work on CockroachDB (it could be
done with rules and triggers and a stored procedure that knows how to
gob-decode big.Float objects, but CockroachDB doesn't have rules _or_
triggers _or_ stored procedures). Instead, in this first step, we make
no changes to the database schema, but add code that knows how to deal
with the planned changes to the schema when they are made in a future
"step 2" deployment. All functions that deal with the
coinbase_transactions table have been taught to recognize the "undefined
column" error, and when it is seen, to call a separate "transition shim"
function to accomplish the task. Once all the services are running this
code, and the step 2 deployment makes breaking changes to the schema,
any services that are still running and connected to the database will
keep working correctly because of the fallback code included here. The
step 2 deployment can be made without these transition shims included,
because it will apply the database schema changes before any of its code
runs.
Step 1:
No schema changes; just include code that recognizes the
"undefined column" error when dealing with the
coinpayments_transactions or stripecoinpayments_tx_conversion_rates
tables, and if found, assumes that the column changes from Step
2 have already been made.
Step 2 (see the SQL sketch after this list):
In coinpayments_transactions:
* change the names of the 'amount' and 'received' columns to
'amount_gob' and 'received_gob' respectively
* add new 'amount_numeric' and 'received_numeric' columns with
INT8 type.
In stripecoinpayments_tx_conversion_rates:
* change the name of the 'rate' column to 'rate_gob'
* add new 'rate_numeric' column with NUMERIC(8, 8) type
Code reading from either of these tables must query both the X_gob
and X_numeric columns. If X_numeric is not null, its value should
be used; otherwise, the gob-encoded big.Float in X_gob should be
used. A chore might be included in this step that transitions values
from X_gob to X_numeric a few rows at a time.
Step 3:
Once all prod satellites have no values left in the _gob columns, we
can drop those columns and add NOT NULL constraints to the _numeric
columns.
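For illustration, the Step 2 changes expressed as plain DDL (a sketch; the real change runs through the migration framework):
```
-- coinpayments_transactions
ALTER TABLE coinpayments_transactions RENAME COLUMN amount TO amount_gob;
ALTER TABLE coinpayments_transactions RENAME COLUMN received TO received_gob;
ALTER TABLE coinpayments_transactions ADD COLUMN amount_numeric INT8;
ALTER TABLE coinpayments_transactions ADD COLUMN received_numeric INT8;

-- stripecoinpayments_tx_conversion_rates
ALTER TABLE stripecoinpayments_tx_conversion_rates RENAME COLUMN rate TO rate_gob;
ALTER TABLE stripecoinpayments_tx_conversion_rates ADD COLUMN rate_numeric NUMERIC(8, 8);
```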
Change-Id: Id6db304b404e6fde44f5a8c23cdaeeaaa2324f20
Why: big.Float is not an ideal type for dealing with monetary amounts,
because no matter how high the precision, some non-integer decimal
values can not be represented exactly in base-2 floating point. Also,
storing gob-encoded big.Float values in the database makes it very hard
to use those values in meaningful queries, making it difficult to do
any sort of analysis on billing.
For better accuracy, then, we can just represent monetary values as
integers (in whatever base units are appropriate for the currency). For
example, STORJ tokens or Bitcoins can not be split into pieces smaller
than 10^-8, so we can store amounts of STORJ or BTC with precision
simply by moving the decimal point 8 digits to the right. For USD values
(assuming we don't want to deal with fractional cents), we can move the
decimal point 2 digits to the right.
To make it easier and less error-prone to deal with the math involved, I
introduce here a new type, monetary.Amount, instances of which have an
associated value _and_ a currency.
Change-Id: I03395d52f0e2473cf301361f6033722b54640265
This PR utilizes the new burst limit column from the projects table to
allow control over the limit for requests per second and the token bucket size.
When no burst limit is explicitly set, the rate limit is applied to both, so
we don't limit how quickly requests can be made in a second.
Change-Id: I883235c60c5d6416aeadd1c80ed2ebd193aa4d9f
In order to limit the amount of overall requests a user can issue in a
time span, we need to have the ability to define such limit separate
from per second request rate.
This PR adds a new column on the projects table to store the burst limit
per project.
Change-Id: I7efc2ccdda4579252347cc6878cf846b85146dc7
Remove the logic associated to the old transfer queue.
A new transfer queue (gracefulexit_segment_transfer_queue) has been created for migration to segmentLoop.
Transfers from the old queue were not moved to the new queue.
Instead, the old queue was still used for nodes which had initiated graceful exit before the migration.
There are no such nodes left, so we can remove all this logic.
In a next step, we will drop the table.
Change-Id: I3aa9bc29b76065d34b57a73f6e9c9d0297587f54
nodes and audit_history tables
This PR removes all code references to the audit_histories table and the
```
audit_reputation_alpha, audit_reputation_beta,
unknown_audit_reputation_alpha, unknown_audit_reputation_beta,
```
columns from the nodes table.
It also drops the audit_histories table from the db, since the code
referencing it is currently not being used.
Change-Id: Ifcda8db36afb3a333d487ff831f2fdefc8b02a4c
gracefulexit
With the effort to move audit-related data into the reputation store, this
PR updates the gracefulexit endpoint to use the reputation service to get a
node's audit score.
Change-Id: Iad93ea689ad67ff9c57c7be16687e21e715fab7a
Currently, the reputation table is only populated when a node has been
audited. This is ok in production; however, a lot of our tests don't
upload any data or trigger audits.
This PR adds an initialization step in testplanet to populate the
reputation table with zero values for node reputations.
Change-Id: I11b381236669db346dc68a48a6d4a27334a0a8b8
package in audit
This PR implements a reputation store and replaces overlay in the audit
service with that store for storing nodes' audit stats.
To keep the changeset smaller, most of the changes in this PR copy the
audit logic in overlay to the reputation package. In a following PR, the
duplicated code will be removed from overlay.
Change-Id: I16c12494a0970f44c422b26cf603c1dc489e5bc1