The audit chore will be pushing a large number of segments to be
audited, and the database might choke on such a large insert when under
load. This change divides the insert into batches, which can be sized to
whatever is optimal for the backing database. It also arranges for
segments to be inserted in the order of the primary key, which helps
performance on some systems.
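Sketch of the idea (hypothetical Segment type and insertBatch helper; the
real batch size and column set live in the audit queue implementation):

    package audit

    import (
        "bytes"
        "context"
        "sort"
    )

    // Segment is a hypothetical stand-in for a queued audit segment; the
    // real type carries the primary-key fields of the queue table.
    type Segment struct {
        StreamID [16]byte
        Position uint64
    }

    // pushBatched inserts segments in primary-key order and in fixed-size
    // batches, so one huge INSERT never hits the database at once.
    // batchSize is assumed to be > 0.
    func pushBatched(ctx context.Context, insertBatch func(context.Context, []Segment) error, segments []Segment, batchSize int) error {
        sort.Slice(segments, func(i, j int) bool {
            if c := bytes.Compare(segments[i].StreamID[:], segments[j].StreamID[:]); c != 0 {
                return c < 0
            }
            return segments[i].Position < segments[j].Position
        })
        for len(segments) > 0 {
            n := batchSize
            if n > len(segments) {
                n = len(segments)
            }
            if err := insertBatch(ctx, segments[:n]); err != nil {
                return err
            }
            segments = segments[n:]
        }
        return nil
    }
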
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I941f580f690d681b80c86faf4abca2995e37135d
As part of the effort of splitting out the auditor workers to their own
process, we are transitioning the communication between the auditor
chore and the verification workers to a queue implemented in the
database, rather than the sequence of in-memory queues we used to use.
This logical database is safely partitionable from the rest of
satelliteDB.
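Roughly, the chore side and the worker side will meet at an interface
like this hypothetical sketch (names are illustrative, not the final API):

    package audit

    import "context"

    // QueueSegment is a hypothetical stand-in for a segment chosen for audit.
    type QueueSegment struct {
        StreamID [16]byte
        Position uint64
    }

    // VerifyQueue sketches the database-backed queue between the audit
    // chore and the verification workers.
    type VerifyQueue interface {
        // Push enqueues segments chosen for audit.
        Push(ctx context.Context, segments []QueueSegment) error
        // Next pops the next segment for a verification worker to process.
        Next(ctx context.Context) (QueueSegment, error)
    }
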
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I6cd31ac5265423271fbafe6127a86172c5cb53dc
read one is the wrong dbx query type when selecting a single row from a
set that may contain multiple rows: it returns a TooManyRows error.
read first is the correct method to use here.
Change-Id: Ic6c92795486892ac041befd118b6945314bffeaa
Add LastOfflineEmail to overlay.NodeDossier. This is the last time a
node got an offline email. Add two new overlay db methods,
GetOfflineNodesForEmail and UpdateLastOfflineEmail. Edit db method
UpdateCheckIn to nullify last_offline_email if the node is up.
Change-Id: I1ee60e7d98dd1b68348a57f9a4fb77c6c9895d6d
This change modifies the method responsible for returning project
usage summaries such that the end date of the given time period
is excluded to prevent overlap.
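In other words, the period is now treated as a half-open interval,
[since, before), as in this illustrative helper (not the actual query):

    package console

    import "time"

    // withinPeriod reports whether t falls in the half-open interval
    // [since, before), so the end of one period never overlaps the start
    // of the next.
    func withinPeriod(t, since, before time.Time) bool {
        return !t.Before(since) && t.Before(before)
    }
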
Change-Id: If06155efff5c6fce3865f5f6e4344873abe3e432
When a node checks in and its version is below the minimum, insert a
BelowMinVersion event into the node events table.
Change-Id: I0e437ac34496778369515cbc40c15676da8b27ae
We want to be able to exclude contained nodes from node selection. For this, we add a 'contained' column to the nodes table to track the containment status.
Fixes https://github.com/storj/storj/issues/5231
Change-Id: Id78e645f172145adcb8664646e8ebf14e218b57b
Since the auditor will be moving to a different process from the
metainfo loop, we need a way of communicating which segments have
been chosen for audit. This queue will be that communication, for now.
Contrast this with the queue for _re_verifications in commit 9c67f62f.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I9a269c7ef21e6c5e9c6e5e1f3db298fe159a8a79
* Mark the node events table as "safely partitionable", meaning that it
is not and will not be queried relationally along with other tables. This
way, we can safely use this table in Postgres rather than CockroachDB,
where most of our other satellite tables live.
* Add a dbx-generated delete function to the node events table, to allow
us to easily delete entries created before a provided time. This
allows us to keep the table clean, since there is no need to persist
entries after emails have been sent.
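In raw SQL terms the cleanup boils down to something like the sketch
below (the generated method and exact schema may differ):

    package nodeevents

    import (
        "context"
        "database/sql"
        "time"
    )

    // cleanupBefore deletes node events created before the given time; a
    // simplified stand-in for the dbx-generated delete.
    func cleanupBefore(ctx context.Context, db *sql.DB, before time.Time) (int64, error) {
        res, err := db.ExecContext(ctx,
            `DELETE FROM node_events WHERE created_at < $1`, before)
        if err != nil {
            return 0, err
        }
        return res.RowsAffected()
    }
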
Change-Id: I25e8a5c4092fe49dcfa6c8bb73f2043646bb611f
Since the number of objects is growing and looping through all of them
is starting to take a lot of time, we are switching to a SQL query that
processes tallies in chunks per bucket. This is the second part of the
issue fix.
Closes https://github.com/storj/team-metainfo/issues/125
Change-Id: Ia26bcac0a7e2c6503df9ebbf4817a636841d3284
Implement node events DB with Insert and GetLatestByEmailAndEvent. Get
was changed to GetLatestByEmailAndEvent so we can verify items are being
inserted into the table without needing the ID, which is not available
to us in the tests.
Change-Id: I4abe63631c44774cd7e795fbab0cbab4d801db4c
Change node_events schema to use an id column as primary key rather than
node id because there can be multiple events per node id.
Change-Id: I518d8ef9ea658764876483e282a4058d3c4910f4
Add new table for node events. We can use this to notify node
operators of certain node events. Further, we can squash events for
multiple nodes with the same email into a single notification.
Change-Id: Icea6dd939df8fe4a98806bd79c014e21d239c43e
This table will be used as a queue for pieces that need to be reverified
(a regular audit timed out on the owning node, so now that node is
contained and we need to validate the piece before un-containing it).
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I5dcd26b6adced8674cbd81884c1543a61ea9d4c8
This change increments users' failed_login_count in the database layer to avoid a potential data race.
It also updates login_lockout_expiration in the same operation.
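Conceptually the whole read-modify-write collapses into one statement,
sketched here with the column names from above (the real query may differ):

    package consoledb

    import (
        "context"
        "database/sql"
        "time"
    )

    // incrementFailedLogins bumps the counter and sets the lockout
    // expiration atomically in the database, so concurrent login attempts
    // cannot race on a read-modify-write in Go.
    func incrementFailedLogins(ctx context.Context, db *sql.DB, userID []byte, lockoutExpiration time.Time) error {
        _, err := db.ExecContext(ctx,
            `UPDATE users
             SET failed_login_count = failed_login_count + 1,
                 login_lockout_expiration = $2
             WHERE id = $1`,
            userID, lockoutExpiration)
        return err
    }
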
see: https://github.com/storj/storj/issues/4986
Change-Id: I74624f1bee31667b269cb205d74d16e79daabcb6
After you create a brand new cluster (with storj-up, for example), project usage queries fail during the first 5 minutes.
The problem is the use of `AS OF SYSTEM TIME`, which points to a time when the master database didn't exist yet.
In this specific case the database-not-found error can be ignored to avoid such messages. (If the database is really missing, we will have problems much earlier, e.g. at login.)
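A deliberately simplified sketch of the check (the real code inspects the
specific error, not the message text):

    package accounting

    import "strings"

    // ignorableAsOfSystemTimeError reports whether an error from an
    // AS OF SYSTEM TIME query can be ignored because the database did not
    // yet exist at the requested timestamp. String matching is for
    // illustration only.
    func ignorableAsOfSystemTimeError(err error) bool {
        return err != nil && strings.Contains(err.Error(), "does not exist")
    }
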
Change-Id: I51ee78994d91fc2a14b56646402faaaa8154c934
This patch addresses the following issues:
1. Running the full migration in CockroachDB is quite slow. We already have an approach for unit tests to start from the latest snapshot. This patch makes it possible to use it for integration tests.
2. Migration requires executing a separate command, which makes it hard to run the application in containerized test environments (like storj-up) or from an IDE. This patch introduces a hidden flag to run the migration.
3. Test user creation is painful. We do it by calling GraphQL + the admin API. Providing an option to create a test user makes the integration tests significantly simpler (especially as the projectID -> access grant can be predictable).
Change-Id: I61010728727b490ea6aac32620f2da0484966727
Add a new project db method, GetSalt, to get the project salt. If the
salt column is empty, return the SHA-256 hash of the project ID. This
new method is used in the metainfo endpoint ProjectInfo to return the
project salt to the client. This is backwards compatible because the
salt column is not populated yet, so the updated endpoint will
do the same thing as the current endpoint.
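The fallback is essentially the following (sketch; the project ID is
shown as a plain byte slice):

    package console

    import "crypto/sha256"

    // saltForProject returns the stored salt, or the SHA-256 hash of the
    // project ID while the salt column is still unpopulated.
    func saltForProject(projectID, storedSalt []byte) []byte {
        if len(storedSalt) > 0 {
            return storedSalt
        }
        hash := sha256.Sum256(projectID)
        return hash[:]
    }
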
Change-Id: I7eba376c865e10995a5a916302feca7cd7c7efa2
Piece counts in the DB are stored as int64, and we would like to align
bloom filter processing with this type.
Change-Id: Iaec767e609a40d802077ae057520541805a7c44f
Remove pkg satellite/payments/monetary as it moved to storj.io/common.
Update all code pkg references from monetary to common/currency.
Change-Id: If2519f4c80cf315a9299e6521a6b9bbc6c399156
This change adds the following endpoints:
- projects/apikeys/{id}: returns a paged list of API keys for the
project specified by the given ID
- apikeys/delete/{id}: deletes the API key specified by the given ID
Additionally, the API Go code generator has been given the ability to
process unsigned integer parameters.
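Handling such a parameter comes down to a standard-library call; a sketch
of what the generated handlers need to do (the generated code itself may
look different):

    package apigen

    import (
        "fmt"
        "strconv"
    )

    // parseUintParam converts a path or query parameter into a uint64.
    func parseUintParam(name, value string) (uint64, error) {
        v, err := strconv.ParseUint(value, 10, 64)
        if err != nil {
            return 0, fmt.Errorf("invalid unsigned integer parameter %q: %w", name, err)
        }
        return v, nil
    }
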
Change-Id: I5ff24e012da24a3f06bea1ebb62bae6ff62f951a
Add the user's current wallet balance to the endpoints for claiming and listing storjscan wallets. Also prevent a user with a claimed wallet address from claiming a new wallet.
Change-Id: I0dbf1303699f924d05c8c52359038dc5ef6c42a1
Adds the USDollarsMicro currency to the billing DB, which supports fractions of a cent with decimal places for better billing amount accuracy.
Change-Id: Id07dfae104d94e27c7b22ab8f5781010e16c4c8e
Removed the batch insert of payments since it does not guarantee order. The order of payments sent to the payments DB is important, because the billing chore will request new payments based on the last received payment. If the last payment inserted is not the last payment received, duplicate payments will be inserted into the billing table.
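Instead of a batch insert, payments are now written one at a time in the
order they were received, along the lines of this sketch (hypothetical
type and helper):

    package storjscan

    import "context"

    // Payment is a hypothetical stand-in for a storjscan payment record.
    type Payment struct {
        BlockNumber int64
        LogIndex    int
    }

    // insertInOrder writes payments one by one, preserving arrival order
    // so the billing chore's "last received payment" cursor stays correct.
    // insertOne is a placeholder for the single-row insert.
    func insertInOrder(ctx context.Context, insertOne func(context.Context, Payment) error, payments []Payment) error {
        for _, p := range payments {
            if err := insertOne(ctx, p); err != nil {
                return err
            }
        }
        return nil
    }
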
Change-Id: Ic3335c89fa8031f7bc16f417ca23ed83301ef8f6
This concludes the saga that began with commit c053bdbd, migrating away
from using gob-encoded big.Float values in the database to using
integers with an implied number of decimal places.
Nothing is using or accessing or expecting the _gob columns to exist
anymore. The transition code and the migration chore are gone. All that
remains is to drop the columns.
Change-Id: I9b15ee52f7781510a6dc91cf7c54f7f9022b1210
Sessions now expire after a much shorter amount of time, requiring
clients to issue API requests for session extension. This is handled
behind the scenes as the user interacts with the page, but once session
expiration is imminent, a modal appears which informs the user of his
inactivity and presents him with the choice of logging out or preserving
his session.
Change-Id: I68008d45859c814a835d65d882ad5ad2199d618e
This change tracks signup captcha scores in the signup_captcha column in the users table.
It slightly modifies the captcha verify method to return both the score and success.
see: https://github.com/storj/storj/issues/5067
Change-Id: I7b3993e44958cfcf179806c7df19d6887fe3eda9
Use the provided ConstraintViolation method of the pgutil package rather than importing jackc pgxerrcode directly.
Change-Id: I4e86713000b3f5f0aadd54beee8ee239f0c8df8e
Following the changes made to fix the storage usage graph
on the storagenode dashboard, we added a new
interval_end_time column to the accounting_rollups table.
We noticed a two-day delay in the graph; it turns out the sub-query
was wrong due to the conflicting interval_end_time column
in both tables, so we have to explicitly state which table's
column we are referring to.
Also, set the default for interval_end_time in the accounting
rollups to the start_time if the interval_end_time is null;
this fallback will be removed once we backfill the column and
alter it to be a non-nullable column.
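The two parts of the fix look roughly like this (the real query selects
more columns):

    package accounting

    // exampleRollupQuery sketches the fix: the interval_end_time column is
    // qualified with its table name, and a NULL value falls back to
    // start_time until the column is backfilled.
    const exampleRollupQuery = `
        SELECT COALESCE(accounting_rollups.interval_end_time,
                        accounting_rollups.start_time) AS interval_end_time
        FROM accounting_rollups
        WHERE accounting_rollups.node_id = $1
    `
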
Updates https://github.com/storj/storj/issues/4178
Change-Id: Iff32b261d07b6ee219d2b6b6542377f0c54633a1
Database migration tests are rather slow; reduce them to
only the last 10 migrations, which should be sufficient.
Change-Id: Ib9d964fe6ec86ddeeef26c66b6ea9207b7868855
The signup_captcha column will hold the captcha scores of new users.
see: https://github.com/storj/storj/issues/5067
Change-Id: Ia322af29a3b5b019b417843272506a3dbd1397e4
This is in response to community feedback that our existing reputation
calculation is too likely to disqualify storage nodes unfairly with
extreme swings up and down.
For details and analysis, please see the data_loss_vs_dq_chance_sim.py
tool, the "tuning reputation further.ipynb" Jupyter notebook in the
storj/datascience repository, and the discussion at
https://forum.storj.io/t/tuning-audit-scoring/14084
In brief: changing the lambda and initial-alpha parameters in this way
causes the swings in reputation to be smaller and less likely to put a
node past the disqualification threshold unfairly.
Note: this change will cause a one-time reset of all (non-disqualified)
node reputations, because the new initial alpha value of 1000 is
dramatically different, and the disqualification threshold is going to
be much higher.
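For reference, a simplified sketch of the alpha/beta update these
parameters feed into (assuming the standard forgetting-factor model with
unit weight; the real implementation has more inputs):

    package reputation

    // auditUpdate applies one audit outcome: lambda discounts history, and
    // the score is alpha / (alpha + beta). With a large initial alpha
    // (e.g. 1000), a single failed audit moves the score far less than
    // before.
    func auditUpdate(alpha, beta, lambda float64, success bool) (newAlpha, newBeta, score float64) {
        v := -1.0
        if success {
            v = 1.0
        }
        newAlpha = lambda*alpha + (1+v)/2
        newBeta = lambda*beta + (1-v)/2
        score = newAlpha / (newAlpha + newBeta)
        return newAlpha, newBeta, score
    }
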
Change-Id: Id6dc4ba8fde1be3db4255b72282207bab5491ca3
Adding an index to the timestamp field of the billing transaction table to improve query performance. This should prevent having to do a full table scan when we query for the last billing transaction of a particular source and/or type.
Change-Id: I581f09494cc8662a12efba4302022a07121ba309
Adding an index to the wallet address field of the storjscan wallet table to improve query performance. This should prevent having to do a full table scan when we query for one or more wallet addresses for a given user in our queries.
Change-Id: Ic1b5d06c2258489e5464d186fed5270172f8cba5
The new dashboard currently gets stuck on loading and displays an error when
it fails to get usage data. The failure happens on satelliteDB due to a CockroachDB transaction error
caused by reading data before using AS OF SYSTEM TIME in the same transaction.
This change reverses the order of the daily usage queries to avoid this error.
It also hides the loaders on the dashboard if/when an error occurs.
see: https://github.com/storj/storj/issues/5012
Change-Id: I06b6ee434f72242f9b7d21dec7aaf39d1d622f1e