This change increments users' failed_login_count in the database layer to avoid a potential data race.
It also updates login_lockout_expiration in the same operation.
see: https://github.com/storj/storj/issues/4986
Change-Id: I74624f1bee31667b269cb205d74d16e79daabcb6
After you create a brand new cluster (with storj-up, for example), project usage fails during the first 5 minutes.
The problem is the usage of `AS OF SYSTEM TIME`, which points to a time when the master database didn't yet exist.
In this specific case the database-not-found error can be ignored to avoid such messages. (If the database were really missing, we would run into problems much earlier, e.g. at login.)
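A minimal sketch of the kind of check this implies (the helper and the error matching here are assumptions, not the actual satellite code):
```
package console

import "strings"

// ignoreMissingDB is a hypothetical helper: on a brand new cluster an
// `AS OF SYSTEM TIME` timestamp can point to before the database existed,
// so a "database does not exist" error is treated as "no usage yet"
// instead of failing the request.
func ignoreMissingDB(err error) error {
	if err != nil && strings.Contains(err.Error(), "does not exist") {
		return nil // if the DB were truly missing, login would have failed long before this
	}
	return err
}
```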
Change-Id: I51ee78994d91fc2a14b56646402faaaa8154c934
This patch addresses the following issues:
1. Running the full migration in CockroachDB is quite slow. We already have an approach for unit tests to start from the latest snapshot. This patch makes it possible to use it for integration tests as well.
2. Migration requires executing a separate command, which makes it hard to run the application in containerized test environments (like storj-up) or from an IDE. This patch introduces a hidden flag to run the migration.
3. Test user creation is painful. We do it by calling GraphQL + the admin API. Providing an option to create a test user makes the integration tests significantly simpler (especially as the projectID -> access grant mapping can be predictable).
Change-Id: I61010728727b490ea6aac32620f2da0484966727
Add a new project db method, GetSalt, to get the project salt. If the salt
column is empty, return the sha-256 hash of the project ID. This
new method is used in the metainfo endpoint ProjectInfo to return the
project salt to the client. This is backwards compatible because
the salt column is not populated yet. The updated endpoint will
do the same thing as the current endpoint.
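A rough sketch of the fallback behavior (the lookup helper and wiring are assumptions; only the sha-256-of-project-ID fallback is from this change):
```
package satellitedb

import (
	"context"
	"crypto/sha256"

	"storj.io/common/uuid"
)

// projects is a stand-in for the satellitedb projects implementation.
type projects struct {
	getSaltColumn func(ctx context.Context, id uuid.UUID) ([]byte, error) // hypothetical column lookup
}

// GetSalt returns the stored project salt; if the salt column is empty,
// it falls back to the sha-256 hash of the project ID so existing rows
// keep behaving exactly as the current endpoint does.
func (p *projects) GetSalt(ctx context.Context, id uuid.UUID) ([]byte, error) {
	salt, err := p.getSaltColumn(ctx, id)
	if err != nil {
		return nil, err
	}
	if len(salt) == 0 {
		hash := sha256.Sum256(id[:])
		return hash[:], nil
	}
	return salt, nil
}
```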
Change-Id: I7eba376c865e10995a5a916302feca7cd7c7efa2
Piece counts in the DB are stored as int64, and we would like to align bloom
filter processing with this type.
Change-Id: Iaec767e609a40d802077ae057520541805a7c44f
Remove pkg satellite/payments/monetary as it moved to storj.io/common.
Update all code pkg references from monetary to common/currency.
Change-Id: If2519f4c80cf315a9299e6521a6b9bbc6c399156
This change adds the following endpoints:
- projects/apikeys/{id}: returns a paged list of API keys for the
project specified by the given ID
- apikeys/delete/{id}: deletes the API key specified by the given ID
Additionally, the API Go code generator has been given the ability to
process unsigned integer parameters.
Change-Id: I5ff24e012da24a3f06bea1ebb62bae6ff62f951a
Add the user's current wallet balance to the endpoints for claiming and listing storjscan wallets. Also prevent a user with a claimed wallet address from claiming a new wallet.
Change-Id: I0dbf1303699f924d05c8c52359038dc5ef6c42a1
Adds the USDollarsMicro currency to the billing DB, which supports fractions of a cent with decimal places for better billing amount accuracy.
Change-Id: Id07dfae104d94e27c7b22ab8f5781010e16c4c8e
Removed batch insert of payments since they do not guarantee order. Order of payments sent to the payments DB is important, because the billing chore will request new payments based on the last received payment. If the last payment inserted is not the last payment received, duplicate payments will be inserted into the billing table.
Change-Id: Ic3335c89fa8031f7bc16f417ca23ed83301ef8f6
This concludes the saga that began with commit c053bdbd, migrating away
from using gob-encoded big.Float values in the database to using
integers with an implied number of decimal places.
Nothing is using or accessing or expecting the _gob columns to exist
anymore. The transition code and the migration chore are gone. All that
remains is to drop the columns.
Change-Id: I9b15ee52f7781510a6dc91cf7c54f7f9022b1210
Sessions now expire after a much shorter amount of time, requiring
clients to issue API requests for session extension. This is handled
behind the scenes as the user interacts with the page, but once session
expiration is imminent, a modal appears which informs the user of his
inactivity and presents him with the choice of logging out or preserving
his session.
Change-Id: I68008d45859c814a835d65d882ad5ad2199d618e
This change tracks signup captcha scores in the signup_captcha column in the users table.
It slightly modifies the captcha verify method to return both the score and success.
see: https://github.com/storj/storj/issues/5067
Change-Id: I7b3993e44958cfcf179806c7df19d6887fe3eda9
Use the provided ConstraintViolation method of the pgutil package rather than importing jackc pgxerrcode directly.
Change-Id: I4e86713000b3f5f0aadd54beee8ee239f0c8df8e
Following the changes made to fix the storage usage graph
on the storagenode dashboard, we added a new
interval_end_time column to the accounting_rollups table.
We noticed a two-day delay in the graph; it turned out the sub-query
was wrong because the interval_end_time column exists
in both tables, so we have to explicitly state which table's
column we are referring to.
Also, default interval_end_time in the accounting
rollups to the start_time if the interval_end_time is null; this default
will be removed once we backfill the column and alter
it to be a non-nullable column.
Updates https://github.com/storj/storj/issues/4178
Change-Id: Iff32b261d07b6ee219d2b6b6542377f0c54633a1
Database migration tests are rather slow, reduce them to
only the last 10 migrations, which should be sufficient.
Change-Id: Ib9d964fe6ec86ddeeef26c66b6ea9207b7868855
The signup_captcha column will hold the captcha scores of new users.
see: https://github.com/storj/storj/issues/5067
Change-Id: Ia322af29a3b5b019b417843272506a3dbd1397e4
This is in response to community feedback that our existing reputation
calculation is too likely to disqualify storage nodes unfairly with
extreme swings up and down.
For details and analysis, please see the data_loss_vs_dq_chance_sim.py
tool, the "tuning reputation further.ipynb" Jupyter notebook in the
storj/datascience repository, and the discussion at
https://forum.storj.io/t/tuning-audit-scoring/14084
In brief: changing the lambda and initial-alpha parameters in this way
causes the swings in reputation to be smaller and less likely to put a
node past the disqualification threshold unfairly.
Note: this change will cause a one-time reset of all (non-disqualified)
node reputations, because the new initial alpha value of 1000 is
dramatically different, and the disqualification threshold is going to
be much higher.
Change-Id: Id6dc4ba8fde1be3db4255b72282207bab5491ca3
Adding an index to the timestamp field of the billing transaction table to improve query performance. This should prevent having to do a full table scan when we query for the last billing transaction of a particular source and/or type.
Change-Id: I581f09494cc8662a12efba4302022a07121ba309
Adding an index to the wallet address field of the storjscan wallet table to improve query performance. This should prevent having to do a full table scan when we query for one or more wallet addresses for a given user in our queries.
Change-Id: Ic1b5d06c2258489e5464d186fed5270172f8cba5
The new dashboard currently gets stuck on loading and displays an error when
it fails to get usage data. Failure happens on satelliteDb due to a cockroach transaction error
caused by reading data before using AS OF SYSTEM TIME in the same transaction.
This change reverses the order of the daily usage queries to avoid this error,
and hides the loaders on the dashboard if/when an error occurs.
see: https://github.com/storj/storj/issues/5012
Change-Id: I06b6ee434f72242f9b7d21dec7aaf39d1d622f1e
I don't know why the go people thought this was a good idea, because
this automatic reformatting is bound to do the wrong thing sometimes,
which is very annoying. But I don't see a way to turn it off, so best to
get this change out of the way.
Change-Id: Ib5dbbca6a6f6fc944d76c9b511b8c904f796e4f3
During an update to the billing DB, there is a special-case failure that can occur if multiple updates to the table happen concurrently. In this case, the update would normally fail silently due to the balance constraint during update, and the subsequent insert for a new record fails because the user already exists in the table. The solution for this case is to simply retry the insert, with some limit to prevent infinite loops.
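A simplified sketch of that retry pattern (the interface, names, and retry bound are illustrative only):
```
package billing

import "context"

// balanceDB is an illustrative stand-in for access to the billing balance table.
type balanceDB interface {
	TryUpdate(ctx context.Context, userID []byte, delta int64) error // can fail under the balance constraint
	TryInsert(ctx context.Context, userID []byte, delta int64) error // can fail if the user row already exists
}

const maxUpsertRetries = 5 // bounded so concurrent writers cannot cause an infinite loop

// upsertBalance retries the update-then-insert sequence a limited number of
// times; a concurrent writer can make a single pass fail on both steps.
func upsertBalance(ctx context.Context, db balanceDB, userID []byte, delta int64) error {
	var err error
	for i := 0; i < maxUpsertRetries; i++ {
		if err = db.TryUpdate(ctx, userID, delta); err == nil {
			return nil
		}
		if err = db.TryInsert(ctx, userID, delta); err == nil {
			return nil
		}
	}
	return err // retries exhausted
}
```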
Change-Id: Ibe70fec2c386c25bd2484fe91f49a6a962357706
Update the billing table to use generated IDs, to include the source and status fields, and to add metadata jsonb field that can be used for fields specific to a particular type of billing transaction. Additionally a new table was added to keep track of user balances
Change-Id: Ieb3a63aafd8fe21fc3386bafd43d52081b7d2838
Adding an interval_end_time column to the accounting_rollups table
to keep the last interval_end_time for each daily storage tally.
Updates https://github.com/storj/storj/issues/4178
Change-Id: If7a8210c5e9fe2fc9df84b137a8b6e3db2471c58
Currently we have a significant number of tallies that need to be
deleted together. Add a limit (by default 10k) to how many will
be deleted at the same time.
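Sketch of the bounded delete loop (table and column names are illustrative; DELETE ... LIMIT is a CockroachDB extension):
```
package accounting

import (
	"context"
	"database/sql"
	"time"
)

// deleteTalliesBefore removes old tallies in batches of at most `limit` rows
// (10k by default per this change), looping until a batch comes back short.
func deleteTalliesBefore(ctx context.Context, db *sql.DB, before time.Time, limit int) error {
	for {
		res, err := db.ExecContext(ctx,
			`DELETE FROM bucket_storage_tallies WHERE interval_start < $1 LIMIT $2`,
			before, limit)
		if err != nil {
			return err
		}
		affected, err := res.RowsAffected()
		if err != nil {
			return err
		}
		if affected < int64(limit) {
			return nil
		}
	}
}
```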
Change-Id: If530383f19b4d3bb83ed5fe956610a2e52f130a1
Some nodes were added to the nodes table due to a bug in quic
based storagenode contact code. This is a tool to clean up these nodes.
Delete with batch-size 1k seems to take ~400ms on local CockroachDB.
Change-Id: Ic0c1180528c27952e19c431fc9cc327292a10a5f
Add a Payments method to the DepositWallets interface.
It exposes a payments retrieval API for a particular wallet to
other systems such as the console billing API.
Change-Id: Ifcb3a35514aab50be00f6360007954980b5d8b38
The users.Update method in the satellitedb package takes a console.User
as an argument. It reads some of the fields on this struct and assigns
the value to dbx.User_Update_Fields. However, you cannot optionally
update only some of the fields. They all will always be updated. This means
that if you only want to update FullName, you still need to read the
user info from the DB to avoid updating the rest of the fields to zero.
This is not good because concurrent updates can overwrite each other.
This change introduces a new struct type, UpdateUserRequest, which
contains pointers for all the fields that are updated by satellite db
users.Update. Now the update method will check if a field is nil before
assigning the value to be updated in the db, so you only need to set the
field you want updated. For nullable columns, the respective field is a
double pointer. This allows us to update a column to NULL if the outer
pointer is not nil, but the inner pointer is.
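A trimmed sketch of the new request type and how Update consults it (the field list is abbreviated; the dbx wiring appears only as a rough comment):
```
package console

// UpdateUserRequest carries only the fields the caller wants changed.
// A nil pointer means "leave this column alone".
type UpdateUserRequest struct {
	FullName *string
	Email    *string
	PaidTier *bool

	// Nullable column: outer nil = no change;
	// outer non-nil with inner nil = set the column to NULL.
	ShortName **string
}

// Inside satellitedb users.Update, each field is checked before being
// assigned into dbx.User_Update_Fields, roughly like:
//
//	if request.FullName != nil {
//		updateFields.FullName = dbx.User_FullName(*request.FullName)
//	}
```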
Change-Id: I27f842d283c2711e24d51dcab622e57eeb9157f1
There are multiple entries in the users table with the same email
address. This is because in the past users were able to register
multiple times if the email was not verified. This is no longer
the case. If a user tries to register with an unverified email
already in the DB, we send a verification email instead of
creating another entry. However, since these old entries in the
table with duplicate emails were never cleaned up, the email
reminder chore will send out email verification reminders to them.
A single person will get one email for every entry in the DB
that has their email address and status = 0.
Since the multiple entries with the same email problem was solved
a while ago, just add a constraint to GetUnverifiedNeedingReminder
to only select users created after a cutoff. Once the DB is
migrated to remove the duplicate emails, we can remove the cutoff.
github issue: https://github.com/storj/storj/issues/4853
Change-Id: I07d77d43109bcacc8909df61d4fb49165a99527c
The ApplyUpdates() method on the reputation.DB interface acts like the
similar Update() method, but can allow for applying the changes from
multiple audit events, instead of only one.
This will be necessary for the reputation write cache, which will batch
up changes to each node's reputation in order to flush them
periodically.
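Roughly, the interface grows along these lines (the types and exact signature here are guesses beyond the method name):
```
package reputation

import (
	"context"
	"time"

	"storj.io/common/storj"
)

// Mutations is a hypothetical stand-in for the accumulated outcome of
// several audit events against one node (counts only, in this sketch).
type Mutations struct {
	Successes int
	Failures  int
	Unknowns  int
	Offlines  int
}

// Info is an abridged stand-in for a node's stored reputation record.
type Info struct {
	Disqualified *time.Time
	// ... other reputation fields omitted
}

// DB is the reputation database interface, abridged to the new method.
type DB interface {
	// ApplyUpdates applies the changes from many audit events in one call,
	// which is what the periodic flush of the write cache needs.
	ApplyUpdates(ctx context.Context, nodeID storj.NodeID, updates Mutations, now time.Time) (*Info, error)
}
```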
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I44cc47767ea2d9423166bb8fed080c8a11182041
Updated daily project usage query to return correct allocated traffic.
If allocated egress has expired then we return settled egress.
If not then we return allocated egress - dead egress.
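The per-day rule expressed as a tiny sketch (the row type and names are illustrative):
```
package consoleapi

// dayUsage is an illustrative stand-in for one day's egress row.
type dayUsage struct {
	AllocatedEgress   int64
	SettledEgress     int64
	DeadEgress        int64
	AllocationExpired bool
}

// dailyEgress applies the rule above: settled egress once the allocation
// has expired, otherwise allocated minus dead egress.
func dailyEgress(u dayUsage) int64 {
	if u.AllocationExpired {
		return u.SettledEgress
	}
	return u.AllocatedEgress - u.DeadEgress
}
```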
Fix for this issue
https://github.com/storj/storj/issues/4563
Change-Id: Ia15a50d3bb8d8cb1106936e17dbe0f1f5a40fa87
Fixed daily usage query returning single bucket usage.
We sum up bucket usages now.
Also fixed https://github.com/storj/storj/issues/4559.
Change-Id: I2eb6299f1ef500d68150879195011b6fbb5f37ed
Add billing DB to the satellite. This DB will hold all transactions on the user's account and can be used to compute the user's current usable account balance.
Change-Id: I056416efc169e5e5e30c9f30cd8bc766b7bc8073
This has been a cause of some confusion, even though the fields are
labeled as being copies of config values.
Having them be under a field explicitly named "Config" makes this
clearer, plus, allows the values to be passed in simply as a copy
of the Config struct from the satellite, rather than copying the fields
individually (which can be error-prone, particularly as the AuditCount
field in UpdateRequest is apparently not the same thing as the
AuditCount field in reputation.Config).
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I386953347b71068596618616934aa28e3245cdc1
Add storjscan wallets DB to the satellite. For now this DB is a one-to-one mapping of the user's account to a storjscan wallet that can be used by the account holder to make payments on their Storj account.
relates to https://github.com/storj/storj/issues/4347
Change-Id: I6e65b15817b90ceb75641244f9bf173c3b4228a7
The two protobuf types are identical except that one is in our common/pb
package, and the other is in internalpb. Since the type is public
already, and there is no difference in the internal one, it seems better
to use the public one for all satellite needs.
There is also another type which is essentially identical, but which is
not a protobuf type, also called "AuditHistory". It looks like we don't
ever actually need to have a separate type from the protobuf one.
This change makes us use "storj/common/pb".AuditHistory for all of our
AuditHistory needs.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: If845fde21bb31c801db6d67ffc9a146d1617b991
This functionality will be needed in both packages, so here we move it
into the more general reputation-code package and export it for use in
satellitedb.
This also removes the related UpdateAuditHistory() signature from the
reputation DB interface, since it doesn't have anything to do with the
db. It doesn't need to be a method, either.
Finally, this changes the test for addAudit to be a plain test function
instead of using testplanet.Run(). It didn't need a whole testplanet
setup or any databases.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I90f6a909e5404f03ad776b95cfa2f248308c57c1
Márton found out that DROP DATABASE is rather slow on CRDB, and it makes
a significant impact when running the whole test suite: in total
test time it's ~2.5h compared to ~2h, and end-to-end goes from ~20m to ~16m.
This adds a new flag STORJ_TEST_COCKROACH_NODROP for enabling this
behavior in the CI environment.
Fixes https://github.com/storj/dev-enablement/issues/6
Change-Id: I5a6616c32dc6596a96ba3d203f409368307d7438
This will let us update our reputation cache when writing through to the
db.
Since the information is already being fetched from the db and returned
to the application, the extra cpu load here should be minimal.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I2b8619f2c0d541893c7d3e7d33b1863b96775ebd
We want to remind unverified users to verify their emails:
once after 24 hours have passed and again after 5 days have passed.
Add mailservice.Service to satellite core because it is needed by the
chore for sending emails. To add the mailservice.Service to the core,
we create a helper function in satellite/peer.go to avoid duplicating
the code in both api.go and core.go. In addition to the chore, this
change adds methods to users.DB to get unverified users in need of
reminder.
Change-Id: I4e515bdf43f922788b4f965b2efb34fa32288bd1
We added nodes.disqualification_reason recently, but we didn't add a
corresponding column in the reputations table (despite having a
corresponding `disqualified` column there).
Without this change, the (very useful and informative) assignments to
updateFields.DisqualificationReason in reputations.go have no effect.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I77404902ca64b56aed72f1de76b303fe82b76aab
When a new user registers, we send a verification request to their
email. Currently, if they do not verify their email, we take no further
action. We want to send these users reminders: one after about one day
and one after about 5 days. To do this we will use this new
verification_reminders column.
It will look something like this:
```
SELECT email FROM users
WHERE status = 0
AND (
(verification_reminders = 0 AND created_at < now() - INTERVAL '1 day')
OR (verification_reminders = 1 AND created_at < now() - INTERVAL '5 days')
)
```
Change-Id: If0620e08c97e9e337c9563481d665c5bd462693b
Fix for this customer issue
https://github.com/storj/customer-issues/issues/34
With this change we fetch bucket usage since the bucket's creation instead of using the project's createdAt timestamp.
Change-Id: Ic0ea5d169056a5bd64ed143d13954d794da6e1d2
Things that make debugging easier.
* Added logging to automatic link clicking to make it obvious when it
fails.
* Added monitoring to oidc.
* Made dbx create calls noreturn for oauth_*
Change-Id: I37397b4e84ce5bfd82954aed9c38fdfd52595f24
We were already able to override (or not) metadata with this method,
but to be explicit we are introducing a new option to control storing
metadata with an object. A separate option should be less error prone.
https://github.com/storj/team-metainfo/issues/105
Change-Id: I4c5bce953a633a0009b05c5ca84266ca6ceefc26
Last part of backwards compatible db migration to remove "suspended" column.
Removes the exception in `migrate_test.go` which removed the "suspended" column in tests.
Adds DB migration to remove "suspended" column from 'nodes' and 'reputations' tables.
Change-Id: I02051279f6f4181e966c567919af0e774583f165
Set disqualification reason when reputation stats are updated on DB.Update.
Added tests for DisqualifyNode and for disqualification cases which happen during Update.
Change-Id: I00130ab5d9722422805159ad2f183c205de60f7e
Attribution is attached to bucket usage, but that's more granular than
necessary for the attribution report. This change iterates over the
bucket attributions, parses the user agent, converts the first entry
to lower case, and uses that as the key to a map which holds the
attribution totals for each unique user agent.
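In outline, the aggregation looks something like this (the row type and the user-agent parsing are simplified stand-ins):
```
package attribution

import "strings"

// BucketUsage is a simplified stand-in for one bucket's attribution row.
type BucketUsage struct {
	UserAgent string
	ByteHours float64
}

// totalsByUserAgent folds per-bucket usage into per-user-agent totals,
// keyed by the lower-cased first entry of the user agent string.
func totalsByUserAgent(rows []BucketUsage) map[string]float64 {
	totals := make(map[string]float64)
	for _, row := range rows {
		// Take the first product token of the user agent; splitting on '/'
		// and spaces here is a simplification of the real parsing.
		first := row.UserAgent
		if i := strings.IndexAny(first, " /"); i >= 0 {
			first = first[:i]
		}
		totals[strings.ToLower(first)] += row.ByteHours
	}
	return totals
}
```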
Change-Id: Ib2962ba0f57daa8a7298f11fcb1ac44a8bb97875
Migrate free tier users to have default limits if their limits were set to 0.
They were affected by the incorrect behavior of the Update user query.
Change-Id: I4c49c8d99b12dba2b9b0ab61b2175085976dcc95
Added failed_login_count and login_lockout_expiration columns to the users table to control users' failed login attempts.
We want to prevent brute-forcing of user logins, so this is the first step.
Change-Id: I06b0b9f5415a1922e08cd9908893b2fd3c26bca0
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage with byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees so return segment-hours and object-hours too.
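As a concrete illustration (numbers invented here): three hourly tally rows
of 100 GB each previously reported 300/3 = 100 GB, the average stored per
hour; summed without the division they report 300 GB-hours, which is the
quantity billing actually charges for.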
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
This sets the corresponding _numeric columns to be NOT NULL (it has been
verified manually that there are no more NULL _numeric values on any
known satellites, and it should be impossible with current code to get
new NULL values in the _numeric columns).
We can't drop the _gob columns immediately, as there will still be code
running that expects them, but once this version is deployed we can
finally drop them and be totally done with this crazy 5-step migration.
Change-Id: I518302528d972090d56b3eedc815656610ac8e73
We are in the process of creating an api to allow users to manage their
accounts programmatically. We would like to use api keys for
authorization. We were originally going to create an entirely new table
for these api keys, but seeing as we already have 2 other tables for
keys/tokens, api_keys and oauth_tokens, we thought it might be better to
use one of these. We're using oauth_tokens.
We create a new oidc.OAuthTokenKind for account management api keys:
KindAccountManagementTokenV0. We made the key versioned because we
likely want to improve the implementation in the future, but we want to
get something functional out the door ASAP because the account management
api feature is highly desired.
Add a new method to oidc.OAuthTokens interface for revoking v0 account
management api keys, RevokeAccountManagementTokenV0. Add update method
to dbx implementation to allow updating the expiration. We will revoke
these keys by setting the expiration to 0 so they are expired.
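Abridged shape of the change (only the method name comes from this change; the kind's numeric value and the parameter types are assumptions):
```
package oidc

import "context"

// OAuthTokenKind distinguishes the kinds of tokens stored in oauth_tokens.
type OAuthTokenKind int

// KindAccountManagementTokenV0 marks v0 account management API keys.
const KindAccountManagementTokenV0 OAuthTokenKind = 3 // numeric value is illustrative

// OAuthTokens is abridged here to the new method; the parameter type is assumed.
type OAuthTokens interface {
	// RevokeAccountManagementTokenV0 revokes a v0 account management API key
	// by updating its expiration to zero so it is treated as already expired.
	RevokeAccountManagementTokenV0(ctx context.Context, token []byte) error
}
```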
Change-Id: Ideb8ae04b23aa55d5825b064b5e43e32eadc1fba
We have an issue with the latest CRDB. A single query cannot modify
the same table multiple times. The build is now blocked.
This change unblocks the build by:
* adjusting the query for inserting into the repair queue
* temporarily removing code for deletion for server-side copy
* temporarily disabling backwards compatibility tests for CRDB
Change-Id: Idd9744ebd228e5dc05bdaf65cfc8f779472a975d
Remove the redundant suspension timestamp column from the nodes and reputations tables.
The suspended timestamp was moved to unknown_audit_suspended, and the suspended column is
no longer used, so there is no point in keeping both.
Change-Id: Ieea3f12141b33ec9efe7594f4c9dbc7e10675b0e
When performing re-authorizations for OAuth, we need to pull up an
APIKey using its project id and name. This change also updates the
APIKeyInfo struct to return the head value associated with an API
key.
Change-Id: I4b40f7f13fb9b58a1927dd283b42a39015ea550e
Update the user to the default paid tier project limit, which is currently 3 projects, when the user upgrades to a paid account.
Change-Id: I95b19d62cebc7d878b716355f2ebcaf0b51ca3f7
For nodes in excluded areas, we don't necessarily want to remove them
from the pointer, but we do want to increase the number of pieces in the
segment in case those excluded area nodes go down. To do that, we
increase the number of pieces repaired by the number of pieces in
excluded areas.
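The adjustment is essentially one line of arithmetic (names are illustrative):
```
package repairer

// piecesToRepair returns how many new pieces to create for a segment:
// the usual shortfall against the optimal count, plus one extra piece
// for every healthy piece currently held in an excluded country.
func piecesToRepair(optimal, healthy, healthyInExcludedCountries int) int {
	return optimal - healthy + healthyInExcludedCountries
}
```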
Change-Id: I0424f1bcd7e93f33eb3eeeec79dbada3b3ea1f3a
We decided that we want the segment limit for paying users to be high
enough that we do not have to change it too often.
Fixes https://github.com/storj/storj/issues/4590
Change-Id: Ic1c38bf3e2fcc000548ff4c7e7004647b39fbecf
Added new projectaccounting query to get project's single bucket usage rollup.
Added new service method to call new query.
Added implementation for IsAuthenticated method which is used by new generated API.
Change-Id: I7cde5656d489953b6c7d109f236362eb465fa64a
Add a RepairExcludedCountryCodes config flag to the overlay for providing a list of country codes whose nodes should be excluded from target repair selection.
Mark segments with fewer than repairThreshold pieces in countries not in the RepairExcludedCountryCodes as not healthy.
With this change, the repair process is not affected. The segment will be removed from the repair queue by the repairer.
Another change will handle the logic at the repairer level.
Fixes https://github.com/storj/team-metainfo/issues/95
Change-Id: I9231b32de117a116488de055a3e94efcabb46e81
Remove special characters from the test database name to improve
compatibility with external tools like the Cockroach web gui.
Change-Id: I3a6a368a5051e3064e7279b14eec4f2d4ff3c435
All code on known satellites at this moment in time should know how to
populate and use the new numeric columns on the
stripecoinpayments_tx_conversion_rates and coinpayments_transactions
tables in the satellite db. However, there are still gob-encoded
big.Float values in the database from before these columns existed. To
get rid of those values, so that we can excise the gob-decoding code
from the relevant sections, however, we need something to read the gob
bytestrings and convert them to numeric values, a few at a time, until
they're all gone.
To accomplish that, this change adds two chores to be run in the
satellite core process- one for the coinpayments_transactions table, and
one for the stripecoinpayments_tx_conversion_rates table. They should
run relatively infrequently, so that we do not impose any undue load on
processing resources or the db.
Both of these chores work without using explicit sql transactions, but
should still be concurrent-safe, since they work by way of
compare-and-swap type operations.
If the satellite core process needs to be restarted, both of these
chores will start scanning for migrateable rows from the beginning of
the id space again. This is not ideal, but shouldn't be a problem (as
far as I can tell, there are only a few thousand rows at most in either
of these tables on any production satellite).
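The compare-and-swap step, sketched (the renamed column names and the exact SQL are assumptions based on the description above):
```
package gobmigration

import (
	"context"
	"database/sql"
)

// migrateRate converts one gob-encoded big.Float conversion rate to its
// numeric form. The WHERE clause re-checks the gob bytes, so if another
// writer already touched the row this becomes a harmless no-op, which is
// what makes the chore safe without an explicit transaction.
func migrateRate(ctx context.Context, db *sql.DB, txID string, gobBytes []byte, numeric string) error {
	_, err := db.ExecContext(ctx, `
		UPDATE coinpayments_transactions
		SET rate_numeric = $1
		WHERE id = $2 AND rate_gob = $3
	`, numeric, txID, gobBytes)
	return err
}
```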
Change-Id: I733b7cd96760d506a1cf52735f598c6c3aa19735
For a thorough explanation of the overall transition, see the message on
commit c053bdbd70.
This change will rename the columns containing gob-encoded big.Floats
and add new columns which will contain the equivalent data in a more
sql-friendly format.
The change should *not* break already-running satellite processes,
because all functionality touching these tables has already been taught
to work with these new columns if it sees any "undefined column" errors.
Change-Id: I229324376533e383c5d05064b8aedad149cf825b
This change adds some more checks to the deletion process for projects and
users, since we ran into a race condition during invoicing, where projects
have been deleted before the invoicing was finished, leading to missing
references.
This PR changes the logic to block user deletion if we are in exactly that period,
while also allowing the deletion of projects/users on free tier during the month.
Change-Id: Ic0735205e6633762fb7e3c2fa13e744cdfa5ec32
Finished implementing queries for both bandwidth and storage using pgx.Batch.
Fixed CSP styling issue.
Change-Id: I5f9e10abe8096be3115b4e1f6ed3b13f1e7232df
We have here two migrations in fact. One is for existing users:
we need to check if a user is a paying user (paid_tier) and set 1M for
them and 150K for others.
The second migration sets limits for projects depending on the owner.
If the owner is a paying user (paid_tier) then the project should have
the 1M limit, otherwise it should be 150K. In this case, to make the
migration faster, the projects table segment_limit was initially set
to 1M by default. With the migration we select all paying
users and set the 150K limit for all projects whose owners
are not in the paying users set.
Initially we had a concern whether that query would be quick enough to
be executed during deployment, but after investigation the CRDB
team confirmed that this should take seconds for our DBs.
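The second migration described above boils down to something like this (the column names are a best guess):
```
package satellitedb

import (
	"context"
	"database/sql"
)

// setFreeTierSegmentLimits lowers the default 1M segment limit to 150K for
// every project whose owner is not a paying (paid_tier) user.
func setFreeTierSegmentLimits(ctx context.Context, db *sql.DB) error {
	_, err := db.ExecContext(ctx, `
		UPDATE projects
		SET segment_limit = 150000
		WHERE owner_id NOT IN (
			SELECT id FROM users WHERE paid_tier = true
		)
	`)
	return err
}
```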
Fixes https://github.com/storj/team-metainfo/issues/70
Change-Id: I8be06e9f949b68b993e043cc15525e8483bf49ea