Add billing DB to the satellite. This DB will hold all transactions on the user's account and can be used to compute the user's current usable account balance.
Change-Id: I056416efc169e5e5e30c9f30cd8bc766b7bc8073
This has been a cause of some confusion, even though the fields are
labeled as being copies of config values.
Having them be under a field explicitly named "Config" makes this
clearer, and it also allows the values to be passed in simply as a copy
of the Config struct from the satellite, rather than copying the fields
individually (which can be error-prone, particularly as the AuditCount
field in UpdateRequest is apparently not the same thing as the
AuditCount field in reputation.Config).
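A minimal sketch of the shape this points toward (the field and type
names here are illustrative, not the exact satellite code):
```go
package reputation

// Config holds the reputation-related configuration on the satellite.
// Only one field is shown; the real struct has more knobs.
type Config struct {
	AuditCount int64
}

// Service keeps an explicit copy of the configuration under a field
// named Config, so callers pass the satellite's Config struct as-is
// instead of copying individual fields.
type Service struct {
	Config Config
}

func NewService(cfg Config) *Service {
	return &Service{Config: cfg}
}
```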
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I386953347b71068596618616934aa28e3245cdc1
Add storjscan wallets DB to the satellite. For now this DB is a one-to-one mapping of the user's account to a storjscan wallet that can be used by the account holder to make payments on their Storj account.
relates to https://github.com/storj/storj/issues/4347
Change-Id: I6e65b15817b90ceb75641244f9bf173c3b4228a7
The two protobuf types are identical except that one is in our common/pb
package, and the other is in internalpb. Since the type is public
already, and there is no difference in the internal one, it seems better
to use the public one for all satellite needs.
There is also another type which is essentially identical, but which is
not a protobuf type, also called "AuditHistory". It looks like we don't
ever actually need to have a separate type from the protobuf one.
This change makes us use "storj/common/pb".AuditHistory for all of our
AuditHistory needs.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: If845fde21bb31c801db6d67ffc9a146d1617b991
This functionality will be needed in both packages, so here we move it
into the more general reputation-code package and export it for use in
satellitedb.
This also removes the related UpdateAuditHistory() signature from the
reputation DB interface, since it doesn't have anything to do with the
db. It doesn't need to be a method, either.
Finally, this changes the test for addAudit to be a plain test function
instead of using testplanet.Run(). It didn't need a whole testplanet
setup or any databases.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I90f6a909e5404f03ad776b95cfa2f248308c57c1
Márton found out that DROP DATABASE is rather slow on CRDB, and it makes
a significant impact when running the whole test suite: the summed test
time drops from ~2.5h to ~2h, and the end-to-end suite from ~20m to ~16m
when databases are not dropped.
This adds a new flag, STORJ_TEST_COCKROACH_NODROP, for enabling this
behavior in the CI environment.
Fixes https://github.com/storj/dev-enablement/issues/6
Change-Id: I5a6616c32dc6596a96ba3d203f409368307d7438
This will let us update our reputation cache when writing through to the
db.
Since the information is already being fetched from the db and returned
to the application, the extra cpu load here should be minimal.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I2b8619f2c0d541893c7d3e7d33b1863b96775ebd
We want to remind unverified users to verify their emails:
once after 24 hours have passed and again after 5 days have passed.
Add mailservice.Service to satellite core because it is needed by the
chore for sending emails. To add the mailservice.Service to the core,
we create a helper function in satellite/peer.go to avoid duplicating
the code in both api.go and core.go. In addition to the chore, this
change adds methods to users.DB to get unverified users in need of
reminder.
Change-Id: I4e515bdf43f922788b4f965b2efb34fa32288bd1
We added nodes.disqualification_reason recently, but we didn't add a
corresponding column in the reputations table (despite having a
corresponding `disqualified` column there).
Without this change, the (very useful and informative) assignments to
updateFields.DisqualificationReason in reputations.go have no effect.
Refs: https://github.com/storj/storj/issues/4601
Change-Id: I77404902ca64b56aed72f1de76b303fe82b76aab
When a new user registers, we send a verification request to their
email. Currently, if they do not verify their email, we take no further
action. We want to send these users reminders: one after about one day
and one after about 5 days. To do this we will use this new
verification_reminders column.
It will look something like this:
```
SELECT email FROM users
WHERE status = 0
AND (
    (verification_reminders = 0 AND created_at < now() - INTERVAL '1 day')
    OR (verification_reminders = 1 AND created_at < now() - INTERVAL '5 days')
)
```
Change-Id: If0620e08c97e9e337c9563481d665c5bd462693b
Fix for this customer issue:
https://github.com/storj/customer-issues/issues/34
With this change we fetch bucket usage since the bucket's creation instead of using the project's createdAt timestamp.
Change-Id: Ic0ea5d169056a5bd64ed143d13954d794da6e1d2
Things that make debugging easier.
* Added logging to automatic link clicking to make it obvious when it
fails.
* Added monitoring to oidc.
* Made dbx create calls noreturn for oauth_*
Change-Id: I37397b4e84ce5bfd82954aed9c38fdfd52595f24
We were already able to override (or not) metadata with this method,
but to be explicit we are introducing a new option to control storing
metadata with the object. A separate option should be less error-prone.
https://github.com/storj/team-metainfo/issues/105
Change-Id: I4c5bce953a633a0009b05c5ca84266ca6ceefc26
Last part of the backwards-compatible db migration to remove the "suspended" column.
Removes the exception in `migrate_test.go` that strips the "suspended" column in tests.
Adds a DB migration to remove the "suspended" column from the 'nodes' and 'reputations' tables.
Change-Id: I02051279f6f4181e966c567919af0e774583f165
Set the disqualification reason when reputation stats are updated on DB.Update.
Added tests for DisqualifyNode and for disqualification cases which happen during Update.
Change-Id: I00130ab5d9722422805159ad2f183c205de60f7e
Attribution is attached to bucket usage, but that's more granular than
necessary for the attribution report. This change iterates over the
bucket attributions, parses the user agent, converts the first entry
to lower case, and uses that as the key to a map which holds the
attribution totals for each unique user agent.
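As a rough sketch of that aggregation (the row type and field names are
illustrative, not the actual satellite types):
```go
package attribution

import "strings"

// bucketAttribution is an illustrative stand-in for one bucket usage row.
type bucketAttribution struct {
	UserAgent []string // parsed user-agent entries
	Bytes     int64
}

// totalsByUserAgent keys the totals map by the lower-cased first
// user-agent entry, as described above.
func totalsByUserAgent(rows []bucketAttribution) map[string]int64 {
	totals := make(map[string]int64)
	for _, row := range rows {
		if len(row.UserAgent) == 0 {
			continue
		}
		totals[strings.ToLower(row.UserAgent[0])] += row.Bytes
	}
	return totals
}
```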
Change-Id: Ib2962ba0f57daa8a7298f11fcb1ac44a8bb97875
Migrate free tier users to have default limits if their limits were set to 0.
They were affected by the incorrect behavior of the Update user query.
Change-Id: I4c49c8d99b12dba2b9b0ab61b2175085976dcc95
Added failed_login_count and login_lockout_expiration columns to the users table to track users' failed login attempts.
We want to prevent brute-forcing of user logins, so this is the first step.
Change-Id: I06b0b9f5415a1922e08cd9908893b2fd3c26bca0
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage with byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees so return segment-hours and object-hours too.
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
This sets the corresponding _numeric columns to be NOT NULL (it has been
verified manually that there are no more NULL _numeric values on any
known satellites, and it should be impossible with current code to get
new NULL values in the _numeric columns).
We can't drop the _gob columns immediately, as there will still be code
running that expects them, but once this version is deployed we can
finally drop them and be totally done with this crazy 5-step migration.
Change-Id: I518302528d972090d56b3eedc815656610ac8e73
We are in the process of creating an api to allow users to manage their
accounts programmatically. We would like to use api keys for
authorization. We were originally going to create an entirely new table
for these api keys, but seeing as we already have 2 other tables for
keys/tokens, api_keys and oauth_tokens, we thought it might be better to
use one of these. We're using oauth_tokens.
We create a new oidc.OAuthTokenKind for account management api keys:
KindAccountManagementTokenV0. We made the key versioned because we
likely want to improve the implementation in the future, but we want to
get something functional out the door ASAP because the account management
api feature is highly desired.
Add a new method to oidc.OAuthTokens interface for revoking v0 account
management api keys, RevokeAccountManagementTokenV0. Add update method
to dbx implementation to allow updating the expiration. We will revoke
these keys by setting the expiration to 0 so they are expired.
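A rough sketch of the interface shape this describes (everything except
the RevokeAccountManagementTokenV0 and KindAccountManagementTokenV0
names is illustrative; the real oidc package may differ):
```go
package oidc

import "context"

// OAuthTokenKind distinguishes what a stored oauth_tokens row is used for.
type OAuthTokenKind int

// KindAccountManagementTokenV0 marks v0 account management API keys.
// The concrete value here is illustrative.
const KindAccountManagementTokenV0 OAuthTokenKind = 3

// OAuthTokens is the slice of the interface relevant to this change.
type OAuthTokens interface {
	// RevokeAccountManagementTokenV0 revokes a v0 key by updating its
	// expiration to the zero time so it is treated as already expired.
	RevokeAccountManagementTokenV0(ctx context.Context, token string) error
}
```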
Change-Id: Ideb8ae04b23aa55d5825b064b5e43e32eadc1fba
We have an issue with the latest CRDB: a single query cannot modify
the same table multiple times, and the build is now blocked.
This change unblocks the build by:
* adjusting the query for inserting into the repair queue
* temporarily removing the deletion code for server-side copy
* temporarily disabling backward compatibility tests for CRDB
Change-Id: Idd9744ebd228e5dc05bdaf65cfc8f779472a975d
Remove the redundant suspension timestamp column from the nodes and reputations tables.
The suspended timestamp was moved to unknown_audit_suspended and the suspended column is
no longer used, so there is no point in keeping both.
Change-Id: Ieea3f12141b33ec9efe7594f4c9dbc7e10675b0e
When performing re-authorizations for OAuth, we need to pull up an
APIKey using its project id and name. This change also updates the
APIKeyInfo struct to return the head value associated with an API
key.
Change-Id: I4b40f7f13fb9b58a1927dd283b42a39015ea550e
Update the user to the default paid tier project limit, which is currently 3 projects, when the user upgrades to a paid account.
Change-Id: I95b19d62cebc7d878b716355f2ebcaf0b51ca3f7
For nodes in excluded areas, we don't necessarily want to remove them
from the pointer, but we do want to increase the number of pieces in the
segment in case those excluded area nodes go down. To do that, we
increase the number of pieces repaired by the number of pieces in
excluded areas.
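The arithmetic amounts to something like this (a sketch with
illustrative names, not the repairer's actual code):
```go
package repairer

// piecesToRepair bumps the repair target by the number of healthy pieces
// held by nodes in excluded areas, instead of removing those pieces.
func piecesToRepair(optimalShares, numHealthy, numHealthyInExcluded int) int {
	target := optimalShares + numHealthyInExcluded
	if target <= numHealthy {
		return 0
	}
	return target - numHealthy
}
```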
Change-Id: I0424f1bcd7e93f33eb3eeeec79dbada3b3ea1f3a
We decided that we want the segment limit for paying users to be high
enough that we don't have to change it too often.
Fixes https://github.com/storj/storj/issues/4590
Change-Id: Ic1c38bf3e2fcc000548ff4c7e7004647b39fbecf
Added new projectaccounting query to get project's single bucket usage rollup.
Added new service method to call new query.
Added implementation for IsAuthenticated method which is used by new generated API.
Change-Id: I7cde5656d489953b6c7d109f236362eb465fa64a
Add a RepairExcludedCountryCodes config flag for overlay for providing a list of country codes to exclude nodes from target repair selection.
Mark segments with fewer than repairThreshold pieces in countries not in the RepairExcludedCountryCodes as unhealthy.
With this change, the repair process is not affected. The segment will be removed from the repair queue by the repairer.
Another change will handle the logic at the repairer level.
Fixes https://github.com/storj/team-metainfo/issues/95
Change-Id: I9231b32de117a116488de055a3e94efcabb46e81
Remove special characters from the test database name to improve
compatibility with external tools like the Cockroach web gui.
Change-Id: I3a6a368a5051e3064e7279b14eec4f2d4ff3c435
All code on known satellites at this moment in time should know how to
populate and use the new numeric columns on the
stripecoinpayments_tx_conversion_rates and coinpayments_transactions
tables in the satellite db. However, there are still gob-encoded
big.Float values in the database from before these columns existed. To
get rid of those values, so that we can excise the gob-decoding code
from the relevant sections, however, we need something to read the gob
bytestrings and convert them to numeric values, a few at a time, until
they're all gone.
To accomplish that, this change adds two chores to be run in the
satellite core process- one for the coinpayments_transactions table, and
one for the stripecoinpayments_tx_conversion_rates table. They should
run relatively infrequently, so that we do not impose any undue load on
processing resources or the db.
Both of these chores work without using explicit sql transactions, but
should still be concurrent-safe, since they work by way of
compare-and-swap type operations.
If the satellite core process needs to be restarted, both of these
chores will start scanning for migrateable rows from the beginning of
the id space again. This is not ideal, but shouldn't be a problem (as
far as I can tell, there are only a few thousand rows at most in either
of these tables on any production satellite).
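The compare-and-swap step could look roughly like this (a sketch using
the column names from the related schema change; the scaling, table
choice, and error handling are simplified and illustrative):
```go
package gobmigration

import (
	"bytes"
	"context"
	"database/sql"
	"encoding/gob"
	"math/big"
)

// migrateOne converts a single gob-encoded amount to its integer
// base-units representation, guarded by a compare-and-swap on the gob
// bytes so a concurrent writer simply makes this a no-op.
func migrateOne(ctx context.Context, db *sql.DB, id string, gobBytes []byte) error {
	var amount big.Float
	if err := gob.NewDecoder(bytes.NewReader(gobBytes)).Decode(&amount); err != nil {
		return err
	}
	// Illustrative scaling: STORJ token amounts use 10^-8 base units.
	baseUnits, _ := new(big.Float).Mul(&amount, big.NewFloat(1e8)).Int64()
	_, err := db.ExecContext(ctx,
		`UPDATE coinpayments_transactions
		 SET amount_numeric = $1
		 WHERE id = $2 AND amount_gob = $3 AND amount_numeric IS NULL`,
		baseUnits, id, gobBytes)
	return err
}
```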
Change-Id: I733b7cd96760d506a1cf52735f598c6c3aa19735
For a thorough explanation of the overall transition, see the message on
commit c053bdbd70.
This change will rename the columns containing gob-encoded big.Floats
and add new columns which will contain the equivalent data in a more
sql-friendly format.
The change should *not* break already-running satellite processes,
because all functionality touching these tables has already been taught
to work with these new columns if it sees any "undefined column" errors.
Change-Id: I229324376533e383c5d05064b8aedad149cf825b
This change adds some more checks to the deletion process for projects and
users, since we ran into a race condition during invoicing, where projects
have been deleted before the invoicing was finished, leading to missing
references.
This PR changes the logic to block user deletion if we are in exactly that period,
while also allowing the deletion of projects/users on the free tier during the month.
Change-Id: Ic0735205e6633762fb7e3c2fa13e744cdfa5ec32
Finished implementing queries for both bandwidth and storage using pgx.Batch.
Fixed CSP styling issue.
Change-Id: I5f9e10abe8096be3115b4e1f6ed3b13f1e7232df
We have two migrations here, in fact. One is for existing users:
we need to check whether each is a paying user (paid_tier) and set 1M for
them and 150K for the others.
The second migration sets limits for projects depending on the owner.
If the owner is a paying user (paid_tier), then the project should have
a 1M limit, otherwise it should be 150K. To make the
migration faster, the projects table segment_limit was initially set
to 1M by default. With the migration we select all paying
users and set a 150K limit for all projects whose owners
are not in the paying users set.
Initially we were concerned whether that query would be quick enough to
execute during deployment, but after investigation the CRDB
team confirmed that this should take seconds for our DBs.
Fixes https://github.com/storj/team-metainfo/issues/70
Change-Id: I8be06e9f949b68b993e043cc15525e8483bf49ea
Implemented endpoint and query to get bandwidth chart data for new project dashboard.
Connected backend with frontend.
Storage chart data is mocked right now.
Change-Id: Ib24d28614dc74bcc31b81ee3b8aa68b9898fa87b
A few months ago we removed all references to the contained
column in the nodes and reputations tables
(commits bb21551a9c and 56fe636123),
but we never did the migration to drop the columns.
This commit finally does that.
Change-Id: I82aa2f257b1fb14a2f1c4c4a1589f80895360ae4
inconsistency
The original design had a flaw which can potentially cause discrepancies
between a node's reputation status in the reputations table and in the nodes table.
If a failure (network issue, db failure, satellite failure, etc.)
happens between the update to the reputations table and the update to the nodes table,
the data can be out of sync.
This PR tries to fix the above issue by passing the node's reputation from
the beginning of an audit/repair (this data is from the nodes table) through to the next
update in the reputation service. If the updated reputation status from the service
is different from the existing node status, the service will try to update the nodes
table. In the case of a failure, the service will be able to try updating the nodes
table again since it can see the discrepancy in the data. This will allow
both tables to become in sync eventually.
Change-Id: Ic22130b4503a594b7177237b18f7e68305c2f122
Batching of the order submissions can lead to combining the allocated
traffic totals for two completely different time windows, resulting
in incorrect customer accounting. This change will group the batched
order submissions by projectID as well as time window, leading to
distinct updates of a bucket's bandwidth rollup based on the hour
window in which the order was created.
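The grouping key roughly becomes project plus hour window, sketched
here with illustrative types:
```go
package orders

import "time"

// projectID is an illustrative stand-in for the satellite's UUID type.
type projectID [16]byte

// rollupKey groups batched order submissions so that submissions from
// different time windows never collapse into one rollup update.
type rollupKey struct {
	ProjectID  projectID
	HourWindow time.Time // order creation time truncated to the hour
}

func keyFor(project projectID, createdAt time.Time) rollupKey {
	return rollupKey{ProjectID: project, HourWindow: createdAt.Truncate(time.Hour)}
}
```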
Change-Id: Ifb4d67923eec8a533b9758379914f17ff7abea32
We want to issue a reminder to users when they don't verify their email within 24hrs of registering. This change only adds a column to the users table.
Change-Id: I92e2baeabf179338ffec01574d4752c0ccdba88b
All limits we have for projects also have parent limits stored
with the user data. A newly created project first takes its limits from
the owner's (user's) limits.
This change extends the users table with a project_segment_limit
column and adds functionality to get and set the value of this
column.
Change-Id: Iff5e36c62b517652390b649fc05992475916ecff
Exposes functionality to get and update the project segment
limit. It will be used to limit the number of segments per project
while uploading objects.
Change-Id: I971d48eebb4e7db8b01535c3091829e73437f48d
We want to set a maximum number of segments per
project. This change only adds the column to the projects table.
The default value of 1M is set to make the later migration easier, as
we need to set 1M for paid tier users and 140K for free
tier users.
Change-Id: I8e83712e08c5bd91dfa59f652d17e45c14240a36
Added new query to get project object and segment count.
Added appropriate object and segment count view for new project dashboard.
Change-Id: I69a2e55442f318c51dc365c0c578b964f2f06c7f
This reverts commit 2c0a360a14.
Avoid big transactions. We'll do it outside of the migration pipeline.
Change-Id: Iade810d81bb2453c9e351149cb84662b207ee527
Value attribution codes were converted into UUIDs and stored in the users, projects, api_keys, bucket_metainfos, and value_attributions tables in the partner_id column. This migration will lookup the appropriate partner name associated with each of these UUIDs, and store the partner name directly in the user_agent column within each table. If an error occurs during the partner ID to partner name conversion, the partner ID value will be migrated to user_agent.
A note on the migration test data, postgres.v182.sql:
With one exception, all preexisting rows in the relevant tables had a NULL partner_ID. Therefore, we needed to insert new rows with partner_ID set under the OLD DATA section in order to test that the migration works. For each affected table, we insert one row with a valid partner ID which has a corresponding partner name, and one row with a partner ID which would return an error during the conversion to the partner name.
Change-Id: Iad977d72df0ce95a0c5ca80a065c4276ec1f2354
The main motivation is to wrap the bucket DB and metainfo DB, so we
could check if a bucket is empty before applying geofencing config.
Change-Id: I8bac21555e01d51a663fb557bc1acfc8106bc2e1
To allow changing limits for new users while leaving existing users' limits as they are, we must store the project limits for each user. We currently store the limit for the number of projects a user can create in the user DB table. This change also stores the project bandwidth and storage limits in the same table.
Change-Id: If8d79b39de020b969f3445ef2fcc370e51d706c6
Alter the satellite DB nodes table to add a `disqualification_reason` int column
which contains the disqualification type enum describing why the node has been disqualified.
Change-Id: Ia514557018ca27e1984216dc5004346d59869d16
We had a bug in the stray nodes chore where nodes who had not been seen
in several months were not being DQd. We figured out that this was
happening because we were using two queries: The first to grab
nodes where last_contact_success < some cutoff, the second to DQ them
unless last_contact_success == '0001-01-01 00:00:00+00'. The problem
is that if all of the nodes returned from the first query had
last_contact_success of '0001-01-01 00:00:00+00', we would pass them to
the second query which would not DQ them. This would result in the stray
nodes DQ loop ending since we found a number of nodes to DQ less than the
limit.
The fix: add the "WHERE last_contact_success != '0001-01-01
00:00:00+00'::timestamptz" to the selection query.
Change-Id: I4e60de90b68d8745d641b4467c2b23e0e56f7dff
Even though we want to start charging segment fee instead of object fee,
it's hard for users to understand what a segment is. This PR adds the
object count back in the UI alongside the segment count to help address
the issue.
Change-Id: I92eb42c769d350eba68a72443deffec5c278359c
table and drop not null constraint on objects column
Since we want to move from charging our customers by object count to
segment count, this PR prepares the database to record segment count
instead of object count for the satellite's billing system.
Change-Id: Ie91ef354e78d24a268bc1cdc4327c182f733321e
listPendingTransitionShim is a temporary transition shim intended to
make existing API processes keep working when a future DB schema change
is executed. For more explanation, see the message on commit c053bdbd70.
However, the shim has a small bug: it is missing the ORDER BY clause
that appears in the original ListPending method. This transition shim
code won't ever run until we make the DB schema change, so this bug
hasn't hurt anything yet; it's just important that we fix it before the
DB schema change happens.
Change-Id: I5953651583ee236500c2c07141dfc9d690a95118
This update sets up users to be able to register with a promo code applied to their account in place of the free tier coupon.
Change-Id: I7badf87937b12664f145520b6dcc4b26fe750407
We don't need column uses_segment_transfer_queue in graceful_exit_progress
as now all exiting nodes are using graceful_exit_segment_transfer_queue and
table graceful_exit_transfer_queue has been dropped.
Change-Id: I4b7c087433f04138cf09bcf8ad3d8de2c185502a
Multipart upload limits added. Last part has no size limit.
Max number of parts: 10000, min part size: 5 MiB
Change-Id: Ic2262ce25f989b34d92f662bde720d4c4d0dc93d
The union query for both reputable and new nodes didn't work properly. The
top level query is required to have an `AS OF SYSTEM TIME` statement as
well.
Change-Id: I8ee6dd5b700c2b1ed2aa562962bfa72be7eec30a
Uplink needs only part of the columns we are reading from the DB.
To improve performance we should read only those that are
really needed.
Change-Id: Ib39259318169c46afe5fa4c6ce2184da82e960c8
We don't use this column for anything. If you want to know if a node is
contained, you can check the pending_audits table.
Change-Id: I5671722a5fc6e1749d3a49e187a56556000ff941
We don't use this column for anything. If you want to know if a node is
contained, you can check the pending_audits table.
Change-Id: I8da1d8e01a2dcaff63c5067a7927b5451424ad04
Removes database tables and functionality related to our custom
coupon implementation because it has been superseded by the Stripe
coupon and promo code system. Requires implementations of the
payments Invoices interface to return coupon usages along with
invoices.
Change-Id: Iac52d2ff64afca8cc4dbb2d1f20e6ad4b39ddfde
Cockroach doesn't like concurrent database creation, add a limiter to
avoid overloading the database.
Also limit the cockroach testing to the last 10 migrations.
Change-Id: Ie73c516ac2d362b251f605049e51eb25888432bd
Populate the egress_dead column to take into account allocated bandwidth that can be removed because orders have been sent by the storage nodes. The bandwidth not used in these orders can be allocated again.
Change-Id: I78c333a03945cd7330aec052edd3562ec671118e
Drop table graceful_exit_transfer_queue which is not used anymore (replaced by graceful_exit_segment_transfer_queue).
Change-Id: Ie254fe9a54fb0784e350a439ce7a9bc99a3a58b5
Migration tests are very heavy on database schema changes, which may
cause delays and retries. Separate out the migration tests and ensure
that they do not run concurrently on the same database.
Change-Id: I35b17525f18fd923546ce1fcc12d805c95073b6b
Why: big.Float is not an ideal type for dealing with monetary amounts,
because no matter how high the precision, some non-integer decimal
values can not be represented exactly in base-2 floating point. Also,
storing gob-encoded big.Float values in the database makes it very hard
to use those values in meaningful queries, making it difficult to do
any sort of analysis on billing.
Now that we have amounts represented using monetary.Amount, we can
simply store them in the database using integers (as given by the
.BaseUnits() method on monetary.Amount).
We should move toward storing the currency along with any monetary
amount, wherever we are storing amounts, because satellites might want
to deal with currencies other than STORJ and USD. Even better, it
becomes much clearer what currency each monetary value is _supposed_ to
be in (I had to dig through code to find that out for our current
monetary columns).
Deployment
----------
Getting rid of the big.Float columns will take multiple deployment
steps. There does not seem to be any way to make the change in a way
that lets existing queries continue to work on CockroachDB (it could be
done with rules and triggers and a stored procedure that knows how to
gob-decode big.Float objects, but CockroachDB doesn't have rules _or_
triggers _or_ stored procedures). Instead, in this first step, we make
no changes to the database schema, but add code that knows how to deal
with the planned changes to the schema when they are made in a future
"step 2" deployment. All functions that deal with the
coinpayments_transactions table have been taught to recognize the "undefined
column" error, and when it is seen, to call a separate "transition shim"
function to accomplish the task. Once all the services are running this
code, and the step 2 deployment makes breaking changes to the schema,
any services that are still running and connected to the database will
keep working correctly because of the fallback code included here. The
step 2 deployment can be made without these transition shims included,
because it will apply the database schema changes before any of its code
runs.
Step 1:
No schema changes; just include code that recognizes the
"undefined column" error when dealing with the
coinpayments_transactions or stripecoinpayments_tx_conversion_rates
tables, and if found, assumes that the column changes from Step
2 have already been made.
Step 2:
In coinpayments_transactions:
* change the names of the 'amount' and 'received' columns to
'amount_gob' and 'received_gob' respectively
* add new 'amount_numeric' and 'received_numeric' columns with
INT8 type.
In stripecoinpayments_tx_conversion_rates:
* change the name of the 'rate' column to 'rate_gob'
* add new 'rate_numeric' column with NUMERIC(8, 8) type
Code reading from either of these tables must query both the X_gob
and X_numeric columns. If X_numeric is not null, its value should
be used; otherwise, the gob-encoded big.Float in X_gob should be
used (sketched below, after the step list). A chore might be included
in this step that transitions values from X_gob to X_numeric a few
rows at a time.
Step 3:
Once all prod satellites have no values left in the _gob columns, we
can drop those columns and add NOT NULL constraints to the _numeric
columns.
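The Step 2 read path amounts to something like the following sketch
(column handling per the description above; the scan types and helper
name are illustrative):
```go
package gobmigration

import (
	"bytes"
	"database/sql"
	"encoding/gob"
	"math/big"
)

// readRate prefers the numeric column and falls back to the gob-encoded
// big.Float when the numeric value is still NULL.
func readRate(rateGob []byte, rateNumeric sql.NullFloat64) (*big.Float, error) {
	if rateNumeric.Valid {
		return big.NewFloat(rateNumeric.Float64), nil
	}
	var rate big.Float
	if err := gob.NewDecoder(bytes.NewReader(rateGob)).Decode(&rate); err != nil {
		return nil, err
	}
	return &rate, nil
}
```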
Change-Id: Id6db304b404e6fde44f5a8c23cdaeeaaa2324f20
Why: big.Float is not an ideal type for dealing with monetary amounts,
because no matter how high the precision, some non-integer decimal
values can not be represented exactly in base-2 floating point. Also,
storing gob-encoded big.Float values in the database makes it very hard
to use those values in meaningful queries, making it difficult to do
any sort of analysis on billing.
For better accuracy, then, we can just represent monetary values as
integers (in whatever base units are appropriate for the currency). For
example, STORJ tokens or Bitcoins can not be split into pieces smaller
than 10^-8, so we can store amounts of STORJ or BTC with precision
simply by moving the decimal point 8 digits to the right. For USD values
(assuming we don't want to deal with fractional cents), we can move the
decimal point 2 digits to the right.
To make it easier and less error-prone to deal with the math involved, I
introduce here a new type, monetary.Amount, instances of which have an
associated value _and_ a currency.
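A sketch of the idea (only Amount and BaseUnits are named in the
commits above; the Currency shape and constructor here are illustrative):
```go
package monetary

// Currency identifies a currency and how many decimal places its base
// unit carries, e.g. 8 for STORJ/BTC and 2 for USD.
type Currency struct {
	Name          string
	DecimalPlaces int
}

// Amount is a monetary value stored as integer base units plus a currency.
type Amount struct {
	baseUnits int64
	currency  Currency
}

// AmountFromBaseUnits builds an Amount from already-scaled integer units,
// e.g. 125 with a 2-decimal currency means 1.25.
func AmountFromBaseUnits(units int64, c Currency) Amount {
	return Amount{baseUnits: units, currency: c}
}

// BaseUnits returns the integer number of base units for DB storage.
func (a Amount) BaseUnits() int64 { return a.baseUnits }
```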
Change-Id: I03395d52f0e2473cf301361f6033722b54640265
This PR utilizes the new burst limit column from the projects table to allow
control over the limit for requests per second and the token bucket size.
When no burst limit is explicitly set, the rate limit is applied to both, so
we don't limit how quickly requests can be made in a second.
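Conceptually this maps onto a token bucket whose refill rate and burst
size can be set independently; a minimal sketch using
golang.org/x/time/rate (the wiring and names are illustrative, not the
satellite's actual rate-limit code):
```go
package ratelimit

import "golang.org/x/time/rate"

// newProjectLimiter builds a limiter for one project. When burstLimit is
// nil (not explicitly set), the burst falls back to the per-second rate,
// so up to a second's worth of requests can be issued at once.
func newProjectLimiter(requestsPerSecond float64, burstLimit *int) *rate.Limiter {
	burst := int(requestsPerSecond)
	if burstLimit != nil {
		burst = *burstLimit
	}
	return rate.NewLimiter(rate.Limit(requestsPerSecond), burst)
}
```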
Change-Id: I883235c60c5d6416aeadd1c80ed2ebd193aa4d9f
In order to limit the overall number of requests a user can issue in a
time span, we need the ability to define such a limit separately
from the per-second request rate.
This PR adds a new column on the projects table to store the burst limit
per project.
Change-Id: I7efc2ccdda4579252347cc6878cf846b85146dc7
Remove the logic associated with the old transfer queue.
A new transfer queue (gracefulexit_segment_transfer_queue) has been created for the migration to segmentLoop.
Transfers from the old queue were not moved to the new queue.
Instead, the old queue was still used for nodes which had initiated graceful exit before the migration.
There is no such node left, so we can remove all this logic.
In a following step, we will drop the table.
Change-Id: I3aa9bc29b76065d34b57a73f6e9c9d0297587f54
nodes and audit_history tables
This PR removes all code reference to audit_histories table and
```
audit_reputation_alpha, audit_reputation_beta,
unknown_audit_reputation_alpha, unknown_audit_reputation_beta,
```
columns from nodes table.
It also drops the audit_histories table from the db since the code
referencing it is not currently being used.
Change-Id: Ifcda8db36afb3a333d487ff831f2fdefc8b02a4c
gracefulexit
With the effort to move audit related data into the reputation store, this
PR updates the gracefulexit endpoint to use the reputation service to get a
node's audit score.
Change-Id: Iad93ea689ad67ff9c57c7be16687e21e715fab7a
Currently, the reputation table is only populated when a node has been
audited. This is ok in production, however a lot of our tests don't
upload any data or trigger audits.
This PR adds an initialization step in testplanet to populate the reputation
table with zero values for node reputations.
Change-Id: I11b381236669db346dc68a48a6d4a27334a0a8b8
package in audit
This PR implements a reputation store and replaces overlay in the audit service
with that store for storing nodes' audit stats.
In order to keep the changeset smaller, most of the changes in this PR copy the audit logic from overlay into the
reputation package. In a following PR, the duplicated code will be
removed from overlay.
Change-Id: I16c12494a0970f44c422b26cf603c1dc489e5bc1
Added MFA passcode and recovery code field for token requests.
Added endpoints for MFA-related activity: enabling MFA,
disabling MFA, generating a new MFA secret key, and
generating new MFA recovery codes.
Change-Id: Ia1443f05d3a2fecaa7f170f56d73c7a4e9b69ad5
For being able to use the segment metainfo loop, graceful exit transfers have to include the segment stream_id/position instead of the path. For this, we created a new table graceful_exit_segment_transfer_queue that will replace the graceful_exit_transfer_queue. The table has been created in a previous migration and made accessible through graceful exit db in another one.
This change makes graceful exit enqueue transfer items for new exiting nodes in the new table.
Change-Id: I7bd00de13e749be521d63ef3b80c168df66b9433
For being able to use the segment metainfo loop, graceful exit transfers have to include the segment stream_id/position instead of the path. For this, we created a new table graceful_exit_segment_transfer_queue that will replace the graceful_exit_transfer_queue. The table has been created in a previous migration.
This change gives access to this table.
Graceful Exit doesn't use the table yet; this will be done in a following change.
Change-Id: I6c09cff4cc45f0529813a8898ddb2d14aadb2cb8
Bucket tally calculation will be removed from the metaloop and will
use the metabase objects iterator directly.
At the moment only the bucket tally needs objects, so it makes no sense
to implement a separate objects loop.
Change-Id: Iee60059fc8b9a1bf64d01cafe9659b69b0e27eb1
Columns for MFA status, secret key, and JSON-encoded array of
recovery codes are added to the users table.
Change-Id: Ifed7e50ec9767c1670d9682df1575678984daa60
When a user adds a credit card, switch them to the paid tier and update
their projects with new bandwidth/storage limits. New projects for the
paid tier user will also have the updated limits.
The new limits are:
* storage per project - 50 GB free/25 TB paid
* bandwidth per project - 50 GB free/100 TB paid
Change-Id: I7d6467d077e8bb2bbe4bcf88ab8d75490f83165e
The previous query was making a full table scan. This modifies the code to
do the queries separately for each action. It will probably be slower on
a small table, however there should be a boost of several orders of magnitude on
large tables.
Change-Id: Ib8885024d8a5a0102bbab4ce09bd6af9047930c9
We want to calculate the bucket tally only from iterating objects.
Objects currently carry info about totals for bytes and segments.
We need to adjust tallies to keep those totals. Older entries will
be untouched and the code will use totals only if available. This change
adds columns for the totals to the bucket_storage_tally table and
adds general handling for them.
Next step is to start using total columns instead of inline/remote.
This will be done with next change.
Change-Id: I37fed1b327789efcf1d0570318aee3045db17fad
We want to use StreamID/Position to identify injured
segments. As it is hard to alter the existing injuredsegments
table, we are adding a new table that will replace the existing
one. The old table will be dropped later.
Change-Id: I0d3b06522645013178b6678c19378ebafe485c49
So that we can easily see whether a user is in the paid tier without
querying for payment methods.
Change-Id: I122566ddd0953203f852741fa12c71795bc1ec5c
Period end was calculated
incorrectly: it was still in the current month but
should be the first day of the next month.
Change-Id: I37451d29a9b901b69e6c3c401b333c58b3376d61
Adding AS OF SYSTEM TIME to the query that calculates project bandwidth.
In addition, a method for setting the interval is added, as the test doesn't
work well with the default interval.
Change-Id: Id1e15be4f6afff13b9dc2b7f595e2edb6de28db9
Currently, pending audit finds a segment by segment location
(path). Because we want to move audit to the segmentloop, where we will
have only StreamID and Position, we need to add columns for those
fields. Altering the existing table can cause issues during
migration and deployment; the cleaner choice is to make a new table.
This change contains a migration with the new segment_pending_audit
table that will replace the pending_audits table, and adjustments
to use the new table in the code.
Table pending_audits will be dropped with the next release.
Change-Id: Id507e29c152da594bac1fd812c78d7ecf45ec51f
table graceful_exit_segment_transfer_queue will be used to replace graceful_exit_transfer_queue. Currently, it uses the path of a segment to keep track of pieces to be transferred. As we want to use the segment metainfo loop, we will need to record stream_id and position of the segment instead of relying on object path.
This change also adds a uses_segment_transfer_queue column to the graceful_exit_progress table to be able to know if a transfer has been initiated while using the old table.
Change-Id: Iafb1e8e65ba124e20de4a9ff76da181c3222de7e
Define service and DB interface for storing node reputation data
and updating the overlay cache.
Add overlay service and DB method UpdateReputation.
See https://github.com/storj/storj/pull/4144
Change-Id: Iedd8bd3274457d26c595919303d55327c1464b8c
The reputation table duplicates the reputation information in the
nodes table. It will be used for implementing the reputation
service.
Change-Id: I36c0318e8fa5f535e9d527df95b22a4f9eb365d4
This is part of the metaloop refactoring. We plan to remove
irreparable at some point but there was no time for it.
Instead of refactoring it for the segmentloop, it's just easier
to drop it now.
Later we still need to drop the table with a migration step.
Change-Id: I270e77f119273d39a1ecdcf5e1c37a5662a29ab4
The redis key associated with bandwidth usage depended on the current month.
This change makes it also depend on the day, so that we update bandwidth usage
daily to take into account changes associated with expired but unused allocated bandwidth.
It also adds a test to check that allocated but not settled bandwidth is not counted after 2 days.
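A sketch of the key scheme (the key format is illustrative, not the
actual cache implementation):
```go
package live

import (
	"fmt"
	"time"
)

// bandwidthKey now includes the day as well as the month, so usage
// recorded under an earlier day's key naturally ages out of the total
// once the allocated-but-unsettled bandwidth it tracked has expired.
func bandwidthKey(projectID string, now time.Time) string {
	return fmt.Sprintf("project-bandwidth:%s:%s", projectID, now.Format("2006-01-02"))
}
```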
Change-Id: Iee9dbe3517cc3b9825438360b276a07a43dfbc64
Migration step for adding an 'egress_dead' column to the project_bandwidth_daily_rollups table.
It will be used to track bandwidth allocation that won't be consumed,
as the corresponding order has already been processed and has a settled
bandwidth amount lower than the order limit (allocated bandwidth).
Change-Id: Ic07592e69292ae2076e69f6038bb0e0fae79b271
Full prefix: web/satellite, satellite/{console, analytics, satellitedb}
- checkbox added to register view - business tab
- user being saved with new column
- add sales contact choice to Segment calls
- ui fix added to employee count dropdown
Change-Id: Ib976872463b88874ea9714db635d58c79cdbe3a1
We are not using this table, so it makes no sense to put data there.
This change removes only the code that is using this table. Before the next
release we need to drop the table with a migration step.
Change-Id: I80f400aa778c717e70324bd00da502b7032c9d9f
Until now, whenever audits were recorded we would try to delete
the node from containment just in case it exists. Since we now
want to treat segment repair downloads as audits, this would
erroneously remove nodes from containment, as repair does not go
through a Reverify step. With this changeset, (Batch)UpdateStats
will not remove nodes from containment. The Reverify method will
remove all necessary nodes from containment.
Change-Id: Iabc9496293076dccba32ddfa028e92580b26167f
Because of recent changes to how coupons for the free tier are handled
(see commit 4c0817bcfb), we no longer want all these $10
non-expiring coupons. After coupons are applied during invoice
generation, if a customer does not have any valid (non expired, non
consumed) coupons, a new promotional coupon is applied.
We could just wait for users to consume all $10 of the non-expiring
coupons, and the new promotional coupon would be applied for the
following billing cycle, but this gets tricky, because if in the final
month, the user is billed for $1 of usage, but only $0.5 of the $10
non-expiring coupon is remaining, the user will be charged for the
remaining $0.5. With the new promotional coupon of $1.65, expiring every
month, this would not be an issue.
So long story short, this commit migrates all non-expiring coupons to
expire within 2 billing periods (all existing non-expiring coupons in
prod were created in early April or later). That way, there is still
enough value in the $10 coupon that we don't have to worry about
customers exceeding it, and the coupons will expire. Then we'll
immediately apply the $1.65 coupon for the next month! And then
hopefully this unfortunate situation will come to a pleasant end.
Change-Id: I8a593948d8876c41a71d886b9a95d4e2c802b4f3
Replace GetProjectAllocatedBandwidth with GetProjectBandwidth, which calculates
used bandwidth from the allocated and settled bandwidth recorded in the
project_bandwidth_daily_rollups table.
For each day in the month, if the allocated bandwidth is expired, it uses the
settled bandwidth for computing the used bandwidth.
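Sketched per day (illustrative field names, not the actual accounting
types):
```go
package accounting

// dailyRollup is an illustrative stand-in for one project_bandwidth_daily_rollups row.
type dailyRollup struct {
	Allocated int64
	Settled   int64
	Expired   bool // the allocation window for this day has already passed
}

// usedBandwidth sums the month: settled bandwidth for expired days,
// allocated bandwidth otherwise.
func usedBandwidth(days []dailyRollup) (total int64) {
	for _, d := range days {
		if d.Expired {
			total += d.Settled
		} else {
			total += d.Allocated
		}
	}
	return total
}
```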
Change-Id: Ife723c6d5275338f470619631acb25930d39ac3c
project_bandwidth_daily_rollups table
We want to calculate used bandwidth better, so we need to calculate it
from allocated and settled bandwidth. To do this we first need to populate
this new table.
https://storjlabs.atlassian.net/browse/PG-56
Change-Id: I308b737bf08ee48ce4e46a3605697ab2095f7257
Improve UpdateCheckIn on a contended row:
```
name                            old time/op  new time/op  delta
UpdateCheckInContended-100x-32  2.29s ±55%   0.17s ±61%   -92.45%  (p=0.008 n=5+5)
```
Change-Id: I053ab9f1cff136c306e5fb57f5e355cdc0269a8c
Currently the interface is not useful. When we need to vary the
implementation for testing purposes we can introduce a local interface
for the service/chore that needs it, rather than using the large api.
Unfortunately, this requires adding a cleanup callback for tests; there
might be a better solution to this problem.
Change-Id: I079fe4dbe297b0ae08c10081a1cea4dfbc277682
Initially metabase was developed separately and it was useful to have a
separate environment flag for tests; however, it's now more convenient to
use the same one as the rest of the testsuite.
Change-Id: Ia4d79be27ce5911cbae68d57cdf0b30f63459444
We were duplicating code for AS OF SYSTEM TIME in several
places. This replaces that code with a method on
dbutil.Implementation.
As a consequence it's more useful to use a shorter name for
implementation - 'impl' should be sufficiently clear in the context.
Similarly, using AsOfSystemInterval and AsOfSystemTime to distinguish
between the two modes is useful and slightly shorter without causing
confusion.
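The helper is roughly of this shape (a sketch; the real dbutil
signatures and interval handling may differ):
```go
package dbutil

import (
	"fmt"
	"time"
)

// Implementation identifies the backing database.
type Implementation int

const (
	Postgres Implementation = iota
	Cockroach
)

// AsOfSystemInterval returns an AS OF SYSTEM TIME clause for CockroachDB,
// given a negative interval, and an empty string for other databases, so
// callers can splice it into queries unconditionally.
func (impl Implementation) AsOfSystemInterval(interval time.Duration) string {
	if impl != Cockroach || interval >= 0 {
		return ""
	}
	return fmt.Sprintf(" AS OF SYSTEM TIME '%s' ", interval)
}
```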
Change-Id: Idefe55528efa758b6176591017b6572a8d443e3d