Our current endpoints bail if the column data is NULL. Thus we need to
take the intermediate step of setting the default to a fixed value now and
resetting those rows in the following release.
This sets the default column value to our current config values of 50GB
for storage and bandwidth and 100 buckets, while still keeping the field nullable.
All 0 values are migrated to the default as well, to ensure those users can
keep using their projects, since with the original change, 0 actually means 0.
Change-Id: I797be80ce2d2105091599dc1b3fc76f74336b66b
Currently we have no way to actually set one
of the following limits to 0 (meaning not usable):
- maxBuckets
- usageLimit
- bandwidthLimit
With the field nullable:
- NULL corresponds to the global default,
- 0 now actually means 0, and
- any other set value determines a custom limit.
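A rough sketch of the resulting lookup semantics (the function and field
names here are illustrative, not the actual schema or code):

    // effectiveLimit resolves a nullable limit column: NULL falls back to
    // the global default, 0 really means 0, anything else is a custom limit.
    func effectiveLimit(column *int64, globalDefault int64) int64 {
        if column == nil {
            return globalDefault
        }
        return *column
    }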
Change-Id: I92bb77529dcbd0881ae8368921be9d246eb0919e
Jira: https://storjlabs.atlassian.net/browse/PG-67
There are a number of old-style objects (without the number of segments
in their metadata) on US-Central-1, Europe-West-1, and Asia-East-1
satellites. We want to migrate their metadata to contain the number of
segments before executing the main migration to metabase; this gives the
main migration one less issue to think about.
Change-Id: I42497ae0375b5eb972aab08c700048b9a93bb18f
WHAT:
added functionality for the user to update the project name. Logic only, without actual GUI updates.
WHY:
better user experience
Change-Id: I1e38e33ba827b0bdf2c89e29de24e4e87edb474a
nodes are submitting using both the legacy and windowed endpoints
and thus having their legacy submissions rejected.
it is legal to use both the legacy and windowed endpoints
in phase1 since they use the same backend. the legacy endpoint
is disabled in phase2 and phase3.
therefore, if we wait an order expiration period (2 days) after
we determine enough nodes have started using the windowed
endpoint, we can be sure that any orders they did have to
submit with the legacy endpoint will have expired.
Change-Id: I4418a881bf8bb9377efaef4c651e6103a5dc6ed0
Another change that is part of the refactoring to replace the path
parameter (string/[]byte) with a key parameter (metabase.SegmentKey).
Change-Id: I617878442442e5d59bbe5c995f913c3c93c16928
Replace objectdeletion.ObjectIdentifier with metabase.ObjectLocation
Another change to use metabase.ObjectLocation across the satellite
codebase to avoid duplication and provide better type safety.
Change-Id: I82cb52b94a9107ed3144255a6ef4ad9f3fc1ca63
WHAT:
notification bar added to the project dashboard page. It is shown when the project count limit is reached.
The create project button is removed after creating the last available project.
WHY:
inform the user that their project count limit was reached
Change-Id: If0d67148003be40cc9eb4d8b25cc17f8204008d4
Additionally, this PR changes NewNodeFraction devDefault and testplanet config from 0.05 to 1.
This is because many tests relied on selecting nodes that were reputable based on audit and uptime
counts of 0, in effect, selecting new nodes as reputable ones.
However, since reputation is now indicated by a vetted_at db field that is explicitly set
rather than implied by audit and uptime counts, it would be more complicated to try to
update all of the nodes' reputations before selecting nodes for tests.
Now we just allow all test nodes to be new if needed.
Change-Id: Ib9531be77408662315b948fd029cee925ed2ca1d
Use `metabase.SegmentKey` in TransferQueueItem
We are unifying which name (and type) we use for the value that points
to a segment. We want to use `key` instead of `path`. The dedicated type
`metabase.SegmentKey` was also created for this purpose.
This change does the refactoring around gracefulexit.
Change-Id: I90d51ff087b206179e61d5f1bc95f4709d76f917
This PR updates `uplink rb --force` command to use the new libuplink API
`DeleteBucketWithObjects`.
It also updates `DeleteBucket` endpoint to return a specific error
message when a given bucket has concurrent writes while being deleted.
Change-Id: Ic9593d55b0c27b26cd8966dd1bc8cd1e02a6666e
This PR fixes a deadlock that can happen when the number of piece
deletion requests is different from the distinct node count from those
requests. The success threshold should be based on the number of nodes
instead of the number of requests.
Change-Id: I83073a22eb1e111be1e27641cebcefecdc16afcb
This change forces the test of GetObjectIPs to use multiple remote
segments (earlier versions of the test were accidentally using inline
segments). This change also revealed a small bug in the for loop code,
which is fixed.
Change-Id: Ic486b079d221952ba13553acf0ca41a8873f3f21
* The audit worker wants to get items from the queue and process them.
* The audit chore wants to create new queues and swap them in when the
old queue has been processed.
This change adds a "Queues" struct which handles the concurrency
issues around the worker fetching a queue and the chore swapping a new
queue in. It simplifies the logic of the "Queue" struct to its bare
bones, so that it behaves like a normal queue with no need to understand
the details of swapping and worker/chore interactions.
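A minimal sketch of the coordination idea, with hypothetical types and
none of the real error handling:

    package audit

    import "sync"

    // Queue stands in for the real queue type; here it only needs an
    // emptiness check.
    type Queue struct{ items []string }

    func (q *Queue) empty() bool { return q == nil || len(q.items) == 0 }

    // Queues hides the swap logic from both the worker and the chore.
    type Queues struct {
        mu     sync.Mutex
        active *Queue
        next   *Queue
    }

    // Swap is called by the chore; it refuses a new queue while one is
    // still pending.
    func (q *Queues) Swap(newQueue *Queue) bool {
        q.mu.Lock()
        defer q.mu.Unlock()
        if q.next != nil {
            return false
        }
        q.next = newQueue
        return true
    }

    // Fetch is called by the worker; the pending queue is swapped in only
    // once the active one has been fully processed.
    func (q *Queues) Fetch() *Queue {
        q.mu.Lock()
        defer q.mu.Unlock()
        if q.active.empty() && q.next != nil {
            q.active, q.next = q.next, nil
        }
        return q.active
    }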
Change-Id: Ic3689ede97a528e7590e98338cedddfa51794e1b
Add online score used for the new audit history offline tracking system
to the nodes table. This allows us easy access to the node's online
score for the storagenode dashboard as well as for data analysis.
Change-Id: Ie99be1192e5236862a5b3dbed2e5ef03b9169410
We were seeing an error on the last day of the month with TestProjectAllocatedBandwidthRetainTwo.
This is because AddDate normalizes its result in the same way that Date does, so, for example,
adding one month to October 31 yields December 1, the normalized form for November 31.
I also fixed a minor UTC issue in this test.
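For reference, the standard-library behavior that tripped the test:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        oct31 := time.Date(2020, time.October, 31, 0, 0, 0, 0, time.UTC)
        // November 31 does not exist, so AddDate normalizes it to December 1.
        fmt.Println(oct31.AddDate(0, 1, 0)) // 2020-12-01 00:00:00 +0000 UTC
    }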
Change-Id: I0157873e7befa57810e5f264a922b188890fa46a
satellite.DB.Console().Projects().GetAll database query
can be replaced with planet.Uplinks[0].Projects[0].ID
Change-Id: I73b82b91afb2dde7b690917345b798f9d81f6831
When a node's audit history "online score" passes below a configured
threshold, the node goes into "offline suspension" mode and begins a
review period, where the operator is given an opportunity to bring their
node back online.
After the review period passes, offline suspension is turned off for the
node.
In the future, if a node still has a bad online score at the end of the
review period, it will be disqualified. This is disabled right now.
In the future, if a node is in offline suspension, it will be treated as
"unhealthy". Right now, there are no consequences for being in offline
suspension.
Minor changes:
* Moves AuditHistoryConfig out of UpdateStats/BatchUpdateStats args and
into UpdateRequest.
* Adds "now" argument to UpdateStats/BatchUpdateStats args for easy
testing.
* Changes formatting strings inside buildUpdateStatement to use specific
types.
Change-Id: I032b60298840fc16e6ef831da750f2d57619a397
Currently there is confusion between the responsibilities of
metainfo.Endpoint, metainfo.Service, and PointerDB.
Separating the database "service" and its types into a separate package
allows us to disentangle them.
This gives us responsibilities:
1. metainfo.Endpoint - translates requests and permissions
2. metainfo.Service - handles requests and coordinates with
objectdeletion, piecedeletion, metabase
3. metabase.Service - communication with the database interface and invariants
Currently metabase will contain the types necessary to coordinate
information.
Change-Id: If8c992b4b9d9e70a56bbd8a378a5af6b1a2ec34e
Jenkins has been failing a lot lately due to test timeouts with CockroachDB.
TestMigrateCockroach previously took around 5 minutes, now it takes 2.
Why 103? I couldn't get 100 to work due to an error w/ NOT NULL and PKs.
Change-Id: Iec95d4e25f9d6cd36920e7f43272c486a17fa879
TestMaxOutBuckets is one of our slower tests (50-90s).
This change seems to make it 2-12s.
It reduces the number of buckets that need to be created.
It also removes unnecessary storage nodes.
Change-Id: I1012fc6e9258b2f7674b16da4e8b418741c93eea
If a segment is deleted, is modified, or expires during an audit, this
is not problematic, so we should not return errors. Functionally,
nothing changes, but our metrics around audit success rate will be
improved after this change.
Change-Id: Ic11df056b2c73894b67a55894bd4d58c00470606
This PR changes DeleteBucket to be able to delete all objects within a
bucket if `DeleteAll` is set in `BucketDeleteRequest`.
It also changes `DeleteBucket` API to treat `ErrBucketNotFound` as a
successful delete operation instead of returning an error back to the
client.
Change-Id: I3a22c16224c7894f2d0c2a40ba1ae8717fa1005f
Add a function to the overlay cache called UpdateAuditHistory, which
allows us to add online or offline audits to a particular node's audit
history, and get that node's "online score" for the configured tracking
period.
The next step will be to use UpdateAuditHistory from inside
BatchUpdateStats/UpdateStats, so that audit history is actually updated
when nodes get audited, and we can suspend nodes based on their online
score.
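An illustrative shape for the new method (the real signature on the
overlay cache may differ):

    package overlay

    import (
        "context"
        "time"

        "storj.io/common/storj"
    )

    // AuditHistoryCache is a sketch of the new capability.
    type AuditHistoryCache interface {
        // UpdateAuditHistory records an online/offline audit at auditTime
        // and returns the node's online score for the tracking period.
        UpdateAuditHistory(ctx context.Context, nodeID storj.NodeID,
            auditTime time.Time, online bool) (onlineScore float64, err error)
    }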
Change-Id: I2289105e6961e68e829a987ff756b0e576fab120
This change accomplishes multiple things:
1. Instead of having a max in flight time, which means
we effectively have a minimum bandwidth for uploads
and downloads, we keep track of what windows have
active requests happening in them.
2. We don't double check when we save the order to see if it
is too old: by then, it's too late. A malicious uplink
could just submit orders outside of the grace window and
receive all the data, but the node would just not commit
it, so the uplink gets free traffic. Because the endpoints
also check for the order being too old, this would be a
very tight race that depends on knowledge of the node system
clock, but best to not have the race exist. Instead, we piggy
back off of the in flight tracking and do the check when
we start to handle the order, and commit at the end.
3. Change the functions that send orders and list unsent
orders to accept a time at which that operation is
happening. This way, in tests, we can pretend we're
listing or sending far into the future after the windows
are available to send, rather than exposing test functions
to modify internal state about the grace period to get
the desired effect. This brings tests closer to actual
usage in production.
4. Change the calculation for if an order is allowed to be
enqueued due to the grace period to just look at the
order creation time, rather than some computation involving
the window it will be in. In this way, you can easily
answer the question of "will this order be accepted?" by
asking "is it older than X?" where X is the grace period
(see the sketch after this list).
5. Increases the frequency we check to send up orders to once
every 5 minutes instead of once every hour because we already
have hour-long buffering due to the windows. This decreases
the maximum latency that an order will be reported back to
the satellite by 55 minutes.
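A simplified sketch of the check from point 4 (the names and types here
are illustrative):

    import "time"

    // canEnqueue answers "will this order be accepted?" purely from the
    // order's creation time: accept iff it is not older than the grace
    // period.
    func canEnqueue(createdAt, now time.Time, gracePeriod time.Duration) bool {
        return now.Sub(createdAt) <= gracePeriod
    }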
Change-Id: Ie08b90d139d45ee89b82347e191a2f8db1b88036
services
This PR adds a limiter on the number of concurrent object deletions that
can be handled, so we don't run out of memory.
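A common way to implement such a limiter is a buffered channel used as a
semaphore; a minimal sketch (not necessarily the implementation used
here):

    // limiter bounds how many deletions run concurrently.
    type limiter chan struct{}

    func newLimiter(n int) limiter { return make(limiter, n) }

    // do blocks until a slot is free, runs fn, then releases the slot.
    func (l limiter) do(fn func()) {
        l <- struct{}{}
        defer func() { <-l }()
        fn()
    }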
Change-Id: Id2ce368af6f86845fcdfd34cb2f5e460efe9b272
* Add all new orders to the orders filestore instead of the database.
* Submit orders from the filestore to the new satellite SettleWindow
endpoint.
The orders filestore will eventually replace the orders DB completely.
For now, we will still be checking the orders DB and submitting those
orders if they exist. In a later release, we will completely remove the
orders DB, but we need both the DB and filestore for the transition
period.
Change-Id: Iac8780fd5ab770296181bbd313e1d335f072d4dc
This change requires less work from the user of the piecedeletion
service by moving the overlay database call into the package.
Change-Id: I14a150ab71fe885780e7a7a74db006a779507ae5
This adds the unimplemented GetObjectIPs method to metainfo endpoint so
we can import new common protobuf definitions.
Change-Id: I154f26baccb6bb3c66de3eb25611930545c9754b
When investigating a gap in storage usage data in the SN dashboard, I noticed that there were 2 entries in the accounting_rollups table on the date of the gap.
This change accounts for multiple entries in the accounting_rollups table for a given day.
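A sketch of the aggregation (field names are illustrative):

    import "time"

    type rollup struct {
        IntervalStart time.Time
        AtRestTotal   float64
    }

    // dailyTotals sums every rollup row that falls on the same UTC day,
    // instead of assuming exactly one row per day.
    func dailyTotals(rows []rollup) map[time.Time]float64 {
        totals := make(map[time.Time]float64)
        for _, r := range rows {
            day := r.IntervalStart.UTC().Truncate(24 * time.Hour)
            totals[day] += r.AtRestTotal
        }
        return totals
    }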
Change-Id: Ibf2b5d0455117cb0417163e8fcfb7e509d594171
It's an obsolete table from an earlier state of the Stripe invoices
implementation. No code is currently using it. It is confirmed that this
table is currently empty across all satellites.
Change-Id: I12d2756578faf8418ea8f3b09088e885694b8925
Small extension to the test case where one partner uploads/downloads
to/from the same bucket created by another partner.
Change-Id: Ib674fe5f95f868b71341e30aba5e2440847738f4
Use new objectdeletion package for deleting pointers.
In the best-case scenario, it will make one database call to fetch
information about the number of segments, and another request to delete
and fetch information about other segments.
This PR also changes our object deletion API to return no error when an
object is not found and instead consider such an operation a success. This
behavior is aligned with the S3 API and makes the code less complex.
Change-Id: I280c56e8b5d815a8c4dafe8227689467e899775a
Adds AuditHistory{WindowSize, TrackingPeriod, GracePeriod,
OfflineThreshold}. These values will be used to track offline audits over
time, and to suspend/disqualify nodes for being offline for too long.
Change-Id: I05f7dbc3c034bdc53c4fbd7719c71a44f37ec6a5
This change removes the overlay function FindStorageNodesForRepair,
which skips using the node selection cache and hits the database
directly. Otherwise, it is functionally identical to
FindStorageNodesForUpload, which checks the node selection cache first.
When selecting nodes for PUT_REPAIRs, we now call
FindStorageNodesForUpload instead of FindStorageNodesForRepair to reduce
database load.
Change-Id: If34e109695b2ed2b8fb6759115bf769a3459684e
This adds a config flag orders.window-endpoint-rollout-phase
that can take on the values phase1, phase2 or phase3.
In phase1, the current orders endpoint continues to work as
usual, and the windowed orders endpoint uses the same backend
as the current one (but also does a bit extra).
In phase2, the current orders endpoint is disabled and the
windowed orders endpoint continues to use the same backend.
In phase3, the current orders endpoint is still disabled and
the windowed orders endpoint uses the new backend that requires
much less database traffic and state.
The intention is to deploy in phase1, roll out code to nodes
to have them use the windowed endpoint, switch to phase2, wait
a couple days for all existing orders to expire, then switch
to phase3.
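Sketched as code, what each phase value enables (a simplification of the
actual wiring; the booleans are illustrative):

    // rolloutEndpoints maps a phase to which endpoints/backends are active.
    func rolloutEndpoints(phase string) (legacyEnabled, windowedUsesNewBackend bool) {
        switch phase {
        case "phase1":
            return true, false // both endpoints live, shared old backend
        case "phase2":
            return false, false // legacy disabled, old backend remains
        case "phase3":
            return false, true // legacy disabled, new low-state backend
        }
        return true, false // default to the most conservative phase
    }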
Additionally, it fixes a bug where a node could submit a bunch
of orders and rack up charges for a bucket.
Change-Id: Ifdc10e09ae1645159cbec7ace687dcb2d594c76d
Jira: https://storjlabs.atlassian.net/browse/USR-822
This is the last step of dropping these 2 db tables. It also deletes all
code associated with them.
Change-Id: I8be840dc2a7be255cf6308c9434b729fe4d9391e
* Do not swap the active audit queue with the pending audit queue until
the active audit queue is empty.
* Do not begin creating a new pending audit queue until the existing
pending audit queue has been swapped to the active queue.
Change-Id: I81db5bfa01458edb8cdbe71f5baeebdcb1b94317
Add a config so that some percent of users require credit cards /
account balances in order to create a project or have a promotional
coupon applied.
The UI was updated to match the needed paywall status.
At this point we decided not to use a field to store whether a user is in
an A/B test, and instead just use math to see if they're in a test. We
decided to use MD5 (because it's in Postgres too) and the user UUID for
that math.
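A sketch of that math (names are hypothetical; the MD5-of-UUID bucketing
is the idea described above):

    import (
        "crypto/md5"
        "encoding/binary"
    )

    // inPaywallGroup hashes the 16-byte user UUID and maps it into one of
    // 100 buckets; users in buckets below percent get the paywall.
    func inPaywallGroup(userID [16]byte, percent uint64) bool {
        sum := md5.Sum(userID[:])
        return binary.BigEndian.Uint64(sum[:8])%100 < percent
    }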
Change-Id: I0fcd80707dc29afc668632d078e1b5a7a24f3bb3
It feels weird having a repairer configuration as part of the order services.
Let's have a single source of truth for it.
Change-Id: I24f7c897aec80f3293f8af24876cbb6733d85a0b
Inside CreateGetRepairOrderLimits we pass in a list of healthy pieces,
but when we query node info from this list we apply the "reliable" filter
again. We sometimes end up with nodes which at first were healthy, but then
became unhealthy, and thus can be repaired, but we do not update the 'unhealthyPieces'
list with these nodes.
This causes an error, 'piece to add already exists', as we fail to remove these
pieces from the pointer before replacing them with repaired pieces.
Change-Id: I6e2445f342ac117ded30351fa7e5e523c9ec26bd
Jira: https://storjlabs.atlassian.net/browse/USR-822
The balance history in the Satellite GUI displays the deposit bonuses as
separate rows. These bonuses used to be stored in the satellite DB. We
recently started depositing the bonus directly to the Stripe balance and
migrated old bonuses to Stripe metadata.
This change displays all billing history entirely from Stripe, so we can
remove the `credits` and `credits_spendings` DB tables in a next step.
Change-Id: I14c304c66ec47c6a51f5b8508f11470cf36c4e24
There's still a possibility of tests clashing due to the shared mock,
however it's slightly better, because it avoids the race.
Change-Id: I80eedf1ca50b6114ebe69ea3c4d61176452f4df0
Removes old project_bandwidth_rollups records that are no longer used.
Uses a retain-months configuration to determine how many months to save. The current month cannot be removed.
Tests retainMonths=-1, 0, 2
Change-Id: Ia4be2546cdb28802427acf41ecd85ad66df3e62c
Jira: https://storjlabs.atlassian.net/browse/USR-968
We want to keep track of the STORJ amount and exchange rate in the
metadata of the Stripe Customer Balance Transaction to be able to generate
reports without needing to request this info from CoinPayments.
Change-Id: Ia93af95706cd2312cf688f044874495279fe8fa2
I introduced a bug with https://review.dev.storj.io/c/storj/storj/+/2216
because that log change allowed insert to be called multiple times.
This changes the insert logic to do nothing if the PK already exists.
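In Postgres/CockroachDB terms this is an ON CONFLICT DO NOTHING insert;
a generic sketch with illustrative table and column names:

    import (
        "context"
        "database/sql"
        "time"
    )

    func insertIfAbsent(ctx context.Context, db *sql.DB,
        serial []byte, expiresAt time.Time) error {
        // Repeated inserts of the same primary key become a no-op.
        _, err := db.ExecContext(ctx,
            `INSERT INTO consumed_serials (serial_number, expires_at)
             VALUES ($1, $2)
             ON CONFLICT DO NOTHING`,
            serial, expiresAt)
        return err
    }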
Change-Id: I90d192a0f6619bfbb360ea104066f00a3348f6dd
Improve our delete logic to require fewer database requests.
This PR creates a new objectdeletion package.
Change-Id: I0500173bb9b8c771accb350f076329ede6dbb42c
request
We are no longer using `BeginDeleteSegment` or `ListSegments` so we can
avoid generating StreamID as a result of `BeginDeleteObject`.
StreamID from `BeginDeleteObject` is also not used on Uplink side.
Change-Id: I3b068deab17068459849b5cf05811cad4b8a9034
We are adding a monkit evaluation for the total sum of data stored on
the nodes before it is inserted into the database. This will give us a
time-series history of total data stored so we can see it change over
time.
Change-Id: I41145a2d7a09c8e63b42ae578bd081035b60e529
To prevent creating multiple users with the same email via the API, we should check for an existing user with the given email.
Change-Id: Ie35b85c4f94a7ca72d42951dab8ff475d7f0dd7c
Currently a customer created via the API does not get a payment account
until they sign in. That causes issues if the account needs to be deleted
again.
Change-Id: I393c8f301e426301bb713c423d6ce011138d4ae4
This change switches the backend logic to use the new DB column on the users table to restrict project creation.
Furthermore, it backfills the existing limits from registration tokens to the new column to ensure no users are reset to the new default.
The UI is updated to reflect the ability to create several projects.
Change-Id: Ie29157430ae6b065411ca4c4557c9f1be69cdc4f
the flush batch size was set to 1 which means that a flush was
async scheduled after the first write. the explicit trigger wait
was then always flushing nothing, and the test would only
pass if the async flush was scheduled before the read.
remove that async flush and pause the flush loop so that we are
in full control of when the flushes happen so there are no races.
the tests are still disabled but that's because the endpoint is
still disabled.
Change-Id: I2b7b07fd5525388c30be8efbf4af7105087228da
We passed in revocationDB and metainfoDB for no reason.
Let's remove them from the dependency list to further reduce the footprint.
Change-Id: Ic0317bb92670fbd305d4a8b0ed1cb82858e2f6d3
Why: We need a way to cut down on database traffic due to bandwidth
measurement and tracking.
What: This changeset is the Satellite side of settling orders in 1 hr windows.
See design doc for more details: https://review.dev.storj.io/c/storj/storj/+/1732
Change-Id: I2e1c151e2e65516ebe1b7f47b7c5f83a3a220b31
What:
Use the github.com/jackc/pgx postgresql driver in place of
github.com/lib/pq.
Why:
github.com/lib/pq has some problems with error handling and context
cancellations (i.e. it might even issue queries or DML statements more
than once! see https://github.com/lib/pq/issues/939). The
github.com/jackc/pgx library appears not to have these problems, and
also appears to be better engineered and implemented (in particular, it
doesn't use "exceptions by panic"). It should also give us some
performance improvements in some cases, and even more so if we can use
it directly instead of going through the database/sql layer.
Change-Id: Ia696d220f340a097dee9550a312d37de14ed2044
STORJ_POSTGRES_TEST naming was not consistent with STORJ_SIM_POSTGRES.
This allows using STORJ_TEST_POSTGRES for clarity; it still falls
back to STORJ_POSTGRES_TEST.
Change-Id: I6f294c66c80fcfd6750fea2a89795f3b7f5dd691
This runs each benchmark for one iteration to ensure that they are
valid. Unfortunately, it does not give any useful metrics as output.
Change-Id: I68940398c8dd849aed656bd12656f48d5df10128
This system tracks an abstract "api version" from nodes based on
their usage, allowing us to have latching behavior where if a node
ever uses a new api, it can be blocked from using the old api.
This is better than using self-reported semver version information
because the node cannot lie, there's no confusion about what semver
version implies which features, no questions about dev and ci
environments, and no dependencies between reporting the version
and using the new api.
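The latching rule, sketched as an illustrative helper (not the real
service code):

    // checkAPIVersion latches the highest api version a node has used and
    // rejects calls from any older version.
    func checkAPIVersion(highestSeen, requested int) (allowed bool, newHighest int) {
        if requested < highestSeen {
            return false, highestSeen // the node already used a newer api
        }
        return true, requested // latch up to the version now in use
    }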
Change-Id: Ifeced5c9ae8e0a16102d79635e176a7d3bdd8ed4
Apply the coin payments when CoinPayments.net receives the funds,
instead of when STORJ gets them from CoinPayments.net.
Based on 7/1/20 User Growth standup guidance by JG
Relates to: https://storjlabs.atlassian.net/browse/USR-801
Change-Id: I174ca23a585010f39464c45525e1dfe0179b7c1a
Allow more than 25 coin payment statuses to be queried from coinpayments.net.
Add some slight logging, a coinpayments duration metric, and a disabled test
These changes are small changes in support of https://storjlabs.atlassian.net/browse/USR-801
Change-Id: I5b176cdd5e513d8bd88b92f9b22a8bd2456bbdd5
Since we increased the number of concurrent audit workers to two, there are going
to be instances of a single node being audited simultaneously for different segments.
If the node times out for both, we will try to write them both to the pending audits
table, and the second will return an error since the path is not the same as what
already exists. Since with concurrent workers this is expected, we will log the
occurrence rather than return an error.
Since the release default audit concurrency is 2, update testplanet default to run with
concurrent workers as well.
Change-Id: I4e657693fa3e825713a219af3835ae287bb062cb
This adds EncryptionKey definition that can be used as a flag.
These order.EncryptionKey values will be used to encrypt data in
order limits.
This helps to avoid storing lots of transient data in the
main database.
This code doesn't yet contain encryption itself.
Change-Id: I2efae102a89b851d33342a0106f8d8b3f35119bb
Use a field to distinguish migration steps that need to use a
different transaction from previous steps. This is clearer than
using a func.
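An illustrative shape for such a step (field names are an assumption,
shown only to convey the idea; the real migrate package differs in
detail):

    // Step sketches a migration step description.
    type Step struct {
        Description string
        Version     int
        // SeparateTx marks that this step must run in a new transaction,
        // separate from the steps before it (previously signalled via a
        // func).
        SeparateTx bool
        SQL        []string
    }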
Change-Id: I2147369d05413f3e8ddb50c71a46ab1ba3ab5114
Jira: https://storjlabs.atlassian.net/browse/USR-821
The `migrate-credits` billing command checks the available credits
balance for all users and moves it to the Stripe balance by creating a
new credit balance transaction.
Change-Id: Iafc7b95a4edad47f7c145a22e210f8c821ac183d