Commit Graph

73 Commits

Author SHA1 Message Date
Michal Niewrzal
3b6e1123b8 satellite/orders: fix sorting rollups before inserting
This fixes sorting by primary key before inserting data into the DB.
Previously we sorted the input slice of BucketBandwidthRollup, but then
we put all entries into a map to roll up the input data. Iterating over
a map with a range loop doesn't guarantee any specific order, so we
lost the sorted order when building the slices for the DB insert from
that map.

The new code still uses a map, but when the map is full it sorts the
map keys separately and iterates over them to read the data from the map.
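
A minimal sketch of that sorted-keys approach; the key type below is a hypothetical stand-in for the real BucketBandwidthRollup aggregation key, not the satellite's own type:

```go
// Illustrative: aggregate into a map, then emit entries in
// primary-key order instead of ranging over the map directly.
package example

import "sort"

type rollupKey struct {
	ProjectID  string
	BucketName string
}

func sortedKeys(totals map[rollupKey]int64) []rollupKey {
	keys := make([]rollupKey, 0, len(totals))
	for k := range totals {
		keys = append(keys, k)
	}
	// Sort the keys separately so the rows handed to the DB insert
	// arrive in a deterministic, primary-key order.
	sort.Slice(keys, func(i, j int) bool {
		if keys[i].ProjectID != keys[j].ProjectID {
			return keys[i].ProjectID < keys[j].ProjectID
		}
		return keys[i].BucketName < keys[j].BucketName
	})
	return keys
}
```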

https://github.com/storj/storj/issues/5332

Change-Id: I5bf09489b0eecb6858bf854ab387b660124bf53f
2023-02-01 12:17:25 +00:00
Michal Niewrzal
a2a9dafa33 satellite/orders: don't store allocated bandwidth in bucket_bandwidth_rollups table

We have performance problems with updating bucket_bandwidth_rollups. To
improve the situation we can stop storing allocated bandwidth in this table.
This should reduce the large number of updates coming from
metainfo endpoints, repair workers, and audit.

Next step will be to drop `allocated` column completely from
bucket_bandwidth_rollups.

Allocated GET bandwidth is all we need, and we are keeping it in the
bucket_bandwidth_rollups table.

Change-Id: Ifdd26a89ba8262acbca6d794a6c02883ad0c0c9b
2023-01-12 13:21:02 +00:00
Fadila Khadar
ffcd54719a satellite/satellitedb: use tx instead of db.db in transactions
Change-Id: Icee8871af76233651f007e03173660905eafc0e3
2022-06-14 17:35:44 +00:00
dlamarmorgan
ab37b65cfc satellite/{accounting,orders,satellitedb}: group bucket bandwidth rollups by time window
Batching of the order submissions can lead to combining the allocated
traffic totals for two completely different time windows, resulting
in incorrect customer accounting. This change will group the batched
order submissions by projectID as well as time window, leading to
distinct updates of a bucket's bandwidth rollup based on the hour
window in which the order was created.
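
A sketch of the grouping idea, with hypothetical types rather than the satellite's own: the map key includes the hour window so two orders from different windows never collapse into one rollup update.

```go
// Illustrative: group batched order submissions by project, bucket,
// and the hour window in which the order was created.
package example

import "time"

type order struct {
	ProjectID  string
	BucketName string
	CreatedAt  time.Time
	Settled    int64
}

type rollupKey struct {
	ProjectID     string
	BucketName    string
	IntervalStart time.Time // hour window of the order's creation time
}

func groupByWindow(orders []order) map[rollupKey]int64 {
	grouped := make(map[rollupKey]int64)
	for _, o := range orders {
		key := rollupKey{
			ProjectID:     o.ProjectID,
			BucketName:    o.BucketName,
			IntervalStart: o.CreatedAt.Truncate(time.Hour),
		}
		grouped[key] += o.Settled
	}
	return grouped
}
```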

Change-Id: Ifb4d67923eec8a533b9758379914f17ff7abea32
2022-01-05 20:24:48 +00:00
Fadila Khadar
f77f61532a satellite/orders: use egress_dead for calculating allocated bandwidth
Change-Id: I8240d99d0cdaad4c5e059565e88ee9619b62526e
2021-10-11 14:58:26 +02:00
Fadila Khadar
fb0d055a41 satellites/orders: populate egress_dead in project_bandwidth_daily_rollups
Populate the egress_dead column to account for allocated bandwidth that can be removed because orders have been sent by the storage nodes. The bandwidth not used in these orders can be allocated again.
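
As a rough illustration of the idea (hypothetical names, not the satellite's actual formula): the part of an allocation that a settled order did not use is "dead" and can be given back to the project.

```go
// Hypothetical illustration: the unused portion of a settled
// allocation is dead bandwidth that can be allocated again.
func deadBandwidth(allocated, settled int64) int64 {
	dead := allocated - settled
	if dead < 0 {
		dead = 0
	}
	return dead
}
```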

Change-Id: I78c333a03945cd7330aec052edd3562ec671118e
2021-10-06 16:54:49 +00:00
Michał Niewrzał
d987990c15 satellite/satellitedb: removing usage of project_bandwidth_rollups table
We are not using this table, so it makes no sense to put data there.
This change removes only the code that uses this table. Before the next
release we need to drop the table with a migration step.

Change-Id: I80f400aa778c717e70324bd00da502b7032c9d9f
2021-06-02 05:58:38 +00:00
Michał Niewrzał
59eabcca24 satellite/orders: populate project_bandwidth_daily_rollups table

We want to calculate used bandwidth more accurately, so we need to
calculate it from allocated and settled bandwidth. To do this we first
need to populate this new table.

https://storjlabs.atlassian.net/browse/PG-56

Change-Id: I308b737bf08ee48ce4e46a3605697ab2095f7257
2021-05-25 18:07:22 +00:00
Egon Elbre
10372afbe4 ci: fix lint errors
Change-Id: Ib5893440807811f77175ccd347aa3f8ca9cccbdf
2021-05-17 13:37:31 +00:00
Egon Elbre
a2e20c93ae private/dbutil: use dbutil and tagsql from storj.io/private
Initially we duplicated the code to avoid large-scale changes to
the packages. Now that we are past the metainfo refactor we can remove
the duplication.

Change-Id: I9d0b2756cc6e2a2f4d576afa408a15273a7e1cef
2021-04-23 14:36:52 +03:00
Ivan Fraixedes
d93944c57b satellite/orders: Delete unused methods & DB tables
Delete satellite order methods and DB tables which aren't used anymore
after we refactored the orders to store bucket
information in the orders' encrypted metadata.

There are also configuration parameters and a satellite chore that
aren't needed anymore after the orders refactoring.

Change-Id: Ida3682b95921df70792284b42c96d2508bf8ca9c
2021-02-01 18:01:29 +00:00
Jessica Grebenschikov
da0327c9b7 satellite/dbcleanup: remove expired serial chore
Change-Id: Ib71d41eb6679d6435e5bc10b6244dac66380a74e
2020-12-18 09:36:28 -08:00
Egon Elbre
55d5e1fd7d satellite/orders: ensure that expired deletion doesn't stall
Add checks to ensure that when somebody uses empty options, the deletion
doesn't loop infinitely.

Change-Id: I1738fb1e7e1f8efbbb954c491cb6489f7bcdc2db
2020-11-23 14:52:40 +02:00
Ethan
2b92bba563 satellite/satellitedb/orders: Handle serial_numbers deletes in smaller increments on CRDB
CRDB doesn't like large deletes. While testing in the POC environment we found that deletes on the serial_numbers table could take hours.  This change limits deletes to 1000 at a time (configurable) to avoid blocking other queries.
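
A hedged sketch of the batched-delete shape, assuming CockroachDB's DELETE ... LIMIT syntax; table name, column, and default batch size are illustrative rather than the satellite's actual query:

```go
// Illustrative: delete expired serial numbers in bounded batches so a
// single statement never blocks other queries on CockroachDB.
package example

import (
	"context"
	"database/sql"
	"time"
)

func deleteExpiredSerials(ctx context.Context, db *sql.DB, now time.Time, batchSize int) error {
	if batchSize <= 0 {
		batchSize = 1000 // configurable in the real code
	}
	for {
		res, err := db.ExecContext(ctx,
			`DELETE FROM serial_numbers WHERE expires_at < $1 LIMIT $2`,
			now, batchSize)
		if err != nil {
			return err
		}
		affected, err := res.RowsAffected()
		if err != nil {
			return err
		}
		if affected < int64(batchSize) {
			return nil // nothing (or less than a full batch) left to delete
		}
	}
}
```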

Change-Id: I08455e25db1574579dd4d7b7125a08e9c913dff1
2020-11-20 13:44:52 +00:00
Moby von Briesen
a8b66dce17 satellite/accounting: account for old orders that can be submitted in satellite rollup
With the new phase 3 order submission, orders can be added to the
storage and bandwidth rollup tables at timestamps before the most recent
rollup was run. This change shifts the start time of each new rollup
window to account for any unexpired orders that might have been added
since the previous rollup.

A satellitedb migration is necessary to allow upserts in the
accounting_rollups table when entries with identical node_ids and
start_times are inserted.

Change-Id: Ib3022081f4d6be60cfec8430b45867ad3c01da63
2020-11-18 14:46:00 -05:00
Jessica Grebenschikov
f558cc825e satellite/orders: add storagenode_bw_phase2 table and dont delete tallies for longer
It turns out we need to make 2 more changes in order for the new order submission phase 3 to get deployed.

This PR makes 2 changes:
1) when the rollup service deletes tallies, we now keep tallies around until orders expire (vs 1 day like before).
2) the reported rollup chore will now write the storagenode_bandwidth_rollups to a new table _phase2 as an intermediary step so it doesn't conflict with phase 3 order settlement.

These changes need to be deployed for 2 days before we can turn on phase 3 of the new orders settlement workflow.

Change-Id: Iafbff577ba7d55f8f17b7db857311b2ce799de60
2020-11-13 17:15:24 +00:00
Jeff Wendling
0f0faf0a9f satellite/orders: do a better job limiting concurrent requests
Doing it at the ProcessOrders level was insufficient: the endpoints
make multiple database calls. It was a misguided attempt to only
have one spot enter the semaphore. By putting it in the endpoint
we can not only be sure that the concurrency is correctly limited
but it can also be configured easily.
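
A minimal sketch of the endpoint-level limit using a buffered-channel semaphore; the names are hypothetical, not the actual orders endpoint, but the shape is the point: every database call made while serving the request happens under the same acquired slot.

```go
// Illustrative: limit concurrent order processing at the endpoint.
package example

import "context"

type endpoint struct {
	sem chan struct{} // capacity = max concurrent requests
}

func newEndpoint(maxConcurrent int) *endpoint {
	return &endpoint{sem: make(chan struct{}, maxConcurrent)}
}

func (e *endpoint) settleOrders(ctx context.Context, process func(context.Context) error) error {
	select {
	case e.sem <- struct{}{}: // acquire a slot
	case <-ctx.Done():
		return ctx.Err()
	}
	defer func() { <-e.sem }() // release the slot
	// All database calls for this request run under the one slot,
	// so the whole request is limited, not a single DB call.
	return process(ctx)
}
```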

Change-Id: I937149dd077adf9eb87fce52a1a17dc0afe96f64
2020-10-09 16:27:15 -04:00
Jeff Wendling
7c303208ff satellite/satellitedb: emergency temporary order processing semaphore
we have thundering herds of order submissions that take all of the
database connections, causing temporary periodic outages. limit
the amount of concurrent order processing to 2.

Change-Id: If3f86cdbd21085a4414c2ff17d9ef6d8839a6c2b
2020-10-08 19:16:47 +00:00
Egon Elbre
c23a8e3b81 go.mod: update pgx to v4.9.0
Fix query to use TextArray instead of VarcharArray.
Fix queries to use the correct type.

Change-Id: Ibb7e55adba277d05778118d81ca697470e72c374
2020-09-29 19:03:08 +00:00
Egon Elbre
94a09ce20b all: add missing dots
Change-Id: I93b86c9fb3398c5d3c9121b8859dad1c615fa23a
2020-08-11 17:50:01 +03:00
stefanbenten
257855b5de all: replace == comparison with errors.Is
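
For illustration, the standard-library idiom this refers to: a direct == comparison misses wrapped errors, while errors.Is unwraps them.

```go
package example

import (
	"database/sql"
	"errors"
)

func isNotFound(err error) bool {
	// Before: err == sql.ErrNoRows (misses wrapped errors).
	return errors.Is(err, sql.ErrNoRows)
}
```
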
Change-Id: I05d9a369c7c6f144b94a4c524e8aea18eb9cb714
2020-07-14 15:50:25 +00:00
Jessica Grebenschikov
8abb907010 satellite/orders: add settle orders with window
Why: We need a way to cut down on database traffic due to bandwidth
measurement and tracking.

What: This changeset is the Satellite side of settling orders in 1 hr windows.
See design doc for more details: https://review.dev.storj.io/c/storj/storj/+/1732

Change-Id: I2e1c151e2e65516ebe1b7f47b7c5f83a3a220b31
2020-07-13 15:41:29 -07:00
paul cannon
bbdb351e5e all: use jackc/pgx in place of lib/pq
What:

Use the github.com/jackc/pgx postgresql driver in place of
github.com/lib/pq.

Why:

github.com/lib/pq has some problems with error handling and context
cancellations (i.e. it might even issue queries or DML statements more
than once! see https://github.com/lib/pq/issues/939). The
github.com/jackc/pgx library appears not to have these problems, and
also appears to be better engineered and implemented (in particular, it
doesn't use "exceptions by panic"). It should also give us some
performance improvements in some cases, and even more so if we can use
it directly instead of going through the database/sql layer.

Change-Id: Ia696d220f340a097dee9550a312d37de14ed2044
2020-07-13 15:54:41 +00:00
Egon Elbre
36c461bd59 private/tagsql: track proper closing of rows and statements
This ensures that rows are closed to avoid leaks.
It also verifies that Err() is called, to ensure that no
error is left behind.
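
A sketch of the pattern being enforced, with an illustrative query: always close the rows and check rows.Err() so no iteration error is dropped.

```go
package example

import (
	"context"
	"database/sql"
)

func bucketNames(ctx context.Context, db *sql.DB) (names []string, err error) {
	rows, err := db.QueryContext(ctx, `SELECT name FROM bucket_metainfos`)
	if err != nil {
		return nil, err
	}
	defer func() {
		// Close the rows and surface the close error if nothing else failed.
		if closeErr := rows.Close(); closeErr != nil && err == nil {
			err = closeErr
		}
	}()
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return nil, err
		}
		names = append(names, name)
	}
	// Err reports any error encountered during iteration.
	return names, rows.Err()
}
```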

Change-Id: Idd1bec9bf479f40021da67b2c80ce83033149469
2020-06-05 18:25:43 +00:00
Jeff Wendling
2b3545c0c3 satellite/satellitedb: use delete returning to query pending_serial_queue
this way we don't have to do 2 steps, and by using the ctid, postgres
is going to do two very efficient prefix scans.
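
A hedged illustration of the single-statement shape this describes; the exact query, the use of ctid, and the column list in the real code may differ:

```go
// Illustrative only: consume a batch from the queue with one
// DELETE ... RETURNING instead of a SELECT followed by a DELETE.
const dequeuePendingSerials = `
	DELETE FROM pending_serial_queue
	WHERE ctid IN (
		SELECT ctid FROM pending_serial_queue LIMIT $1
	)
	RETURNING storage_node_id, bucket_id, serial_number, settled
`
```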

Change-Id: Ia9d0546cdf0a1af67ceec9cd508d336a5fdcbdb9
2020-06-01 15:43:33 -06:00
Jeff Wendling
44433f38be satellite/satellitedb: remove ORDER BY when reading from queue
also remove the continuation support from the queue, otherwise
we may end up sequential scanning the entire table to get
a few rows at the end.

then, in the core, instead of looping both to get a big enough
batch inside of the queue, as well as outside of it to ensure
we consume the whole queue, just get a single batch at a time.

also, make the queue size configurable because we'll need to
do some tuning in production.

Change-Id: If1a997c6012898056ace89366a847c4cb141a025
2020-06-01 18:31:14 +00:00
Ethan
acf53bea4d satellite/orders;accounting: Add monthly project download bandwidth rollup
See https://storjlabs.atlassian.net/browse/SM-776

Change-Id: Ifd5cccea43c556fd59822d17344f399cfe9a7164
2020-05-04 15:49:57 +00:00
Egon Elbre
0a69da4ff1 all: switch to storj.io/common/uuid
Change-Id: I178a0a8dac691e57bce317b91411292fb3c40c9f
2020-03-31 19:16:41 +03:00
Jeff Wendling
115f4559e5 satellite/orders: more efficient processing of orders
by doing an indexed anti-join we're able to reduce the time to
select the pending orders by over 10x on postgres. this should
help us process pending orders much more quickly.

it probably won't do as good a job on cockroach because it does
not do an indexed anti-join and instead does a hash join after
scanning the entire consumed serials table. we should either
remove orders entirely or try to make that more efficient
when necessary.
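
a hedged sketch of the anti-join shape described above: select pending serials that have no matching row in the consumed table, which postgres can execute as an indexed anti-join. table and column names are illustrative.

```go
const selectUnconsumed = `
	SELECT p.storage_node_id, p.serial_number, p.settled
	FROM pending_serial_queue p
	WHERE NOT EXISTS (
		SELECT 1 FROM consumed_serials c
		WHERE c.storage_node_id = p.storage_node_id
		  AND c.serial_number = p.serial_number
	)
`
```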

Change-Id: I8ca0535acd21c51e74955b24c9b86d20e4f2ff9c
2020-03-18 09:03:30 +00:00
Jeff Wendling
7baa59753a satellite/orders: add tests for double sending the same order
Change-Id: If2fa7f035257df3b04f506f81aa8b2e0916f5033
2020-03-17 14:18:03 +00:00
Egon Elbre
5f2ca0338b satellite/satellitedb: fix err and close order
Change-Id: Ied927275853c4cf4a8ccb500048d50545f6c6efe
2020-03-04 09:05:22 +00:00
Jeff Wendling
f671eb2beb satellite/satellitedb: use queue for orders to get back fast billing
This change adds two new tables to process orders as fast as we used
to but in an asynchronous manner and with hopefully less storage
usage. This should help scale on cockroach, but limits us to one
worker. It lays the groundwork for the order processing pipeline to
be queue rather than database driven.

For more details, see the added fast billing changes blueprint.

It also fixes the orders db so that all the timestamps that are
passed to columns that do not contain a time zone are converted to
UTC at the last possible opportunity, making it less likely to use
the APIs incorrectly. We really should migrate to include timezones
on all of our timestamp columns.
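
A sketch of the convention described above, with an illustrative table and column list: normalize to UTC at the last possible moment, as the value is handed to a column without a time zone.

```go
package example

import (
	"context"
	"database/sql"
	"time"
)

func insertRollup(ctx context.Context, db *sql.DB, intervalStart time.Time, settled int64) error {
	_, err := db.ExecContext(ctx,
		`INSERT INTO bucket_bandwidth_rollups (interval_start, settled) VALUES ($1, $2)`,
		intervalStart.UTC(), // convert to UTC at the last opportunity
		settled)
	return err
}
```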

Change-Id: Ibfda8e7a3d5972b7798fb61b31ff56419c64ea35
2020-02-24 17:07:07 +00:00
Ivan Fraixedes
1a84a00cc9 satellite/orders: Fix doc comments
Enhance the documentation of the UseSerialNumber method (interface and
implementation) and add several missing dots in doc comments of the
methods of the same interface and implementation.

Change-Id: I792cd344f0d2542e060fa2ec288b71231cae69de
2020-02-18 13:03:23 +01:00
Egon Elbre
97d360afd1 satellite/satellitedb: use correct type
The array was using a smaller integer type.

Change-Id: I025d61b6cea9869efa0b4ac1d24265356491f6dc
2020-01-31 13:00:14 -05:00
Egon Elbre
227e03dea1 satellite/satellitedb: insert using arrays
Using dynamic query strings is error prone; prefer arrays.
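
A hedged sketch of the array-based insert: pass one array parameter per column and let unnest expand them into rows, instead of building a dynamic VALUES list. The table, columns, and array types are illustrative.

```go
const insertRollupsByArray = `
	INSERT INTO bucket_bandwidth_rollups (bucket_name, interval_start, settled)
	SELECT * FROM unnest($1::bytea[], $2::timestamptz[], $3::int8[])
`
```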

Change-Id: I303fbf21c6a54795bd9f399371943b5c51e6f863
2020-01-27 21:27:28 +00:00
paul cannon
a0a94a9ac7 satellite/satellitedb: insert into reported_serials w/ arrays
Change-Id: Icb682de09ded3e3159e3590594dcf13f2e7f40f0
2020-01-24 18:36:21 -06:00
Cameron Ayer
494fead7af satellitedb/orders: fix comma bug in SQL stmt
Change-Id: Ibc6024eeeb5aa4de3909c0cec2d01ac0a01c809f
2020-01-24 13:58:32 -05:00
Jeff Wendling
665ed3b6b1 satellite/satellitedb: fix issue with shared memory on range for bucket rollups
A uuid.UUID is an array of bytes, and slicing it refers to the
underlying value, much like taking the address. Because range
in Go reuses the same value for every loop iteration, this means
that later iterations would overwrite earlier stored project
ids. We fix that by making a copy of the value before slicing it
for every loop iteration.
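
A minimal reproduction of the gotcha, under the loop-variable semantics in effect at the time (before Go 1.22); the struct is a hypothetical stand-in for the rollup type:

```go
package main

import "fmt"

type rollup struct {
	ProjectID [16]byte
}

func main() {
	rollups := []rollup{{ProjectID: [16]byte{1}}, {ProjectID: [16]byte{2}}}

	var buggy [][]byte
	for _, r := range rollups {
		buggy = append(buggy, r.ProjectID[:]) // every slice aliases the reused loop variable
	}

	var fixed [][]byte
	for _, r := range rollups {
		id := r.ProjectID // copy the array before slicing it
		fixed = append(fixed, id[:])
	}

	fmt.Println(buggy[0][0], fixed[0][0]) // old semantics: 2 1; the fixed slice always prints 1
}
```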

Change-Id: Iae3f11138d11a176ce360bd5af2244307c74fdad
2020-01-23 21:57:02 -07:00
Jeff Wendling
75314a4364 satellite/satellitedb: fix roundToNextDay to handle timezones appropriately
Since incoming times may be in any time zone, and we want the output
to be in UTC with 00:00:00 hours, minutes, and seconds, we first
convert the incoming timestamp to UTC before truncating to the day
and adding a day.

Because the old code always returned a timestamp that was in the
future, this change is just for efficiency rather than correctness.
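
A sketch of the behavior described (the real helper may differ): normalize to UTC first, then truncate to the day and add a day.

```go
package example

import "time"

func roundToNextDay(t time.Time) time.Time {
	t = t.UTC()
	return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, time.UTC).AddDate(0, 0, 1)
}
```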

Change-Id: Ie692d47bca8691e73852c822d5c56cf8773d99b4
2020-01-21 21:02:16 +00:00
Egon Elbre
1abfe42142 satellite: use tagsql
Change-Id: I2170dee409fb0c2fe85913ddd36e7811a3b853ed
2020-01-19 14:39:16 +02:00
Jessica Grebenschikov
955abd9293 satellite/satellitedb/orders: add multi row upserts to process orders
Change-Id: I00d8b55ee74b443fb328bd3a4378308cefa368e4
2020-01-16 23:51:46 +00:00
Jeff Wendling
696d98a232 satellite/satellitedb: fix nitpicks and timestamp issue found in review
warning: databases migrated to version 77 before this commit
is merged must be manually re-migrated. this should not be a
problem for anything but staging databases.

Change-Id: Ie1631c48379472352014183ee43f1465e22200f7
2020-01-16 21:22:38 +00:00
Jeff Wendling
78c6d5bb32 satellite/satellitedb: reported_serials table for processing orders
this commit introduces the reported_serials table. its purpose is
to allow for blind writes into it as nodes report in so that we have
minimal contention. in order to continue to accurately account for
used bandwidth, though, we cannot immediately add the settled amount.
if we did, we would have to give up on blind writes.

the table's primary key is structured precisely so that we can quickly
find expired orders and so that we maximally benefit from rocksdb
path prefix compression. we do this by rounding the expires at time
forward to the next day, effectively giving us storagenode petnames
for free. and since there's no secondary index or foreign key
constraints, this design should use significantly less space than
the current used_serials table while also reducing contention.

after inserting the orders into the table, we have a chore that
periodically consumes all of the expired orders in it and inserts
them into the existing rollups tables. this is as if we changed
the nodes to report as the order expired rather than as soon as
possible, so the belief in correctness of the refactor is higher.

since we are able to process large batches of orders (typically
a day's worth), we can use the code to maximally batch inserts into
the rollup tables to make inserts as friendly as possible to
cockroach.
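
a hedged sketch of the chore loop described above: consume a large batch of expired reported serials, aggregate in memory, and write the totals to the rollup tables in one batched insert. all names and the store interface are illustrative, not the satellite's actual code.

```go
package example

import (
	"context"
	"time"
)

type reportedSerial struct {
	NodeID    string
	BucketID  string
	Settled   int64
	ExpiresAt time.Time
}

type rollupStore interface {
	ConsumeExpired(ctx context.Context, now time.Time, limit int) ([]reportedSerial, error)
	InsertRollups(ctx context.Context, totals map[string]int64) error
}

func runOnce(ctx context.Context, store rollupStore, now time.Time, batchSize int) error {
	for {
		expired, err := store.ConsumeExpired(ctx, now, batchSize)
		if err != nil {
			return err
		}
		if len(expired) == 0 {
			return nil
		}
		// aggregate a full batch before touching the rollup tables so the
		// inserts stay friendly to cockroach.
		totals := make(map[string]int64)
		for _, s := range expired {
			totals[s.NodeID] += s.Settled
		}
		if err := store.InsertRollups(ctx, totals); err != nil {
			return err
		}
	}
}
```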

Change-Id: I25d609ca2679b8331979184f16c6d46d4f74c1a6
2020-01-15 19:21:21 -07:00
Jeff Wendling
9da16b1d9e satellite/satellitedb/dbx: name the package dbx
everyone was importing it as dbx anyway. why should it be
named satellitedb? so yeah just pass the "-p dbx" flag.

Change-Id: I5efa669f4f00f196b38a9acd0d402009475a936f
2020-01-15 15:16:39 -07:00
Egon Elbre
64fb2d3d2f Revert "dbutil: statically require all database accesses to use contexts"
This reverts commit 8e242cd012.

Revert because lib/pq has known issues with context cancellation.
These issues need to be resolved before these changes can be merged.

Change-Id: I160af51dbc2d67c5449aafa406a403e5367bb555
2020-01-15 07:28:00 +00:00
JT Olio
8e242cd012 dbutil: statically require all database accesses to use contexts
this will allow for some nice runtime analysis down the road.
also, this allows for wrapping database handles in a way that
can interact with these contexts

requires https://review.dev.storj.io/c/storj/dbx/+/514

Change-Id: Ib087b7cd73296dd2c1e0331314da34d861f61d2b
2020-01-14 18:20:47 -05:00
Jeff Wendling
3b99f03780 satellite/orders: add monitoring to bucket bandwidth cache operations
Change-Id: Ib14303fc9f97a133410e2d6e2cf532e468b3dcee
2020-01-13 17:36:40 -07:00
Isaac Hess
4950d7106a satellite/orders: Add write cache for bw rollups
Change-Id: I8ba454cb2ab4742cafd6ed09120e4240874831fc
2020-01-13 22:40:51 +00:00
Egon Elbre
ff267168c5 private/migrate: add ctx argument
Change-Id: I3d65912d89261386413c494c7ed1576fed4dcaf4
2020-01-13 15:52:26 +02:00
littleskunk
bcc23f6869 Satellite/orders: remove allocated bandwidth from storagenode_bandwidth_rollups
When an uplink requests an upload or download from the satellite, we are tracking the
allocated bandwidth twice. The value in bucket_bandwidth_rollups is used
for project limits but the value in storagenode_bandwidth_rollups is not
used at all. We can increase the performance by removing it. Uplinks
will get a faster response from the satellite.

Change-Id: Icccd41f94107ef34668f30f99bf5f728c384b07e
2020-01-12 16:20:47 +01:00