Why: We need a way to cut down on database traffic due to bandwidth
measurement and tracking.
What: This changeset is the Satellite side of settling orders in 1 hr windows.
See design doc for more details: https://review.dev.storj.io/c/storj/storj/+/1732
Change-Id: I2e1c151e2e65516ebe1b7f47b7c5f83a3a220b31
What:
Use the github.com/jackc/pgx postgresql driver in place of
github.com/lib/pq.
Why:
github.com/lib/pq has some problems with error handling and context
cancellations (i.e. it might even issue queries or DML statements more
than once! see https://github.com/lib/pq/issues/939). The
github.com/jackc/pgx library appears not to have these problems, and
also appears to be better engineered and implemented (in particular, it
doesn't use "exceptions by panic"). It should also give us some
performance improvements in some cases, and even more so if we can use
it directly instead of going through the database/sql layer.
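For illustration, a minimal sketch of swapping drivers while staying on
the database/sql layer (the DSN and the major-version import path are
assumptions, not the actual wiring):

    package main

    import (
        "database/sql"
        "log"

        // Importing the stdlib adapter registers the "pgx" driver name
        // with database/sql, making it a drop-in for lib/pq here.
        _ "github.com/jackc/pgx/v4/stdlib"
    )

    func main() {
        db, err := sql.Open("pgx", "postgres://user:pass@localhost/teststorj?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
    }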
Change-Id: Ia696d220f340a097dee9550a312d37de14ed2044
STORJ_POSTGRES_TEST naming was not consistent with STORJ_SIM_POSTGRES.
This allows using STORJ_TEST_POSTGRES for clarity; it still falls
back to STORJ_POSTGRES_TEST.
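A minimal sketch of the fallback lookup (the helper name and package
are hypothetical):

    package pgtest

    import "os"

    // postgresURL prefers the new variable and falls back to the old one.
    func postgresURL() (string, bool) {
        if v, ok := os.LookupEnv("STORJ_TEST_POSTGRES"); ok {
            return v, true
        }
        return os.LookupEnv("STORJ_POSTGRES_TEST")
    }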
Change-Id: I6f294c66c80fcfd6750fea2a89795f3b7f5dd691
This system tracks an abstract "api version" from nodes based on
their usage, allowing us to have latching behavior where if a node
ever uses a new api, it can be blocked from using the old api.
This is better than using self-reported semver version information
because the node cannot lie, there's no confusion about what semver
version implies which features, no questions about dev and ci
environments, and no dependencies between reporting the version
and using the new api.
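A rough sketch of the latching behavior (types and names here are
hypothetical, not the actual interface):

    package nodeapiversion

    import "errors"

    type Version int

    // Info remembers the highest api version a node has ever used.
    type Info struct{ Have Version }

    // UpdateVersionAtLeast only moves forward: once a node has used a
    // new api, that fact is latched permanently.
    func (i *Info) UpdateVersionAtLeast(v Version) {
        if v > i.Have {
            i.Have = v
        }
    }

    // RequireVersionBelow blocks a node from an old api if it has
    // already used a newer one.
    func (i *Info) RequireVersionBelow(max Version) error {
        if i.Have > max {
            return errors.New("node has already used a newer api")
        }
        return nil
    }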
Change-Id: Ifeced5c9ae8e0a16102d79635e176a7d3bdd8ed4
Apply the coin payments when CoinPayments.net receives the funds,
instead of when STORJ gets them from CoinPayments.net.
Based on 7/1/20 User Growth standup guidance by JG
Relates to: https://storjlabs.atlassian.net/browse/USR-801
Change-Id: I174ca23a585010f39464c45525e1dfe0179b7c1a
Since we increased the number of concurrent audit workers to two, there are going
to be instances of a single node being audited simultaneously for different segments.
If the node times out for both, we will try to write them both to the pending audits
table, and the second will return an error since the path is not the same as what
already exists. Since with concurrent workers this is expected, we will log the
occurrence rather than return an error.
Since the release default audit concurrency is 2, update testplanet default to run with
concurrent workers as well.
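Schematically, the write becomes tolerant of the expected conflict
(the error class and surrounding shape here are assumptions):

    package audit

    import (
        "context"
        "errors"

        "go.uber.org/zap"
    )

    // errAlreadyExists stands in for the "pending audit exists with a
    // different path" error from the pending audits table.
    var errAlreadyExists = errors.New("pending audit already exists")

    func writePendingAudit(ctx context.Context, log *zap.Logger, insert func(context.Context) error) error {
        err := insert(ctx)
        if errors.Is(err, errAlreadyExists) {
            // Expected when two concurrent workers time out auditing
            // the same node on different segments: log, don't fail.
            log.Info("pending audit already exists for node")
            return nil
        }
        return err
    }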
Change-Id: I4e657693fa3e825713a219af3835ae287bb062cb
Use a field to distinguish migration steps that need to use a
different transaction from previous steps. This is clearer than
using a func.
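The step definition gains a flag, roughly like this (the exact struct
is a sketch, not the real one):

    package migrate

    // Step is a single schema migration step.
    type Step struct {
        Version int
        SQL     []string
        // SeparateTx marks a step that must run in its own
        // transaction, not sharing one with the previous steps; the
        // intent is readable at the declaration site instead of being
        // buried inside a func.
        SeparateTx bool
    }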
Change-Id: I2147369d05413f3e8ddb50c71a46ab1ba3ab5114
When a request comes in on the satellite api and we validate the
macaroon, we now also check if any of the macaroon's tails have been
revoked.
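Schematically (the checker interface is hypothetical):

    package console

    import (
        "context"
        "errors"
    )

    // RevocationChecker reports whether any of a macaroon's tails
    // appears in the revocations table.
    type RevocationChecker interface {
        Check(ctx context.Context, tails [][]byte) (bool, error)
    }

    func validateNotRevoked(ctx context.Context, rc RevocationChecker, tails [][]byte) error {
        revoked, err := rc.Check(ctx, tails)
        if err != nil {
            return err
        }
        if revoked {
            return errors.New("macaroon has been revoked")
        }
        return nil
    }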
Change-Id: I80ce4312602baf431cfa1b1285f79bed88bb4497
add new columns `offline_suspended` and `under_review` to nodes table.
`unknown_audit_suspended` is a new column which will replace `suspended`
Change-Id: I22ddeb338ea0ff63f14332a7ebd0f3e9e4c06cdc
We should not be sending any type of orders to nodes that have completed
graceful exit with the current satellite. In particular, we should not
be trying to audit them, because that would be silly.
Change-Id: Ie2153e5739914ab696feefcdef28545ed70f84e4
Since we increased the number of audit workers from 1 to 2, we need to make sure
concurrent updates do not trample each other. We can do this by serializing the
transactions.
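With database/sql this looks like the following (serializable
transactions can fail with serialization errors and may need a retry
loop, omitted here):

    package audit

    import (
        "context"
        "database/sql"
    )

    // withSerializableTx runs fn at SERIALIZABLE isolation so that
    // concurrent updates conflict instead of silently trampling each
    // other.
    func withSerializableTx(ctx context.Context, db *sql.DB, fn func(*sql.Tx) error) error {
        tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
        if err != nil {
            return err
        }
        if err := fn(tx); err != nil {
            _ = tx.Rollback()
            return err
        }
        return tx.Commit()
    }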
Change-Id: If1b2f71cabe3c779c12ffa33c0c3271778ac3ae0
This ensures that rows are closed to avoid leaks.
It also verifies that Err() is called, so that no error is left
behind.
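The pattern being enforced is the standard one:

    package db

    import (
        "context"
        "database/sql"
    )

    func nodeIDs(ctx context.Context, db *sql.DB) (ids []string, err error) {
        rows, err := db.QueryContext(ctx, `SELECT id FROM nodes`)
        if err != nil {
            return nil, err
        }
        defer rows.Close() // release the rows even on early return

        for rows.Next() {
            var id string
            if err := rows.Scan(&id); err != nil {
                return nil, err
            }
            ids = append(ids, id)
        }
        // rows.Err surfaces any error that ended iteration early;
        // dropping it is exactly the silent failure guarded against.
        return ids, rows.Err()
    }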
Change-Id: Idd1bec9bf479f40021da67b2c80ce83033149469
The DB query in GetAllocatedBandwidthTotal uses an exclusive range:
'WHERE interval_start > ?'
The value used for this condition is the first day of the current
month, 00:00:00 UTC.
By using the exclusive '>', we exclude the entire first hour of the month from the
result set.
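The fix is to make the comparison inclusive:
'WHERE interval_start >= ?'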
Change-Id: I3ed300f5230c7514dc9495a85e8166213cd0842e
this way we don't have to do 2 steps, and by using the ctid, postgres
is going to do two very efficient prefix scans.
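For example, a single statement in this style replaces the
select-then-delete pair (the table, columns, and exact query here are
placeholders, not the actual one):

    package queue

    // popBatch pops a batch in one statement instead of a select
    // followed by a delete.
    const popBatch = `
        DELETE FROM injuredsegments
        WHERE ctid = ANY (ARRAY(
            SELECT ctid FROM injuredsegments ORDER BY ctid LIMIT $1
        ))
        RETURNING data
    `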
Change-Id: Ia9d0546cdf0a1af67ceec9cd508d336a5fdcbdb9
also remove the continuation support from the queue, otherwise
we may end up sequentially scanning the entire table to get
a few rows at the end.
then, in the core, instead of looping both inside the queue (to get
a big enough batch) and outside of it (to ensure we consume the
whole queue), just get a single batch at a time.
also, make the queue size configurable because we'll need to
do some tuning in production.
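the consuming side in the core then reduces to a single loop,
schematically (interfaces and names are illustrative):

    package checker

    import "context"

    type Item struct{ Data []byte }

    type Queue interface {
        // GetNextBatch returns up to size items, or an empty slice
        // once the queue has been drained.
        GetNextBatch(ctx context.Context, size int) ([]Item, error)
    }

    func consume(ctx context.Context, q Queue, size int, handle func(Item)) error {
        for {
            batch, err := q.GetNextBatch(ctx, size)
            if err != nil {
                return err
            }
            if len(batch) == 0 {
                return nil // whole queue consumed
            }
            for _, item := range batch {
                handle(item)
            }
        }
    }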
Change-Id: If1a997c6012898056ace89366a847c4cb141a025
Jaeger shows that this function gets called repeatedly within a
single request. Most of the time it takes less than 1ms, so it
doesn't add much value to our traces but creates noise.
Change-Id: I20234f36bbcf0fc22f91e5e1a5634c0cad577ed0
This change removes ProjectID from the code. The next change will
drop this column from the DB table.
Change-Id: Idb949e2829e2c304a2b6b011259c7cc7667082e1
the initial calculations for the historical values of comp_at_rest
were wrong. because our historical data only included total amounts
as well as compensation for bandwidth, the at rest value was
calculated as
at_rest = total - bandwidth
unfortunately, that calculation did not take surge pricing into
account correctly. the at rest and bandwidth values do not
include surge pricing, but the total that was used did. so what
we actually calculated was
no_surge_at_rest = surge_total - no_surge_bandwidth
which will create a value that is too large. this migration
fixes the calculation for imports that are old enough and
of a non-negligible difference.
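for example, assuming a flat 2x surge applied to everything, with
no_surge_bandwidth = 100 and a true no_surge_at_rest = 50:

surge_total = 2 * (50 + 100) = 300
surge_total - no_surge_bandwidth = 300 - 100 = 200

which is four times the correct at rest value.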
Change-Id: I61eb0b670510f6d7fb8fc3de39ba79150fac10eb
* add monkit stat new_remote_segments_needing_repair, which reports the
number of new unhealthy segments in the repair queue since the previous
checker iteration
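Reporting the stat with monkit looks roughly like this:

    package checker

    import "github.com/spacemonkeygo/monkit/v3"

    var mon = monkit.Package()

    // reportNewSegments records how many new unhealthy segments
    // entered the repair queue since the previous checker iteration.
    func reportNewSegments(count int64) {
        mon.IntVal("new_remote_segments_needing_repair").Observe(count)
    }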
Change-Id: I2f10266006fdd6406ece50f4759b91382059dcc3
This attempts to add a README.md to help create consistent migrations
that maximize our test coverage and do not include unnecessary
statements.
It also adds a feature to have an `-- OLD DATA --` section as well
as a `-- NEW DATA --` section so that we can fix mistakes made in
previous snapshots (like a row that we forgot to add when a table
was created) without editing them going forward.
Change-Id: I28a786f8ef163cae1de1bb08f61af1e1104b0a88
What: As soon as a node passes the vetting criteria (total_audit_count and total_uptime_count
are greater than the configured thresholds), we set vetted_at to the current timestamp.
Why: We may want to use this timestamp in future development to select new vs vetted nodes.
It also allows flexibility in node vetting experiments and allows for better metrics around
vetting times.
Please describe the tests: satellitedb_test: TestUpdateStats and TestBatchUpdateStats make sure vetted_at is set appropriately
Please describe the performance impact: This change does add extra logic to BatchUpdateStats and UpdateStats and
commits another variable to the db (vetted_at), but this should be negligible.
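In sketch form (field and threshold names here are illustrative):

    package overlay

    import "time"

    type NodeStats struct {
        TotalAuditCount  int64
        TotalUptimeCount int64
        VettedAt         *time.Time
    }

    // maybeSetVettedAt latches the vetting timestamp the first time
    // both counters cross their configured thresholds.
    func maybeSetVettedAt(n *NodeStats, auditThreshold, uptimeThreshold int64, now time.Time) {
        if n.VettedAt == nil &&
            n.TotalAuditCount >= auditThreshold &&
            n.TotalUptimeCount >= uptimeThreshold {
            n.VettedAt = &now
        }
    }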
Change-Id: I3de804549b5f1bc359da4935bc859758ceac261d
To avoid including multiple months in a single invoice, we need all
of the inspector's invoice commands to run for a specific period.
See https://storjlabs.atlassian.net/browse/USR-725
Change-Id: I3637dc189234f02350daca8d897c21765762ea55
There is a subtle problem when one does a cast with `::date`. Observe:
teststorj=# set timezone = 'US/Eastern';
SET
teststorj=# select (timestamp with time zone '2020-02-01 00:00:00+00')::date;
date
------------
2020-01-31
(1 row)
teststorj=# set timezone = 'UTC';
SET
teststorj=# select (timestamp with time zone '2020-02-01 00:00:00+00')::date;
date
------------
2020-02-01
(1 row)
In order to correctly determine the date a timestamp falls in, one
has to explicitly pick the time zone that the date truncation should
use; otherwise postgres will use whatever setting the client has. These
tests were failing for me locally, because I run my postgres in
the US/Eastern time zone to try to tickle these bugs out. So it
should be `(x at time zone 'UTC')::date` instead of just `x::date`.
Change-Id: I4e9e32d4b53abc6165a4d0474f4702f8b9f801c7
Add a flag that allows us to easily switch disqualification from
suspension mode on or off. A node will only be disqualified from
suspension mode if it has been suspended for longer than the grace
period AND the SuspensionDQEnabled flag is true.
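As a config field this is just a boolean, something like the
following (the help text and default are illustrative):

    package overlay

    // Config excerpt following the repo's cfgstruct tag conventions.
    type Config struct {
        SuspensionDQEnabled bool `help:"whether nodes will be disqualified if they have been suspended for longer than the suspension grace period" default:"false"`
    }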
Change-Id: I9e67caa727183cd52ab2042b0a370a1bcaebe792