Metainfo needs to know the rate and burst limits to be able to limit
user requests. We have a cache for per-project limiters, but to create
a single limiter instance we need to know those limits. So far we were
making a direct DB call to fetch a project's rate/burst limits, which
generates lots of DB requests, even though the values are easy to cache
and we already have a project limit cache.
This change extends the project limit cache with the rate/burst limits
and uses the cache when creating the project limiter instance for
metainfo. Because the data kept in the project limit cache is quite
small, this change also bumps the default capacity of the cache a bit.
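For illustration, a read-through cache of this shape covers the lookup
(type and method names here are hypothetical, not the actual satellite
code):

    // limitcache_sketch.go: minimal read-through cache for per-project
    // rate/burst limits. ProjectLimits and DB are stand-ins for the
    // satellite's real types; a real cache would also bound capacity
    // and expire entries.
    package limitcache

    import (
        "context"
        "sync"
    )

    type ProjectLimits struct {
        Rate  int // requests per second
        Burst int // burst size
    }

    type DB interface {
        GetProjectLimits(ctx context.Context, projectID string) (ProjectLimits, error)
    }

    type Cache struct {
        db DB
        mu sync.Mutex
        m  map[string]ProjectLimits
    }

    func New(db DB) *Cache {
        return &Cache{db: db, m: make(map[string]ProjectLimits)}
    }

    // GetLimits returns cached limits, falling back to a single DB
    // call on a miss.
    func (c *Cache) GetLimits(ctx context.Context, projectID string) (ProjectLimits, error) {
        c.mu.Lock()
        limits, ok := c.m[projectID]
        c.mu.Unlock()
        if ok {
            return limits, nil
        }

        limits, err := c.db.GetProjectLimits(ctx, projectID)
        if err != nil {
            return ProjectLimits{}, err
        }

        c.mu.Lock()
        c.m[projectID] = limits
        c.mu.Unlock()
        return limits, nil
    }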
Fixes https://github.com/storj/storj/issues/5663
Change-Id: Icb42ec1632bfa0c9f74857b559083dcbd054d071
We have lots of direct DB requests to fetch API keys. These lookups
should be handled by the cache, but its default capacity is very low at
the moment.
Fixes https://github.com/storj/storj/issues/5665
Change-Id: I214ebebd6e397cacff80b2f36dc4a2eea388f93d
This might be pretty awful, but at least it is a complete and non-flaky
solution.
**Only when using the rollingupgrade test** (which implies a throwaway
satellite and a PostgreSQL backend), create a trigger on the nodes
table which forces last_net to always equal last_ip_port.
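Such a trigger might look roughly like this (illustrative PostgreSQL
run from Go; the actual trigger and function names may differ):

    // forcetrigger_sketch.go: installs a PostgreSQL trigger that pins
    // last_net to last_ip_port on every write. Names are hypothetical.
    package rollingupgrade

    import (
        "context"
        "database/sql"
    )

    const forceLastNetSQL = `
    CREATE OR REPLACE FUNCTION force_last_net() RETURNS trigger AS $$
    BEGIN
        NEW.last_net := NEW.last_ip_port;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER nodes_force_last_net
        BEFORE INSERT OR UPDATE ON nodes
        FOR EACH ROW EXECUTE FUNCTION force_last_net();
    `

    func installForceLastNetTrigger(ctx context.Context, db *sql.DB) error {
        _, err := db.ExecContext(ctx, forceLastNetSQL)
        return err
    }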
Change-Id: I8448cf131e46576d96a414d06780270c7b2b1892
This test involves a satellite with dev defaults (DistinctIP=no) being
upgraded past commit 2522ff09b6, which
means we need to run the dev-defaults-satellite-upgrade migration SQL
to avoid getting DistinctIP=yes behavior (which breaks the tests).
Change-Id: I29fb596d1ffa568dad635d98cfe9abacd3aaa48f
Up to now, we have been implementing the DistinctIP preference with code
in two places:
1. On check-in, the last_net is determined by taking the /24 or /64
(in ResolveIPAndNetwork()) and we store it with the node record.
2. On node selection, a preference parameter defines whether to return
results that are distinct on last_net.
It can be observed that we have never yet had the need to switch from
DistinctIP to !DistinctIP, or from !DistinctIP to DistinctIP, on the
same satellite, and we will probably never need to do so in an automated
way. It can also be observed that this arrangement makes tests more
complicated, because we often have to arrange for test nodes to have IP
addresses in different /24 networks (a particular pain on macOS).
Those two considerations, plus some pending work on the repair framework
that will make repair take last_net into consideration, motivate this
change.
With this change, in the #2 place, we will _always_ return results that
are distinct on last_net. We implement the DistinctIP preference, then,
by making the #1 place (ResolveIPAndNetwork()) more flexible. When
DistinctIP is enabled, last_net will be calculated as it was before. But
when DistinctIP is _off_, last_net can be the same as address (IP and
port). That will effectively implement !DistinctIP because every
record will have a distinct last_net already.
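A minimal sketch of the idea (not the real ResolveIPAndNetwork(); DNS
resolution and most error handling omitted):

    // lastnet_sketch.go: simplified derivation of last_net from a
    // node's resolved address under the new scheme.
    package overlaysketch

    import "net"

    // lastNet returns the /24 (IPv4) or /64 (IPv6) network when
    // distinctIP is enabled. When it is disabled, it returns the full
    // ip:port, so every record's last_net is already distinct and a
    // selection distinct on last_net behaves like !DistinctIP.
    func lastNet(ipPort string, distinctIP bool) (string, error) {
        if !distinctIP {
            return ipPort, nil
        }
        host, _, err := net.SplitHostPort(ipPort)
        if err != nil {
            return "", err
        }
        ip := net.ParseIP(host)
        if ip == nil {
            return "", &net.ParseError{Type: "IP address", Text: host}
        }
        if ip4 := ip.To4(); ip4 != nil {
            return ip4.Mask(net.CIDRMask(24, 32)).String(), nil
        }
        return ip.Mask(net.CIDRMask(64, 128)).String(), nil
    }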
As a side effect, this flexibility will allow us to change the rules
about last_net construction arbitrarily. We can do tests where last_net
is set to the source IP, or to a /30 prefix, or a /16 prefix, etc., and
be able to exercise the production logic without requiring a virtual
network bridge.
This change should be safe to make without any migration code, because
all known production satellite deployments use DistinctIP, and the
associated last_net values will not change for them. They will only
change for satellites with !DistinctIP, which are mostly test
deployments that can be recreated trivially. For those satellites which
are both permanent and !DistinctIP, node selection will suddenly start
acting as though DistinctIP is enabled, until the operator runs a single
SQL update "UPDATE nodes SET last_net = last_ip_port". That can be done
either before or after deploying software with this change.
I also assert that this will not hurt performance for production
deployments. It's true that adding the distinct requirement to node
selection makes things a little slower, but the distinct requirement is
already present for all production deployments, and they will see no
change.
Refs: https://github.com/storj/storj/issues/5391
Change-Id: I0e7e92498c3da768df5b4d5fb213dcd2d4862924
The segments loop has a built-in sanity check to verify that the number
of segments processed by the loop is roughly correct. We want to have
the same verification for the ranged loop.
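The check could look roughly like this (a sketch; the 0.95 tolerance is
an illustrative value, not the real one):

    // sanitycheck_sketch.go: ratio check comparing the segment count
    // seen by a loop iteration against the previously known count.
    package rangedloopsketch

    import "fmt"

    func verifyCount(previous, current int64) error {
        if previous == 0 {
            return nil // nothing to compare against yet
        }
        if float64(current)/float64(previous) < 0.95 {
            return fmt.Errorf("processed segment count dropped too much: previous %d, current %d",
                previous, current)
        }
        return nil
    }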
https://github.com/storj/storj/issues/5544
Change-Id: Ia19edc0fb4aa8dc45993498a8e6a4eb5928485e9
This change adds a new chore that will check for failed invoices and
potentially freeze corresponding accounts.
It makes slight modifications to stripemock.go and invoices.go (adding
the Stripe CustomerID to the Invoice struct).
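In outline, the chore might run like this (hypothetical interfaces; the
real service wiring differs):

    // freezechore_sketch.go: outline of a chore that freezes accounts
    // with failed invoices. Invoices, FreezeService, and the method
    // names are hypothetical stand-ins.
    package accountfreeze

    import (
        "context"
        "time"
    )

    type Invoice struct{ CustomerID string }

    type Invoices interface {
        ListFailed(ctx context.Context) ([]Invoice, error)
    }

    type FreezeService interface {
        FreezeUser(ctx context.Context, customerID string) error
    }

    type Chore struct {
        invoices Invoices
        freeze   FreezeService
        interval time.Duration
    }

    func (c *Chore) Run(ctx context.Context) error {
        ticker := time.NewTicker(c.interval)
        defer ticker.Stop()
        for {
            failed, err := c.invoices.ListFailed(ctx)
            if err != nil {
                return err
            }
            for _, inv := range failed {
                // Freeze the account that owns the failed invoice.
                if err := c.freeze.FreezeUser(ctx, inv.CustomerID); err != nil {
                    return err
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }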
Issue: https://github.com/storj/storj-private/issues/140
Change-Id: I161f4037881222003bd231559c75f43360509894
Add a new config to the satellite admin: --admin.groups.limit-update.
This can be used as an alternate means of authentication if the request
is coming from the oauth proxy.
Change-Id: Ic2de13862e6414244b060c66a0f2bed72097cbad
Add new purchase-package endpoint to Server. The endpoint can be enabled
or disabled by a new config, --console.pricing-packages-enabled.
The purchase-package endpoint applies a coupon and adds and charges a
credit card if the user's useragent identifies a partner with a
configured package plan.
github issue: https://github.com/storj/storj-private/issues/125
Change-Id: I0d6ccccd6874ddba360c45f338fd1c44f95e135a
Older releases do not compile with the latest Go version if quic is
used. We need to add a noquic build tag to be able to compile older
releases with the latest Go version.
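For illustration, the build constraints would be of this shape (file
names and package are hypothetical):

    // quic.go — the quic-dependent code, excluded when the tag is set:
    //go:build !noquic

    package quicwrapper

    // quic_noop.go — no-op stubs compiled instead under the tag:
    //go:build noquic

    package quicwrapper

An older release can then be built with "go build -tags noquic ./...".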
Change-Id: Id5768fcaa5c1f7cf3e6fbb633e7ca60309b7a37c
Adds a feature flag for the new all projects dashboard. It defaults to false.
Issue: https://github.com/storj/storj/issues/5514
Change-Id: I160904eccae7d30e05b734e69600725702b16aca
We will be needing an infrequent chore to check which nodes are in the
reverify queue and synchronize that set with the 'contained' field in
the nodes db, since it is easily possible for them to get out of sync.
(We can't require that the reverification queue table be in the same
database as the nodes table, so maintaining consistency with SQL
transactions is out. Plus, even if they were in the same database, using
such SQL transactions to maintain consistency would be slow and
unwieldy.)
This commit adds the actual chore.
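Schematically, the reconciliation could work like this (hypothetical
interfaces, not the real chore code):

    // containmentsync_sketch.go: reconcile the reverify queue with the
    // nodes table's 'contained' field. Queue and NodesDB are stand-ins.
    package containmentsync

    import "context"

    type NodeID string

    type Queue interface {
        // AllContainedNodes lists node IDs with pending reverifications.
        AllContainedNodes(ctx context.Context) ([]NodeID, error)
    }

    type NodesDB interface {
        ContainedNodes(ctx context.Context) ([]NodeID, error)
        SetContained(ctx context.Context, ids []NodeID, contained bool) error
    }

    // syncOnce marks nodes contained iff they appear in the reverify
    // queue, clearing the flag for nodes no longer queued.
    func syncOnce(ctx context.Context, q Queue, db NodesDB) error {
        queued, err := q.AllContainedNodes(ctx)
        if err != nil {
            return err
        }
        flagged, err := db.ContainedNodes(ctx)
        if err != nil {
            return err
        }
        inQueue := make(map[NodeID]bool, len(queued))
        for _, id := range queued {
            inQueue[id] = true
        }
        flaggedSet := make(map[NodeID]bool, len(flagged))
        var toClear []NodeID
        for _, id := range flagged {
            flaggedSet[id] = true
            if !inQueue[id] {
                toClear = append(toClear, id)
            }
        }
        var toSet []NodeID
        for _, id := range queued {
            if !flaggedSet[id] {
                toSet = append(toSet, id)
            }
        }
        if err := db.SetContained(ctx, toSet, true); err != nil {
            return err
        }
        return db.SetContained(ctx, toClear, false)
    }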
Refs: https://github.com/storj/storj/issues/5431
Change-Id: Id78b40bf69fae1ac39010e3b553315db8a1472bd
To improve deletion of old entries in project_bandwidth_daily_rollup we
need an index on the `interval_day` column, which is used to find those
old entries.
In addition, we are changing the interval at which deletion is executed
from 7 days to 1 day, so each run has a smaller portion of data to
delete.
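The migration boils down to something like this (illustrative DDL; the
actual index name and migration step may differ):

    // addindex_sketch.go: migration step adding an index on the
    // interval_day column. The index name is hypothetical.
    package migrationsketch

    import (
        "context"
        "database/sql"
    )

    func addIntervalDayIndex(ctx context.Context, db *sql.DB) error {
        _, err := db.ExecContext(ctx, `
            CREATE INDEX IF NOT EXISTS project_bandwidth_daily_rollup_interval_day_index
                ON project_bandwidth_daily_rollup ( interval_day )`)
        return err
    }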
Fixes https://github.com/storj/storj/issues/5465
Change-Id: Ie18ebe859887b93d6e4e6065a61fb9214c7ad27a
Implemented the observer and partial, and created new structures to
keep mon metrics reported in the same way as in the segments loop.
Change-Id: I209c126096c84b94d4717332e56238266f6cd004
Create a config to specify one-time prices and corresponding coupon
IDs for partners.
github issue: https://github.com/storj/storj-private/issues/118
Change-Id: I67b26e7208b12ba8f0e6dc1b164dd9545b09cac0
Add the node tally ranged loop observer and partial.
Add the node tally ranged observer to the ranged loop peer.
Add a config flag to select which loop to use for node tally.
Update the satellite core to use the segments/ranged loop based on the
flag.
Duplicate the existing node tally test, but using the ranged loop.
Change-Id: I6786f1a16933463fab5f79601bf438203a7a5f9e
Added the rangedloop to storj-sim for each satellite, to verify it
works for the metrics observer. Removed identity from the rangedloop
peer as we never use it, added logs on service run, added a loop to the
service instead of an endless for loop, and moved the interval value to
config.
Closes: https://github.com/storj/storj/issues/5414
Change-Id: Ibc3b06071b68feda4a35b45da2bbe36e22a02fc8
This change implements the ranged loop observer to replace the audit
chore that builds the audit queue.
The strategy employed by this change is to use a collector for each
segment range to build separate per-node segment reservoirs that are
then merged during the join step.
In previous observer migrations, there were only a handful of tests so
the strategy was to duplicate them. In this package, there are dozens
of tests that utilize the chore. To reduce code churn and maintenance
burden until the chore is removed, this change introduces a helper that
runs tests under both the chore and observer, providing a pair of
functions that can be used to pause or run the queueing function.
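The collector/merge shape, roughly (simplified illustrative types, not
the real rangedloop interfaces):

    // reservoir_sketch.go: fork/join shape for building per-node audit
    // reservoirs across segment ranges. All types are illustrative.
    package auditsketch

    type NodeID string

    type Segment struct {
        NodeIDs []NodeID // nodes holding pieces of this segment
    }

    // Reservoir holds a sample of segments for one node.
    type Reservoir struct {
        Segments []Segment
    }

    // Collector processes one range of segments (the "partial").
    type Collector struct {
        Reservoirs map[NodeID]*Reservoir
    }

    func NewCollector() *Collector {
        return &Collector{Reservoirs: make(map[NodeID]*Reservoir)}
    }

    func (c *Collector) Process(segments []Segment) {
        for _, seg := range segments {
            for _, id := range seg.NodeIDs {
                r, ok := c.Reservoirs[id]
                if !ok {
                    r = &Reservoir{}
                    c.Reservoirs[id] = r
                }
                // Real code would do bounded reservoir sampling here
                // rather than keeping every segment.
                r.Segments = append(r.Segments, seg)
            }
        }
    }

    // Join merges a finished partial into the observer's totals.
    func (c *Collector) Join(partial *Collector) {
        for id, r := range partial.Reservoirs {
            dst, ok := c.Reservoirs[id]
            if !ok {
                c.Reservoirs[id] = r
                continue
            }
            dst.Segments = append(dst.Segments, r.Segments...)
        }
    }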
https://github.com/storj/storj/issues/5232
Change-Id: I8bb4b4e55cf98b1aac9f26307e3a9a355cb3f506
The tests are forked from the chore tests with slight adaptations for
being run against the ranged loop. I also moved a benchmark for the
database from chore_test.go to db_test.go.
The pathcollector is reused as a rangedloop.Partial.
https://github.com/storj/storj/issues/5234
Change-Id: I56182031d133812a9f4d4a433c01b9150af39f31
This change stubs the userinfo endpoint from
storj/common/pb/userinfo.proto. It also adds a config for allowed peers
and a method for verifying peers.
Issue: https://github.com/storj/storj/issues/5358
Change-Id: I057a0e873a9e9b3b9ad0bba69305f0d708bd9b9e
Adds a new method Exists which can be used to verify which requested
piece IDs exist on a storage node. It will verify only pieces which
belong to the satellite that called the endpoint.
The minimal WASM size was increased a bit.
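In sketch form, the semantics are (illustrative signature, not the
actual piecestore endpoint):

    // exists_sketch.go: filter requested piece IDs down to those
    // present locally and owned by the calling satellite. Stand-ins.
    package piecestoresketch

    import "context"

    type PieceID string
    type SatelliteID string

    type Store interface {
        // Has reports whether the piece exists for this satellite.
        Has(ctx context.Context, satellite SatelliteID, piece PieceID) (bool, error)
    }

    // Exists returns the subset of requested piece IDs that exist on
    // this node for the calling satellite; pieces belonging to other
    // satellites are never reported.
    func Exists(ctx context.Context, store Store, caller SatelliteID, requested []PieceID) ([]PieceID, error) {
        var found []PieceID
        for _, id := range requested {
            ok, err := store.Has(ctx, caller, id)
            if err != nil {
                return nil, err
            }
            if ok {
                found = append(found, id)
            }
        }
        return found, nil
    }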
https://github.com/storj/storj/issues/5415
Change-Id: Ia5f9cadeb526541b2776a8973eb7d50133ad8636
This change updates the stripecoinpayments service to optionally skip
generating line items for payments records that have no egress, storage,
or segments for the billing period.
This reduces the number of Stripe API calls from 4 to 1 for customers
who have no usage. The final API call is the attempt to generate an
invoice on Stripe, which fails as expected because there are no
unapplied line items. Removing that final API call would require some
additional queries and is out of scope for this change.
This functionality is behind the
`payments.stripe-coin-payments.skip-empty-invoices` feature flag.
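In outline (hypothetical record fields and flag plumbing):

    // skipinvoice_sketch.go: skip line-item generation for payments
    // records with zero usage. Record and the flag are stand-ins;
    // the check would run per record before creating Stripe line items.
    package stripesketch

    type Record struct {
        Egress   int64
        Storage  float64
        Segments float64
    }

    func shouldSkip(r Record, skipEmptyInvoices bool) bool {
        return skipEmptyInvoices &&
            r.Egress == 0 && r.Storage == 0 && r.Segments == 0
    }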
https://github.com/storj/storj/issues/5381
Change-Id: Id184969a4c79047c40502336d69c51388ab03bf8
Minimal implementation of the ranged (=threaded) segment loop
service, to improve performance over the existing loop.
Has tests with an in-memory segment database
and an example observer.
Does not yet have: database link, observer duration tracking,
suspicious processed ratio guard, rate limiting, minimum execution
interval per observer, etc.
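The core fan-out idea, sketched with illustrative types:

    // rangedloop_sketch.go: fan out over N disjoint segment ranges,
    // each processed in its own goroutine.
    package rangedloopsketch

    import (
        "context"

        "golang.org/x/sync/errgroup"
    )

    type Segment struct{}

    // RangeProvider yields independent iterators over disjoint ranges
    // of the segment table.
    type RangeProvider interface {
        CreateRanges(nRanges int) []func(ctx context.Context, fn func(Segment) error) error
    }

    func runOnce(ctx context.Context, provider RangeProvider, parallelism int,
        process func(Segment) error) error {
        group, ctx := errgroup.WithContext(ctx)
        for _, iterate := range provider.CreateRanges(parallelism) {
            iterate := iterate
            group.Go(func() error {
                return iterate(ctx, process)
            })
        }
        return group.Wait()
    }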
Part of https://github.com/storj/storj/issues/5223
Change-Id: I08ffb392c3539e380f4e7b4f1afd56c4c394668d
This change reduces the token link expiry time from 24h to 30m and improves the UI to prompt users about the expiration.
see: https://github.com/storj/storj-private/issues/17
Change-Id: Iac00f5740fa84069937fdf9bd30a739b6db2a9e0
The audit chore will be pushing a large number of segments to be
audited, and the db might choke on that large insert when under load.
This change divides the insert up into batches, which can be sized
however is optimal for the backing database. It also arranges for
segments to be inserted in the order of the primary key, which helps
performance on some systems.
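In sketch form (hypothetical types; sorting by primary key before
batching is the relevant part):

    // batchinsert_sketch.go: insert queue segments in primary-key
    // order, in fixed-size batches. Types and batch sizing are
    // illustrative.
    package auditqueuesketch

    import (
        "context"
        "sort"
    )

    type Segment struct {
        StreamID string // part of the primary key
        Position uint64
    }

    type DB interface {
        InsertBatch(ctx context.Context, batch []Segment) error
    }

    func insertAll(ctx context.Context, db DB, segments []Segment, batchSize int) error {
        // Insert in primary-key order; sequential keys improve B-tree
        // locality on some backends.
        sort.Slice(segments, func(i, j int) bool {
            if segments[i].StreamID != segments[j].StreamID {
                return segments[i].StreamID < segments[j].StreamID
            }
            return segments[i].Position < segments[j].Position
        })
        for start := 0; start < len(segments); start += batchSize {
            end := start + batchSize
            if end > len(segments) {
                end = len(segments)
            }
            if err := db.InsertBatch(ctx, segments[start:end]); err != nil {
                return err
            }
        }
        return nil
    }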
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I941f580f690d681b80c86faf4abca2995e37135d