Components related to project usage costs have been updated to show
different estimations for each partner, and the satellite has been
updated to send the client the information it needs to do this.
Previously, project costs in the satellite frontend were estimated
using only the price model corresponding to the partner that the user
registered with. This caused users who had a project containing
differently-attributed buckets to see an incorrect price estimation.
Resolves storj/storj-private#186
Change-Id: I2531643bc49f24fcb2e5f87e528b552285b6ff20
This combines the ListStreamPositions and GetSegmentByPosition
calls into a single ListSegments call that now knows how to return
only the segments within a Range, just like ListStreamPositions does.
It would theoretically be possible to also include the
GetObjectLastCommitted call by having it do one of three
queries based on the incoming request range, but that would
mean duplicating the data for the object in every single
row that is returned for each segment in the range.
One gross thing that ListSegments has to do now is update the
first segment returned with the information from any ancestor
segment, because GetSegmentByPosition used to do that. It only
updates the first segment so that it doesn't do O(N) database
queries; it seems difficult to have it update all of the segments
in a single query. I'm not certain this change should be merged
on this basis alone.
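Roughly the shape of the combined call, as a sketch with made-up
names (listRange, ancestorPieces, and the surrounding types are
assumptions, not the real metabase API):
```
package sketch

import "context"

type Piece struct{ NodeAlias int32 }

type Segment struct {
	AncestorStreamID []byte // set when the segment came from a server-side copy
	Pieces           []Piece
}

type SegmentRange struct{ From, To uint64 } // plain segment positions

type segmentDB interface {
	listRange(ctx context.Context, streamID []byte, r SegmentRange) ([]Segment, error)
	ancestorPieces(ctx context.Context, ancestorStreamID []byte) ([]Piece, error)
}

// listSegmentsInRange issues one range query and patches only the first
// result with its ancestor's pieces, matching what GetSegmentByPosition
// used to do, without O(N) ancestor lookups.
func listSegmentsInRange(ctx context.Context, db segmentDB, streamID []byte, r SegmentRange) ([]Segment, error) {
	segs, err := db.listRange(ctx, streamID, r)
	if err != nil {
		return nil, err
	}
	if len(segs) > 0 && len(segs[0].AncestorStreamID) > 0 {
		pieces, err := db.ancestorPieces(ctx, segs[0].AncestorStreamID)
		if err != nil {
			return nil, err
		}
		segs[0].Pieces = pieces
	}
	return segs, nil
}
```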
This change has made me think a couple of things should happen:
1. Server side copy with ancestor segments strikes again,
making the code less clear and potentially more buggy
or inefficient for a rare case (empirically <0.1%).
2. The download code requests individual segments from
the satellite lazily as part of its download, which
requires the satellite to tell it the locations of
all of the segments, which in turn requires the
satellite to query the locations of all of the
segments. Instead, the download RPC could return the
orders for all of the segments in a range, and the
download code could issue N download calls rather
than 1 download call and N get segment calls. I
believe both sides of the code path would be simpler
and more efficient this way.
3. In looking at the timing information for downloads when
testing this, we really need to focus on getting the
auth key and bandwidth limit verification times down.
Here's the timing I saw:
- 42ms: validate auth
- 52ms: bandwidth usage checking
- 14ms: get object info
- 26ms: get segment position info
- 26ms: getting the first segment full info
- 20ms: unaccounted for by spans
- 6ms: creating the orders
This change will remove 26ms, but there's a good 90ms
in just validation. With improved semantics hitting the
database only once and improved validation, a download
RPC taking ~30ms seems doable compared to our current
~200ms.
Change-Id: I4109dba082eaedb79e634c61dbf86efa93ab1222
Get bucket was returning a "bad request" HTTP status code when the
bucket doesn't exist.
We have to return the HTTP "Not Found" status code instead.
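A minimal sketch of the intended mapping; ErrBucketNotFound and the
handler are made up, not the actual console API code:
```
package sketch

import (
	"errors"
	"net/http"
)

var ErrBucketNotFound = errors.New("bucket not found")

func writeGetBucketError(w http.ResponseWriter, err error) {
	switch {
	case errors.Is(err, ErrBucketNotFound):
		// A missing bucket is not a malformed request.
		http.Error(w, "bucket not found", http.StatusNotFound)
	default:
		http.Error(w, "invalid request", http.StatusBadRequest)
	}
}
```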
Change-Id: If717d99276b02a1e59a9b71ebc909bd6d8d9390b
Instead of granting a coupon when purchasing a package, grant credit.
This changes paymentsconfig.PackagePlan to use a credit amount rather
than a coupon ID. Add an additional check to see whether a paid invoice
with the description exists. If so, don't create and pay another
invoice.
Change-Id: I81df24984c519c773db5fc8e9070bd7797070ec2
Add and implement an interface to manage customer balances. This adds
the ability to add credit to a user's balance, list balance
transactions, and get the balance.
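A rough sketch of what such an interface could look like; the method
names and types here are assumptions, not the actual payments package:
```
package sketch

import "context"

type BalanceTransaction struct {
	AmountCents int64
	Description string
}

type Balances interface {
	// ApplyCredit adds the given amount (in cents) to the user's balance.
	ApplyCredit(ctx context.Context, userID string, amountCents int64, description string) error
	// Get returns the user's current balance in cents.
	Get(ctx context.Context, userID string) (int64, error)
	// ListTransactions returns the history of changes to the user's balance.
	ListTransactions(ctx context.Context, userID string) ([]BalanceTransaction, error)
}
```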
Change-Id: I7fd65d07868bb2b7489d1141a5e9049514d6984e
Invoicing-related payment service methods have been modified to send
Stripe API requests in parallel.
Additionally, randomness has been added to the Stripe backend wrapper's
exponential backoff strategy in order to reduce the effects of the
thundering herd problem, which arises when executing many simultaneous
API calls.
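For illustration, a minimal sketch of exponential backoff with full
jitter; the constants are made up, not the configured values:
```
package sketch

import (
	"math/rand"
	"time"
)

// backoffDelay returns how long to wait before retry number `attempt`.
func backoffDelay(attempt int) time.Duration {
	const base = 100 * time.Millisecond
	const maxDelay = 10 * time.Second

	if attempt < 0 {
		attempt = 0
	}
	if attempt > 16 {
		attempt = 16
	}
	delay := base << uint(attempt) // exponential growth
	if delay > maxDelay {
		delay = maxDelay
	}
	// Full jitter: pick a random delay in [0, delay) so many simultaneous
	// retries don't all fire at the same instant (thundering herd).
	return time.Duration(rand.Int63n(int64(delay)))
}
```
A retrying wrapper would then sleep for backoffDelay(attempt) between
attempts.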
Resolves #5156
Change-Id: I568f933284f4229ef41c155377ca0cc33f0eb5a4
Add the columns package_plan and purchased_package_at to the
stripe_customers table, and add methods to update and select these
values from the console service and payments accounts.
Change-Id: I1e89909055cc3054bfb7baa33c9dca3dfdc7336e
Implementing https://github.com/storj/storj/issues/5702 means adding a
bonus billing transaction for each storjscan transaction being recorded.
To do this idempotently, we need the ability for both the storjscan and
bonus transactions to be committed together.
This change updates the billing database to allow multiple billing
transactions to be inserted under the same database transaction.
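A sketch of the interface shape this enables; the names are
assumptions, not the actual billing package:
```
package sketch

import "context"

type Transaction struct {
	UserID      string
	AmountCents int64
	Source      string // e.g. "storjscan" or "bonus"
}

type TransactionsDB interface {
	// Insert stores all given billing transactions atomically, inside a
	// single database transaction, so a storjscan transaction and its
	// bonus are committed or rolled back together.
	Insert(ctx context.Context, txs ...Transaction) error
}
```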
Change-Id: I941864f47fc64d65aab076eec2e96fd04fcc7aac
A row in the new `user_settings` table does not always exist for a user,
even if they have been around for a while.
Since `user_settings` is now what defines the state of a user's
onboarding flow, prior to this fix, even old users would receive the
onboarding flow again.
This change appropriately updates `user_settings` for users who already
have projects, and thus have already gone through the onboarding flow. A
brand new user will still be navigated to the beginning of onboarding.
Change-Id: Ie745d280f6b8094ec60c200c2dca8d018d51f7d1
FileWalker implements methods to walk over pieces in
a storage directory.
This is just a refactor to separate filewalker functions
from pieces.Store. This is needed to simplify the work
to create a separate filewalker subprocess and reduce the
number of config flags passed to the subprocess.
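A rough sketch of the separation, with made-up names rather than the
actual storagenode pieces package:
```
package sketch

import "context"

type BlobRef struct {
	Namespace []byte // satellite ID
	Key       []byte // piece key
}

type blobStore interface {
	WalkNamespace(ctx context.Context, namespace []byte, fn func(BlobRef) error) error
}

// FileWalker owns only the walking logic, so a future subprocess needs
// the blob store (and little else) rather than all of pieces.Store.
type FileWalker struct {
	blobs blobStore
}

// WalkSatellitePieces calls fn for every piece stored for the given satellite.
func (fw *FileWalker) WalkSatellitePieces(ctx context.Context, satellite []byte, fn func(BlobRef) error) error {
	return fw.blobs.WalkNamespace(ctx, satellite, fn)
}
```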
You might want to check https://review.dev.storj.io/c/storj/storj/+/9773
Change-Id: I4e9567024e54fc7c0bb21a7c27182ef745839fff
By accident, the query to get the latest table stats was ordered by
the 'row_count' column instead of 'create'. We need the latest stats,
so we need to order by creation time.
Change-Id: I9d0a0edda8bab59c3d96b7a15cd6502ed51633fc
This flag was in general a one-time switch to enable versions
internally. Now we can remove it, as it makes the code more complex.
Change-Id: I740b6e8fae80d5fac51d9425793b02678357490e
We want to eliminate usages of LoopSegmentEntry.Pieces, because it is
costing a lot of cpu time to look up node IDs with every piece of every
segment we read.
In this change, we are eliminating use of LoopSegmentEntry.Pieces in the
node tally observer (both the ranged loop and segments loop variants).
It is not necessary to have a fully resolved nodeID until it is time to
store totals in the database. We can use NodeAliases as the map key
instead, and resolve NodeIDs just before storing totals.
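A minimal sketch of the idea, with assumed names rather than the real
observer code:
```
package sketch

import "context"

type (
	NodeAlias int32
	NodeID    [32]byte
)

type aliasResolver interface {
	Resolve(ctx context.Context, aliases []NodeAlias) (map[NodeAlias]NodeID, error)
}

// tallyObserver accumulates totals keyed by NodeAlias, so no node ID
// lookups happen while iterating segments.
type tallyObserver struct {
	totals map[NodeAlias]int64
}

func (o *tallyObserver) addSegment(aliases []NodeAlias, pieceSize int64) {
	for _, a := range aliases {
		o.totals[a] += pieceSize
	}
}

// save resolves aliases to NodeIDs once, just before storing the totals.
func (o *tallyObserver) save(ctx context.Context, resolver aliasResolver, store func(map[NodeID]int64) error) error {
	aliases := make([]NodeAlias, 0, len(o.totals))
	for a := range o.totals {
		aliases = append(aliases, a)
	}
	ids, err := resolver.Resolve(ctx, aliases)
	if err != nil {
		return err
	}
	byID := make(map[NodeID]int64, len(o.totals))
	for a, total := range o.totals {
		byID[ids[a]] += total
	}
	return store(byID)
}
```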
Refs: https://github.com/storj/storj/issues/5622
Change-Id: Iec12aa393072436d7c22cc5a4ae1b63966cbcc18
The default price threshold has been changed to 10000 cents. The help
text has also been modified to clarify that this value is in cents.
Change-Id: I3b605056c860a3afef7202950180bc8b863fe16a
A debug log has been added to the autofreeze chore to show the numbers
of invoices, users involved, users frozen, users warned, and users
bypassed due to owing more than the configured amount.
Issue: https://github.com/storj/storj/issues/5700
Change-Id: I4e628606fe234b6d239b76167a4781ca9e276308
This handles cases where a user is warned and triggers payment for
their account. Previously, only a frozen account would trigger this
payment, and it would be unfrozen on successful payment. Now, accounts
in the warning state trigger payments and are removed from that state
on successful payment.
Issue: https://github.com/storj/storj/issues/5691
Change-Id: Icc2107f5d256657d176d8b0dd0a43a470eb01277
Previously, our code produced a Go panic when attempting to index an
empty slice of bucket user agent entries. A length check has been
added to ensure that the slice is not accessed beyond its bounds.
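A minimal sketch of the guard; the function and slice here are
illustrative, not the exact code that panicked:
```
package sketch

func firstUserAgent(entries []string) (string, bool) {
	if len(entries) == 0 {
		// Indexing entries[0] without this check is what used to panic.
		return "", false
	}
	return entries[0], true
}
```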
Change-Id: I904ca275b934be1b05393a45d99ff415dcafc926
This change adds an endpoint that gets a user's settings. It will
create a new settings entry if none exists. There's also a new
endpoint to change a user's onboarding status.
Issue: https://github.com/storj/storj/issues/5661
Change-Id: I9941bb9d61994af46244003f3ef4fcfe7d36918e
This change adds onboarding_start, onboarding_end, and onboarding_step
columns to the user_settings table. The first two are used to determine
whether a user should go through onboarding; the last will be used as
the step a user reached before exiting onboarding without finishing.
Issue: https://github.com/storj/storj/issues/5660
Change-Id: I8070c880d0d2fc22086f24087c962f57c695cc50
A Stripe backend implementation has been added that uses an exponential
backoff strategy to retry failed API calls. This behavior can be
configured in the satellite config.
References #5156
Change-Id: I16ff21a39775ea331c442457f976be0c95a7b695
We have a specific issue where a user uploaded a file to a bucket
geo-fenced to EU and one of the pieces appeared to be on a node in the
US. The country code of this node is set to SE (Sweden) in the satellite
DB. It turns out that some time ago MaxMind changed the country code of
this node's IP from Sweden to US, but this change hasn't been reflected
in the satellite's database.
So far the satellite updates the country code of a node only if its IP
changes. It was assumed that if the IP does not change, its country code
shouldn't change either. This turned out to be a wrong assumption.
With this change, the satellite will look up the MaxMindDB on every
check-in to see if the country code of the node's IP has changed.
Change-Id: Icdf659b09be9fc6ad14601902032b35ba5ea78c4
Add a combined index on normalized_email,status to improve the
performance of the common "get user" query used for the satellite UI.
Change-Id: I24a20d7826e0a68a68c2f95b5847eb819921e7c0
This commit ensures that all invocations of Stripe library methods
include a context. This allows us to control the timeout and
cancellation of the underlying HTTP requests made by the Stripe
library.
References #5156
Change-Id: I8ddb317f3f2cbb06cfab869fbebdaf2ad78b7999
Metainfo needs to know the rate and burst limits to be able to limit
users' requests. We made a cache for the per-project limiter, but to
make a single instance we need to know the limits. So far we were doing
a direct DB call to get the rate/burst limit for a project, but that
generates lots of DB requests and can easily be cached, as we already
have a project limit cache.
This change extends the project limit cache with the rate/burst limit
and starts using it when creating the project limiter instance for
metainfo.
Because the data kept in the project limit cache is quite small, this
change also bumps the default capacity of the cache a bit.
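A rough sketch of the extended cache value and lookup; the names are
assumptions, not the actual accounting code, and locking is omitted
for brevity:
```
package sketch

import (
	"context"
	"time"
)

type ProjectLimits struct {
	StorageLimit   int64
	BandwidthLimit int64
	RateLimit      int // requests per second
	BurstLimit     int
}

type limitsDB interface {
	GetLimits(ctx context.Context, projectID string) (ProjectLimits, error)
}

type cacheEntry struct {
	limits  ProjectLimits
	expires time.Time
}

// LimitCache caches all per-project limits, including rate/burst, so the
// metainfo limiter can be built without an extra DB call.
type LimitCache struct {
	db      limitsDB
	ttl     time.Duration
	entries map[string]cacheEntry
}

func (c *LimitCache) Get(ctx context.Context, projectID string) (ProjectLimits, error) {
	if e, ok := c.entries[projectID]; ok && time.Now().Before(e.expires) {
		return e.limits, nil // served from cache, no DB request
	}
	limits, err := c.db.GetLimits(ctx, projectID)
	if err != nil {
		return ProjectLimits{}, err
	}
	c.entries[projectID] = cacheEntry{limits: limits, expires: time.Now().Add(c.ttl)}
	return limits, nil
}
```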
Fixes https://github.com/storj/storj/issues/5663
Change-Id: Icb42ec1632bfa0c9f74857b559083dcbd054d071
We just fixed a case where the project limit cache was not used
properly. This is a test case to cover that fix.
Change-Id: Iee467f0a46836860a14ab6238a9842ffbf54ed4c
This might be pretty awful, but at least it is a complete and non-flaky
solution.
**Only when using the rollingupgrade test** (which implies a throwaway
satellite and also a PostgreSQL backend), create a trigger on the nodes
table which forces last_net to be equal to last_ip_port always.
Change-Id: I8448cf131e46576d96a414d06780270c7b2b1892
Currently, to get the number of entries in the segments table we do a
heavy SELECT count(*) operation. For the biggest satellite it now takes
25min. We use this method to get stats before and after the segments
loop, so it adds almost 1h to the overall loop time.
With the current version of crdb we are using, this additional code
won't be used, because the global configuration for the stats refresh
rate is inaccurate for such a large table as `segments`. Soon we should
be able to upgrade crdb, adjust the refresh rate per table, and
configure it to satisfy the defined threshold.
https://github.com/storj/storj/issues/5544
Change-Id: I05cfd9154f08894d2bc56bf716b436d1b03b87f1
A field has been added to the coupon struct indicating whether it is
associated with a partner's pricing package. This is required to
alter the appearance of partner coupons in the satellite frontend.
References storj/storj-private#172
Change-Id: Ie48ae3902aaa108abf9a399242a0cd98cb53d1c3
This change sends an event to segment when a user is unfrozen.
It also moves the freeze and warning event triggers from the autofreeze
chore to the account freeze service.
Change-Id: I5c0522b921b7baf52d6db5eb7ef841c08644a461
This change reworks the allowedAuthorization function to check which
groups the user is a part of to determine whether authorization should
be granted. By wrapping each handler with withAuth, we can specify the
allowed groups for each API method individually.
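A minimal sketch of the wrapping; the group names and helpers are made
up, not the actual admin API code:
```
package sketch

import "net/http"

// withAuth allows the request through only if the caller belongs to one
// of the allowed groups; otherwise it responds with 403.
func withAuth(allowedGroups []string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for _, have := range groupsFromRequest(r) {
			for _, want := range allowedGroups {
				if have == want {
					next.ServeHTTP(w, r)
					return
				}
			}
		}
		http.Error(w, "forbidden", http.StatusForbidden)
	})
}

// groupsFromRequest stands in for however group membership is resolved
// (e.g. from an authenticated session or proxy header).
func groupsFromRequest(r *http.Request) []string { return nil }
```
Each handler can then be registered with its own allowed groups, for
example withAuth([]string{"admin"}, usersHandler).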
github issue: https://github.com/storj/storj/issues/5565
Change-Id: I1804dda04d5b16d19e93bd7199fb3fc89fca1294
Remove generate-missing-project-salt migration tool code and related
tests. This migration has already been run and this code is no longer
needed.
Issue https://github.com/storj/storj-private/issues/163
Change-Id: I4e36dcd95a07c5305c597113a7fd08148e100ccc
It looks like at some point we broke how the project limits cache is
used, and we were missing the cache in the most critical paths
(upload/download). This is a fix for that issue.
I also adjusted the naming of the cache methods.
Change-Id: Ic98372779a39365d0920fe3943f1f7a68b064173
This change adds a card to the billing overview page, which shows the
user's token balance from coinpayments.
Issue: https://github.com/storj/storj-private/issues/151
Change-Id: I11e295b48791b32b745cb7a11c5b4aad6b56618e
This test involves a satellite with dev defaults (DistinctIP=no) being
upgraded past commit 2522ff09b6, which
means we need to run the dev-defaults-satellite-upgrade migration SQL
to avoid getting DistinctIP=yes behavior (which breaks the tests).
Change-Id: I29fb596d1ffa568dad635d98cfe9abacd3aaa48f
Only the API peer needs access to the order DB (and rollups cache)
because it's the only place where we create orders for PUT and GET
operations. For other peers, like the auditor and repairer, we can set
a noop implementation to reduce the number of dependencies needed for
them.
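A sketch of the noop approach, with an assumed interface rather than
the actual orders package:
```
package sketch

import "context"

type OrdersDB interface {
	SaveOrder(ctx context.Context, serialNumber []byte) error
}

// noopOrdersDB satisfies the interface while doing nothing, for peers
// (auditor, repairer) that never create orders.
type noopOrdersDB struct{}

var _ OrdersDB = noopOrdersDB{}

func (noopOrdersDB) SaveOrder(ctx context.Context, serialNumber []byte) error {
	return nil
}
```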
Change-Id: Ic32d1879f0b97ffc4516f401898e31e95ae892e4
It was surprising that `satellite auditor` complained about SMTP mail settings, even though it's not supposed to send any mail.
It looks like we can remove the mail service dependency, as it's not a hard requirement for overlay.Service.
Change-Id: I29a52eeff3f967ddb2d74a09458dc0ee2f051bd7
I tried to configure a satellite service and got this error:
```
DEBUG process/exec_conf.go:408 Unrecoverable error {"error": "missing port in address"}
```
It took some time to realize that I forgot to set the SMTPServerAddress.
This patch makes it easier to detect similar problems (with a detailed error message), and makes the SMTP parameters optional if no real mail sending is used (simulated or nomail).
Change-Id: I32535a7c8d6529e19e4d919806f42ba430d074a5
Up to now, we have been implementing the DistinctIP preference with code
in two places:
1. On check-in, the last_net is determined by taking the /24 or /64
(in ResolveIPAndNetwork()) and we store it with the node record.
2. On node selection, a preference parameter defines whether to return
results that are distinct on last_net.
It can be observed that we have never yet had the need to switch from
DistinctIP to !DistinctIP, or from !DistinctIP to DistinctIP, on the
same satellite, and we will probably never need to do so in an automated
way. It can also be observed that this arrangement makes tests more
complicated, because we often have to arrange for test nodes to have IP
addresses in different /24 networks (a particular pain on macOS).
Those two considerations, plus some pending work on the repair framework
that will make repair take last_net into consideration, motivate this
change.
With this change, in the #2 place, we will _always_ return results that
are distinct on last_net. We implement the DistinctIP preference, then,
by making the #1 place (ResolveIPAndNetwork()) more flexible. When
DistinctIP is enabled, last_net will be calculated as it was before. But
when DistinctIP is _off_, last_net can be the same as address (IP and
port). That will effectively implement !DistinctIP because every
record will have a distinct last_net already.
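A minimal sketch of the #1 place after this change, with assumed names
rather than the real ResolveIPAndNetwork():
```
package sketch

import (
	"fmt"
	"net"
)

// lastNet shows the idea: with DistinctIP on, collapse to the /24 (or /64)
// network as before; with DistinctIP off, keep the full address so every
// node record already has a distinct last_net.
func lastNet(distinctIP bool, host, port string) (string, error) {
	if !distinctIP {
		return net.JoinHostPort(host, port), nil
	}
	ip := net.ParseIP(host)
	if ip == nil {
		return "", fmt.Errorf("invalid IP: %q", host)
	}
	if ip4 := ip.To4(); ip4 != nil {
		return ip4.Mask(net.CIDRMask(24, 32)).String(), nil // IPv4: /24
	}
	return ip.Mask(net.CIDRMask(64, 128)).String(), nil // IPv6: /64
}
```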
As a side effect, this flexibility will allow us to change the rules
about last_net construction arbitrarily. We can do tests where last_net
is set to the source IP, or to a /30 prefix, or a /16 prefix, etc., and
be able to exercise the production logic without requiring a virtual
network bridge.
This change should be safe to make without any migration code, because
all known production satellite deployments use DistinctIP, and the
associated last_net values will not change for them. They will only
change for satellites with !DistinctIP, which are mostly test
deployments that can be recreated trivially. For those satellites which
are both permanent and !DistinctIP, node selection will suddenly start
acting as though DistinctIP is enabled, until the operator runs a single
SQL update "UPDATE nodes SET last_net = last_ip_port". That can be done
either before or after deploying software with this change.
I also assert that this will not hurt performance for production
deployments. It's true that adding the distinct requirement to node
selection makes things a little slower, but the distinct requirement is
already present for all production deployments, and they will see no
change.
Refs: https://github.com/storj/storj/issues/5391
Change-Id: I0e7e92498c3da768df5b4d5fb213dcd2d4862924