Components related to project usage costs have been updated to show
different estimations for each partner, and the satellite has been
updated to send the client the information it needs to do this.
Previously, project costs in the satellite frontend were estimated
using only the price model corresponding to the partner that the user
registered with. This caused users who had a project containing
differently-attributed buckets to see an incorrect price estimation.
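Roughly the idea, as a minimal sketch with hypothetical types and prices
(not the actual satellite or frontend code): group bucket usage by the
partner each bucket is attributed to and price each group with that
partner's model.

```go
package pricingsketch

// Sketch only: estimate project cost per partner by applying each partner's
// price model to the usage of the buckets attributed to that partner.
// Types, field names, and prices below are illustrative assumptions.

// PriceModel holds hypothetical per-unit prices.
type PriceModel struct {
	StoragePerTBMonth float64
	EgressPerTB       float64
}

// BucketUsage is a hypothetical usage aggregate for one bucket, tagged with
// the partner the bucket is attributed to (empty string = no partner).
type BucketUsage struct {
	Partner     string
	StorageTBMo float64
	EgressTB    float64
}

// estimateCosts groups usage by partner and prices each group with that
// partner's model, falling back to a default model for unknown partners.
func estimateCosts(usages []BucketUsage, models map[string]PriceModel, def PriceModel) map[string]float64 {
	costs := make(map[string]float64)
	for _, u := range usages {
		model, ok := models[u.Partner]
		if !ok {
			model = def
		}
		costs[u.Partner] += u.StorageTBMo*model.StoragePerTBMonth + u.EgressTB*model.EgressPerTB
	}
	return costs
}
```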
Resolves storj/storj-private#186
Change-Id: I2531643bc49f24fcb2e5f87e528b552285b6ff20
This is not recommended for most nodes; leaving your node running when
it can't handle requests fast enough is a good way to fail audits and
get disqualified, which may happen before you even know about the
problem.
But some Windows users are finding that this is being triggered
regularly on their nodes, and that it apparently causes the whole system
to lock up occasionally. We are adding this option as a way to mitigate
that problem until we can collect more information.
Change-Id: I7a652b0f9f970bbb9ed9f2cb3ad1cb89d90db8d7
This combines the ListStreamPositions and GetSegmentByPosition
calls with a ListSegments call that now knows how to return
only the segments within a Range, just like ListStreamPositions.
It would theoretically be possible to also include the
GetObjectLastCommitted call by having it do one of three
queries based on the incoming request range, but that would
mean duplicating the data for the object in every single
row that is returned for each segment in the range.
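Roughly the shape of the combined call, as a sketch; the table and column
names below are simplified assumptions rather than the actual metabase
schema:

```go
package metabasesketch

// Sketch only: fetch just the segments within a requested position range in
// a single query, instead of listing positions and then fetching each
// segment individually.

import (
	"context"
	"database/sql"
)

const listSegmentsRangeSQL = `
	SELECT position, root_piece_id, encrypted_size, plain_offset, plain_size
	FROM segments
	WHERE stream_id = $1 AND position >= $2 AND position < $3
	ORDER BY position
	LIMIT $4` // table/column names are simplified assumptions

// Segment holds the subset of columns this sketch reads.
type Segment struct {
	Position      int64
	RootPieceID   []byte
	EncryptedSize int64
	PlainOffset   int64
	PlainSize     int64
}

// listSegmentsRange returns the segments of one stream whose positions fall
// in [from, to), at most limit of them.
func listSegmentsRange(ctx context.Context, db *sql.DB, streamID []byte, from, to int64, limit int) ([]Segment, error) {
	rows, err := db.QueryContext(ctx, listSegmentsRangeSQL, streamID, from, to, limit)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var segments []Segment
	for rows.Next() {
		var s Segment
		if err := rows.Scan(&s.Position, &s.RootPieceID, &s.EncryptedSize, &s.PlainOffset, &s.PlainSize); err != nil {
			return nil, err
		}
		segments = append(segments, s)
	}
	return segments, rows.Err()
}
```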
One gross thing that ListSegments has to do now is update the
first segment returned with the information from any ancestor
segment because GetSegmentByPosition used to do that. It only
updates the first segment so that it doesn't do O(N) database
queries. It seems difficult to have it do a single query to
update all of the segments at once. I'm not certain this change
should be merged on this basis alone.
This change has made me think a few things should happen:
1. Server-side copy with ancestor segments strikes again,
making the code less clear and potentially more buggy
or inefficient for a rare case (empirically <0.1%).
2. The download code requests individual segments from
the satellite lazily as part of its download, which
requires the satellite to tell it the locations of all
of the segments, which in turn requires the satellite
to query the locations of all of the segments. Instead,
the download RPC could return the orders for all of
the segments in a range, and the download code could
issue N download calls rather than 1 download call and
N get-segment calls. I believe both sides of the code
path would be simpler and more efficient this way.
3. In looking at the timing information for downloads when
testing this, we really need to focus on getting the
auth key and bandwidth limit verification times down.
Here's the timing I saw:
- 42ms: validate auth
- 52ms: bandwidth usage checking
- 14ms: get object info
- 26ms: get segment position info
- 26ms: getting the first segment full info
- 20ms: unaccounted for by spans
- 6ms: creating the orders
This change will remove 26ms, but there's a good 90ms
in just validation. With improved semantics hitting the
database only once and improved validation, a download
rpc taking ~30ms seems doable compared to our current
~200ms.
Change-Id: I4109dba082eaedb79e634c61dbf86efa93ab1222
Get bucket was returning a "bad request" HTTP status code when the
bucket doesn't exist.
We have to return an HTTP "Not Found" status code instead.
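A minimal sketch of the intended mapping (the error value and helper are
placeholders, not the real handler code):

```go
package consoleapisketch

// Sketch only: map "bucket does not exist" to 404 Not Found instead of
// 400 Bad Request. ErrBucketNotFound and this helper are placeholders.

import (
	"errors"
	"net/http"
)

var ErrBucketNotFound = errors.New("bucket not found")

// writeBucketError picks the HTTP status for a failed get-bucket call.
func writeBucketError(w http.ResponseWriter, err error) {
	if errors.Is(err, ErrBucketNotFound) {
		http.Error(w, "bucket not found", http.StatusNotFound) // previously StatusBadRequest
		return
	}
	http.Error(w, "internal server error", http.StatusInternalServerError)
}
```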
Change-Id: If717d99276b02a1e59a9b71ebc909bd6d8d9390b
Updates flag descriptions with correct punctuation and fixes error
messages so that they are not capitalized.
Updates #5623
Change-Id: I9c6ef6d9888b2fb90b17db8775cc6abe803e102f
Instead of granting a coupon when purchasing a package, grant credit.
This changes paymentsconfig.PackagePlan to use a credit amount rather
than a coupon ID. Add an additional check to see whether a paid invoice
with the description already exists; if so, don't create and pay
another invoice.
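The idempotency check, sketched against a hypothetical invoice-listing
interface (not the real Stripe client wrapper):

```go
package paymentssketch

// Sketch only: before creating and paying a package invoice, check whether a
// paid invoice with the same description already exists for the customer.
// The Invoice type and ListInvoices call are placeholders.

import "context"

type Invoice struct {
	Description string
	Status      string // e.g. "paid"
}

type invoiceLister interface {
	ListInvoices(ctx context.Context, customerID string) ([]Invoice, error)
}

// hasPaidInvoice reports whether the customer already has a paid invoice
// with the given description, so we can skip creating and paying another.
func hasPaidInvoice(ctx context.Context, invoices invoiceLister, customerID, description string) (bool, error) {
	list, err := invoices.ListInvoices(ctx, customerID)
	if err != nil {
		return false, err
	}
	for _, inv := range list {
		if inv.Status == "paid" && inv.Description == description {
			return true, nil
		}
	}
	return false, nil
}
```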
Change-Id: I81df24984c519c773db5fc8e9070bd7797070ec2
Add and implement interface to manage customer balances. Adds ability to
add credit to a user's balance, list balance transactions, and get the
balance.
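Roughly the shape of such an interface, with illustrative names and
signatures:

```go
package balancesketch

// Sketch only: an interface for managing customer balances. Names and
// signatures are illustrative, not the actual payments interface.

import (
	"context"
	"time"
)

// BalanceTransaction is one credit or debit applied to a customer's balance.
type BalanceTransaction struct {
	ID          string
	Amount      int64 // in cents
	Description string
	CreatedAt   time.Time
}

// Balances manages customer balances.
type Balances interface {
	// ApplyCredit adds credit to the user's balance.
	ApplyCredit(ctx context.Context, userID string, amount int64, description string) (*BalanceTransaction, error)
	// ListTransactions returns the balance transactions for the user.
	ListTransactions(ctx context.Context, userID string) ([]BalanceTransaction, error)
	// Get returns the user's current balance.
	Get(ctx context.Context, userID string) (int64, error)
}
```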
Change-Id: I7fd65d07868bb2b7489d1141a5e9049514d6984e
Invoicing-related payment service methods have been modified to send
Stripe API requests in parallel.
Additionally, randomness has been added to the Stripe backend wrapper's
exponential backoff strategy in order to reduce the effects of the
thundering herd problem, which arises when executing many simultaneous
API calls.
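The jitter idea, as a minimal sketch; the base, cap, and jitter range are
illustrative, not the wrapper's actual parameters:

```go
package backoffsketch

// Sketch only: exponential backoff with randomized jitter, so many parallel
// Stripe calls that hit rate limits don't all retry at the same instant.

import (
	"math/rand"
	"time"
)

// backoffDelay returns the delay before retry number attempt (0-based):
// base * 2^attempt, capped at max, multiplied by a random factor in [0.5, 1.5).
func backoffDelay(attempt int, base, max time.Duration) time.Duration {
	d := base << uint(attempt)
	if d > max || d <= 0 { // d <= 0 guards against shift overflow
		d = max
	}
	jitter := 0.5 + rand.Float64() // in [0.5, 1.5)
	return time.Duration(float64(d) * jitter)
}
```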
Resolves #5156
Change-Id: I568f933284f4229ef41c155377ca0cc33f0eb5a4
Add columns package_plan and purchased_package_at to stripe_customers
table and add methods to update and select these values from console
service and payments accounts.
Change-Id: I1e89909055cc3054bfb7baa33c9dca3dfdc7336e
Implementing https://github.com/storj/storj/issues/5702 means adding a
bonus billing transaction for each storjscan transaction being recorded.
To do this idempotently, we need the ability for both the storjscan
and bonus transactions to be committed together.
This change updates the billing database to allow multiple billing
transactions to be inserted under the same database transaction.
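Conceptually, the billing DB interface changes from taking a single
transaction to taking a batch; a sketch with illustrative names:

```go
package billingsketch

// Sketch only: allow multiple billing transactions (e.g. a storjscan deposit
// and its bonus) to be inserted atomically in one database transaction.
// Types and names are illustrative, not the actual billing DB interface.

import "context"

type Transaction struct {
	UserID      string
	Amount      int64
	Description string
	Source      string // e.g. "storjscan" or "bonus"
}

type TransactionsDB interface {
	// Insert inserts all given transactions under a single database
	// transaction, so the storjscan deposit and its bonus are committed
	// (or rolled back) together.
	Insert(ctx context.Context, txs ...Transaction) ([]int64, error)
}
```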
Change-Id: I941864f47fc64d65aab076eec2e96fd04fcc7aac
This fixes the locked objects count not updating after actions such as upload or delete.
The issue still occurs if the user deletes a file, navigates away from the object browser, and then returns to the same bucket.
In that case we refetch the object count from both the satellite API and our gateway,
but the bucket tally won't have been triggered that quickly, so the object browser still shows locked objects.
Still, it's better than the current behavior.
Also reworked the odd object browser initialization that triggered a routing error, which was reported to HubSpot as a UI error.
Change-Id: I545ab925b135fe3ef2740d17aaece6d43b731c96
A row in the new `user_settings` table does not always exist for a user,
even if they have been around for a while.
Since `user_settings` is now what defines the state of a user's
onboarding flow, prior to this fix, even old users would receive the
onboarding flow again.
This change appropriately updates `user_settings` for users who already
have projects, and thus have already gone through the onboarding flow. A
brand new user will still be navigated to the beginning of onboarding.
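In effect the fix behaves like the following sketch (store interfaces and
method names are illustrative, not the console service API):

```go
package consolesketch

// Sketch only: when a user who already has projects is missing a
// user_settings row, create it with the onboarding flow marked as finished.

import "context"

type settingsStore interface {
	GetSettings(ctx context.Context, userID string) (onboardingDone bool, found bool, err error)
	UpsertOnboardingDone(ctx context.Context, userID string, done bool) error
}

type projectsStore interface {
	CountByOwner(ctx context.Context, userID string) (int, error)
}

// ensureOnboardingState marks onboarding as finished for users who already
// have projects; brand new users keep the default (onboarding not started).
func ensureOnboardingState(ctx context.Context, settings settingsStore, projects projectsStore, userID string) error {
	_, found, err := settings.GetSettings(ctx, userID)
	if err != nil {
		return err
	}
	if found {
		return nil // settings row already exists; nothing to backfill
	}
	count, err := projects.CountByOwner(ctx, userID)
	if err != nil {
		return err
	}
	return settings.UpsertOnboardingDone(ctx, userID, count > 0)
}
```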
Change-Id: Ie745d280f6b8094ec60c200c2dca8d018d51f7d1
The assignment `err = nil` is not used in the rest of the code;
it was only there as a protective assignment.
Change-Id: Id70fb2a2e68b91e2481952d865334e603ca41188
* There was one bug where all projects dashboard was enabled, and the
actual path was not being passed to the router
* There was another bug where all projects dashboard was disabled, and
the user was directed to the projects dashboard rather than the next
step of the onboarding flow, meaning that upon refresh, the user would
be prompted for package selection again
Change-Id: I388f04c3af9d03b84b4dd3af6de29e6b82b10531
Adds a flag to return an additional TXT record that will enable TLS
on custom domains with Linksharing.
Closes #5623
Change-Id: I941616362d7dcd9aec20dfd10346e483021516a4
FileWalker implements methods to walk over pieces in a storage
directory.
This is just a refactor to separate filewalker functions
from pieces.Store. This is needed to simplify the work
to create a separate filewalker subprocess and reduce the
number of config flags passed to the subprocess.
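Roughly the shape of the refactor, with simplified signatures (simplified
relative to the real storagenode pieces code):

```go
package piecessketch

// Sketch only: a FileWalker that owns the logic for walking pieces in a
// storage directory, separated from pieces.Store.

import (
	"context"
	"io/fs"
	"path/filepath"
)

// FileWalker walks piece files under a storage directory.
type FileWalker struct {
	dir string // root of the pieces directory
}

func NewFileWalker(dir string) *FileWalker {
	return &FileWalker{dir: dir}
}

// WalkPieces calls fn for every regular file found for the given satellite's
// namespace. The namespace-to-path mapping here is illustrative.
func (w *FileWalker) WalkPieces(ctx context.Context, namespace string, fn func(path string, info fs.FileInfo) error) error {
	root := filepath.Join(w.dir, namespace)
	return filepath.Walk(root, func(path string, info fs.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if ctx.Err() != nil {
			return ctx.Err()
		}
		if info.IsDir() {
			return nil
		}
		return fn(path, info)
	})
}
```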
You might want to check https://review.dev.storj.io/c/storj/storj/+/9773
Change-Id: I4e9567024e54fc7c0bb21a7c27182ef745839fff
This flag was previously implemented, but when we reworked the billing
UI, we forgot to re-implement it with the new screens.
This change fixes that.
Change-Id: Ifad2b82f1080928b72d7e572796fcf4287e5ed3f
By accident, the query to get the latest table stats was ordered by
the 'row_count' column instead of 'create'. We need the latest stats,
so we need to order by creation time.
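The intended ordering, sketched (table and column names follow the wording
above and may differ from the real query):

```go
package dbsketch

// Sketch only: order table stats by creation time so we pick the latest
// statistics row, not the row with the largest row_count.
const latestTableStatsSQL = `
	SELECT row_count
	FROM table_stats
	ORDER BY "create" DESC
	LIMIT 1`
```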
Change-Id: I9d0a0edda8bab59c3d96b7a15cd6502ed51633fc
Download is served from two goroutines:
* one is waiting for the orders (and updates the actual limit)
* the other one sends the valuable bytes back to the client (in case the actual order is big enough)
These two tasks are synchronized with the help of a `sync2.NewThrottle()`.
But all of this happens in the same method, so we have no idea how much time is spent waiting for the next orders
(the throttle can wait until we receive a new order limit) and how much time is spent on actual work.
This patch moves the actual work (after the sending routine is woken up) to a separate method to have better visibility and measure the actual work (read data + send it).
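The shape of the split, sketched with monkit-style instrumentation and
placeholder types:

```go
package piecestoresketch

// Sketch only: pull the per-chunk work (read from disk + send to the client)
// out into its own instrumented method, so time spent waiting on the throttle
// for new order limits is measured separately from time spent doing work.
// Types and signatures are simplified placeholders.

import (
	"context"

	"github.com/spacemonkeygo/monkit/v3"
)

var mon = monkit.Package()

type chunk struct {
	offset int64
	size   int64
}

type sender interface {
	Read(ctx context.Context, offset, size int64) ([]byte, error)
	Send(ctx context.Context, data []byte) error
}

// sendChunk does the actual work for one throttle wake-up: read the data and
// send it to the client. Having it as a separate method gives it its own span.
func sendChunk(ctx context.Context, s sender, c chunk) (err error) {
	defer mon.Task()(&ctx)(&err)

	data, err := s.Read(ctx, c.offset, c.size)
	if err != nil {
		return err
	}
	return s.Send(ctx, data)
}
```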
Change-Id: Ia5068c544560a53bc2fcea6cb6fce85cfacbd95b