The following tests should be made less flaky by this change:
- TestFailedDataRepair
- TestOfflineNodeDataRepair
- TestUnknownErrorDataRepair
- TestMissingPieceDataRepair_Succeed
- TestMissingPieceDataRepair
- TestCorruptDataRepair_Succeed
- TestCorruptDataRepair_Failed
This follows on from a change in commit 6bb64796. Nearly all tests in the
repair suite used to rely on events happening in a certain order. After
some of our performance work, those things no longer happen in that
expected order every time. This caused much flakiness.
The fix in 6bb64796 was sufficient for the tests operating directly on
an `*ECRepairer` instance, but not for the tests that make use of the
repairer by way of the repair queue and the repair worker. These tests
needed a different way to indicate the number of expected failures. This
change provides that different way.
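Roughly, the new hook looks like this (a minimal sketch; the names and
wiring below are illustrative assumptions, not the actual API):

    package repairer

    // SegmentRepairer repairs segments pulled off the repair queue.
    type SegmentRepairer struct {
        // allowedFailures is how many piece failures the test has
        // deliberately arranged (corrupted pieces, offline nodes, ...);
        // the repair worker treats up to this many errors as expected,
        // regardless of the order in which piece downloads finish.
        allowedFailures int
    }

    // TestingSetAllowedFailures is called by tests before queueing the
    // segment for repair.
    func (r *SegmentRepairer) TestingSetAllowedFailures(n int) {
        r.allowedFailures = n
    }

A test that corrupts pieces on some nodes would set this to the number of
nodes it broke before letting the repair worker run.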
Refs: https://github.com/storj/storj/issues/5736
Refs: https://github.com/storj/storj/issues/5718
Refs: https://github.com/storj/storj/issues/5715
Refs: https://github.com/storj/storj/issues/5609
Change-Id: Iddcf5be3a3ace7ad35fddb513ab53dd3f2f0eb0e
Components related to project usage costs have been updated to show
different estimates for each partner, and the satellite has been
updated to send the client the information it needs to do this.
Previously, project costs in the satellite frontend were estimated
using only the price model corresponding to the partner that the user
registered with. This caused users who had a project containing
differently-attributed buckets to see an incorrect price estimation.
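Conceptually, the per-partner estimation prices each bucket with the model
of the partner it is attributed to. A rough sketch with assumed names and a
simplified price model (the real satellite and front-end types differ):

    package console

    // PriceModel is a simplified per-partner price model.
    type PriceModel struct {
        StoragePerTBMonth float64 // USD per TB-month stored
        EgressPerTB       float64 // USD per TB downloaded
    }

    // BucketUsage is a single bucket's usage, attributed to a partner.
    type BucketUsage struct {
        Partner        string
        StorageTBMonth float64
        EgressTB       float64
    }

    // EstimateProjectCost prices each bucket with the model of the partner
    // it is attributed to, rather than one model for the whole project.
    func EstimateProjectCost(usages []BucketUsage, models map[string]PriceModel, def PriceModel) float64 {
        var total float64
        for _, u := range usages {
            model, ok := models[u.Partner]
            if !ok {
                model = def
            }
            total += u.StorageTBMonth*model.StoragePerTBMonth + u.EgressTB*model.EgressPerTB
        }
        return total
    }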
Resolves storj/storj-private#186
Change-Id: I2531643bc49f24fcb2e5f87e528b552285b6ff20
This is not recommended for most nodes; leaving your node running when
it can't handle requests fast enough is a good way to fail audits and
get disqualified, which may happen before you even know about the
problem.
But some Windows users are finding that this is being triggered
regularly on their nodes, and that it apparently causes the whole system
to lock up occasionally. We are adding this option as a way to mitigate
that problem until we can collect more information.
Change-Id: I7a652b0f9f970bbb9ed9f2cb3ad1cb89d90db8d7
This combines the ListStreamPositions and GetSegmentByPosition
calls with a ListSegments call that now knows how to return
only the segments within a Range, just like ListStreamPositions.
It would theoretically be possible to also include the
GetObjectLastCommitted call by having it do one of three
queries based on the incoming request range, but that would
mean duplicating the data for the object in every single
row that is returned for each segment in the range.
One gross thing that ListSegments has to do now is update the
first segment returned with the information from any ancestor
segment because GetSegmentByPosition used to do that. It only
updates the first segment so that it doesn't do O(N) database
queries. It seems difficult to have it do a single query to
update all of the segments at once. I'm not certain this change
should be merged on this basis alone.
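A condensed sketch of the new behavior (the types and helpers here are
simplified stand-ins for the real metabase API):

    package metabase

    import "context"

    // Segment is a trimmed-down stand-in for the metabase segment record.
    type Segment struct {
        Position   uint64
        AncestorID string   // non-empty for server-side-copy segments
        Pieces     []string // piece locations, empty on a copied segment
    }

    // listSegmentsRange returns only the segments within [start, limit),
    // the way ListStreamPositions already did, and copies piece info into
    // the first returned segment from its ancestor, because
    // GetSegmentByPosition used to do that. Only the first segment is
    // fixed up, to avoid O(N) extra queries.
    func listSegmentsRange(
        ctx context.Context, all []Segment, start, limit uint64,
        getAncestor func(context.Context, string) (Segment, error),
    ) ([]Segment, error) {
        var out []Segment
        for _, s := range all {
            if s.Position >= start && s.Position < limit {
                out = append(out, s)
            }
        }
        if len(out) > 0 && out[0].AncestorID != "" {
            ancestor, err := getAncestor(ctx, out[0].AncestorID)
            if err != nil {
                return nil, err
            }
            out[0].Pieces = ancestor.Pieces
        }
        return out, nil
    }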
This change has made me think a couple of things should happen:
1. Server side copy with ancestor segments strikes again
making the code less clear and potentially more buggy
or inefficient for a rare case (empirically <0.1%)
2. The download code requests individual segments from
the satellite lazily as part of its download which
requires the satellite telling it the locations of
all of the segments which requires the satellite
querying the locations of all of the segments. Instead
the download RPC could return the orders for all of
the segments for a range and the download code could
issue N download calls rather than 1 download call and
N get segment calls. I believe both sides of the code
paths would be simpler and more efficient this way.
3. In looking at the timing information for downloads when
testing this, we really need to focus on getting the
auth key and bandwidth limit verification times down.
Here's the timing I saw:
- 42ms: validate auth
- 52ms: bandwidth usage checking
- 14ms: get object info
- 26ms: get segment position info
- 26ms: getting the first segment full info
- 20ms: unaccounted for by spans
- 6ms: creating the orders
This change will remove 26ms, but there's a good 90ms in just
validation. With improved semantics hitting the database only once and
faster validation, a download RPC taking ~30ms seems doable compared
to our current ~200ms.
Change-Id: I4109dba082eaedb79e634c61dbf86efa93ab1222
Get bucket was returning a "Bad Request" HTTP status code when the
bucket doesn't exist. We have to return an HTTP "Not Found" status code
instead.
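The intended mapping, sketched against a plain net/http handler (the real
satellite endpoint is structured differently):

    package buckets

    import (
        "errors"
        "net/http"
    )

    // ErrBucketNotFound is the sentinel error for a missing bucket.
    var ErrBucketNotFound = errors.New("bucket not found")

    // writeGetBucketError maps bucket lookup errors to HTTP status codes.
    func writeGetBucketError(w http.ResponseWriter, err error) {
        if errors.Is(err, ErrBucketNotFound) {
            // A missing bucket is not a malformed request.
            http.Error(w, err.Error(), http.StatusNotFound)
            return
        }
        http.Error(w, err.Error(), http.StatusBadRequest)
    }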
Change-Id: If717d99276b02a1e59a9b71ebc909bd6d8d9390b
Updates flag descriptions with correct punctuation, and fixes error
messages so they are not capitalized.
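For reference, roughly the conventions being applied (example code, not
taken from this change; the flag-description punctuation shown is an
assumption, the lowercase error strings are standard Go lint guidance):

    package main

    import (
        "errors"
        "flag"
        "fmt"
    )

    // Flag description: a sentence with terminating punctuation.
    var interval = flag.Duration("check-interval", 0, "How often to run the check.")

    // Error string: starts lowercase, no trailing punctuation, so it reads
    // well when wrapped, e.g. "load config: object not found".
    var errNotFound = errors.New("object not found")

    func main() {
        flag.Parse()
        fmt.Println(*interval, errNotFound)
    }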
Updates #5623
Change-Id: I9c6ef6d9888b2fb90b17db8775cc6abe803e102f
Instead of granting a coupon when purchasing a package, grant credit.
This changes paymentsconfig.PackagePlan to use a credit amount rather
than a coupon ID. Adds an additional check to see whether a paid
invoice with the same description already exists. If so, don't create
and pay another invoice.
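The duplicate-purchase guard, roughly (the helper interface below is a
stand-in for the real Stripe client wrapper):

    package stripecoinpayments

    import "context"

    // invoiceLister is a stand-in for the part of the Stripe wrapper used
    // here: it lists the descriptions of the customer's paid invoices.
    type invoiceLister interface {
        ListPaidInvoiceDescriptions(ctx context.Context, customerID string) ([]string, error)
    }

    // alreadyPurchased reports whether a paid invoice with the package
    // plan's description already exists; if it does, no new invoice is
    // created and paid.
    func alreadyPurchased(ctx context.Context, invoices invoiceLister, customerID, packageDesc string) (bool, error) {
        descs, err := invoices.ListPaidInvoiceDescriptions(ctx, customerID)
        if err != nil {
            return false, err
        }
        for _, d := range descs {
            if d == packageDesc {
                return true, nil
            }
        }
        return false, nil
    }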
Change-Id: I81df24984c519c773db5fc8e9070bd7797070ec2
Add and implement an interface to manage customer balances. This adds
the ability to add credit to a user's balance, list balance
transactions, and get the balance.
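The shape of the interface, roughly (names and types below are
assumptions, simplified from the real payments code):

    package payments

    import "context"

    // BalanceTransaction is a simplified credit or debit entry on a
    // customer's balance.
    type BalanceTransaction struct {
        ID          string
        Amount      int64 // cents; in Stripe's model a negative balance is credit
        Description string
    }

    // Balances manages customer balances.
    type Balances interface {
        // ApplyCredit adds credit to the user's balance.
        ApplyCredit(ctx context.Context, userID string, amount int64, description string) (*BalanceTransaction, error)
        // ListTransactions returns the user's balance transactions.
        ListTransactions(ctx context.Context, userID string) ([]BalanceTransaction, error)
        // Get returns the user's current balance.
        Get(ctx context.Context, userID string) (int64, error)
    }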
Change-Id: I7fd65d07868bb2b7489d1141a5e9049514d6984e
Invoicing-related payment service methods have been modified to send
Stripe API requests in parallel.
Additionally, randomness has been added to the Stripe backend wrapper's
exponential backoff strategy in order to reduce the effects of the
thundering herd problem, which arises when executing many simultaneous
API calls.
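Only the jitter part is sketched here; a minimal example of exponential
backoff with full random jitter (the actual backoff wrapper is structured
differently, and the parallelization is not shown):

    package backoff

    import (
        "math/rand"
        "time"
    )

    // delay returns the wait before the given retry attempt: an
    // exponentially growing cap with full random jitter, so many
    // simultaneous callers do not all retry at the same instant.
    func delay(attempt int, base, max time.Duration) time.Duration {
        if max <= 0 {
            return 0
        }
        d := base << uint(attempt) // base * 2^attempt
        if d <= 0 || d > max {
            d = max // also guards against shift overflow
        }
        // Full jitter: a uniformly random duration in [0, d).
        return time.Duration(rand.Int63n(int64(d)))
    }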
Resolves #5156
Change-Id: I568f933284f4229ef41c155377ca0cc33f0eb5a4