No component has referenced this page since 9dab10e and we do not
anticipate this changing, so this page can be safely removed.
Resolves #5768
Change-Id: I57acb5e4d0977d74df46aaf67606a19ec0f10bcf
We automatically start a chore to check whether the blobstore is
writeable and readable; however, we don't want the tests to fail for
that reason, since they usually exercise some other failure.
There is probably a nicer way to achieve this, but this is the easier
fix.
Change-Id: I77ada75329f88d3ea52edd2022e811e337c5255a
- only applies to storjscan transactions
- applies a 10% bonus by default
- bonus transactions have a distinct source "type" to allow for
filtering on the frontend
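
A minimal sketch of how such a bonus transaction could be derived; all
identifiers below (Transaction, bonusFor, the "storjscan_bonus" source)
are illustrative assumptions, not the actual billing types.

```go
// Hypothetical sketch: derive a bonus billing transaction from a
// storjscan deposit. Identifiers are illustrative, not the real types.
type Transaction struct {
	UserID      string
	Amount      int64  // amount in base units
	Source      string // e.g. "storjscan", "storjscan_bonus"
	Description string
}

// bonusFor returns the bonus transaction for a storjscan deposit,
// applying a configurable rate (10% by default).
func bonusFor(deposit Transaction, bonusRate int64) Transaction {
	return Transaction{
		UserID:      deposit.UserID,
		Amount:      deposit.Amount * bonusRate / 100,
		Source:      "storjscan_bonus", // distinct source for frontend filtering
		Description: "storjscan bonus",
	}
}
```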
Fixes: https://github.com/storj/storj/issues/5702
Change-Id: I32d65f776c58bcb41227ff5bc77a8e4cb62a9add
This adds another account endpoint, PATCH /auth/account/settings, to
handle changing a user's settings, including their session timeout
config.
Issue: https://github.com/storj/storj/issues/5560
Change-Id: I747b4e919cf7cef7c867ac9d282837ef51bed67e
We avoid putting more than one piece of a segment on the same /24
network (or /64 for ipv6). However, it is possible for multiple pieces
of the same segment to move to the same network over time. Nodes can
change addresses, or segments could be uploaded with dev settings, etc.
We will call such pieces "clumped", as they are clumped into the same
net, and are much more likely to be lost or preserved together.
This change teaches the repair checker to recognize segments which have
clumped pieces, and put them in the repair queue. It also teaches the
repair worker to repair such segments (treating clumped pieces as
"retrievable but unhealthy"; i.e., they will be replaced on new nodes if
possible).
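
A rough sketch of how clumped-piece detection could work, grouping
pieces by their node's network; all names are illustrative rather than
the actual repair checker code.

```go
// Hypothetical sketch: find pieces that share a /24 (or /64) network
// with another piece of the same segment. Names are illustrative only.
type piece struct {
	Number  uint16
	LastNet string // e.g. "5.6.7.0" for a /24, or the /64 prefix for IPv6
}

// clumpedPieces returns every piece whose network is already occupied
// by an earlier piece of the same segment; those are treated as
// "retrievable but unhealthy" and re-placed on new nodes during repair.
func clumpedPieces(pieces []piece) []piece {
	seen := make(map[string]bool, len(pieces))
	var clumped []piece
	for _, p := range pieces {
		if seen[p.LastNet] {
			clumped = append(clumped, p)
			continue
		}
		seen[p.LastNet] = true
	}
	return clumped
}
```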
Refs: https://github.com/storj/storj/issues/5391
Change-Id: Iaa9e339fee8f80f4ad39895438e9f18606338908
We would like the ability to limit burst uploads to a single object
(the same location). With this change we limit such uploads to one per
second.
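
One possible shape for such a limiter, keyed by object location and
built on golang.org/x/time/rate; the keying and lifetime details are
assumptions for illustration.

```go
// Hypothetical sketch: one-upload-per-second limiter keyed by object
// location. Eviction of idle entries is omitted.
import (
	"sync"

	"golang.org/x/time/rate"
)

type uploadLimiter struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter // key: project/bucket/object key
}

// allow reports whether an upload to the given object location may
// start now.
func (u *uploadLimiter) allow(location string) bool {
	u.mu.Lock()
	defer u.mu.Unlock()
	if u.limiters == nil {
		u.limiters = make(map[string]*rate.Limiter)
	}
	l, ok := u.limiters[location]
	if !ok {
		l = rate.NewLimiter(rate.Limit(1), 1) // 1 upload per second, burst 1
		u.limiters[location] = l
	}
	return l.Allow()
}
```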
Change-Id: Ib9351df1017cbc07d7fc2f846c2dbdbfcd3a360c
The blobstore implementation is entirely related to storagenode, so the
rightful place is together with the storagenode implementation.
Fixes https://github.com/storj/storj/issues/5754
Change-Id: Ie6637b0262cf37af6c3e558556c7604d9dc3613d
Add the ability to sort API keys on the access management page by
ascending or descending name or creation date. Additionally, edit the
API keys query, when ordering by name, to order by lower(name) so that
names starting with capital letters are not treated differently from
lowercase names when ordering.
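
For illustration only, the equivalent case-insensitive ordering
expressed as an in-memory sort; the actual change applies lower(name)
inside the SQL query.

```go
// Illustrative only: the in-memory equivalent of ORDER BY lower(name).
import (
	"sort"
	"strings"
	"time"
)

type apiKey struct {
	Name      string
	CreatedAt time.Time
}

func sortByNameAscending(keys []apiKey) {
	sort.Slice(keys, func(i, j int) bool {
		return strings.ToLower(keys[i].Name) < strings.ToLower(keys[j].Name)
	})
}
```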
Change-Id: I81dbb87587a24fb7097313f76bad116b1f20d306
With this change we replace our parsing code with an existing go-redis
utility.
We also switch the Redis client to version 9.
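
Assuming the utility in question is redis.ParseURL, a minimal sketch of
the go-redis v9 usage (the URL is a placeholder):

```go
// Minimal sketch using go-redis v9's URL parsing instead of
// hand-rolled code.
import "github.com/redis/go-redis/v9"

func openRedis(rawURL string) (*redis.Client, error) {
	opts, err := redis.ParseURL(rawURL) // e.g. "redis://user:pass@host:6379/0"
	if err != nil {
		return nil, err
	}
	return redis.NewClient(opts), nil
}
```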
Change-Id: Ie4a651e3ae6960e68958c690873925d319b70e10
The following tests should be made less flaky by this change:
- TestFailedDataRepair
- TestOfflineNodeDataRepair
- TestUnknownErrorDataRepair
- TestMissingPieceDataRepair_Succeed
- TestMissingPieceDataRepair
- TestCorruptDataRepair_Succeed
- TestCorruptDataRepair_Failed
This follows up on a change in commit 6bb64796. Nearly all tests in the
repair suite used to rely on events happening in a certain order. After
some of our performance work, those events no longer happen in that
expected order every time. This caused much flakiness.
The fix in 6bb64796 was sufficient for the tests operating directly on
an `*ECRepairer` instance, but not for the tests that make use of the
repairer by way of the repair queue and the repair worker. These tests
needed a different way to indicate the number of expected failures. This
change provides that different way.
Refs: https://github.com/storj/storj/issues/5736
Refs: https://github.com/storj/storj/issues/5718
Refs: https://github.com/storj/storj/issues/5715
Refs: https://github.com/storj/storj/issues/5609
Change-Id: Iddcf5be3a3ace7ad35fddb513ab53dd3f2f0eb0e
Components related to project usage costs have been updated to show
different estimations for each partner, and the satellite has been
updated to send the client the information it needs to do this.
Previously, project costs in the satellite frontend were estimated
using only the price model corresponding to the partner that the user
registered with. This caused users who had a project containing
differently-attributed buckets to see an incorrect price estimation.
Resolves storj/storj-private#186
Change-Id: I2531643bc49f24fcb2e5f87e528b552285b6ff20
This combines the ListStreamPositions and GetSegmentByPosition
calls with a ListSegments call that now knows how to return
only the segments within a Range, just like ListStreamPositions.
It would theoretically be possible to also include the
GetObjectLastCommitted call by having it do one of three
queries based on the incoming request range, but that would
mean duplicating the data for the object in every single
row that is returned for each segment in the range.
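
A rough sketch of the ranged call shape described above; every name
here (StreamRange, ListSegmentsParams, and so on) is an illustrative
assumption rather than the actual metainfo API.

```go
// Hypothetical request/response shape for a ranged ListSegments call
// that replaces the ListStreamPositions + GetSegmentByPosition pair.
type SegmentPosition struct {
	Part  uint32
	Index uint32
}

type StreamRange struct {
	PlainStart, PlainLimit int64 // byte offsets within the object
}

type SegmentDownloadInfo struct {
	Position      SegmentPosition
	PlainOffset   int64
	EncryptedSize int64
}

type ListSegmentsParams struct {
	StreamID []byte
	Range    *StreamRange // nil means list all segments
	Limit    int
}

type ListSegmentsResponse struct {
	Segments []SegmentDownloadInfo // only segments overlapping Range
	More     bool
}
```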
One gross thing that ListSegments has to do now is update the
first segment returned with the information from any ancestor
segment because GetSegmentByPosition used to do that. It only
updates the first segment so that it doesn't do O(N) database
queries. It seems difficult to have it do a single query to
update all of the segments at once. I'm not certain this change
should be merged on this basis alone.
This change has made me think a couple of things should happen:
1. Server side copy with ancestor segments strikes again
making the code less clear and potentially more buggy
or inefficient for a rare case (empirically <0.1%)
2. The download code requests individual segments from
the satellite lazily as part of its download which
requires the satellite telling it the locations of
all of the segments which requires the satellite
querying the locations of all of the segments. Instead
the download RPC could return the orders for all of
the segments for a range and the download code could
issue N download calls rather than 1 download call and
N get segment calls. I believe both sides of the code
paths would be simpler and more efficient this way.
3. In looking at the timing information for downloads when
testing this, we really need to focus on getting the
auth key and bandwidth limit verification times down.
Here's the timing I saw:
- 42ms: validate auth
- 52ms: bandwidth usage checking
- 14ms: get object info
- 26ms: get segment position info
- 26ms: getting the first segment full info
- 20ms: unaccounted for by spans
- 6ms: creating the orders
This change will remove 26ms, but there's a good 90ms
in just validation. With improved semantics hitting the
database only once and improved validation, a download
rpc taking ~30ms seems doable compared to our current
~200ms.
Change-Id: I4109dba082eaedb79e634c61dbf86efa93ab1222
Get bucket was returning a "Bad Request" HTTP status code when the
bucket doesn't exist.
We have to return an HTTP "Not Found" status code instead.
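
A minimal sketch of the mapping, assuming a hypothetical
bucket-not-found error value:

```go
// Illustrative mapping only; ErrBucketNotFound is a stand-in name.
import (
	"errors"
	"net/http"
)

var ErrBucketNotFound = errors.New("bucket not found")

func getBucketStatus(err error) int {
	switch {
	case err == nil:
		return http.StatusOK
	case errors.Is(err, ErrBucketNotFound):
		return http.StatusNotFound // previously http.StatusBadRequest
	default:
		return http.StatusInternalServerError
	}
}
```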
Change-Id: If717d99276b02a1e59a9b71ebc909bd6d8d9390b
Instead of granting a coupon when purchasing a package, grant credit.
This changes paymentsconfig.PackagePlan to use credit amount rather than
coupon ID. Add an additional check to see whether a paid invoice with
the description already exists. If so, don't create and pay another
invoice.
Change-Id: I81df24984c519c773db5fc8e9070bd7797070ec2
Add and implement an interface to manage customer balances. This adds
the ability to add credit to a user's balance, list balance
transactions, and get the balance.
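
A hedged sketch of what such an interface could look like; method
names and types are illustrative, not the actual payments API.

```go
// Hypothetical shape of a customer balance manager. Names illustrative.
import (
	"context"
	"time"
)

type BalanceTransaction struct {
	ID          string
	Amount      int64 // cents
	Description string
	CreatedAt   time.Time
}

type Balances interface {
	// ApplyCredit adds credit to the user's balance.
	ApplyCredit(ctx context.Context, userID string, amount int64, desc string) (*BalanceTransaction, error)
	// ListTransactions returns the user's balance transactions.
	ListTransactions(ctx context.Context, userID string) ([]BalanceTransaction, error)
	// Get returns the user's current balance in cents.
	Get(ctx context.Context, userID string) (int64, error)
}
```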
Change-Id: I7fd65d07868bb2b7489d1141a5e9049514d6984e
Invoicing-related payment service methods have been modified to send
Stripe API requests in parallel.
Additionally, randomness has been added to the Stripe backend wrapper's
exponential backoff strategy in order to reduce the effects of the
thundering herd problem, which arises when executing many simultaneous
API calls.
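
A small sketch of exponential backoff with randomness (jitter);
constants and structure are illustrative rather than the wrapper's
exact behavior.

```go
// Illustrative exponential backoff with jitter; values are not the
// real config. base and max must be > 0.
import (
	"math/rand"
	"time"
)

// backoffDelay returns the delay before the given retry attempt
// (0-based): double a base delay per attempt, then pick a uniformly
// random duration up to that bound so simultaneous callers don't retry
// in lockstep.
func backoffDelay(attempt int, base, max time.Duration) time.Duration {
	d := base << uint(attempt) // base * 2^attempt
	if d > max || d <= 0 {
		d = max
	}
	return time.Duration(rand.Int63n(int64(d) + 1))
}
```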
Resolves #5156
Change-Id: I568f933284f4229ef41c155377ca0cc33f0eb5a4
Add package_plan and purchased_package_at columns to the
stripe_customers table, and add methods to update and select these
values from the console service and payments accounts.
Change-Id: I1e89909055cc3054bfb7baa33c9dca3dfdc7336e
Implementing https://github.com/storj/storj/issues/5702 means adding a
bonus billing transaction for each storjscan transaction being recorded.
To do this idempotently, we need the ability for both the storjscan and
bonus transactions to be committed together.
This change updates the billing database to allow multiple billing
transactions to be inserted under the same database transaction.
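
A hedged sketch of the batch insert this enables, reusing the
illustrative Transaction shape from the bonus sketch above; the method
name and interface are assumptions, not the real billing DB API.

```go
// Hypothetical sketch: insert the storjscan transaction and its bonus
// in one database transaction so recording stays idempotent.
// Transaction is the illustrative type sketched earlier.
import "context"

type TransactionsDB interface {
	// InsertBatch stores all given billing transactions atomically;
	// either both the deposit and its bonus are recorded, or neither is.
	InsertBatch(ctx context.Context, txs []Transaction) error
}

func recordDeposit(ctx context.Context, db TransactionsDB, deposit, bonus Transaction) error {
	return db.InsertBatch(ctx, []Transaction{deposit, bonus})
}
```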
Change-Id: I941864f47fc64d65aab076eec2e96fd04fcc7aac
A row in the new `user_settings` table does not always exist for a user,
even if they have been around for a while.
Since `user_settings` is now what defines the state of a user's
onboarding flow, prior to this fix, even old users would receive the
onboarding flow again.
This change appropriately updates `user_settings` for users who already
have projects, and thus have already gone through the onboarding flow. A
brand new user will still be navigated to the beginning of onboarding.
Change-Id: Ie745d280f6b8094ec60c200c2dca8d018d51f7d1
FileWalker implements methods to walk over pieces in a storage
directory.
This is just a refactor to separate filewalker functions
from pieces.Store. This is needed to simplify the work
to create a separate filewalker subprocess and reduce the
number of config flags passed to the subprocess.
You might want to check https://review.dev.storj.io/c/storj/storj/+/9773
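
A rough sketch of the separation, with illustrative names only; the
real code has different details.

```go
// Hypothetical sketch: piece-walking logic pulled out of pieces.Store
// into a small type that a separate subprocess could also host.
import "context"

type PieceAccess interface {
	PieceID() string
	Size() int64
}

type BlobStore interface {
	WalkNamespace(ctx context.Context, namespace string, fn func(PieceAccess) error) error
}

type FileWalker struct {
	blobs BlobStore // abstraction over the storage directory
}

// WalkSatellitePieces calls fn for every piece stored for the given
// satellite.
func (fw *FileWalker) WalkSatellitePieces(ctx context.Context, satelliteID string, fn func(PieceAccess) error) error {
	return fw.blobs.WalkNamespace(ctx, satelliteID, fn)
}
```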
Change-Id: I4e9567024e54fc7c0bb21a7c27182ef745839fff
By accident, the query to get the latest table stats was ordered by the
'row_count' column instead of 'create'. We need the latest stats, so we
need to order by creation time.
Change-Id: I9d0a0edda8bab59c3d96b7a15cd6502ed51633fc
This flag was, in general, a one-time switch to enable versions
internally. Now we can remove it, as it makes the code more complex.
Change-Id: I740b6e8fae80d5fac51d9425793b02678357490e
We want to eliminate usages of LoopSegmentEntry.Pieces, because it is
costing a lot of cpu time to look up node IDs with every piece of every
segment we read.
In this change, we are eliminating use of LoopSegmentEntry.Pieces in the
node tally observer (both the ranged loop and segments loop variants).
It is not necessary to have a fully resolved nodeID until it is time to
store totals in the database. We can use NodeAliases as the map key
instead, and resolve NodeIDs just before storing totals.
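
A rough sketch of the approach: accumulate totals keyed by alias during
the loop and resolve node IDs once at flush time. All names are
illustrative.

```go
// Hypothetical sketch: tally by NodeAlias during the loop, resolve
// NodeIDs only when storing totals. Types and names are illustrative.
type NodeAlias int32
type NodeID [32]byte

type nodeTally struct {
	totals map[NodeAlias]float64
}

func newNodeTally() *nodeTally {
	return &nodeTally{totals: make(map[NodeAlias]float64)}
}

func (t *nodeTally) add(alias NodeAlias, pieceSize float64) {
	t.totals[alias] += pieceSize
}

// flush resolves aliases to node IDs once, just before storing totals.
func (t *nodeTally) flush(resolve func(NodeAlias) (NodeID, bool), store func(NodeID, float64)) {
	for alias, total := range t.totals {
		if id, ok := resolve(alias); ok {
			store(id, total)
		}
	}
}
```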
Refs: https://github.com/storj/storj/issues/5622
Change-Id: Iec12aa393072436d7c22cc5a4ae1b63966cbcc18
The default price threshold has been changed to 10000 cents. The help
text has also been modified to clarify that this value is in cents.
Change-Id: I3b605056c860a3afef7202950180bc8b863fe16a
A debug log has been added to the autofreeze chore to show the numbers
of invoices, users involved, users frozen, users warned, and users
bypassed because they owe more than the configured amount.
Issue: https://github.com/storj/storj/issues/5700
Change-Id: I4e628606fe234b6d239b76167a4781ca9e276308
This handles cases where a user is warned and triggers payment for
their account. Previously, only a frozen account would trigger this
payment and would be unfrozen on successful payment. Now, accounts in
the warning state trigger payments and are removed from that state on
successful payment.
Issue: https://github.com/storj/storj/issues/5691
Change-Id: Icc2107f5d256657d176d8b0dd0a43a470eb01277
Previously, our code produced a Go panic when attempting to index an
empty slice of bucket user agent entries. A length check has been
added to ensure that the slice is not accessed beyond its bounds.
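
A minimal sketch of the guard, with illustrative names:

```go
// Illustrative guard only; the function and slice names are stand-ins.
func firstUserAgent(entries []string) string {
	if len(entries) == 0 {
		return "" // previously entries[0] panicked on an empty slice
	}
	return entries[0]
}
```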
Change-Id: I904ca275b934be1b05393a45d99ff415dcafc926
This change adds an endpoint that gets a user's settings. It will
create a new settings entry if none exists. There's also a new
endpoint to change a user's onboarding status.
Issue: https://github.com/storj/storj/issues/5661
Change-Id: I9941bb9d61994af46244003f3ef4fcfe7d36918e
This change adds onboarding_start, onboarding_end, and onboarding_step
columns to the user_settings table. The first two are used to determine
whether a user should go through onboarding; the last will be used to
record the step a user got to before exiting onboarding without
finishing.
Issue: https://github.com/storj/storj/issues/5660
Change-Id: I8070c880d0d2fc22086f24087c962f57c695cc50
A Stripe backend implementation has been added that uses an exponential
backoff strategy to retry failed API calls. This behavior can be
configured in the satellite config.
References #5156
Change-Id: I16ff21a39775ea331c442457f976be0c95a7b695
We have a specific issue that a user uploaded a file to a bucket
geo-fenced to EU and one of the pieces appeared to be on a node in the
US. The country code of this node is set to SE (Sweden) in the satellite
DB. It turns out that some time ago MaxMind changed the country code of
this node's IP from Sweden to US, but this change hasn't been reflected
in the satellite's database.
So far, the satellite has updated the country code of a node only if
its IP changes. It was assumed that if the IP does not change, its
country code shouldn't change either. This turned out to be a wrong
assumption.
With this change, the satellite will look up the MaxMindDB on every
check-in to see if the country code of the node's IP has changed.
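
A hedged sketch of such a lookup using the oschwald/geoip2-golang
reader; how it is wired into check-in is illustrative.

```go
// Hypothetical sketch: re-resolve the node's country code on every
// check-in instead of only when the IP changes.
import (
	"net"

	"github.com/oschwald/geoip2-golang"
)

func countryCodeFor(db *geoip2.Reader, ip net.IP) (string, error) {
	record, err := db.Country(ip)
	if err != nil {
		return "", err
	}
	return record.Country.IsoCode, nil
}
```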
Change-Id: Icdf659b09be9fc6ad14601902032b35ba5ea78c4
Add a combined index on (normalized_email, status) to improve the
performance of the common "get user" query used by the satellite UI.
Change-Id: I24a20d7826e0a68a68c2f95b5847eb819921e7c0
This commit ensures that all invocations of Stripe library methods
include a context. This allows us to control the timeout and
cancellation of the underlying HTTP requests made by the Stripe
library.
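
For illustration, a sketch of attaching a context to a stripe-go call
via Params.Context; the example call site and the v72 module version
are assumptions.

```go
// Example only: the context flows into the underlying HTTP request, so
// it can be timed out or cancelled.
import (
	"context"

	"github.com/stripe/stripe-go/v72"
	"github.com/stripe/stripe-go/v72/customer"
)

func getCustomer(ctx context.Context, id string) (*stripe.Customer, error) {
	params := &stripe.CustomerParams{}
	params.Context = ctx // cancels/times out the underlying HTTP request
	return customer.Get(id, params)
}
```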
References #5156
Change-Id: I8ddb317f3f2cbb06cfab869fbebdaf2ad78b7999
Metainfo needs to know the rate and burst limits to be able to limit
users' requests. We have a cache for the per-project limiter, but to
create a single instance we need to know the limits. So far we were
making a direct DB call to get the rate/burst limit for a project,
which generates lots of DB requests; this can easily be cached, since
we already have a project limit cache.
This change extends the project limit cache with the rate/burst limits
and uses them when creating the project limiter instance for metainfo.
Because the data kept in the project limit cache is quite small, this
change also bumps the default capacity of the cache a bit.
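
A hedged sketch of what the extended cache entry and limiter
construction might look like; field and function names are
illustrative.

```go
// Hypothetical sketch: cached per-project limits now carry rate/burst
// too, so metainfo can build its limiter without a direct DB call.
import "golang.org/x/time/rate"

type ProjectLimits struct {
	StorageLimit   int64
	BandwidthLimit int64
	RateLimit      *int // requests per second; nil means use the default
	BurstLimit     *int // nil means fall back to RateLimit
}

func newProjectLimiter(limits ProjectLimits, defaultRate, defaultBurst int) *rate.Limiter {
	r, b := defaultRate, defaultBurst
	if limits.RateLimit != nil {
		r = *limits.RateLimit
	}
	if limits.BurstLimit != nil {
		b = *limits.BurstLimit
	}
	return rate.NewLimiter(rate.Limit(r), b)
}
```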
Fixes https://github.com/storj/storj/issues/5663
Change-Id: Icb42ec1632bfa0c9f74857b559083dcbd054d071
We just fixed a case where the project limit cache was not used
properly. This is a test case to cover that fix.
Change-Id: Iee467f0a46836860a14ab6238a9842ffbf54ed4c
This might be pretty awful, but at least it is a complete and non-flaky
solution.
**Only when using the rollingupgrade test** (which implies a throwaway
satellite and also a PostgreSQL backend), create a trigger on the nodes
table which forces last_net to be equal to last_ip_port always.
Change-Id: I8448cf131e46576d96a414d06780270c7b2b1892
Currently, to get the number of entries in the segments table we are
doing a heavy SELECT count(*) operation. For the biggest satellite it
now takes 25 min. We use this method to get the stat before and after
the segments loop, so it adds almost 1h to the overall loop time.
With the current version of crdb we are using, this additional code
won't be used, because the global configuration for the stats refresh
rate is inaccurate for such a large table as `segments`. Soon we should
be able to upgrade crdb, adjust the refresh rate per table, and
configure it to satisfy the defined threshold.
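
A hedged sketch of the cheaper alternative described above, reading
CockroachDB's collected table statistics instead of counting rows; the
exact query shape and column names are assumptions.

```go
// Illustrative only: read the optimizer's collected row-count statistic
// for the segments table instead of running a full SELECT count(*).
import (
	"context"
	"database/sql"
)

func estimatedSegmentCount(ctx context.Context, db *sql.DB) (int64, error) {
	// CockroachDB exposes collected statistics via SHOW STATISTICS; take
	// the most recently created estimate.
	row := db.QueryRowContext(ctx,
		`SELECT row_count FROM [SHOW STATISTICS FOR TABLE segments] ORDER BY created DESC LIMIT 1`)
	var count int64
	err := row.Scan(&count)
	return count, err
}
```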
https://github.com/storj/storj/issues/5544
Change-Id: I05cfd9154f08894d2bc56bf716b436d1b03b87f1