Our DB support in storj/private was updated to enable basic context
support for executing SQL queries. This change requires some small
adjustments as not all parts were working correctly.
storj/private commit with change:
4bc77107b7acfcc2f7ad65796d5dd3d7c64801e4
Change-Id: I64d7ed92788ea0920d12cecd1aa0e414720e9b9c
When running the pay-invoices command, skip invoices that have a due
date set, so customers can still pay them themselves until the due date.
https://github.com/storj/storj/issues/5453
Change-Id: I8e557062491ab0c8246b28bc5ca57e845eb32e29
This adds an automated check around the metabase tests. It detects
queries which perform a full table scan during test execution.
After merging this and confirming that it is not problematic in any way,
we will also enable it for the testplanet tests.
One query was adjusted to avoid a full table scan and pass the new
check.
https://github.com/storj/storj/issues/5471
Change-Id: I2d176f0c25deda801e8a731b75954f83d18cc1ce
When there is no wallet in the database for a particular customer,
return a 404 HTTP response status code instead of an internal server
error.
Change the web/satellite payments API to return an empty wallet on a 404
response code instead of throwing an error.
Change-Id: Ib44914f9ed002382258968fb81846f2b97dee0fe
Implemented the observer and partial, and created new structures to
keep mon metrics reported the same way as in the segment loop.
Change-Id: I209c126096c84b94d4717332e56238266f6cd004
When we have a problem with inserting bandwidth amounts into the cache
or DB, we log information about it, but the log entries are not
very detailed. This change adds the bandwidth amounts to the log entry.
https://github.com/storj/storj/issues/5470
Change-Id: I55ccad837d17b141501d3def1dec7ad5f3acdb0b
Create a config to specify one-time prices and corresponding coupon
ids for partners.
github issue: https://github.com/storj/storj-private/issues/118
Change-Id: I67b26e7208b12ba8f0e6dc1b164dd9545b09cac0
This change allows users who register with a partner that has
different project usage prices to see the correct prices in the
satellite UI.
Resolves storj/storj-private#90
Change-Id: I06bde50db474b25396671a27e282ef5637efe85b
This change allows for overriding project usage prices for a specific
partner so that users who sign up with that partner do not need their
invoices to be manually adjusted.
Relates to storj/storj-private#90
Change-Id: Ia54a9cc7c2f8064922bbb15861f974e5dea82d5a
Add node tally ranged loop observer and partial.
Add node tally ranged observer to the ranged loop peer.
Add config flag to select which loop to use for node tally.
Update satellite core to use the segment/ranged loop based on the flag.
Duplicate the existing node tally test but using the ranged loop.
Change-Id: I6786f1a16933463fab5f79601bf438203a7a5f9e
Small cleanups of the observer stats init code:
1. Use sync.Once for race-free addition to the monitoring chain
(purely defensive; see the sketch after this list)
2. Set the observer durations before adding to the monitoring chain on
first use.
3. observerDurations slice does not need to be initialized to non-nil
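A minimal sketch of the sync.Once pattern from item 1, with a
placeholder registration callback rather than the actual monkit wiring:

    package observerstats

    import "sync"

    // Stats guards its one-time hookup into the monitoring chain with
    // sync.Once, so concurrent first uses cannot register it twice.
    type Stats struct {
        registerOnce sync.Once
    }

    // EnsureRegistered is safe to call from multiple goroutines; the
    // registration body runs exactly once.
    func (s *Stats) EnsureRegistered(register func()) {
        s.registerOnce.Do(register)
    }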
Change-Id: I9ae8ec96debc7d52c4ee5d22762a89f21bb2e38c
* use the same DB application name for satellite and metabase
* use noop orders DB implementation to avoid storing allocated bandwidth
in DB
Change-Id: I20e88c694d38240fe1a20c45719e210cfb76402c
bucket_bandwidth_rollups table
We have performance problems with updating bucket_bandwidth_rollups. To
improve the situation we can stop storing allocated bandwidth in this
table. This should reduce the large number of updates coming from
metainfo endpoints, repair workers, and audit.
The next step will be to drop the `allocated` column completely from
bucket_bandwidth_rollups.
Allocated GET bandwidth is all we need, and we are keeping it in the
bucket_bandwidth_rollups table.
Change-Id: Ifdd26a89ba8262acbca6d794a6c02883ad0c0c9b
On generated console api endpoints allow either the project ID or the
public ID to be used as the ID parameter.
github issue: https://github.com/storj/storj/issues/5412
Change-Id: Ic9901ed273931a50ae12f20142a3c4938dfcc8c0
When an observer errors, we still want to finish the other observers.
This change stores the error and continues the loop, skipping the
observer which errored out and setting its duration metric to -1.
When the error occurs in the process stage, the other ranges of the same
observer do continue; the observer is removed entirely only after the
process stage. Improving on this would make it more complex due to race
conditions.
Closes https://github.com/storj/storj/issues/5389
Change-Id: I528432c491d4340817d6950f1200ee2b9e703309
Move the IsAuthenticated check to after the initial parameter
parsing/validation. IsAuthenticated will be more expensive than
parsing/validation, so we should fail before auth if possible.
Change-Id: I96a020892eabcb750e8ec9ecc1d8b7d9bf8bf573
Add the AsOfSystemTime option to the segment provider to make it
equivalent to the old segment loop.
There's no comment on what it does because it's pretty complex and
doesn't quite make sense, but we can improve it later.
closes https://github.com/storj/storj/issues/5434
Change-Id: I8f09b03803e681e2fd41008c5dba67804b0f37a1
The `coupons`, `offers`, `coupon_usages`, `coupon_codes`, and
`user_credits` tables are not used anymore. They exist due to
legacy billing code which has already been removed.
Change-Id: Ie93af90a06e5247a74015af2a78a223d69249565
The shell script has different behavior on different operating systems,
and now has enough logic that it makes sense to put it into a go script.
This change replaces dbx/gen.sh with dbx/gen/main.go, which accomplishes
the same thing.
Change-Id: Ibb53ebf9c2475377aae1165cfe245140d7162026
The current satellitedb.dbx has grown quite big over time.
Split the single file into multiple smaller ones.
Change-Id: I8d6ca851b834ac60820aff565906f04aab661875
Update get usage-limits, daily-usage and salt endpoints to support
both project-ID and project-PublicID.
Issue: https://github.com/storj/storj/issues/5411
Change-Id: Iff0114a295d1a479b141bfffbfb31599844d1fc0
Update the delete-API-key-by-name-and-projectID endpoint to support
both the project ID and the project public ID.
Issue: https://github.com/storj/storj/issues/5410
Change-Id: I3bd11b9c3ae1ad6ce662dfc18b42779d2e4edf9b
Add an observer to monitor ranged segment loop progress.
Tested by running the segment loop in storj-up and navigating to
http://<container>:11111/mon/stats and there is the entry:
rangedloop-live,scope=storj.io/storj/satellite/metabase/rangedloop numSegments=364523630000.000000
part of https://github.com/storj/storj/issues/5223
Change-Id: If3d2774d2f17f51eac86f47c6dda1fb8ad696dfe
Removing all references to the column last_verification_reminder, which is to be removed in favor of the new column verification_reminders.
Issue: https://github.com/storj/storj/issues/4560
Change-Id: I7c9a426e946c7aed58e62c1eef80629daf6b1272
Support interruption of the ranged segment loop through context.
Part of https://github.com/storj/storj/issues/5223
Change-Id: Iae0260e250f8ea33affed95c6592a1f42df384eb
Added rangedloop in storj-sim for each satellite to verify it works for
the metrics observer; removed identity from the rangedloop peer as we
never use it; added logs on service run; added a loop to the service
instead of an endless for loop; moved the interval value to config.
Closes: https://github.com/storj/storj/issues/5414
Change-Id: Ibc3b06071b68feda4a35b45da2bbe36e22a02fc8
Add public ID field to graphql Project so it can be used on the front
end. Additionally public_id needed to be added to the ListByOwnerID sql
query which is called by graphql OwnedProjectsQuery.
github issue: https://github.com/storj/storj/issues/5408
Change-Id: I2ec04363c20493dc0f9c70b6d1610f724f18ec2f
Wire up duration measurement of observers with monkit.
Tested by attaching a SleepObserver, starting the rangedloop in storj-up
and navigating to http://<container>:11111/mon/stats. It reports the
following statistic:
completed-observer-duration,observer=*rangedlooptest.SleepObserver,scope=storj.io/storj/satellite/metabase/rangedloop duration=10.000117
Change-Id: Ief131d34001dd5d3ba1d7be6f161986e1f66440d
Before we introduced object versions, internally the move operation
always failed when an object existed under the target location. But back
then we only ever had the single version 1. With versions different than
1 we need to check all existing objects under the target location.
To be backward compatible with our API, the new logic looks like this
(sketched in code below):
* if there is no object under the target location, use the source object
version as the target version
* if there are only pending objects, find the first free (highest)
version which could be used to move the object there
* if there is a committed object under the target location, reject the
move operation
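A rough sketch of that decision in Go; the types and helper names here
are illustrative, not the actual metabase code:

    package movesketch

    import "errors"

    type objectStatus int

    const (
        statusPending objectStatus = iota
        statusCommitted
    )

    type objectEntry struct {
        Version int64
        Status  objectStatus
    }

    var errObjectExists = errors.New("committed object exists under target location")

    // decideTargetVersion applies the three rules above to the objects found
    // under the target location.
    func decideTargetVersion(sourceVersion int64, objectsAtTarget []objectEntry) (int64, error) {
        if len(objectsAtTarget) == 0 {
            // no object under the target location: reuse the source version
            return sourceVersion, nil
        }
        highest := int64(0)
        for _, obj := range objectsAtTarget {
            if obj.Status == statusCommitted {
                // a committed object already exists: reject the move
                return 0, errObjectExists
            }
            if obj.Version > highest {
                highest = obj.Version
            }
        }
        // only pending objects: take the first free (highest) version
        return highest + 1, nil
    }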
Fixes https://github.com/storj/storj/issues/5403
Change-Id: I717f3e7c42470b406287d6ec335f6f057d3fc3b5
This change implements the ranged loop observer to replace the audit
chore that builds the audit queue.
The strategy employed by this change is to use a collector for each
segment range to build separate per-node segment reservoirs that are
then merged during the join step.
In previous observer migrations, there were only a handful of tests so
the strategy was to duplicate them. In this package, there are dozens
of tests that utilize the chore. To reduce code churn and maintenance
burden until the chore is removed, this change introduces a helper that
runs tests under both the chore and observer, providing a pair of
functions that can be used to pause or run the queueing function.
https://github.com/storj/storj/issues/5232
Change-Id: I8bb4b4e55cf98b1aac9f26307e3a9a355cb3f506
Add triggerAttemptPaymentIfFrozen to check whether the account is
frozen and, if so, trigger an attempt to pay outstanding invoices.
Issue: https://github.com/storj/storj/issues/5398
Change-Id: I0da6a982e2da4204dee219d98ce2d503cbbb6f8e
The tests are forked from the chore tests with slight adaptations for
being run against the ranged loop. I also moved a benchmark for the
database from chore_test.go to db_test.go.
The pathcollector is reused as a rangedloop.Partial.
https://github.com/storj/storj/issues/5234
Change-Id: I56182031d133812a9f4d4a433c01b9150af39f31
Track duration of all segment loop observers. Factor out functions to
reduce size.
Still need to send the measurements out via monkit.
Part of https://github.com/storj/storj/issues/5223
Change-Id: Iae0260e250f8ea33affed95c6592a1f42df384eb
This modifies the userinfo endpoint to return appropriate errors:
PermissionDenied for untrusted peers and Unimplemented because
the endpoint isn't implemented.
Change-Id: I5109bb204b5e1ce2e21fe16b003991b6c900a8ce
Implemented interception of HTTP requests.
We redirect the user to the login page on every 401 response.
Issue:
https://github.com/storj/storj/issues/5339
Change-Id: Icba4fc0031cb2b4e682a1be078cdcf95b7fa6bfe
This change stubs userinfo endpoint from storj/common/pb/userinfo.proto.
It also adds config for allowed peers, and a method for verifying peers.
Issue: https://github.com/storj/storj/issues/5358
Change-Id: I057a0e873a9e9b3b9ad0bba69305f0d708bd9b9e
This change adds an account freeze service with methods for checking
if a user is frozen, freezing a user, and unfreezing a user.
Furthermore, methods for altering the usage limits of a user or project
have been implemented for use by the account freeze service.
Change-Id: I77fecfac5c152f134bec90165acfe4f1dea957e7
This change creates a new independent process, the 'auditor', comparable
to the repairer, gc, and api processes. This will allow auditors to be
scaled independently of the core.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I8a29eeb0a6e35753dfa0eab5c1246048065d1e91
Now that all the reverification changes have been made and the old code
is out of the way, this commit renames the new things back to the old
names. Mostly, this involves renaming "newContainment" to "containment"
or "NewContainment" to "Containment", but there are a few other renames
that have been promised and are carried out here.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: I34e2b857ea338acbb8421cdac18b17f2974f233c
This change updates the stripecoinpayments service to optionally skip
generating line items for payments records that have no egress, storage,
or segments for the billing period.
This results in a reduction from 4 Stripe API calls to 1 for customers
who have no usage. The final API call is the attempt to generate an
invoice on Stripe, which fails as expected because there are no
unapplied line items. Removing that final API call would require some
additional queries and is out of scope for this change.
This functionality is behind the
`payments.stripe-coin-payments.skip-empty-invoices` feature flag.
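For illustration, the skip roughly amounts to the check below; the
record fields and flag plumbing are assumptions, not the exact
stripecoinpayments code:

    package billsketch

    // projectRecord is a hypothetical stand-in for a payments record.
    type projectRecord struct {
        Egress   int64
        Storage  float64
        Segments float64
    }

    // shouldInvoice reports whether line items should be generated for the
    // record when the skip-empty-invoices feature flag is enabled.
    func shouldInvoice(rec projectRecord, skipEmptyInvoices bool) bool {
        if skipEmptyInvoices && rec.Egress == 0 && rec.Storage == 0 && rec.Segments == 0 {
            return false // no usage: no line items, no invoice attempt
        }
        return true
    }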
https://github.com/storj/storj/issues/5381
Change-Id: Id184969a4c79047c40502336d69c51388ab03bf8
Now that we are doing scalable piecewise reverifications, the code for
handling the old way of doing things (containment, pending audits,
reporting, testing) can now be removed.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: Ief1a75f423eff682e8f3d57804e343b3409a6631
This commit pulls the big switch! We have been setting up piecewise
reverifications (the workers for which can be scaled independently of
the core) for several commits now, and this commit actually begins
making use of them.
The core of this commit is fairly small, but it requires changing the
semantics in all the tests that relate to reverifications, so it ends up
being a large change. The changes to the tests are mostly mechanical and
repetitive, though, so reviewers needn't worry much.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: Ibb421cc021664fd6e0096ffdf5b402a69b2d6f18
This change implements DB methods for interacting with the
account_freeze_event table and introduces structures related to
account freeze events.
Change-Id: Ib125b31dfb754b2428212c39b780e14cfc7f97bf
This change fixes the access of unset segments and keys on the reservoir
when the reservoir size is less than the max OR the number of sampled
segments is smaller than the reservoir size. It does so by tucking away
the segments and keys behind methods that return properly sized slices
into the segments/keys arrays.
It also fixes a bug in the housekeeping for the internal index variable
that holds onto how many items in the array have been populated. As part
of this fix, it changes the type of index to int8, which reduces the
size of the reservoir struct by 8 bytes.
The tests have been updated to provide better coverage for this case.
Change-Id: I3ceb17b692fe456fc4c1ca5d67d35c96aeb0a169
Adding this entry means that the database accessed as "reverifyqueue"
(`(*satelliteDBCollection).ReverifyQueue()`) can be located on a
different database host from the other databases, and things should
still work. There aren't any queries that do a JOIN on tables from
reverifyQueue and other things in satellitedb, for example.
This should really have been put here earlier, when reverifyqueue was
first added, but it's ok. This won't have any bearing on things until we
need to deploy to prod.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: I76f68de79cd645c869f3dbfbe3b2c9c4f9359e8f
This method on the Verifier allows the caller to find, out of the nodes
holding pieces in a given segment, which ones are contained.
This method is not yet being used. It will be in a future commit.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: I242cd999913ca4dabbe8a62767ed4869b31fca04
Implemented UI error tracking.
We use the satellite analytics service to track the fact that a UI error
occurred and send minimal info to Segment (not Hubspot).
We send only the fact that a UI error occurred and the place where it
occurred.
Extended the notificator plugin's error function to include the place
where the error occurred.
I made the place argument nullable but required, so it always has to be
explicitly provided (the build fails if place is not provided).
If place is not null, the error event is triggered in the background.
Issue:
https://github.com/storj/storj-private/issues/107
Change-Id: I7d129fb29629979f5be6ff5dea37ad19b1a2397e
Update the updateProject function to set user-specified bandwidth and storage limits.
fixes https://github.com/storj/storj/issues/5185
Change-Id: Ib4132487f6b7ea0afa7c57acfc358857b3e852d1
We missed proper handling of object copies in the method
GetStreamPieceCountByNodeID, which is used by metabase.GetObjectIPs.
That caused some IPs to be missing when querying the IPs of a copy, and
broke things like the pieces map on linksharing.
Fixes https://github.com/storj/storj/issues/5406
Change-Id: I9574776f34880788c2dc9ff78a6ae20d44fe628f
Here we add a worker class comparable to audit.Worker, which will be
responsible for pulling items off of the reverification queue and
calling reverifier.ReverifyPiece on them.
Note that piecewise reverification audits (which this will control) are
not yet being done. That is, nothing is being added to the
reverification queue at this point.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I94e28830e27caa49f2c8bd4a2336533e187ab69c
The Reporter is responsible for processing results from auditing
operations, logging the results, disqualifying nodes that reached
the maximum reverification count, and passing the results on to
the reputation system.
In this commit, we extend the Reporter so that it knows how to process
the results of piecewise reverification audits.
We also change most reporter-related tests so that reverifications
happen as piecewise reverification audits, exercising the new code.
Note that piecewise reverification audits are not yet being done outside
of tests. In a later commit, we will switch from doing segmentwise
reverifications to piecewise reverifications, as part of the
audit-scaling effort.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: I9438164ce1ea4d9a1790d18d0e1046a8eb04d8e9
While researching logs from a large set of audits, I noticed that nearly
all of them had streamIDs starting with 0 or 1. This seemed very odd,
because streamIDs are supposed to be pretty much entirely random, and
every hex digit from 0-f should have been represented with roughly equal
frequency.
It turned out that our A-Chao implementation of reservoir sampling is
flawed. As far as we can tell, so is the Wikipedia implementation. No
one has yet reviewed the original 1982 paper by Dr. Chao in enough
detail to know where the error originated, but we do know that we have
been auditing segments near the beginning of the segment loop (low
streamIDs) far more often than segments near the end of the segment loop
(high streamIDs).
This change uses an algorithm Wikipedia calls "A-Res" instead, and adds
a test to check for that sort of bias creeping back in somehow. A-Res
will be slightly slower than A-Chao, because of a few extra steps that
need to be done, but it does appear to be selecting items uniformly.
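For reference, a minimal A-Res sketch with uniform weights (independent
of the satellite's actual reservoir types): every offered item gets a
random key u^(1/w), and the reservoir keeps the k largest keys, which
yields an unbiased sample when w = 1.

    package resample

    import "math/rand"

    // Reservoir keeps a uniform random sample of up to k items.
    type Reservoir struct {
        k     int
        keys  []float64
        items []string
    }

    func New(k int) *Reservoir { return &Reservoir{k: k} }

    // Sample offers one item to the reservoir.
    func (r *Reservoir) Sample(rng *rand.Rand, item string) {
        key := rng.Float64() // weight 1: key = u^(1/1) = u

        if len(r.items) < r.k {
            r.keys = append(r.keys, key)
            r.items = append(r.items, item)
            return
        }
        // replace the current minimum key if the new key is larger
        minIdx := 0
        for i, v := range r.keys {
            if v < r.keys[minIdx] {
                minIdx = i
            }
        }
        if key > r.keys[minIdx] {
            r.keys[minIdx] = key
            r.items[minIdx] = item
        }
    }

    // Items returns the sampled items (at most k).
    func (r *Reservoir) Items() []string { return r.items }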
Change-Id: I45eba4c522bafc729cebe2aab6f3fe65cd6336be
Some observers assume that they will observe all the segments for a
given stream, and that they will observe those segments in a sequential
stream over one or more iterations.
This change updates the range provider from rangedlooptest to provide
these guarantees.
The change also removes the Mock suffix from the provider/splitter types
since the package name (rangedlooptest) implies that the type is a test
double.
Change-Id: I927c409807e305787abcde57427baac22f663eaa
We have a bug in our behavior during API pod deployments. At that time
it's possible to have the multiple-versions flag set to true for only
some of the pods. Because of that it's possible to start a new object
without removing the existing/older version on BeginObject (new
behavior) and also not remove that existing/older object on
CommitObject. That can leave two committed objects with different
versions, which is a state we want to avoid.
To fix it we are removing the multiple-versions flag from CommitObject
so that it always tries to delete existing objects. This way, even if we
don't remove the existing object on BeginObject, it will always be
removed while committing.
Fixes https://github.com/storj/storj/issues/5373
Change-Id: Idc334bf5cc785d2f559af96e92c3de6d82ca58ba
Add an abstraction rangedloop.SegmentProvider to fetch chunks of
segments from the metainfo database in parallel.
Part of https://github.com/storj/storj/issues/5223
Change-Id: Ife26467ea0c3be550bde0b05464ef1db62dd4d2a
Adds DeleteAllSessionsByUserIDExcept, which removes all sessions except the specified session from the database, and applies this function to enableMFA and disableMFA.
addresses https://github.com/storj/storj-private/issues/15
Change-Id: I5d8c620dadbbda4a1b430ccf8a6121e167dd0761
Minimal implementation of the ranged (=threaded) segment loop
service, to improve performance over the existing loop.
Has tests with an in-memory segment database and an example observer.
Does not yet have: database link, observer duration tracking,
suspicious processed ratio guard, rate limiting, minimum execution
interval per observer, etc.
Part of https://github.com/storj/storj/issues/5223
Change-Id: I08ffb392c3539e380f4e7b4f1afd56c4c394668d
This change shows the STORJ token balance on the billing overview page instead of the Stripe balance it shows currently.
It changes the text on the "Available balance" card to reflect the new balance being displayed. Finally, it adds shortcuts to navigate straight to the token history or the add-tokens modal when the call to action on the "Balance" card is clicked.
Issue: https://github.com/storj/storj/issues/5204
Change-Id: Ic88e43c602e4949b6c6be4c7644c04f3c7d38585
To be able to verify segments in a list of buckets, this change:
- adds a method ListBucketsStreamIDs to list all stream IDs belonging to a list of buckets, provided via a ListVerifyBucketList on which Add(projectID, bucketName) is defined.
- allows specifying a list of streamIDs to check in ListVerifySegments
Fixes https://github.com/storj/storj-private/issues/101
Change-Id: I72a48a0873a3056ac54ad56c0e9242364b2ae918
First, adding a logger argument allows the caller to have a logger
already set up with whatever extra fields they want here.
Secondly, we need to return the Outcome instead of a simple boolean so
that it can be passed on to the Reporter later (need to make the right
decision on increasing reputation vs decreasing it).
Thirdly, we collect the cached reputation information from the overlay
when creating the Orders, and return it from ReverifyPiece. This will
allow the Reporter to determine what reputation-status fields need to be
updated, similarly to how we include a map of ReputationStatus objects
in an audit.Report.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I5700b9ce543d18b857b81e684323b2d21c498cd8
NewContainment will replace Containment later in this commit chain, but
for now it is not yet being used.
NewContainment will allow a node to be contained for multiple pending
reverify jobs at a time. It is implemented by way of the reverify queue.
Refs: https://github.com/storj/storj/issues/5231
Change-Id: I126eda0b3dfc4710a88fe4a5f41780618ec19101
We have a bug where, if the number of buckets in the system is a
multiple of the batch size (2500), the loop that goes over all buckets
can run indefinitely (see the illustrative sketch below).
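Purely as an illustration of the termination condition (this is not the
tally code itself), a paginated loop must break on a short or empty page
so it ends even when the bucket count is an exact multiple of the batch
size:

    package bucketsketch

    // listAllBuckets pages through buckets in batches of batchSize. Breaking
    // on a short (or empty) page guarantees termination even when the total
    // number of buckets is an exact multiple of batchSize.
    func listAllBuckets(fetch func(cursor string, limit int) ([]string, error), batchSize int) ([]string, error) {
        var all []string
        cursor := ""
        for {
            page, err := fetch(cursor, batchSize)
            if err != nil {
                return nil, err
            }
            all = append(all, page...)
            if len(page) < batchSize {
                return all, nil // short or empty page: nothing more to fetch
            }
            // advance strictly past the last returned bucket
            cursor = page[len(page)-1]
        }
    }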
Fixes https://github.com/storj/storj/issues/5374
Change-Id: Idd4d97c638db83e46528acb9abf223c98ad46223
Simple email validation before attempting to send notifications. If the
email is not valid, skip sending notifications and go straight to
updating email_sent so we don't try it again. Also, move the
ValidateEmail function into a new package so it can be used in
nodeevents without an import cycle.
Change-Id: I63ce0fc84f7b1d964f7cc6da61206f54baaf1a21
It helps for the (*reverifyQueue).Insert() method to be idempotent (it
does not make sense for the same node to be under containment for the
same piece multiple times). This change allows for that, by adding an
`ON CONFLICT DO NOTHING` clause to the database query.
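The shape of the idempotent insert; the table and column names here are
illustrative rather than the exact schema:

    package queuesketch

    import (
        "context"
        "database/sql"
    )

    // insertReverifyJob is an idempotent enqueue: re-inserting the same
    // (node_id, stream_id, position) tuple is a no-op because of the
    // ON CONFLICT DO NOTHING clause.
    func insertReverifyJob(ctx context.Context, db *sql.DB, nodeID, streamID []byte, position int64) error {
        _, err := db.ExecContext(ctx, `
            INSERT INTO reverification_audits (node_id, stream_id, position)
            VALUES ($1, $2, $3)
            ON CONFLICT DO NOTHING
        `, nodeID, streamID, position)
        return err
    }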
Refs: https://github.com/storj/storj/issues/5231
Change-Id: Id2839ee185d5396c0bc2f84ffad610df9786f6c7
Adding a new worker comparable to Verifier, called Reverifier; as the
name suggests, it will be used for reverifications, whereas Verifier
will be used for verifications.
This allows distinct logging from the two classes, plus we can add some
configuration that is specific to the Reverifier.
There is a slight modification to GetNextJob that goes along with this.
This should have no impact on operational concerns.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: Ie60d2d833bc5db8660bb463dd93c764bb40fc49c
Previously, the node events chore would select based on the earliest
created_at. However, if for some reason this batch fails, it would still
be the next item to select. If there is a consistent error, the chore
would be stuck retrying the same batch over and over. Now instead
GetNextBatch orders by `last_attempted ASC NULLS FIRST, created_at ASC`.
If a batch fails during Notify, last_attempted is updated so we can move
on to a new batch if one exists.
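The selection roughly takes this shape (illustrative SQL, not the
generated dbx code); never-attempted rows come first, then the oldest
attempts, so a persistently failing batch cannot starve the rest:

    package nodeeventssketch

    const nextBatchQuery = `
        SELECT id, email, event
        FROM node_events
        WHERE email_sent IS NULL
        ORDER BY last_attempted ASC NULLS FIRST, created_at ASC
        LIMIT $1
    `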
Change-Id: Ia8458e05ac358d85b2f2c6d690f3d607d631be61
audit.Queues was the previous method of passing stacks of segments for
audit to the verifier. As of commit 68f9ce4a, this is now happening
by way of the auditor queue (database-backed, so that communication can
happen between multiple peers). audit.Queues is no longer needed.
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I46f2d48d655fb66366c92146cdb6b85aef200552
SetNodeContained() will change the contained flag in the nodes table,
which will affect whether nodes are selected for new uploads. This flag
_should_ correlate with whether or not a given node has any entries in
the reverification queue. However, the reverification queue is intended
to be 'safely partitionable' from the nodes table, so we can't enforce
that characteristic transactionally. But this is ok; there are no dire
consequences if they are out of sync.
We will be adding a chore that updates the contained flag based on the
contents of the reverification queue periodically, if something fails
to set it directly when appropriate.
Refs: https://github.com/storj/storj/issues/5231
Change-Id: I26460d8718dee63fd55d00a44568b2065fc8fe30
GetByNodeID will allow querying the reverification queue to see if there
are any pending jobs for a given node ID. And thus, to see if that node
ID should be contained or not.
Some parameters on the other methods of the ReverifyQueue interface have
been changed to accept pointers; this was done ahead of the rest of the
changes for the reverification queue to better match the signatures of
the methods that these will replace once ReverifyQueue is actually being
used (meaning fewer changes to tests).
Refs: https://github.com/storj/storj/issues/5251
Change-Id: Ic38ce6d2c650702b69f1c7244a224f00a34893a1
This change removes the error type that is returned when a token
request contains an incorrect password. Instead, the generic error
type for invalid login credentials is used.
Change-Id: Ia7dbc38f4a08aeaeeac7ff5b5a801233e349b8b3
This change reduces the token links' expiry time from 24h to 30m and improves the UI to prompt users about the expiration.
see: https://github.com/storj/storj-private/issues/17
Change-Id: Iac00f5740fa84069937fdf9bd30a739b6db2a9e0
The audit chore will be pushing a large number of segments to be
audited, and the db might choke on that large insert when under load.
This change divides the insert up into batches, which can be sized
however is optimal for the backing database. It also arranges for
segments to be inserted in the order of the primary key, which helps
performance on some systems.
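Roughly the batching pattern described, with hypothetical types (the
real queue code differs):

    package auditsketch

    import "sort"

    // segmentKey is a hypothetical queue entry keyed by (StreamID, Position).
    type segmentKey struct {
        StreamID string
        Position uint64
    }

    // insertInBatches sorts the segments by primary key and pushes them in
    // fixed-size batches, so each INSERT stays small and rows arrive in key
    // order.
    func insertInBatches(segments []segmentKey, batchSize int, insertBatch func([]segmentKey) error) error {
        sort.Slice(segments, func(i, j int) bool {
            if segments[i].StreamID != segments[j].StreamID {
                return segments[i].StreamID < segments[j].StreamID
            }
            return segments[i].Position < segments[j].Position
        })
        for start := 0; start < len(segments); start += batchSize {
            end := start + batchSize
            if end > len(segments) {
                end = len(segments)
            }
            if err := insertBatch(segments[start:end]); err != nil {
                return err
            }
        }
        return nil
    }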
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I941f580f690d681b80c86faf4abca2995e37135d
* storj/common
* storj/private
The latest common version requires small refactoring of the names and
types used by the metainfo code.
Change-Id: I224fe93b4751c996ba6e846be0e5677252cf830f
Reputation updates during repair currently consume a lot of database
resources. Sometimes increasing the rate of repair is more important
than auditing a node based on whether or not it has the correct piece
during repair. That is the job of the audit service.
This commit is to implement an intermediate solution from this issue: https://github.com/storj/storj/issues/5089
This commit does not address the more in-depth fix discussed here: https://github.com/storj/storj/issues/4939
Change-Id: I4163b18d78a96fadf5265789fd73c8aa8def0e9f
This change causes rate limiting errors to be returned to the client
as JSON objects rather than plain text to prevent the satellite UI from
encountering issues when trying to parse them.
Resolves storj/customer-issues#88
Change-Id: I11abd19068927a22f1c28d18fc99e7dad8461834
We tested new upload flow (with multiple versions) to fix inconsistency
while uploading object on QA/EUN1/SLC. Now we would like to enable it
for all satellites by default. Tests required small adjustments.
Fixes https://github.com/storj/storj/issues/5283
Change-Id: I0d53c041abebc0d182ba5a88bb1dac906c29caf0
As part of the effort of splitting out the auditor workers to their own
process, we are transitioning the communication between the auditor
chore and the verification workers to a queue implemented in the
database, rather than the sequence of in-memory queues we used to use.
This logical database is safely partitionable from the rest of
satelliteDB.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I6cd31ac5265423271fbafe6127a86172c5cb53dc
We added an alternative way to calculate bucket tallies for accounting;
now that it's tested, we will enable it by default.
CollectBucketTallies was extended to support overriding current time
to be able to test handling expired objects.
Change-Id: I738b99a33fd2e086245f92d874c1cbb806e834c0
Add a new chore to periodically insert into node events those nodes
that are offline and have not gotten an offline email in a certain
amount of time.
Change-Id: I658b385bb777b0240c98092946a93d65bee94abc
Create NodeEvents Chore on satellite core to read nodeevents DB and
notify node operators on node events. The chore sends notifications
grouped by email and event type: it selects the oldest entry in
nodeevents.DB and also any other event with the same email and event
type no matter how old it is. The oldest entry of a group must exist for
a minimum amount of time before that group can be selected, however.
This minimum amount of time is a configurable value:
--node-events.selection-wait-period. This wait period allows us to
combine events of the same time and same email address into a singular
email.
Change-Id: I8b444aa324d2dae265cc27d9e9e85faef79195d8
read one is the wrong method when trying to select one row when there
may be multiple: it returns a TooManyRows error. read first is the
correct method.
Change-Id: Ic6c92795486892ac041befd118b6945314bffeaa
Add LastOfflineEmail to overlay.NodeDossier. This is the last time a
node got an offline email. Add two new overlay db methods,
GetOfflineNodesForEmail and UpdateLastOfflineEmail. Edit db method
UpdateCheckIn to nullify last_offline_email if node is up.
Change-Id: I1ee60e7d98dd1b68348a57f9a4fb77c6c9895d6d
We have code that is used only by old uplinks and can fail at some
point, but we don't interrupt anything and only log a message about the
failure.
Until now it was logged as an error, but it's nothing critical, so we
can reduce it to a warning.
In addition, the log entry was extended with more information about the
client that is using this backward-compatibility code.
Change-Id: Ie21c673ee59eb10de065cc371132f8f9505e2220
This change causes the session inactivity timer to be enabled unless
expressly specified otherwise.
Change-Id: I85b4014394afac2feb21f383cac414cddb09ca8f
Added new feature flag.
Reworked vuex logic to work properly with project level passphrase.
Implemented new simple set project level passphrase modal.
Issue:
https://github.com/storj/storj/issues/5280
Change-Id: I6a15e90ee9fa7aa8a09c67022466787090120f9c
Currently the primary key of the underlying rollup table is the
bucket name, but we used to sort by project ID.
This caused deadlocks due to contention during updates/inserts.
We should reevaluate whether the bucket name being the primary key is
right for this table; this should stop the long-running and failing
attempts, though.
Change-Id: Ie7d0f86944da48ad9cbd92eb162226882a2fb954
This change modifies the method responsible for returning project
usage summaries such that the end date of the given time period
is excluded to prevent overlap.
Change-Id: If06155efff5c6fce3865f5f6e4344873abe3e432
When a node checks in and its version is below the minimum, insert
BelowMinVersion event into node events
Change-Id: I0e437ac34496778369515cbc40c15676da8b27ae
Multipart upload requires the same UploadID to be returned from
different requests (BeginUpload, ListUploads). Otherwise the client
won't be able to find existing uploads. The main issue was that the data
needed to construct the UploadID is in the System metadata, which can be
filtered out by a listing option.
This change fixes how we set the Status for listed objects and forces
reading the System metadata if we are reading pending objects.
Fixes https://github.com/storj/storj/issues/5298
Change-Id: I8dd5fbab4421a64dc3ed95556408ead4c829f276
Upon adding members to a project using the Add Team Member modal,
users are now notified that only email addresses belonging to an
account will receive a project invitation. This notification appears
regardless of whether every submitted email corresponds to an account.
Previously, users received an error message if any email address not
attached to an account was submitted.
Change-Id: Ia014c8311c1347e001b1c6c33de73ea61f20b0cb
We want to be able to exclude contained nodes from node selection. For this, we add a 'contained' column to the nodes table to track the containment status.
Fixes https://github.com/storj/storj/issues/5231
Change-Id: Id78e645f172145adcb8664646e8ebf14e218b57b
Since the auditor will be moving to a different process from the
metainfo loop, we need a way of communicating which segments have
been chosen for audit. This queue will be that communication, for now.
Contrast this with the queue for _re_verifications in commit 9c67f62f.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I9a269c7ef21e6c5e9c6e5e1f3db298fe159a8a79
Hubspot is migrating from using API keys for authentication to OAuth.
This change migrates our Hubspot integration to use OAuth tokens.
It modifies the EnqueueCreateUser code to not send an empty HubspotUTK to Hubspot, and to return an error for failed requests.
see: https://developers.hubspot.com/changelog/upcoming-api-key-sunset
Change-Id: I422f00e3e3caeff3ff3d08ddec059502b9addaee
* Mark node events table as "safely partitionable", meaning that it
is/will not be queried relationally along with other tables. This way,
we can safely use this table in Postgres rather than CockroachDB,
where most of our other satellite tables are running.
* Add a dbx-generated delete function to the node events table, to allow
us to easily delete entries created before a provided time. This
allows us to keep the table clean, since there is no need to persist
entries after emails have been sent.
Change-Id: I25e8a5c4092fe49dcfa6c8bb73f2043646bb611f
Libuplink is using some aliases to the storj package which we will
move directly into libuplink and remove from common/storj.
To make the code compilable we need to fix the places where we
use the aliased types directly, so that we can update libuplink.
Change-Id: I7222a927af3b41e214d1c9204917f3ebce4727ce
GetObject and GetObjectIPs are invoked by the Linksharing service to
display the shared object and its map. These two endpoints currently
require read permission.
There is a use case where an object can be shared with an access grant
that has only list permission. In such a case, the expectation is that
the linksharing service would still display the metadata of the shared
object (name, size, map), but the content would be still inaccessible.
See https://github.com/storj/gateway-mt/issues/209 for details.
This change allows GetObject and GetObjectIPs to require either read or
list permission to support the described use case.
Change-Id: I3477edc7bf8990e9848482890da047094c875d09
When a customer has no pending line items, an invoice will not be
generated for them and the Stripe client method responsible for
creating new invoices returns nil. This change adds a nil check to
account for this possibility to ensure that no panics are caused
by attempted processing of the invoice.
Change-Id: Id184d027d7447f0ef876db58601ab6cf63927fc5
Some changes to make code cleaner and easier to adopt to new ranged
loop.
* removed unneeded mutex
* reorganize constructor args
* avoid creating the same redundancy scheme for each segment piece
Change-Id: I81f3f6597147fc515516949db3ce796a60b1c8a0
Pair UUIDs to create ranges. These will be used to parallelize the
segment loop.
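A standalone sketch (not the rangedloop package API) of how the 128-bit
UUID keyspace can be cut into n contiguous ranges:

    package main

    import (
        "fmt"
        "math/big"
    )

    // splitUUIDSpace returns the inclusive start of each of n contiguous
    // ranges covering the 128-bit UUID keyspace; range i spans
    // [starts[i], starts[i+1]) and the last range runs to the maximum UUID.
    func splitUUIDSpace(n int) [][16]byte {
        space := new(big.Int).Lsh(big.NewInt(1), 128) // 2^128
        starts := make([][16]byte, n)
        for i := 0; i < n; i++ {
            v := new(big.Int).Mul(space, big.NewInt(int64(i)))
            v.Div(v, big.NewInt(int64(n)))
            var out [16]byte
            v.FillBytes(out[:]) // big-endian, zero padded on the left
            starts[i] = out
        }
        return starts
    }

    func main() {
        for _, s := range splitUUIDSpace(4) {
            fmt.Printf("%x\n", s[:])
        }
    }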
Part of https://github.com/storj/storj/issues/5223
Change-Id: I73e2fb8a2cd379b840864449b6251b48feeb7b66
Instead of sending emails at the time the node is seen to be back
online, we have decided to send the event to the node events table,
which will initiate the email sending process at some point.
Change-Id: Id756209498112579de8e78ee20ad2df54571a617
Add nodeevents.DB to satellite overlay service so we can insert node
events into the nodeevents DB.
Change-Id: I642c0ccc9941ecdb08cb22d5c8cf701959a55156
The new 'MultipleVersions' flag was not correctly passed from the
metainfo configuration to the metabase configuration. Because the
configuration was set correctly for unit tests we didn't catch it, and
the issue was found while testing on the QA satellite.
This change reduces the number of places where new metabase flags need
to be propagated from the metainfo configuration, to avoid problems with
setting new flags in the future.
Fixes https://github.com/storj/storj/issues/5274
Change-Id: I74bc122649febefd87f665be2fba628f6bfd9044
Since the number of objects is growing and looping through all of them
starts taking a lot of time, we are switching to an SQL query that does
it in chunks of tallies per bucket. This is the second part of the issue
fix.
Closes https://github.com/storj/team-metainfo/issues/125
Change-Id: Ia26bcac0a7e2c6503df9ebbf4817a636841d3284
Change the bloomfilter generation process to prefix the objects with a date and update the LATEST object with the prefix. The sender will read the LATEST file to get the prefix to process.
Change-Id: Iae0d3c49015d57f391d87789fb799a7d774066bf
The current deployment strategy requires that the GC bloomfilter generation process executes only once and exits.
Change-Id: I952991f126596aa165d1f2e9fce6f8548c21bdba
Implement node events DB with Insert and GetLatestByEmailAndEvent. Get
was changed to GetLatestByEmailAndEvent so we can verify items are being
inserted into the table without needing the ID, which is not available
to us in the tests.
Change-Id: I4abe63631c44774cd7e795fbab0cbab4d801db4c
Change node_events schema to use an id column as primary key rather than
node id because there can be multiple events per node id.
Change-Id: I518d8ef9ea658764876483e282a4058d3c4910f4
Add new table for node events. We can use this to notify node
operators of certain node events. Further, we can squash events for
multiple nodes with the same email into a single notification.
Change-Id: Icea6dd939df8fe4a98806bd79c014e21d239c43e
ReverifyPiece() is not currently hooked up to anything, but is planned
to take the place of audit.(*Verifier).Reverify().
ReverifyPiece() works by downloading one piece in its entirety, rather
than pulling an entire stripe across many nodes.
Change-Id: Ie2c680f4d3c3b65273a72466a3f9f55c115b0311
This table will be used as a queue for pieces that need to be reverified
(a regular audit timed out on the owning node, so now that node is
contained and we need to validate the piece before un-containing it).
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I5dcd26b6adced8674cbd81884c1543a61ea9d4c8
BeginCopyObject checks twice for write permission in the destination
bucket. One check should be enough.
Change-Id: I3d5935d34f69cd48eaaf00d0117683edfdcefc05
The procedure responsible for node reputation status comparison could
return an invalid result due to comparing a status timestamp against
itself rather than comparing it against another. This results in
unnecessary database updates that could be avoided otherwise.
This change modifies the procedure to resolve this issue.
Change-Id: Id147e1942e994e8bca4ced2a9358f2474927d6ec
We have had multiple experiments so far to collect high-cardinality data (mainly in aggregated form).
1. we have a `/top` endpoint which aggregates events with an upper bound
2. we use the same API (eventstat) to publish S3 gateway-mt agents to influxdb
This patch starts to replace these APIs with jtolio/eventkit. Instead of aggregating, all events can be sent to a collector host where we can do aggregation and/or persist the data.
Change-Id: Id6df4882b51d2dbd2be9401ee4199d14f3ff7186
The threshold of piece deletions from the nodes during CommitObject
when overriding an existing object seemed to cause a race condition in
tests.
This change makes the threshold configurable so we can set it to maximum
so CommitObject waits until all pieces are removed from the nodes in the
test.
Change-Id: Idf6b52e71d0082a1cd87ad99a2edded6892d02a8
We removed the monkit call from the "object" method because it was
using too much CPU and was visible on the CPU profile, but we should
first try an optimized version of this call.
Change-Id: Ib76d8a2968a704ce47235c6dac6edad4e40bde48
One of two parts to stop using the objects loop for bucket accounting:
this method collects bucket tallies from a list of bucket locations.
Change-Id: Id2d492582453e28463cddf1245622fb7f191050c
We want to send emails to SNOs. Node status changes go through the
overlay service, so it's a good place to add the mail service.
Add the mailservice.Service, satellite address, and satellite name to
overlay service. Also add feature flag --overlay.send-node-emails
Change-Id: I3bd2cb3bf22f9724954ce2374f8b651b902b3a24
Add getSalt to projects api. Add action, GET_SALT, on Store
Projects module to make the api request and return the salt
string everywhere in the web app that generates an access grant.
The Wasm code which is used to create the access grant has been
changed to decode the salt as a base64 encoded string. The names
of the function calls in the changed Wasm code have also been
changed to ensure that access grant creation fails if JS access
grant worker code and Wasm code are not the same version.
https://github.com/storj/storj-private/issues/64
Change-Id: Ia2bc4cbadad84b066ca1882b042a3f0bb13c783a
We have a new flow where the existing object is deleted not on
BeginObject but on CommitObject. Deletion on CommitObject is still
missing deletion from the storage nodes. This change adds that part
to the code.
Fixes https://github.com/storj/storj/issues/5222
Change-Id: Ibfd34665b2a055ec6c0d6e260c1a57e8a4c62b0e
We have code to limit the segments loop in case it hits the DB too
hard, but so far we haven't used this loop feature in production. This
is a simple change to skip the logic responsible for rate limiting and
its monitoring if limiting is disabled (RateLimit = 0).
Change-Id: I43e07b407c6e65cf252303159d052eef250d1bea
Until this change we were stripping the prefix from the object key on the satellite side. Because of that we were transferring unnecessary data
from the DB over the network. This change adjusts the iterator SQL
queries to use SUBSTRING to remove the prefix on the DB side and avoid
sending it to the satellite.
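The query change has roughly this shape (simplified and illustrative,
not the exact iterator SQL):

    package metabasesketch

    // Before, the full object_key was selected and the prefix stripped in Go;
    // now the prefix is removed on the DB side so it never crosses the wire.
    // $2 would be len(prefix)+1 and $3 the prefix followed by '%' (with LIKE
    // special characters escaped); parameter wiring is omitted here.
    const listSuffixQuery = `
        SELECT SUBSTRING(object_key FROM $2) AS key_suffix, stream_id
        FROM objects
        WHERE project_id = $1 AND object_key LIKE $3
        ORDER BY object_key
    `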
Benchmark against 'main':
Unfortunately "time/op" is very unstable while doing a local bench in
this case; sometimes there is no difference in time and sometimes it's up to 18%. I never saw results where the old solution is faster than the new one. Results for "alloc/op" and "allocs/op" are rather consistent.
name old time/op new time/op delta
NonRecursiveListing/Cockroach/listing_no_prefix-8 1.98ms ± 6% 2.05ms ±23% ~ (p=1.000 n=9+10)
NonRecursiveListing/Cockroach/listing_with_prefix-8 3.97ms ± 8% 3.42ms ±20% -13.86% (p=0.005 n=10+10)
NonRecursiveListing/Cockroach/listing_only_prefix-8 8.42ms ±16% 7.58ms ± 5% -9.91% (p=0.002 n=10+10)
name old alloc/op new alloc/op delta
NonRecursiveListing/Cockroach/listing_no_prefix-8 16.7kB ± 0% 16.9kB ± 0% +1.16% (p=0.000 n=10+10)
NonRecursiveListing/Cockroach/listing_with_prefix-8 27.3kB ± 0% 28.2kB ± 0% +3.31% (p=0.000 n=10+10)
NonRecursiveListing/Cockroach/listing_only_prefix-8 60.0kB ± 0% 62.4kB ± 0% +3.93% (p=0.000 n=10+8)
name old allocs/op new allocs/op delta
NonRecursiveListing/Cockroach/listing_no_prefix-8 312 ± 0% 315 ± 0% +0.96% (p=0.000 n=10+10)
NonRecursiveListing/Cockroach/listing_with_prefix-8 526 ± 0% 541 ± 0% +2.85% (p=0.000 n=10+10)
NonRecursiveListing/Cockroach/listing_only_prefix-8 1.16k ± 0% 1.23k ± 0% +5.24% (p=0.000 n=10+10)
Change-Id: I23e501494ededafb2dd5ea903e8e4e313b42e956
This change increments users' failed_login_count in the database layer to avoid a potential data race.
It also updates login_lockout_expiration in the same operation.
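A sketch of the single-statement update; the column names come from the
description above, the rest is illustrative:

    package consolesketch

    import (
        "context"
        "database/sql"
        "time"
    )

    // registerFailedLogin bumps the counter and sets the lockout expiration
    // in one UPDATE, so concurrent failed logins cannot race a
    // read-modify-write done in application code.
    func registerFailedLogin(ctx context.Context, db *sql.DB, userID []byte, lockoutUntil time.Time) error {
        _, err := db.ExecContext(ctx, `
            UPDATE users
            SET failed_login_count = failed_login_count + 1,
                login_lockout_expiration = $2
            WHERE id = $1
        `, userID, lockoutUntil)
        return err
    }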
see: https://github.com/storj/storj/issues/4986
Change-Id: I74624f1bee31667b269cb205d74d16e79daabcb6
With this change we are switching the method used to begin an object
from BeginObjectExactVersion to BeginObjectNextVersion. The main
implication is that from now on it will be possible to have an object
with a version different than 1. A new object will always get the first
available version.
The main reason to do this is to avoid deleting the existing object
while re-uploading an object. Now we can create multiple pending
objects, but only the last committed one will be available to the user.
Any previously committed object will be deleted. Because of that we
moved the logic to delete the existing object from the BeginObject to
the CommitObject request.
The new logic is behind a feature flag to be able to test it well before
enabling it on production.
Fixes https://github.com/storj/storj/issues/4871
Change-Id: I2dd9c7364fd93796a05ef607bda9c39a741e6a89
The flags weren't properly loading from config.
The code assumed that every node that's online for downloading also has
data uploaded to it -- which is not true.
Change-Id: Ifd65a47b9eca5b4841231928244fab17acbde6fb
This is another change to remove monkit calls from fast methods. Those
calls are visible in CPU profiles.
Change-Id: Ib3beba0dca6a6d93c3342b0994c580f78bbdd50b
After you create a brand new cluster (with storj-up, for example), project usage fails during the first 5 minutes.
The problem is the usage of `AS OF SYSTEM TIME`, which points to a time when the master database didn't exist.
In this specific case the database-not-found error can be ignored to avoid such messages. (If the database is really missing, we will have problems much earlier, e.g. at login.)
Change-Id: I51ee78994d91fc2a14b56646402faaaa8154c934
This patch addresses the following issues:
1. Running the full migration in cockroachdb is quite slow. We already have an approach for unit tests to start from the latest snapshot. This patch makes it possible to use it for integration tests.
2. Migration requires executing a separate command, which makes it hard to run the application in containerized test environments (like storj-up) or from an IDE. This patch introduces a hidden flag to run the migration.
3. Test user creation is painful. We do it by calling GraphQL + the admin API. Providing an option with a test user makes the integration tests significantly simpler (especially as the projectID -> access grant can be predictable).
Change-Id: I61010728727b490ea6aac32620f2da0484966727