Added the rangedloop to storj-sim for each satellite, to verify that it
works for the metrics observer. Removed the identity from the rangedloop
peer since we never use it, added logging when the service runs, moved
the loop into the service instead of an endless for loop, and made the
interval value configurable.
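A minimal sketch of the interval-driven Run loop described above,
assuming illustrative Service/Config names rather than the actual
rangedloop types:

package rangedloop

import (
	"context"
	"time"

	"go.uber.org/zap"
)

// Config exposes the loop interval as a configuration value.
type Config struct {
	Interval time.Duration `help:"how often to run the ranged loop" default:"2h"`
}

// Service owns the loop instead of an endless for loop in the peer.
type Service struct {
	log    *zap.Logger
	config Config
}

// Run logs each iteration and waits the configured interval between runs.
func (s *Service) Run(ctx context.Context) error {
	ticker := time.NewTicker(s.config.Interval)
	defer ticker.Stop()

	for {
		s.log.Info("ranged loop iteration starting")
		if err := s.runOnce(ctx); err != nil {
			s.log.Error("ranged loop iteration failed", zap.Error(err))
		}

		select {
		case <-ticker.C:
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

// runOnce would split the segment space into ranges and feed the observers.
func (s *Service) runOnce(ctx context.Context) error { return nil }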
Closes: https://github.com/storj/storj/issues/5414
Change-Id: Ibc3b06071b68feda4a35b45da2bbe36e22a02fc8
This change creates a new independent process, the 'auditor', comparable
to the repairer, gc, and api processes. This will allow auditors to be
scaled independently of the core.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I8a29eeb0a6e35753dfa0eab5c1246048065d1e91
Now that all the reverification changes have been made and the old code
is out of the way, this commit renames the new things back to the old
names. Mostly, this involves renaming "newContainment" to "containment"
or "NewContainment" to "Containment", but there are a few other renames
that have been promised and are carried out here.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: I34e2b857ea338acbb8421cdac18b17f2974f233c
Now that we are doing scalable piecewise reverifications, the code for
handling the old way of doing things (containment, pending audits,
reporting, testing) can now be removed.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: Ief1a75f423eff682e8f3d57804e343b3409a6631
NewContainment will replace Containment later in this commit chain, but
for now it is not yet being used.
NewContainment will allow a node to be contained for multiple pending
reverify jobs at a time. It is implemented by way of the reverify queue.
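A rough sketch of containment backed by the reverify queue, where a node
stays contained as long as it has pending jobs; the type and method names
below are illustrative, not the actual audit package API:

package audit

import (
	"context"

	"storj.io/common/storj"
)

// ReverificationJob is one pending piecewise reverification for a node
// (an illustrative shape, not the real type).
type ReverificationJob struct {
	NodeID   storj.NodeID
	StreamID string
	Position uint64
}

// ReverifyQueue stores pending jobs; a node may appear in many jobs at
// once, which is what lets a node be contained for multiple segments.
type ReverifyQueue interface {
	Insert(ctx context.Context, job *ReverificationJob) error
	GetByNodeID(ctx context.Context, nodeID storj.NodeID) ([]*ReverificationJob, error)
}

// containment answers containment-mode questions on top of the queue.
type containment struct {
	queue ReverifyQueue
}

// IsContained reports whether the node still has any pending reverify jobs.
func (c *containment) IsContained(ctx context.Context, nodeID storj.NodeID) (bool, error) {
	jobs, err := c.queue.GetByNodeID(ctx, nodeID)
	if err != nil {
		return false, err
	}
	return len(jobs) > 0, nil
}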
Refs: https://github.com/storj/storj/issues/5231
Change-Id: I126eda0b3dfc4710a88fe4a5f41780618ec19101
Reputation updates during repair currently consume a lot of database
resources. Sometimes increasing the rate of repair is more important
than auditing a node based on whether or not it has the correct piece
during repair; that is the job of the audit service.
This commit implements an intermediate solution from this issue: https://github.com/storj/storj/issues/5089
This commit does not address the more in-depth fix discussed here: https://github.com/storj/storj/issues/4939
Change-Id: I4163b18d78a96fadf5265789fd73c8aa8def0e9f
If we are processing a list of segments (CSV), we should not stop if one
of the segments is not found in the DB.
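A sketch of the intended control flow, with a stand-in error and repair
callback instead of the real metabase API; the point is only to skip the
missing entry rather than abort:

package main

import (
	"context"
	"errors"
	"log"
)

// errSegmentNotFound is a stand-in for the real "not found" error.
var errSegmentNotFound = errors.New("segment not found")

type csvEntry struct {
	StreamID string
	Position uint64
}

func processEntries(ctx context.Context, entries []csvEntry, repair func(context.Context, csvEntry) error) error {
	for _, e := range entries {
		err := repair(ctx, e)
		if errors.Is(err, errSegmentNotFound) {
			// keep going with the rest of the CSV
			log.Printf("segment not found, skipping: %s/%d", e.StreamID, e.Position)
			continue
		}
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {}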
Change-Id: I720f85dc7601c2ca77032e20c1577de55092bd9b
The current option is to provide a stream id and position as input, but
that is not very efficient when we have a long list of segments to
repair. This change adds an option to read a whole CSV file and process
each entry one by one.
If the command has a single argument, it is treated as the CSV file
location; if it has two arguments, they are parsed as a stream id and
position.
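Roughly, the argument handling works like the sketch below (the real
cobra command differs; names here are illustrative):

package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parseInput mirrors the argument handling described above: one argument
// means a CSV file, two arguments mean a single stream id and position.
func parseInput(args []string) (csvPath string, streamID string, position uint64, err error) {
	switch len(args) {
	case 1:
		return args[0], "", 0, nil
	case 2:
		pos, err := strconv.ParseUint(args[1], 10, 64)
		if err != nil {
			return "", "", 0, fmt.Errorf("invalid position: %w", err)
		}
		return "", args[0], pos, nil
	default:
		return "", "", 0, errors.New("expected <csv-file> or <streamid> <position>")
	}
}

func main() {
	fmt.Println(parseInput([]string{"segments.csv"}))
	fmt.Println(parseInput([]string{"example-stream-id", "42"}))
}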
Change-Id: I1e91cf57a794d81d74af0091c24a2e7d00d1fab9
Implements logic for a satellite command to repair a single segment.
The segment will be repaired even if it is healthy; this is not checked
during the process. As part of the repair, the command downloads the
whole segment into memory and tries to reupload it to a number of new
nodes equal to the existing number of pieces. After a successful
upload, the new pieces completely replace the existing pieces.
Command:
satellite repair-segment <streamid> <position>
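An outline of that forced repair flow, with stand-in types instead of the
real metabase/overlay/piece-upload wiring:

package main

import "context"

// Stand-in types; the real command uses metabase segments, overlay node
// selection, and the repairer's piece uploader.
type Piece struct{ NodeID string }

type Segment struct {
	StreamID string
	Position uint64
	Pieces   []Piece
}

type Deps interface {
	DownloadSegment(ctx context.Context, seg Segment) ([]byte, error)
	SelectNewNodes(ctx context.Context, count int) ([]string, error)
	UploadPieces(ctx context.Context, data []byte, nodes []string) ([]Piece, error)
	ReplacePieces(ctx context.Context, seg Segment, pieces []Piece) error
}

// repairSegment follows the flow described above: download the whole
// segment into memory, upload to as many new nodes as there are existing
// pieces, then replace the old pieces with the new ones.
func repairSegment(ctx context.Context, seg Segment, deps Deps) error {
	data, err := deps.DownloadSegment(ctx, seg) // health is not checked
	if err != nil {
		return err
	}
	nodes, err := deps.SelectNewNodes(ctx, len(seg.Pieces))
	if err != nil {
		return err
	}
	newPieces, err := deps.UploadPieces(ctx, data, nodes)
	if err != nil {
		return err
	}
	return deps.ReplacePieces(ctx, seg, newPieces)
}

func main() {}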
https://github.com/storj/storj/issues/5254
Change-Id: I8e329718ecf8e457dbee3434e1c68951699007d9
Add nodeevents.DB to satellite overlay service so we can insert node
events into the nodeevents DB.
Change-Id: I642c0ccc9941ecdb08cb22d5c8cf701959a55156
The new 'MultipleVersions' flag was not correctly passed from the
metainfo configuration to the metabase configuration. Because the
configuration was set correctly for unit tests we didn't catch it, and
the issue was found while testing on the QA satellite.
This change reduces the number of places where new metabase flags need
to be propagated from the metainfo configuration, to avoid problems
with setting new flags in the future.
Fixes https://github.com/storj/storj/issues/5274
Change-Id: I74bc122649febefd87f665be2fba628f6bfd9044
The current deployment strategy requires that the GC bloomfilter generation process executes only once and exits.
Change-Id: I952991f126596aa165d1f2e9fce6f8548c21bdba
Add getSalt to the projects API. Add an action, GET_SALT, on the Store
Projects module to make the API request and return the salt string
everywhere in the web app that generates an access grant.
The Wasm code used to create the access grant has been changed to
decode the salt as a base64 encoded string. The names of the function
calls in the changed Wasm code have also been changed to ensure that
access grant creation fails if the JS access grant worker code and the
Wasm code are not the same version.
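The Wasm-side decode step is essentially standard base64 decoding; a
minimal sketch (the function name is illustrative):

package main

import (
	"encoding/base64"
	"fmt"
)

// decodeSalt sketches the Wasm-side change: the salt now arrives as a
// base64 encoded string and must be decoded before deriving the grant.
func decodeSalt(saltB64 string) ([]byte, error) {
	salt, err := base64.StdEncoding.DecodeString(saltB64)
	if err != nil {
		return nil, fmt.Errorf("invalid salt: %w", err)
	}
	return salt, nil
}

func main() {
	salt, err := decodeSalt("c29tZS1zYWx0") // "some-salt"
	fmt.Println(salt, err)
}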
https://github.com/storj/storj-private/issues/64
Change-Id: Ia2bc4cbadad84b066ca1882b042a3f0bb13c783a
This patch addresses the following issues:
1. Running the full migration in CockroachDB is quite slow. We already have an approach for unit tests to start from the latest snapshot. This patch makes it possible to use it for integration tests as well.
2. Migration requires executing a separate command, which makes it hard to run the application in containerized test environments (like storj-up) or from an IDE. This patch introduces a hidden flag to run the migration.
3. Test user creation is painful. We do it by calling GraphQL + the admin API. Providing an option to create a test user makes the integration tests significantly simpler (especially as the projectID -> access grant can be predictable).
Change-Id: I61010728727b490ea6aac32620f2da0484966727
Add an extra parameter to the pay-invoices command that can be used to restrict which invoices will have a payment attempted in Stripe. The parameter should be of the form MM/DD/YYYY; any invoices created on or after that date will have token balances applied and be processed for payment according to Stripe subscription settings.
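A sketch of how the cutoff argument could be parsed and applied (the
helper names and the date shown are just examples):

package main

import (
	"fmt"
	"time"
)

// parseCutoff parses the MM/DD/YYYY argument; invoices created on or
// after the returned time are the ones processed for payment.
func parseCutoff(arg string) (time.Time, error) {
	return time.Parse("01/02/2006", arg)
}

func createdOnOrAfter(invoiceCreated, cutoff time.Time) bool {
	return !invoiceCreated.Before(cutoff)
}

func main() {
	cutoff, err := parseCutoff("03/15/2023") // example date, not a real run
	fmt.Println(cutoff, err)
	fmt.Println(createdOnOrAfter(time.Now(), cutoff))
}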
Change-Id: I5da5070d3ac97f45c05c02f2849254bdc44413c3
This change introduces the generate-invoices satellite billing
command whose functionality is equivalent to running
apply-free-coupons, prepare-invoice-records,
create-project-invoice-items, and create-invoices in order.
Invoice finalization must still be performed separately.
Change-Id: Ia3d80b95eef1f2776c38bd730ed731e42ec4c35e
Rather than using invoice items to account for storjscan token payments, credit notes will be used and applied to the user's finalized invoice. The credit note reduces the amount due on the user's invoice based on the amount of STORJ token balance the user has on the satellite. Applying credit notes to a finalized invoice also requires that the invoice not be automatically paid when finalized. Therefore, a new command (pay-invoices) was added to initiate payment for users' invoices.
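A sketch of the credit amount calculation only; the actual Stripe
credit-note call is omitted and the function name is illustrative:

package main

import "fmt"

// creditNoteAmount bounds the credit by both the user's token balance and
// the amount due on the finalized invoice (amounts in cents).
func creditNoteAmount(invoiceAmountDue, tokenBalance int64) int64 {
	if tokenBalance < invoiceAmountDue {
		return tokenBalance
	}
	return invoiceAmountDue
}

func main() {
	// e.g. $12.50 due with $20.00 of token balance credits the full $12.50
	fmt.Println(creditNoteAmount(1250, 2000))
}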
Change-Id: Ie539375a10e842e3cb64bf0140834bbab0774f54
We would like to have a separate process/command to collect bloom
filters from a source other than the production DBs. Such a process
will use the segment loop to build bloom filters for all storage nodes
and will send them to a Storj bucket. This change extends the satellite
binary with the appropriate command.
A new GC service for collecting bloom filters will be a subsequent
change.
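A rough sketch of a segment-loop observer accumulating one bloom filter
per storage node; all types below are stand-ins for the real GC and
bloom filter code:

package main

import (
	"context"

	"storj.io/common/storj"
)

// BloomFilter and the segment/piece shapes are stand-ins for the real GC
// types; the point is one accumulating filter per storage node.
type BloomFilter interface {
	Add(pieceID storj.PieceID)
}

type Piece struct {
	NodeID  storj.NodeID
	PieceID storj.PieceID
}

type Segment struct {
	Pieces []Piece
}

// Observer collects a bloom filter for every storage node the loop sees.
type Observer struct {
	filters   map[storj.NodeID]BloomFilter
	newFilter func() BloomFilter
}

// RemoteSegment is called by the segment loop for each remote segment.
func (o *Observer) RemoteSegment(ctx context.Context, segment Segment) error {
	for _, piece := range segment.Pieces {
		filter, ok := o.filters[piece.NodeID]
		if !ok {
			filter = o.newFilter()
			o.filters[piece.NodeID] = filter
		}
		filter.Add(piece.PieceID)
	}
	return nil
}

func main() {}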
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: Ibc03e119c340919cf468fc1f5a4f3d187bb3a5a1
This change adds project ID and bucket name columns to the generated
partner attribution report. Attribution values are now summed based on
their project ID and bucket name in addition to their user agent.
Additionally, the command to generate the attribution report has been
modified to optionally include only certain user agents.
Change-Id: I61a1d854379134f26b31467d9e83a787beb451dd
Implement a buffer for inserting repair items into the queue in a batch.
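A minimal sketch of such an insert buffer, assuming an illustrative
InjuredSegment type and batch-insert callback rather than the real
repair queue API:

package main

import "context"

// InjuredSegment is a stand-in for the repair queue item type.
type InjuredSegment struct {
	StreamID string
	Position uint64
}

// InsertBuffer batches repair queue inserts and flushes them in one call
// once the batch size is reached.
type InsertBuffer struct {
	batchSize   int
	pending     []InjuredSegment
	insertBatch func(ctx context.Context, segments []InjuredSegment) error
}

// Insert queues one segment and flushes when the buffer is full.
func (b *InsertBuffer) Insert(ctx context.Context, seg InjuredSegment) error {
	b.pending = append(b.pending, seg)
	if len(b.pending) >= b.batchSize {
		return b.Flush(ctx)
	}
	return nil
}

// Flush writes any buffered segments to the queue and resets the buffer.
func (b *InsertBuffer) Flush(ctx context.Context) error {
	if len(b.pending) == 0 {
		return nil
	}
	err := b.insertBatch(ctx, b.pending)
	b.pending = b.pending[:0]
	return err
}

func main() {}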
Part of https://github.com/storj/storj/issues/4727
Change-Id: I718472b2f2b1f4993c3d6f15c44923776407155a
Attribution is attached to bucket usage, but that's more granular than
necessary for the attribution report. This change iterates over the
bucket attributions, parses the user agent, converts the first entry
to lower case, and uses that as the key to a map which holds the
attribution totals for each unique user agent.
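A sketch of the aggregation, using a simplified user-agent split and a
single total column instead of the real parser and report columns:

package main

import (
	"fmt"
	"strings"
)

type attributionRow struct {
	UserAgent string
	Total     float64
}

// aggregate keys rows by the lower-cased first entry of the user agent
// and sums the totals per unique key.
func aggregate(rows []attributionRow) map[string]float64 {
	out := make(map[string]float64)
	for _, row := range rows {
		fields := strings.Fields(row.UserAgent)
		if len(fields) == 0 {
			continue
		}
		key := strings.ToLower(fields[0])
		out[key] += row.Total
	}
	return out
}

func main() {
	rows := []attributionRow{
		{UserAgent: "Gateway-MT/1.0 uplink/1.60", Total: 10},
		{UserAgent: "gateway-mt/1.2", Total: 5},
	}
	fmt.Println(aggregate(rows)) // map[gateway-mt:15]
}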
Change-Id: Ib2962ba0f57daa8a7298f11fcb1ac44a8bb97875
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage with byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees so return segment-hours and object-hours too.
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
We've had a lot of issues with Alpine, and currently there's a broken
network issue on Alpine for users running on RPI arm32 architecture
which requires a workaround before docker is able to sync time between
the host and the container: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#time64_requirements.
Since we're switching the base image of the storagenode to Debian,
it's best to switch the base image of all our docker images to
Debian as well for consistency; it reduces drift across them and keeps
the push target consistent.
Change-Id: If3adf7a57dc59f19ef2221b892f340d919798fc5
When there is an error fetching a piece, the reader might be present or
it might not, depending on how far the fetch operation got. The
fetch-pieces code did not handle the "reader-not-present" case. Now it
should.
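The fix amounts to tolerating a missing reader when the fetch fails
early; a sketch with stand-in functions rather than the actual
fetch-pieces code:

package main

import (
	"context"
	"errors"
	"io"
)

// fetchPiece is a stand-in: on error it may or may not return a reader,
// depending on how far the fetch got.
func fetchPiece(ctx context.Context) (io.ReadCloser, error) {
	return nil, errors.New("fetch failed before a reader was opened")
}

// readPiece tolerates both cases instead of assuming the reader exists.
func readPiece(ctx context.Context) ([]byte, error) {
	reader, err := fetchPiece(ctx)
	if err != nil {
		if reader != nil {
			_ = reader.Close() // the fetch got far enough to open a reader
		}
		return nil, err
	}
	defer func() { _ = reader.Close() }()
	return io.ReadAll(reader)
}

func main() {}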
Change-Id: I263657d544d0ab8ba5d307a34ffc76bbf56835d0
We would like to disable in production those parts of code
which are now mixed with new server-side copy logic.
Change-Id: Iff50682bc9545207330f58dd19b5eee53d404d7f
Currently the metainfo/metabase DB connections are missing the proper
application_name needed to differentiate and filter queries on the DB
side for analytics.
Without it, it is very time-consuming to correlate processes and their load.
This change adds the check on DB connection init and passes the
fallbacks in all places, to catch connection strings that do not set it.
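A sketch of the fallback behavior, assuming a URL-style postgres
connection string (the real check happens at DB connection init and the
function name is illustrative):

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// ensureApplicationName appends a fallback application_name to a
// URL-style postgres connection string when it is missing.
func ensureApplicationName(connstr, fallback string) (string, error) {
	if strings.Contains(connstr, "application_name=") {
		return connstr, nil
	}
	u, err := url.Parse(connstr)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("application_name", fallback)
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	out, _ := ensureApplicationName("postgres://user@localhost/metainfo?sslmode=disable", "satellite-core")
	fmt.Println(out)
}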
Change-Id: Iea5cea8658bc63778ff89038e5c1c352bf482cfd
The "satellite fetch-pieces" command allows a satellite operator to
fetch as many pieces of a segment as possible, along with their
original order limits and hashes as provided by the storage nodes. The
fetched pieces and associated info will be stored in a specified
folder as they are, rather than being RS-decoded or decrypted.
It is hoped that this will allow easier debugging of certain one-off
problems we've observed in the wild.
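A sketch of how the fetched pieces, order limits, and hashes might be
laid out on disk as-is; the file layout and types here are illustrative,
not the command's actual output format:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// FetchedPiece is a stand-in for what comes back per node: the raw piece
// bytes plus the original order limit and hash from the storage node.
type FetchedPiece struct {
	PieceNum   int
	Data       []byte
	OrderLimit []byte
	Hash       []byte
}

// writePieces stores everything as-is into the output folder, with no
// RS-decoding or decryption.
func writePieces(dir string, pieces []FetchedPiece) error {
	for _, p := range pieces {
		base := filepath.Join(dir, fmt.Sprintf("piece-%03d", p.PieceNum))
		if err := os.WriteFile(base+".data", p.Data, 0o644); err != nil {
			return err
		}
		if err := os.WriteFile(base+".orderlimit", p.OrderLimit, 0o644); err != nil {
			return err
		}
		if err := os.WriteFile(base+".hash", p.Hash, 0o644); err != nil {
			return err
		}
	}
	return nil
}

func main() {}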
Change-Id: I42ae0e9ef0023538e42473a9be5a2460a3ac0f3a
Users signing up through a URL containing a promo code will have that code applied to their Stripe account instead of the free tier coupon.
Change-Id: I071041b0934648ef3f5bdb05b6ec97c400f89ae4
The main motivation is to wrap the bucket DB and metainfo DB, so we
could check if a bucket is empty before applying geofencing config.
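A sketch of the wrapper idea, with stand-in interfaces for the bucket DB
and the object lookup:

package main

import (
	"context"
	"errors"
)

// Stand-in interfaces for the wrapped bucket DB and the object lookup.
type BucketsDB interface {
	SetBucketPlacement(ctx context.Context, bucket string, placement int) error
}

type ObjectLookup interface {
	BucketEmpty(ctx context.Context, bucket string) (bool, error)
}

type geofencedBuckets struct {
	buckets BucketsDB
	objects ObjectLookup
}

// SetBucketPlacement only applies geofencing config to an empty bucket.
func (g *geofencedBuckets) SetBucketPlacement(ctx context.Context, bucket string, placement int) error {
	empty, err := g.objects.BucketEmpty(ctx, bucket)
	if err != nil {
		return err
	}
	if !empty {
		return errors.New("geofencing can only be changed on an empty bucket")
	}
	return g.buckets.SetBucketPlacement(ctx, bucket, placement)
}

func main() {}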
Change-Id: I8bac21555e01d51a663fb557bc1acfc8106bc2e1
Usage: from a host for the affected satellite, issue this command
and supply the number of segments that have been permanently lost. For
example, to indicate 2 segments permanently lost:
satellite register-lost-segments 2
If the unthinkable happens and this is necessary to use, it is presumed
that we will also remove the non-recoverable segments from the metainfo
db, so they will not continue to appear as 'temporarily unavailable' to
the repair checker.
The InfluxQL query for this will probably look something like:
SELECT sum(total) AS "sum_total"
FROM "v3_stats_new"."autogen"."lost-segments"
WHERE time > :dashboardTime: AND "scope" = 'segment-durability'
GROUP BY time(:interval:), "application", "instance"
FILL(null)
Or use the Flux language in order to get the benefit of the
cumulativeSum function:
from(bucket: "v3_stats_new/autogen")
|> range(start: dashboardTime)
|> filter(fn: (r) => r._measurement == "lost-segments"
and r.scope == "segment-durability"
and r._field == "total")
|> group(columns: ["application", "instance"])
|> cumulativeSum()
Change-Id: I73d364937705aa815af7520ab79c00bc2aea09f6
Multipart upload limits added. The last part has no size limit.
Max number of parts: 10000; min part size: 5 MiB
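A sketch of the corresponding validation, assuming illustrative names:

package main

import (
	"errors"
	"fmt"
)

const (
	maxParts    = 10000
	minPartSize = 5 << 20 // 5 MiB
)

// validatePart enforces the limits above: at most 10000 parts, and every
// part except the last must be at least 5 MiB.
func validatePart(partNumber int, size int64, isLast bool) error {
	if partNumber > maxParts {
		return fmt.Errorf("part number %d exceeds the maximum of %d", partNumber, maxParts)
	}
	if !isLast && size < minPartSize {
		return errors.New("non-final part is smaller than 5 MiB")
	}
	return nil
}

func main() {
	fmt.Println(validatePart(1, 1024, false)) // too small for a non-final part
	fmt.Println(validatePart(2, 1024, true))  // ok: the last part has no size limit
}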
Change-Id: Ic2262ce25f989b34d92f662bde720d4c4d0dc93d