Added a rangedloop in storj-sim for each satellite, to verify it works for the metrics observer. Also removed the identity from the rangedloop peer as we never use it, added logs on service run, replaced the endless for loop in the service with a proper loop, and moved the interval value to config.
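For illustration, a minimal sketch of the interval-driven loop (names like Interval and runOnce are hypothetical stand-ins, not the actual service API):
```
package rangedloop

import (
	"context"
	"time"

	"go.uber.org/zap"
)

// Sketch of the loop described above; Interval, runOnce, log and config
// are hypothetical stand-ins for the real service wiring.
type Config struct {
	Interval time.Duration
}

type Service struct {
	log    *zap.Logger
	config Config
}

func (s *Service) Run(ctx context.Context) error {
	s.log.Info("ranged loop starting", zap.Duration("interval", s.config.Interval))
	ticker := time.NewTicker(s.config.Interval)
	defer ticker.Stop()
	for {
		if err := s.runOnce(ctx); err != nil {
			s.log.Error("ranged loop iteration failed", zap.Error(err))
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func (s *Service) runOnce(ctx context.Context) error { return nil } // placeholder
```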
Closes: https://github.com/storj/storj/issues/5414
Change-Id: Ibc3b06071b68feda4a35b45da2bbe36e22a02fc8
This change stubs the userinfo endpoint from storj/common/pb/userinfo.proto.
It also adds config for allowed peers and a method for verifying peers.
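A rough sketch of the peer verification idea, assuming hypothetical names (the real code derives the peer identity from the request context):
```
package userinfo

import (
	"context"
	"fmt"
)

// Sketch only: Config.AllowedPeers and peerIDFromContext are
// hypothetical stand-ins for the real identity plumbing.
type Config struct {
	AllowedPeers []string
}

type Endpoint struct{ config Config }

func peerIDFromContext(ctx context.Context) (string, error) { return "", nil } // placeholder

func (e *Endpoint) verifyPeer(ctx context.Context) error {
	peerID, err := peerIDFromContext(ctx)
	if err != nil {
		return err
	}
	for _, allowed := range e.config.AllowedPeers {
		if peerID == allowed {
			return nil
		}
	}
	return fmt.Errorf("peer %q is not allowed", peerID)
}
```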
Issue: https://github.com/storj/storj/issues/5358
Change-Id: I057a0e873a9e9b3b9ad0bba69305f0d708bd9b9e
Adds a new method, Exists, which can be used to verify which of the
requested piece IDs exist on a storage node. It will verify only pieces
which belong to the satellite that used the endpoint.
The minimum WASM size was increased a bit.
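A minimal sketch of the behavior, with hypothetical names and string IDs instead of the real typed piece IDs:
```
package pieces

// Sketch only: Store.has is a hypothetical lookup.
type Store struct{}

func (s *Store) has(satelliteID, pieceID string) bool { return false } // placeholder

// Exists reports which of the requested piece IDs exist on this node,
// considering only pieces that belong to the calling satellite.
func (s *Store) Exists(satelliteID string, pieceIDs []string) (existing []string) {
	for _, id := range pieceIDs {
		if s.has(satelliteID, id) {
			existing = append(existing, id)
		}
	}
	return existing
}
```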
https://github.com/storj/storj/issues/5415
Change-Id: Ia5f9cadeb526541b2776a8973eb7d50133ad8636
This change creates a new independent process, the 'auditor', comparable
to the repairer, gc, and api processes. This will allow auditors to be
scaled independently of the core.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I8a29eeb0a6e35753dfa0eab5c1246048065d1e91
Now that all the reverification changes have been made and the old code
is out of the way, this commit renames the new things back to the old
names. Mostly, this involves renaming "newContainment" to "containment"
or "NewContainment" to "Containment", but there are a few other renames
that have been promised and are carried out here.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: I34e2b857ea338acbb8421cdac18b17f2974f233c
Now that we are doing scalable piecewise reverifications, the code for
handling the old way of doing things (containment, pending audits,
reporting, testing) can now be removed.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: Ief1a75f423eff682e8f3d57804e343b3409a6631
Here we add a worker class comparable to audit.Worker, which will be
responsible for pulling items off of the reverification queue and
calling reverifier.ReverifyPiece on them.
Note that piecewise reverification audits (which this will control) are
not yet being done. That is, nothing is being added to the
reverification queue at this point.
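A sketch of the worker loop shape described above, with hypothetical queue and reverifier interfaces:
```
package audit

import "context"

// Sketch: Queue, Reverifier and ReverificationJob are hypothetical
// stand-ins for the real types.
type Queue interface {
	GetNextJob(ctx context.Context) (ReverificationJob, error)
}

type Reverifier interface {
	ReverifyPiece(ctx context.Context, job ReverificationJob) error
}

type ReverificationJob struct{}

type Worker struct {
	queue      Queue
	reverifier Reverifier
}

// process pulls jobs off the reverification queue and reverifies them.
func (w *Worker) process(ctx context.Context) error {
	for {
		job, err := w.queue.GetNextJob(ctx)
		if err != nil {
			return err
		}
		if err := w.reverifier.ReverifyPiece(ctx, job); err != nil {
			return err
		}
	}
}
```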
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I94e28830e27caa49f2c8bd4a2336533e187ab69c
The query changes we made while fixing the usage graph led to wrong
payout calculations directly linked to disk space.
This change:
- avoids converting from Bh to B directly in the query
- returns the at_rest_total in the original bytes*hour value
- returns at_rest_total_bytes as the calculated disk space used in bytes
- uses at_rest_total_bytes only for the disk space graph
- returns summary_bytes as the average disk space used within the specified date range
- updates the disk space graph header to "average disk space used this month"
The total disk space used in the month is also displayed in B, not B*day.
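A worked example of the conversion, with made-up numbers:
```
package main

import "fmt"

func main() {
	// Converting bytes*hour back to bytes by dividing by the hours
	// in the period; numbers here are illustrative only.
	atRestTotal := 720000.0 // B*h accumulated over the period
	hoursInPeriod := 720.0  // e.g. a 30-day month
	fmt.Println(atRestTotal / hoursInPeriod) // 1000 B average disk space used
}
```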
Resolves https://github.com/storj/storj/issues/5355
Change-Id: I2cfefb0fe711f9c59de2adb547c4ab50b05c7cbb
NewContainment will replace Containment later in this commit chain, but
for now it is not yet being used.
NewContainment will allow a node to be contained for multiple pending
reverify jobs at a time. It is implemented by way of the reverify queue.
Refs: https://github.com/storj/storj/issues/5231
Change-Id: I126eda0b3dfc4710a88fe4a5f41780618ec19101
at_rest_total_bytes and summary_bytes are storage usages returned as bytes
instead of bytes*hour. They are used for the disk space graph.
Updates https://github.com/storj/storj/issues/5355
Change-Id: I81f77fe9b9069cf3b29ab681586e506363e5b066
Adding a new worker comparable to Verifier, called Reverifier; as the
name suggests, it will be used for reverifications, whereas Verifier
will be used for verifications.
This allows distinct logging from the two classes, plus we can add some
configuration that is specific to the Reverifier.
There is a slight modification to GetNextJob that goes along with this.
This should have no impact on operational concerns.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: Ie60d2d833bc5db8660bb463dd93c764bb40fc49c
This change turns off fsync on the postgres container used for tests. This
reduces migration time significantly when initializing new satellite
databases.
The change also includes a new benchmark for satellite initialization in
testplanet.
$ benchstat old.txt new.txt
name                       old time/op  new time/op  delta
Run_Satellite/Postgres-16  1.36s ± 0%   0.08s ± 0%   ~ (p=1.000 n=1+1)
Change-Id: Ic954767133864770cf652b0dfdcd6b109a167b5f
Running all of the migrations necessary to initialize a storage node
database takes a significant amount of time during test runs.
The package currently supports initializing a database from manually coalesced
migration data (i.e. a snapshot), which improves the situation somewhat.
This change takes things a bit further by changing the snapshot code to
instead hydrate the database directory from a pre-generated snapshot zip
file.
name                                old time/op  new time/op  delta
Run_StorageNodeCount_4/Postgres-16  2.50s ± 0%   0.16s ± 0%   ~ (p=1.000 n=1+1)
Change-Id: I213bbba5f9199497fbe8ce889b627e853f8b29a0
This change causes rate limiting errors to be returned to the client
as JSON objects rather than plain text to prevent the satellite UI from
encountering issues when trying to parse them.
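A minimal sketch of the idea, assuming a hypothetical helper and field name:
```
package web

import (
	"encoding/json"
	"net/http"
)

// Sketch: return the rate-limit error as JSON instead of plain text so
// the UI can parse it. The error field name is illustrative.
func serveRateLimitError(w http.ResponseWriter) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusTooManyRequests)
	_ = json.NewEncoder(w).Encode(map[string]string{
		"error": "Too Many Requests",
	})
}
```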
Resolves storj/customer-issues#88
Change-Id: I11abd19068927a22f1c28d18fc99e7dad8461834
As part of the effort of splitting out the auditor workers to their own
process, we are transitioning the communication between the auditor
chore and the verification workers to a queue implemented in the
database, rather than the sequence of in-memory queues we used to use.
This logical database is safely partitionable from the rest of
satelliteDB.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I6cd31ac5265423271fbafe6127a86172c5cb53dc
Add a new chore to periodically insert nodes that are offline and
have not received an offline email in a certain amount of time into node
events.
Change-Id: I658b385bb777b0240c98092946a93d65bee94abc
Create a NodeEvents chore on the satellite core to read the nodeevents DB
and notify node operators of node events. The chore sends notifications
grouped by email and event type: it selects the oldest entry in
nodeevents.DB and also any other event with the same email and event
type, no matter how old it is. The oldest entry of a group must exist for
a minimum amount of time before that group can be selected, however.
This minimum amount of time is a configurable value:
--node-events.selection-wait-period. This wait period allows us to
combine events of the same type and same email address into a single
email.
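For illustration, the selection could be expressed as SQL along these lines (table and column names are assumptions, not the actual schema):
```
package nodeevents

// Sketch: pick the oldest entry that has waited at least the configured
// period, then take every event with the same email and event type.
const selectForNotification = `
	WITH oldest AS (
		SELECT email, event FROM node_events
		WHERE created_at < now() - $1::interval
		ORDER BY created_at ASC
		LIMIT 1
	)
	SELECT ne.id, ne.email, ne.event
	FROM node_events ne, oldest
	WHERE ne.email = oldest.email AND ne.event = oldest.event
`
```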
Change-Id: I8b444aa324d2dae265cc27d9e9e85faef79195d8
Add nodeevents.DB to satellite overlay service so we can insert node
events into the nodeevents DB.
Change-Id: I642c0ccc9941ecdb08cb22d5c8cf701959a55156
Full file sync before saving files to the piecestore seems to be very expensive.
Before we take any steps to eliminate it, we are planning to do more measurement, which requires a temporary flag to turn it off (not for production).
Change-Id: I5cb8f8cb348ca3590fb5eae14d02edb3f0424617
Currently, it's not straightforward how to benchmark with testplanet.
This change adds a Bench method to make it easy.
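A plausible usage sketch, assuming Bench has roughly this signature (verify against the actual definition in private/testplanet):
```
package mything_test

import (
	"testing"

	"storj.io/common/testcontext"
	"storj.io/storj/private/testplanet"
)

// Sketch: the benchmark body runs against an already-initialized planet.
func BenchmarkSomething(b *testing.B) {
	testplanet.Bench(b, testplanet.Config{
		SatelliteCount: 1, StorageNodeCount: 4, UplinkCount: 1,
	}, func(b *testing.B, ctx *testcontext.Context, planet *testplanet.Planet) {
		for i := 0; i < b.N; i++ {
			// exercise the code under test against the running planet
		}
	})
}
```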
Change-Id: I212dfe71a18bb6ddd7a127e5b6d313b1b0c1f824
We will introduce new logic for creating new objects (BeginObject).
Instead of using a single version internally (1), we will select the first
available version during object creation. Because we need to be sure
that everything is wired up correctly, we need a feature flag to
control whether the new feature is enabled.
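A minimal sketch of the feature-flag shape, with illustrative names that are not the real config:
```
package metabase

// Sketch only: flag and helper names are illustrative.
type Config struct {
	UseNextVersion bool // feature flag guarding the new BeginObject logic
}

func nextVersion(cfg Config, highestExisting int64) int64 {
	if !cfg.UseNextVersion {
		return 1 // old behavior: always use version 1 internally
	}
	return highestExisting + 1 // new behavior: first available version
}
```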
Change-Id: If0f8496397130811f43bf9db9fdcc2b30cd2e4ca
Implement a new service to read retain filters from a bucket and
send them out to storagenodes.
This allows the retain filters to be generated by a separate command on
a backup of the database.
Parallelism (setting ConcurrentSends) and end-to-end garbage collection
tests will be restored in a subsequent commit.
Solves https://github.com/storj/team-metainfo/issues/121
Change-Id: Iaf8a33fbf6987676cc3cf74a18a8078916fe673d
Our Test Versions still requires 1.16 to be compatible with our oldest
uplink versions. These changes make the code compile with 1.16.
Also, it makes go generate work in private/apigen/example.
Change-Id: Ib2f7493941a16f361328fe01d2be293f26123719
Previously the paths were set relative to the root of the module;
however, the code did not ensure that we were running relative to the
module directory.
Also, ensure typescript output corresponds to our styling.
Change-Id: I2b3cbd4ea8f2615e35c7b58c6fb8851669c47885
We would like to have a separate process/command to collect bloom
filters from a source different from the production DBs. Such a process
will use the segment loop to build bloom filters for all storage nodes
and will send them to a Storj bucket.
This change adds integration with testplanet which makes writing
unit tests possible.
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: I7b335c5dafa8cffe265c56b75d8c8f8567580893
This change adds the following endpoints:
- projects/apikeys/{id}: returns a paged list of API keys for the
project specified by the given ID
- apikeys/delete/{id}: deletes the API key specified by the given ID
Additionally, the API Go code generator has been given the ability to
process unsigned integer parameters.
Change-Id: I5ff24e012da24a3f06bea1ebb62bae6ff62f951a
Today each storagenode has a port which is open to the internet and handles DRPC protocol calls.
When we make an HTTP call on the DRPC endpoint, it hangs until a timeout.
This patch changes the behavior: the main DRPC port of the storagenodes can accept HTTP requests and can be used to monitor the status of the node:
* it returns HTTP 200 only if the storagenode is healthy (not suspended / disqualified + online score > 0.9)
* it CAN include information about the current status (per satellite). It's opt-in; you must configure it explicitly.
In this way it becomes extremely easy to monitor storagenodes with external uptime services.
Note: this patch exposes some information which was not easily available before (especially the node status and used satellites). I think it should be acceptable:
* Until we have more community satellites, all storagenodes are connected to the main Storj satellites.
* With community satellites, it's a good thing to have more transparency (an easy way to check who is connected to which satellites)
The implementation is based on this line:
```
http.Serve(NewPrefixedListener([]byte("GET / HT"), publicMux.Route("GET / HT")), p.public.http)
```
This line answers TCP requests that begin with `GET / HT...` (a GET HTTP request to the route), but puts back the removed prefix before handing the connection to the HTTP server.
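For illustration, a prefixed listener can be sketched like this (simplified; not the actual implementation):
```
package server

import (
	"bytes"
	"io"
	"net"
)

// Sketch: a listener wrapper that replays already-consumed prefix bytes,
// so the HTTP server still sees the full "GET / HT..." request line.
type prefixedListener struct {
	net.Listener
	prefix []byte
}

func NewPrefixedListener(prefix []byte, inner net.Listener) net.Listener {
	return &prefixedListener{Listener: inner, prefix: prefix}
}

func (l *prefixedListener) Accept() (net.Conn, error) {
	conn, err := l.Listener.Accept()
	if err != nil {
		return nil, err
	}
	return &prefixedConn{
		Conn:   conn,
		reader: io.MultiReader(bytes.NewReader(l.prefix), conn),
	}, nil
}

type prefixedConn struct {
	net.Conn
	reader io.Reader
}

func (c *prefixedConn) Read(p []byte) (int, error) { return c.reader.Read(p) }
```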
Change-Id: I3700c7e24524850825ecdf75a4bcc3b4afcb3a74
When encoding structs into JSON, byte slices are marshalled as base64-encoded
strings using base64.StdEncoding.Encode():
ea9c3fd42d/src/encoding/json/encode.go (L833-L861)
We, however, expect API secrets to be encoded as base64URL, so when
a marshalled secret (with byte slice type) is added to the multinode
dashboard, it fails with `illegal base64 data at input byte XX`.
This change changes the type of the APISecret field in the
multinode/nodes.Nodes struct to use the multinodeauth.Secret type instead
of []byte.
multinodeauth.Secret is extended with custom MarshalJSON and
UnmarshalJSON methods which implement the json.Marshaler and
json.Unmarshaler interfaces, respectively.
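A minimal sketch of those methods, simplified from the real type:
```
package multinodeauth

import (
	"encoding/base64"
	"encoding/json"
)

// Sketch: marshal the secret as base64URL rather than the std base64
// that encoding/json uses for plain []byte. Simplified from the real type.
type Secret [32]byte

func (s Secret) MarshalJSON() ([]byte, error) {
	return json.Marshal(base64.URLEncoding.EncodeToString(s[:]))
}

func (s *Secret) UnmarshalJSON(data []byte) error {
	var encoded string
	if err := json.Unmarshal(data, &encoded); err != nil {
		return err
	}
	decoded, err := base64.URLEncoding.DecodeString(encoded)
	if err != nil {
		return err
	}
	copy(s[:], decoded)
	return nil
}
```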
Resolves https://github.com/storj/storj/issues/4949
Change-Id: Ib14b5f49ceaac109620c25d7ff83be865c698343
Similar to the existing snapshot-based tests of the satellite/metabase db, we make a migration here which is:
* dedicated to unit tests
* faster (with fewer steps)
* but safe: an additional unit test ensures that the snapshot-based migration and the normal prod migration have the same results.
Change-Id: Ie324b09f64b4553df02247a9461ece305a6cf832
This change implements a unit test for ensuring proper
processing of requests and responses by generated API code.
Additionally, this change requires API handlers to explicitly receive
Monkit scopes rather than assuming that `mon` will always exist in the
generated API code's namespace.
Change-Id: Iea56f139f9dad0050b7d09ea765189280c3466f2
I don't know why the go people thought this was a good idea, because
this automatic reformatting is bound to do the wrong thing sometimes,
which is very annoying. But I don't see a way to turn it off, so best to
get this change out of the way.
Change-Id: Ib5dbbca6a6f6fc944d76c9b511b8c904f796e4f3
- Previously unused struct Endpoint.Request now defines the form
of the request body.
- Path parameters (e.g. "id" in "/delete/{id}") are defined in
the Endpoint.PathParams field.
- Endpoint.Params has been renamed to Endpoint.QueryParams to
eliminate confusion.
Change-Id: Ifef51ca2f362c33086f0e43e936d50b0fdd18aa1
Previously there was no realtime tracking of storage usage
during copies. Now there is.
Closes https://github.com/storj/storj/issues/4719
Change-Id: I0d536bf551d16208116c3aceac89ed590ec473bf
This change fixes the issue where the API generator would produce
different Go code for the same API definition upon each invocation
due to the random nature of map iteration.
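The standard fix is to iterate over sorted keys; a minimal sketch:
```
package apigen

import "sort"

// Sketch: iterate a map in sorted key order so generated output is
// stable across invocations.
func sortedKeys(m map[string]string) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}
```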
Change-Id: I6770a10faf06311c24f541611c25d0b2b0f8e521
A message intended to diagnose issues in a previous change was
accidentally allowed to remain in finalized code and merged into
the codebase.
Change-Id: I7df662a81e3790de065beeb6db54be5280ac2fa1
This change resolves several issues:
* Imports can no longer be duplicated
* The import statement now conforms to Storj's code style requirements
* Packages are imported as needed, eliminating extraneous imports
Change-Id: Ie021e06bb0d472c65aa3575db94bc81ccac108e3
This change integrates the session management database functionality
with the web application. Claim-based authentication has been removed
in favor of session token-based authentication.
Change-Id: I62a4f5354a3ed8ca80272814aad2448f901eab1b