Full file sync before saving files to piecestore seems to be very expensive.
Before we take any steps to eliminate it, we plan to do more measurements, which requires a temporary flag to turn it off (not for production).
Change-Id: I5cb8f8cb348ca3590fb5eae14d02edb3f0424617
Currently, it's not straightforward to benchmark with testplanet.
This change adds a Bench method to make it easy.
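If Bench mirrors Run's signature (an assumption; the callback parameters and workload below are purely illustrative), a benchmark could look roughly like this:
```
import (
	"testing"

	"storj.io/common/testcontext"
	"storj.io/storj/private/testplanet"
)

// Hypothetical sketch only: the exact Bench signature is assumed to mirror
// testplanet.Run, with *testing.B in place of *testing.T.
func BenchmarkUpload(b *testing.B) {
	testplanet.Bench(b, testplanet.Config{
		SatelliteCount: 1, StorageNodeCount: 4, UplinkCount: 1,
	}, func(b *testing.B, ctx *testcontext.Context, planet *testplanet.Planet) {
		for i := 0; i < b.N; i++ {
			// illustrative workload against the running planet
			err := planet.Uplinks[0].Upload(ctx, planet.Satellites[0], "bench", "object", make([]byte, 1024))
			if err != nil {
				b.Fatal(err)
			}
		}
	})
}
```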
Change-Id: I212dfe71a18bb6ddd7a127e5b6d313b1b0c1f824
We will introduce new logic for creating new objects (BeginObject).
Instead of always using a single version internally (1), we will select
the first available version during object creation. Because we need to
be sure that everything is wired up correctly, we need a feature flag
to control whether the new feature is enabled.
Change-Id: If0f8496397130811f43bf9db9fdcc2b30cd2e4ca
Implement a new service to read retain filters from a bucket and
send them out to storagenodes.
This allows the retain filters to be generated by a separate command on
a backup of the database.
Parallelism (setting ConcurrentSends) and end-to-end garbage collection
tests will be restored in a subsequent commit.
Solves https://github.com/storj/team-metainfo/issues/121
Change-Id: Iaf8a33fbf6987676cc3cf74a18a8078916fe673d
Our Test Versions still requires Go 1.16 to stay compatible with our oldest
uplink versions. These changes make the code compile with Go 1.16.
They also make go generate work in private/apigen/example.
Change-Id: Ib2f7493941a16f361328fe01d2be293f26123719
Previously the paths were set relative to the root of the module;
however, the code did not ensure that we were running relative to the
module directory.
Also, ensure the TypeScript output corresponds to our styling.
Change-Id: I2b3cbd4ea8f2615e35c7b58c6fb8851669c47885
We would like to have a separate process/command to collect bloom
filters from a source other than the production DBs. Such a process will
use the segment loop to build bloom filters for all storage nodes and
will send them to a Storj bucket.
This change adds integration with testplanet, which makes writing
unit tests possible.
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: I7b335c5dafa8cffe265c56b75d8c8f8567580893
This change adds the following endpoints:
- projects/apikeys/{id}: returns a paged list of API keys for the
project specified by the given ID
- apikeys/delete/{id}: deletes the API key specified by the given ID
Additionally, the API Go code generator has been given the ability to
process unsigned integer parameters.
Change-Id: I5ff24e012da24a3f06bea1ebb62bae6ff62f951a
Today each storagenode is expected to have a port that is open to the internet and handles DRPC protocol calls.
When we make an HTTP call to the DRPC endpoint, it hangs until it times out.
This patch changes the behavior: the main DRPC port of the storagenodes can accept HTTP requests and can be used to monitor the status of the node:
* it returns HTTP 200 only if the storagenode is healthy (not suspended / disqualified and online score > 0.9)
* it CAN include information about the current status (per satellite). This is opt-in; you have to configure it explicitly.
This way it becomes extremely easy to monitor storagenodes with external uptime services.
Note: this patch exposes some information which was not easily available before (especially the node status and the used satellites). I think this is acceptable:
* Until we have more community satellites, all storagenodes are connected to the main Storj satellites.
* With community satellites, it's a good thing to have more transparency (an easy way to check who is connected to which satellites).
The implementation is based on this line:
```
http.Serve(NewPrefixedListener([]byte("GET / HT"), publicMux.Route("GET / HT")), p.public.http)
```
This line routes TCP connections that start with `GET / HT...` (an HTTP GET request) to the HTTP server, putting the sniffed prefix back so the request arrives intact.
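As a minimal sketch of the idea (illustrative only, not the actual NewPrefixedListener code), the connections handed to the HTTP server replay the sniffed prefix before the rest of the stream:
```
import "net"

// prefixedConn re-prepends the bytes consumed while sniffing the protocol,
// so the HTTP server still reads the complete "GET / HT..." request.
// Illustrative sketch only.
type prefixedConn struct {
	net.Conn
	prefix []byte
}

func (c *prefixedConn) Read(p []byte) (int, error) {
	if len(c.prefix) > 0 {
		n := copy(p, c.prefix)
		c.prefix = c.prefix[n:]
		return n, nil
	}
	return c.Conn.Read(p)
}
```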
Change-Id: I3700c7e24524850825ecdf75a4bcc3b4afcb3a74
When encoding structs into JSON, byte slices are marshalled as base64-encoded
strings using base64.StdEncoding.Encode():
ea9c3fd42d/src/encoding/json/encode.go (L833-L861)
We, however, expect API Secrets to be encoded as base64URL, so when
a marshalled secret (with a byte slice type) is added to the multinode
dashboard, it fails with `illegal base64 data at input byte XX`.
This change changes the type of the APISecret field in the
multinode/nodes.Nodes struct to use multinodeauth.Secret type instead
of []byte.
multinodeauth.Secret is extended with custom MarshalJSON and
UnmarshalJSON methods which implement the json.Marshaler and
json.Unmarshaler interfaces, respectively.
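A minimal sketch of such methods, assuming Secret is a fixed-size byte array (illustrative, not the exact multinodeauth code):
```
import (
	"encoding/base64"
	"encoding/json"
)

// Secret stands in for multinodeauth.Secret; the fixed size is an assumption.
type Secret [32]byte

// MarshalJSON encodes the secret as a base64URL string instead of the
// base64.StdEncoding that encoding/json uses for plain byte slices.
func (s Secret) MarshalJSON() ([]byte, error) {
	return json.Marshal(base64.URLEncoding.EncodeToString(s[:]))
}

// UnmarshalJSON decodes a base64URL string back into the secret.
func (s *Secret) UnmarshalJSON(data []byte) error {
	var encoded string
	if err := json.Unmarshal(data, &encoded); err != nil {
		return err
	}
	decoded, err := base64.URLEncoding.DecodeString(encoded)
	if err != nil {
		return err
	}
	copy(s[:], decoded)
	return nil
}
```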
Resolves https://github.com/storj/storj/issues/4949
Change-Id: Ib14b5f49ceaac109620c25d7ff83be865c698343
Similar to the existing snapshot-based tests of the satellite/metabase db, we create a migration here which is:
* dedicated to unit tests
* faster (with fewer steps)
* but safe: an additional unit test ensures that the snapshot-based migration and the normal prod migration have the same results.
Change-Id: Ie324b09f64b4553df02247a9461ece305a6cf832
This change implements a unit test for ensuring proper
processing of requests and responses by generated API code.
Additionally, this change requires API handlers to explicitly receive
Monkit scopes rather than assuming that `mon` will always exist in the
generated API code's namespace.
Change-Id: Iea56f139f9dad0050b7d09ea765189280c3466f2
I don't know why the go people thought this was a good idea, because
this automatic reformatting is bound to do the wrong thing sometimes,
which is very annoying. But I don't see a way to turn it off, so best to
get this change out of the way.
Change-Id: Ib5dbbca6a6f6fc944d76c9b511b8c904f796e4f3
- Previously unused struct Endpoint.Request now defines the form
of the request body.
- Path parameters (e.g. "id" in "/delete/{id}") are defined in
the Endpoint.PathParams field.
- Endpoint.Params has been renamed to Endpoint.QueryParams to
eliminate confusion.
Change-Id: Ifef51ca2f362c33086f0e43e936d50b0fdd18aa1
Previously there was no real-time accounting of the storage usage
during copies. Now there is.
Closes https://github.com/storj/storj/issues/4719
Change-Id: I0d536bf551d16208116c3aceac89ed590ec473bf
This change fixes the issue where the API generator would produce
different Go code for the same API definition upon each invocation
due to the random nature of map iteration.
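The usual remedy, sketched here generically rather than as the exact generator code, is to sort the map keys before iterating:
```
import "sort"

// sortedKeys returns a map's keys in a stable order so code generation walks
// the map deterministically. (Sketch; the map type here is illustrative.)
func sortedKeys(endpoints map[string]*Endpoint) []string {
	keys := make([]string, 0, len(endpoints))
	for k := range endpoints {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}
```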
Change-Id: I6770a10faf06311c24f541611c25d0b2b0f8e521
A message intended to diagnose issues in a previous change was
accidentally allowed to remain in finalized code and merged into
the codebase.
Change-Id: I7df662a81e3790de065beeb6db54be5280ac2fa1
This change resolves several issues:
* Imports can no longer be duplicated
* The import statement now conforms to Storj's code style requirements
* Packages are imported as needed, eliminating extraneous imports
Change-Id: Ie021e06bb0d472c65aa3575db94bc81ccac108e3
This change integrates the session management database functionality
with the web application. Claim-based authentication has been removed
in favor of session token-based authentication.
Change-Id: I62a4f5354a3ed8ca80272814aad2448f901eab1b
Implemented project delete endpoint for REST API.
Added project usage status check service method to indicate if project can be deleted.
Updated project invoice status check method to indicate if project can be deleted.
Change-Id: I57dc96efb072517144252001ab5405446c9cdeb4
Implemented new service method for generating API keys.
Implemented new endpoint.
Improved multiple endpoint groups handling.
Change-Id: Iba26fbf9123707b5b4c2d5e8c5a35d507404f24a
Add the storjscan wallets DB to the satellite. For now this DB is a one-to-one mapping of the user's account to a storjscan wallet that the account holder can use to make payments on their Storj account.
Relates to https://github.com/storj/storj/issues/4347
Change-Id: I6e65b15817b90ceb75641244f9bf173c3b4228a7
We want to remind unverified users to verify their emails:
once after 24 hours has passed and again after 5 days has passed.
Add mailservice.Service to satellite core because it is needed by the
chore for sending emails. To add the mailservice.Service to the core,
we create a helper function in satellite/peer.go to avoid duplicating
the code in both api.go and core.go. In addition to the chore, this
change adds methods to users.DB to get unverified users in need of
reminder.
Change-Id: I4e515bdf43f922788b4f965b2efb34fa32288bd1
This will apply an appropriate "subsystem" label to goroutines which are
part of the core, api, repairer, admin, or gc subsystems.
It will also label goroutines whose job it is to watch for slow shutdown
of lifecycle groups (there are a lot of these).
Finally, this will also label goroutines whose job it is to wait on the
toplevel errgroup of a subsystem.
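A minimal sketch of attaching such a label with the standard runtime/pprof package (the helper name and call site are illustrative):
```
import (
	"context"
	"runtime/pprof"
)

// goWithSubsystem starts fn in a goroutine whose profiler samples carry a
// "subsystem" label such as "core", "api", "repairer", "admin", or "gc".
func goWithSubsystem(ctx context.Context, subsystem string, fn func(ctx context.Context)) {
	go pprof.Do(ctx, pprof.Labels("subsystem", subsystem), fn)
}
```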
Change-Id: I560b5fff4a0101300d6c9a67609c2d80d7424486
This change creates the access grant manually to save some CPU
by avoiding root key derivation and avoiding dialing the satellite
for the project ID. It affects only testplanet tests.
Change-Id: I6742bcf699cca51e658f147e6df72c6b3db78d10
Added documentation.
Replaced PUT request with POST request.
Added inline param support for PATCH request.
Replaced unix timestamp handling with RFC-3339 timestamp handling.
Added 'Bearer' method requirement for Authorization header.
Change-Id: I4faa3864051dd18826c2c583ada53666d4aaec44
Implemented new endpoint for project update using apigen.
Implemented new service method compatible with new generated api.
Change-Id: Ic0a7e0bbf3ea942275bd927d6e30cfb7e721e9c1
Testplanet limits the execution of a single test case to 3 minutes.
This change adds a Timeout field to Testplanet's config, so test
cases can configure their own timeout. This is helpful when executing a larger
3rd-party test suite on top of Testplanet.
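A hypothetical usage sketch (the other Config fields and the Run signature follow the usual testplanet pattern; only the Timeout field comes from this change):
```
import (
	"testing"
	"time"

	"storj.io/common/testcontext"
	"storj.io/storj/private/testplanet"
)

func TestLongRunningSuite(t *testing.T) {
	testplanet.Run(t, testplanet.Config{
		SatelliteCount:   1,
		StorageNodeCount: 4,
		UplinkCount:      1,
		// Override the default 3 minute limit for a long-running suite.
		Timeout: 10 * time.Minute,
	}, func(t *testing.T, ctx *testcontext.Context, planet *testplanet.Planet) {
		// long-running 3rd-party test cases go here
	})
}
```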
Change-Id: Ibbf7c5ffdc0a9e723e7e28b885eac084f04c6ca1
Implemented new endpoint for project creation using apigen.
Implemented new service method compatible with new generated api.
Change-Id: I2bae22c8b046f21ec5bb6522f09b9c4e74bdba0c
Implemented account management api key authentication.
Extended IsAuthenticated service method to include both cookie and api key authorization.
Change-Id: I6f2d01fdc6115cb860f2e49c74980a39155afe7e
Rather than starting all servers on 127.0.0.1, start them
on a random local host to try to avoid port exhaustion.
The port exhaustion is just a guess.
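A minimal sketch of the idea (the helper name is illustrative; on Linux the whole 127.0.0.0/8 range resolves to loopback):
```
import (
	"fmt"
	"math/rand"
)

// randomLoopbackHost picks a random host in 127.0.0.0/8 so each planet binds
// its servers to a different loopback address. Illustrative sketch only.
func randomLoopbackHost() string {
	return fmt.Sprintf("127.%d.%d.%d", rand.Intn(256), rand.Intn(256), 1+rand.Intn(254))
}
```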
Change-Id: Ibf31d6a017852238d836291d703642b44ff66c0c