Added a new endpoint to get a project's single bucket usage rollup.
Extended the generation code to handle service method args.
Change-Id: Ief768632a801c047c66e0617056fbd7b30427b33
To get a copied segment, GetLatestObjectLastSegment needs to retrieve inline_data or remote_alias_pieces and other information from the original segment.
Resolves https://github.com/storj/storj/issues/4478
Change-Id: I8c7822c343b1ec3e04683f31a20f71e3097b4b4a
Having the storagenode and storagenode-updater processes in one container
requires a process manager to properly handle the individual processes.
Using a process manager like supervisord requires that you package
supervisord and its configuration in the image, along with the storagenode
and storagenode-updater binaries.
Installing supervisord requires running apk to install it and its
dependencies at build time, which makes it difficult to build multi-platform
images; executing apk requires the build system to be able to run
foreign architectures.
This change adds a dockerfile which will be used to build the base image
for the storagenode and has supervisord packaged. The base image will be
built manually using docker buildx, with QEMU binfmt support.
Updates https://github.com/storj/storj/issues/4489
Change-Id: I33f8f01398a7207bca08d8a4a43f4ed56b6a2473
We decided that the segment limit for paying users should be high
enough that we do not have to change it often.
Fixes https://github.com/storj/storj/issues/4590
Change-Id: Ic1c38bf3e2fcc000548ff4c7e7004647b39fbecf
There are two events in
web/satellite/src/utils/constants/analyticsEventNames.ts which did not
have corresponding entries in the backend analytics service.
Change-Id: If0f67cef2ed312953e580d855d63366e7c12786a
Users will be required to enter an MFA passcode or recovery code
upon attempting a password reset for an account with MFA enabled.
Change-Id: I08d07597035d5a25849dbc70f7fd686753530610
Create a global config to specify a list of country codes that should be
excluded from node selection during uploads.
This exclusion is not implemented when the upload selection cache is
disabled.
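A minimal sketch of how such a config could be wired in (the package, type, field, and method names below are illustrative assumptions, not the actual implementation):
```
package overlay

// Config sketches how an excluded-country list could be carried in config;
// the names here are illustrative, not the actual ones.
type Config struct {
	// UploadExcludedCountryCodes lists country codes whose nodes should not
	// be selected for uploads.
	UploadExcludedCountryCodes []string
}

// allowed reports whether a node located in the given country may be selected.
func (c Config) allowed(countryCode string) bool {
	for _, excluded := range c.UploadExcludedCountryCodes {
		if excluded == countryCode {
			return false
		}
	}
	return true
}
```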
Change-Id: Ic41e8b4f18857a11045668eac23107da99668a72
This change allows us to send newly registered users to a configured URL
to help us track user conversions for marketing campaigns.
Brave conversions continue to be tracked using the /signup-success page
within the satellite app.
Change-Id: I9b451947ce0f39d3c99b233cb4b806d361151823
Added a new projectaccounting query to get a project's single bucket usage rollup.
Added a new service method to call the new query.
Added an implementation for the IsAuthenticated method, which is used by the new generated API.
Change-Id: I7cde5656d489953b6c7d109f236362eb465fa64a
Add a RepairExcludedCountryCodes config flag to the overlay for providing a list of country codes to exclude nodes from target repair selection.
Mark segments with fewer than repairThreshold pieces in countries not in RepairExcludedCountryCodes as not healthy.
With this change, the repair process is not affected. The segment will be removed from the repair queue by the repairer.
Another change will handle the logic at the repairer level.
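A rough sketch of the health check idea (assuming the country of each piece's node is known; the package, function, and parameter names are illustrative):
```
package checker

// countHealthy sketches the idea: pieces held by nodes in excluded countries
// are not counted, so a segment whose remaining count falls below
// repairThreshold is treated as unhealthy.
func countHealthy(pieceCountries []string, excluded map[string]bool) int {
	healthy := 0
	for _, country := range pieceCountries {
		if !excluded[country] {
			healthy++
		}
	}
	return healthy
}

// isUnhealthy reports whether the segment should be considered not healthy.
func isUnhealthy(pieceCountries []string, excluded map[string]bool, repairThreshold int) bool {
	return countHealthy(pieceCountries, excluded) < repairThreshold
}
```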
Fixes https://github.com/storj/team-metainfo/issues/95
Change-Id: I9231b32de117a116488de055a3e94efcabb46e81
Added a feature flag which will be used to indicate whether the new generated console API is used.
Fixed some comments from the previous PR.
Change-Id: Ice31c998b0b347028a491c971a648fd1269bfd49
Return segments when creating a test object so that we can check
that the correct segments remain after a delete action.
Change-Id: Ifc245948935ba278806e887672c03abc5f2c2654
There was a defined type (`validationErrors`) for gathering several
validation errors and classifying them with the `ErrValidation` error class (`errs.Class`).
`errs.Combine` doesn't maintain the classes of the errors it combines,
for example:
```
var myClass errs.Class = "My error class"
err1 := myClass.Wrap(errors.New("error 1"))
err2 := myClass.Wrap(errors.New("error 2"))
err3 := errors.New("error 3")
combinedErr := errs.Combine(err1, err2, err3)
myClass.Has(combinedErr) // It returns false
// Even when only passing errors with a class, and the same one for all
// of them:
combinedErr = errs.Combine(err1, err2)
myClass.Has(combinedErr) // It still returns false
```
Hence `validationErrors` didn't return what we expected when calling
its `Combine` method.
This commit deletes the type and replaces it with `errs.Group` when there
is more than one error, wrapping the error returned by `errs.Group.Err`
with the `ErrValidation` error class.
The bug caused the HTTP API server to return a 500 status code, as you
can see in the following log message extracted from the satellite
production logs:
```
code: 500
error: "console service: validation: full name can not be empty; validation: Your password needs at least 6 characters long; validation: mail: no address"
errorVerbose: "console service: validation: full name can not be empty; validation: Your password needs at least 6 characters long; validation: mail: no address
storj.io/storj/satellite/console.(*Service).CreateUser:593
storj.io/storj/satellite/console/consoleweb/consoleapi.(*Auth).Register:250
net/http.HandlerFunc.ServeHTTP:2047
storj.io/storj/private/web.(*RateLimiter).Limit.func1:90
net/http.HandlerFunc.ServeHTTP:2047
github.com/gorilla/mux.(*Router).ServeHTTP:210
storj.io/storj/satellite/console/consoleweb.(*Server).withRequest.func1:464
net/http.HandlerFunc.ServeHTTP:2047
net/http.serverHandler.ServeHTTP:2879
net/http.(*conn).serve:1930"
message: "There was an error processing your request"
```
The issue was that, not being classified with the `ErrValidation` class,
the error was not picked up by the correct switch branch of the
`consoleapi.Auth.getStatusCode` method, which is in the call chain of the
`consoleapi.Auth.Register` method when `console.Service.CreateUser`
returns an error.
These changes should return the appropriate HTTP status code (Bad
Request) when `console.Service.CreateUser` returns a validation error.
Returning the appropriate HTTP status code also means this no longer
shows up as an error in the server logs, because the Bad Request status
code gets logged at debug level.
Change-Id: I869ea85788992ae0865c373860fbf93a40d2d387
Updates metadata and metainfo to return object metadata with the
FinishCopyObject request.
https://github.com/storj/storj/issues/4474
Change-Id: I32cba5c20a943272e9b5964df1b3d6463ad212dc
We would like to disable in production those parts of code
which are now mixed with new server-side copy logic.
Change-Id: Iff50682bc9545207330f58dd19b5eee53d404d7f
Update the Content Security Policy to whitelist `blob:` for the img-src
and media-src directives. This is necessary to prevent CSP errors in the
object browser while loading previews and object maps.
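For illustration only, with the policy's other sources elided as `...`, the affected directives end up looking roughly like:
```
Content-Security-Policy: ...; img-src ... blob:; media-src ... blob:; ...
```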
Change-Id: Ic32bf0954f300c77ec4f0fe11fae63f0c7b622da
sometimes these scripts want to have an access imported
after it has been potentially modified by having the
satellite address and node id added. it used to use the
uplink command to do this, but the cli api for that
has changed. rather than try to have the script detect
which uplink version is in use and call the right thing
it can always write out a valid yaml file and depend on
the new cli migrating it.
Change-Id: Ib82819699333f5f29e00117b99bfb10640033b94
uplink command versions >= 1.48.0 always do multipart
uploads which cannot be downloaded by any of the
versions in the test < v1.27.6. so skip those tests.
Change-Id: I9644afbd14bfce9facfd87644d132f7d66367d62
Remove special characters from the test database name to improve
compatibility with external tools like the Cockroach web gui.
Change-Id: I3a6a368a5051e3064e7279b14eec4f2d4ff3c435
The text has been expanded a bit to clarify, with an example, that it is necessary to create identity files before using the Docker image.
Changed the <identity-dir> placeholder to <multinode-identity-dir> so no one confuses it with the storagenode identity files.
Changed the <storage-dir> placeholder to <multinode-config-dir> so no one confuses it with the storagenode 'config' folder.
Fixes #4547
Part of server-side copy.
To get a copied segment, GetSegmentByPosition needs to retrieve inline_data or remote_alias_pieces and other information from the original segment.
Fixes https://github.com/storj/storj/issues/4477
Change-Id: I283cbc546df6e1e2fa34a803c7ef280ecd0f65d7
In order to tag the latest release for the multinode image, it makes
the most sense to do it at the same time as we release the storagenode
image.
Change-Id: I2d63c1f93858354ad1f9a4fce0ce45a8fda2716f
Closes #4547
We do not build a Docker image for the multinode dashboard,
which makes monitoring in Docker-focused environments harder.
This adds the basic image and ties it into CI/CD.
Change-Id: I14c01a7f1f0019f6f5c1b8fd75dc424fc362b18d
Currently the metainfo/metabase DB connections are missing the proper
application_name in order to differentiate and filter queries on the DB
side for analytics.
Without it, it is very time-consuming to correlate processes and their load.
This change adds the "check" on DB connection init and passes the fallbacks
in all places to catch connection strings that do not set it.
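A minimal sketch of the fallback idea, assuming URL-style connection strings; the helper name and package are illustrative assumptions, not the actual implementation:
```
package pgutil

import "net/url"

// ensureApplicationName appends application_name to a connection URL when it
// is not already set, so queries can be attributed on the DB side.
func ensureApplicationName(connstr, fallback string) (string, error) {
	u, err := url.Parse(connstr)
	if err != nil {
		return "", err
	}
	q := u.Query()
	if q.Get("application_name") == "" {
		q.Set("application_name", fallback)
		u.RawQuery = q.Encode()
	}
	return u.String(), nil
}
```
For example, a metainfo connection string without the parameter would get something like application_name=satellite-api appended, while strings that already set it are left untouched.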
Change-Id: Iea5cea8658bc63778ff89038e5c1c352bf482cfd
It can be useful to compare object copies created during unit tests.
They appear in the segment_copies table as (stream_id, ancestor_stream_id) pairs.
Change-Id: Id335c3ff7084fe30346456d27e670aff329154ea
The "satellite fetch-pieces" command allows a satellite operator to
fetch as many pieces of a segment as possible, along with their
original order limits and hashes as provided by the storage nodes. The
fetched pieces and associated info will be stored in a specified
folder as they are, rather than being RS-decoded or decrypted.
It is hoped that this will allow easier debugging of certain one-off
problems we've observed in the wild.
Change-Id: I42ae0e9ef0023538e42473a9be5a2460a3ac0f3a
Part of server-side copy implementation.
Creates the methods BeginCopyObject and FinishCopyObject in satellite/metabase/copy_object.go.
This commit makes it possible to copy objects and their segments in metabase.
https://github.com/storj/team-metainfo/issues/79
Change-Id: Icc64afce16e5e0e15b83c43cff36797bb9bd39bc
All code on known satellites at this moment in time should know how to
populate and use the new numeric columns on the
stripecoinpayments_tx_conversion_rates and coinpayments_transactions
tables in the satellite db. However, there are still gob-encoded
big.Float values in the database from before these columns existed. To
get rid of those values, so that we can excise the gob-decoding code
from the relevant sections, however, we need something to read the gob
bytestrings and convert them to numeric values, a few at a time, until
they're all gone.
To accomplish that, this change adds two chores to be run in the
satellite core process: one for the coinpayments_transactions table, and
one for the stripecoinpayments_tx_conversion_rates table. They should
run relatively infrequently, so that we do not impose any undue load on
processing resources or the db.
Both of these chores work without using explicit sql transactions, but
should still be concurrent-safe, since they work by way of
compare-and-swap type operations.
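A rough sketch of the per-row conversion idea (assuming gob-encoded big.Float values; the function, table, and column names below are made up for illustration):
```
package migration

import (
	"bytes"
	"context"
	"database/sql"
	"encoding/gob"
	"math/big"
)

// convertRow decodes one gob-encoded big.Float and writes its numeric form.
// The compare-and-swap style WHERE clause only touches the row if the gob
// bytes are still the ones we read, so concurrent writers are not clobbered.
func convertRow(ctx context.Context, db *sql.DB, id string, gobBytes []byte) error {
	var amount big.Float
	if err := gob.NewDecoder(bytes.NewReader(gobBytes)).Decode(&amount); err != nil {
		return err
	}
	_, err := db.ExecContext(ctx,
		`UPDATE coinpayments_transactions
		 SET amount_numeric = $1
		 WHERE id = $2 AND amount_gob = $3`,
		amount.Text('f', -1), id, gobBytes)
	return err
}
```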
If the satellite core process needs to be restarted, both of these
chores will start scanning for migratable rows from the beginning of
the id space again. This is not ideal, but shouldn't be a problem (as
far as I can tell, there are only a few thousand rows at most in either
of these tables on any production satellite).
Change-Id: I733b7cd96760d506a1cf52735f598c6c3aa19735
the new uplink command expects to be able to migrate
and so we need to specify the --legacy-config-dir
flag sometimes as well. but unfortunately, the old
uplink will error if it gets a flag it doesn't
understand, so we have to set it as an env var.
but we can't use the env var for the --config-dir
flag all the time because the old uplink doesn't
look for that env var.
Change-Id: I019315192c0e6c348814527794342d823a5f9ec3
some old configs had a value like
access: <data>
in the yaml. this would end up causing migration to
create a json file with no access values and with the
data as the default name. that's not what the command
expects to operate on, so now we fix that during
migration and add a little mini migration for any
users that may have hit it.
Change-Id: I4c98ca5d09d043fe9338738ef6b4f930f933892c
In addition to upgrading the storj.io/common library, this change
moves off the TCPConnector in favor of the HybridConnector per
the deprecation warning.
Change-Id: I7e7e1e7568e8b95e4a99ad9caa158a799e68e1e3
Refactor tests to reuse a testplanet instance between some
of the tests. This should decrease the resources needed to run those
tests.
Change-Id: I98f3041ec23085d3903b19acd339904973319ec1
* This commit stubs out the test plan for fuzz testing.
* pushing small change before rebasing.
* This commit implements a rough draft of our security tests, most importantly fuzz tests.
* Moving testplan up a directory and renaming to indicate executive summary.
* Cleaning out fuzzing tests from cmd/uplink.
* Updates to tools list.