Added failed_login_count and login_lockout_expiration columns to the users table to track users' failed login attempts.
We want to prevent brute-forcing of user logins, so this is the first step.
Change-Id: I06b0b9f5415a1922e08cd9908893b2fd3c26bca0
Use the same query when deleting a single object or multiple.
I have chosen not to deduplicate the row "scan" logic because
the code is less complicated as-is, and deduplicating would expand
the change into other parts of the codebase.
Part of https://github.com/storj/storj/issues/4700
Change-Id: I7a958c78c903b2bddd72ca217971f7e8e02a0d0c
The initial space used for pieces is calculated, not retrieved
from storage nodes, and at the end of the test we also delete
copies that became ancestors, to verify that all data
was removed from storage nodes.
Change-Id: I9804adb9fa488dc0094a67a6e258c144977e7f5d
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage in byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees, so return segment-hours and object-hours too.
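To illustrate the difference, here is a minimal sketch; the row shape
is an assumption, not the actual query types:
```
// Each row is an hourly tally for an attributed bucket.
type hourlyTally struct {
    Bytes, Segments, Objects float64
}

// Summing the hourly tallies yields byte-hours, segment-hours and
// object-hours directly; dividing by len(rows), as before, would turn
// them back into per-hour averages.
func totals(rows []hourlyTally) (byteHours, segmentHours, objectHours float64) {
    for _, r := range rows {
        byteHours += r.Bytes
        segmentHours += r.Segments
        objectHours += r.Objects
    }
    return byteHours, segmentHours, objectHours
}
```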
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
We implemented the server-side copy feature and would like to
confirm that it does not affect GC.
Fixes https://github.com/storj/storj/issues/4696
Change-Id: Id391f0badf5fce51f9910f0df732d477b07fa7ac
S3 allows overwriting an object when using server-side copy.
This change makes overwriting the destination part of the atomic server-side copy operation, so that
if the copy fails, the old object is still available.
All the segments of the existing destination are deleted. If this destination object is an ancestor of another object, a new ancestor is promoted.
Fixes https://github.com/storj/storj/issues/4607
Change-Id: I85d1250850bb71867586ac230c8275a0cc1b63c3
Implemented a new endpoint for project creation using apigen.
Implemented a new service method compatible with the new generated API.
Change-Id: I2bae22c8b046f21ec5bb6522f09b9c4e74bdba0c
When deleting a bucket, make sure that object copies in other buckets are
promoted to a new ancestor and left in a working state.
Closes https://github.com/storj/storj/issues/4591
Change-Id: I019d916cd6de5ed51dd0dd25f47c35d0ec666af6
To save load on DNS servers, the repair code first tries to dial the
last known good ip and port for a node, and then falls back to a DNS
lookup only if we fail to connect to the last known good ip and port.
However, it looks like we are seeing errors during the client stream
Close() call (probably due to quic-go code), and those are classified
the same as errors encountered during Dial. The repairer code sees this
error, assumes that we failed to contact the node, and retries, but
since we did actually succeed in connecting the first time around, this
results in submitting the same order limit (with the same serial number)
to the storage node, which (rightfully) rejects it.
So together with change I055c186d5fd4e79560f67763175bc3130b9bc7d2 in
storj/uplink, this should avoid the double submission and avoid dinging
nodes' suspension scores unfairly.
See https://github.com/storj/storj/issues/4687.
Also, move the testsuite directory check above check-monkit in the
Jenkins Lint task, so that a non-tidy testsuite/go.mod can be recognized
and handled before everything breaks weirdly and seemingly randomly
later on.
Change-Id: Icb2b05aaff921d0af6aba10e450ac7e0a7bb2655
Moved invalid email testing to a separate test.
Changed all emails used to have a .test domain.
Added links to regex resources.
Change-Id: I26920ba7360064528256a6aeaea947bbe56ef618
Implemented account management API key authentication.
Extended the IsAuthenticated service method to include both cookie and API key authorization.
Change-Id: I6f2d01fdc6115cb860f2e49c74980a39155afe7e
This change has two purposes. The first is to avoid a DB call when the
source and destination buckets are the same.
The second is to return the bucket-not-found error in the correct
order. If the source and destination buckets are different, we should
first check the source and later the destination. Currently we get the
error about the non-existing destination bucket first.
Because of this change we stop putting the bucket placement
into the satellite stream id, but it's not needed, as we don't use
this value with the finish move/copy object methods.
Change-Id: I0f7b3ba604d53c722e8fa4d7a37843a69d02bebd
Uplink is fixed and now we should always get both key and nonce
or both empty.
Fixes https://github.com/storj/storj/issues/4646
Change-Id: I65dca2d4d5a10787c2fecad39e301121f1ae242a
The latest CRDB version didn't work with our server-side copy deletion
query, and we had to rewrite it.
Fixes https://github.com/storj/storj/issues/4655
Change-Id: I66d5c4abfc3c3f53dea8bc01ae11cd2b020e4289
The method was never used in production, and it's not certain that
it will be used at all. Let's drop it and restore it if needed.
Fixes https://github.com/storj/storj/issues/4480
Change-Id: Ifd780d0096b67be7e72dff84bdcf1d957e0b48b5
This sets the corresponding _numeric columns to be NOT NULL (it has been
verified manually that there are no more NULL _numeric values on any
known satellites, and it should be impossible with current code to get
new NULL values in the _numeric columns).
We can't drop the _gob columns immediately, as there will still be code
running that expects them, but once this version is deployed we can
finally drop them and be totally done with this crazy 5-step migration.
Change-Id: I518302528d972090d56b3eedc815656610ac8e73
If a visitor has accepted cookies on www.storj.io, there might be a
"hubspotutk" cookie in their browser upon account creation. This allows
Hubspot to link website activity with a newly created user.
Change-Id: If06c67fb4d2e5dd3cf46c1fe80a0e9d7f25d6e58
We don't need to have every single test for both, only one for
each should be sufficient. For all other tests it doesn't matter
which one we use.
Change-Id: I9962206a4ee025d367332c29ea3e6bc9f0f9a1de
Embedded files significantly increase the binary size when linking.
Add a build tag that allows disabling embedding of the built npm code.
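A minimal sketch of the pattern; the tag name, package, and asset
directory here are assumptions, not the actual names used:
```
//go:build !noembed

package assets

import "embed"

// Assets holds the built npm output. Building with "-tags noembed"
// excludes this file, so the binary links without the web assets.
//
//go:embed dist
var Assets embed.FS
```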
Change-Id: I9d1fd7376d1fa035965c33d259faaa6c4770dfe1
So far we assumed that the metadata key/nonce cannot be empty at all,
but at some point we adjusted the code to accept an empty metadata
key/nonce to save DB space.
This change adjusts how we process the nonce in
FinishMoveObject/FinishCopyObject. We can use storj.Nonce directly,
which makes the code cleaner. It also fixes an issue in FinishMoveObject
where we didn't convert the nonce correctly to []byte.
Part of this change disables validation for key and nonce until
uplink is adjusted. We need to change uplink to always send
both key and nonce or neither of them. Validation will be restored
as soon as the uplink change is merged.
https://github.com/storj/storj/issues/4646
Change-Id: Ia1772bc430ae591f54c6a9ae0308a4968aa30bed
This also fixes the build order. Unfortunately we need
to ensure that the web frontends are built before installing
Go binaries.
Fixes https://github.com/storj/storj/issues/4654
Change-Id: I5d1c83125fd3d1a454d3400b2cbdd44bd3f2250c
Add uplink-php and nextcloud as user agents. The sending of these
user agents was added in recent releases of these clients.
Change-Id: Ia2732ade1d9e5cf8d4e41fe246faec3feaa58c25
We are in the process of creating an api to allow users to manage their
accounts programmatically. We would like to use api keys for
authorization. We were originally going to create an entirely new table
for these api keys, but seeing as we already have 2 other tables for
keys/tokens, api_keys and oauth_tokens, we thought it might be better to
use one of these. We're using oauth_tokens.
We create a new oidc.OAuthTokenKind for account management api keys:
KindAccountManagementTokenV0. We made the key versioned because we
likely want to improve the implementation in the future, but we want to
get something functional out the door ASAP because the account management
api feature is highly desired.
Add a new method to oidc.OAuthTokens interface for revoking v0 account
management api keys, RevokeAccountManagementTokenV0. Add update method
to dbx implementation to allow updating the expiration. We will revoke
these keys by setting the expiration to 0 so they are expired.
Change-Id: Ideb8ae04b23aa55d5825b064b5e43e32eadc1fba
Uplink has some types aliased from the storj/common repo. It's like
that for easier type replacement if we decide to use a custom type
instead of aliasing. Because in storj/storj we are not using the
aliases, it's impossible to do this refactoring on the uplink side.
This change cleans up the situation.
Change-Id: I20c8e31b9a821983483af1c67b2e7bb91397fd9d
Added new endpoint to get all bucket rollups by bucket ID.
Example of response:
vitalii:~/Documents$ ./testapi.sh
HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 07 Mar 2022 11:18:55 GMT
Content-Length: 671
[{"projectID":"a9b2b1b6-714a-4c49-99f1-6a53d0852525","bucketName":"demo-bucket","totalStoredData":0.0026272243089674662,"totalSegments":0.05000107166666666,"objectCount":0.03333373083333333,"metadataSize":1.6750359008333334e-9,"repairEgress":0,"getEgress":0,"auditEgress":0,"since":"2022-03-01T11:00:00Z","before":"2022-03-07T11:17:07Z"},{"projectID":"a9b2b1b6-714a-4c49-99f1-6a53d0852525","bucketName":"qwe","totalStoredData":0.000018436725422435552,"totalSegments":0.016667081388888887,"objectCount":0.016667081388888887,"metadataSize":1.933381441111111e-9,"repairEgress":0,"getEgress":0,"auditEgress":0,"since":"2022-03-01T11:00:00Z","before":"2022-03-07T11:17:07Z"}]
Change-Id: I8b04b24dbc67b78be5c309ce542bf03d6f67e65d
Add a missing instruction step for allowing Go to embed the files
generated by the UI build process into the satellite binary.
Change-Id: Ie9223b8bb5317e53e692e3aa1d1086977daa17c9
We have an issue with the latest CRDB. A single query cannot modify
the same table multiple times, and the build is now blocked.
This change unblocks the build by:
* adjusting the query for inserting into the repair queue
* temporarily removing the code for server-side copy deletion
* temporarily disabling backward compatibility tests for CRDB
Change-Id: Idd9744ebd228e5dc05bdaf65cfc8f779472a975d
Chronograph statistics indicate that much of our Gateway-MT traffic may
originate from rclone and is also metriced as rclone traffic. This makes
it difficult to understand what our users are doing. This solution makes
it clear which products are actually being used, likely without
increasing the cardinality of our metrics by more than one.
Change-Id: I5d5e2af3715fa0864f69f1145fd78caf7e4a4224
Remove the redundant suspension timestamp column from the nodes and reputation tables.
The suspended timestamp was moved to unknown_audit_suspended, and the suspended column is
no longer used, so there is no point in keeping both.
Change-Id: Ieea3f12141b33ec9efe7594f4c9dbc7e10675b0e
If B is a copy of A, and C is a copy of B, then in the segment_copies table, it should appear that C is a copy of A.
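A minimal sketch of the rule, using a simplified in-memory model of the
segment_copies lookup (identifiers are illustrative):
```
// ancestorFor returns the stream id to record as ancestor_stream_id
// when creating copy C of the object with the given stream id.
func ancestorFor(streamID string, copies map[string]string) string {
    if ancestor, ok := copies[streamID]; ok {
        // B is itself a copy of A, so the new copy C points at A.
        return ancestor
    }
    // B is an original, so C is recorded as a copy of B.
    return streamID
}
```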
Fixes https://github.com/storj/storj/issues/4538
Change-Id: I7e6b03f7584597cf616cd1e0cd0156386771d207
This change adds an integration test that performs an OAuth
workflow and verifies the OIDC endpoints are functioning as
expected.
Change-Id: I18a8968b4f0385a1e4de6784dee68e1b51df86f7
In the server-side copy initial implementation, we are inserting segments one by one. This PR inserts them all at once.
Fixes https://github.com/storj/storj/issues/4476
Change-Id: I776dba99be38a0eef73366e8e9287cbb794003dc
For server-side copy we adjusted one method, DeleteObjectExactVersion.
Other deletion methods won't be used directly in the code at the moment.
We will adjust the other methods later or decide whether we need them at
all.
To correctly handle deletion of objects with copies, or of the copies
themselves, we need to use the DeleteObjectExactVersion method in two
places:
* removing the object before upload
* explicit object deletion
This change also changes the DeleteObjectExactVersion method to
delete pending objects, because we need this functionality to
delete the object before a new upload.
https://github.com/storj/storj/issues/4481
Change-Id: Ieff5cc95732bb70ed8cc0ecdd62e03c929857c02
We were not checking if we were provided an empty StreamID.
Furthermore, this change returns the object copy with the correct createdAt field.
Change-Id: Iefc563c34ae9d8c1e233895155c1718bf905df91
This change adds endpoints for supporting OpenID Connect (OIDC) and
OAuth requests. This allows application developers to easily
develop apps with Storj using common mechanisms for authentication
and authorization.
Change-Id: I2a76d48bd1241367aa2d1e3309f6f65d6d6ea4dc
Reworked email validation for new users (for old users trying to log in or reset their password, validation remains the same).
The regular expression was built according to RFC 5322 and then extended to include international characters.
Change-Id: Id0224fee21a1ec0f8a2dcca5b8431197dee6b9d3
When performing re-authorizations for OAuth, we need to pull up an
APIKey using its project id and name. This change also updates the
APIKeyInfo struct to return the head value associated with an API
key.
Change-Id: I4b40f7f13fb9b58a1927dd283b42a39015ea550e
Update the user to the default paid tier project limit, which is currently 3 projects, when the user upgrades to a paid account.
Change-Id: I95b19d62cebc7d878b716355f2ebcaf0b51ca3f7
For nodes in excluded areas, we don't necessarily want to remove them
from the pointer, but we do want to increase the number of pieces in the
segment in case those excluded area nodes go down. To do that, we
increase the number of pieces repaired by the number of pieces in
excluded areas.
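The arithmetic, as a hedged sketch; the names are illustrative, not the
repairer's actual fields:
```
// numPiecesToRepair requests enough new pieces to reach the optimal
// count, plus replacements for the pieces held by nodes in excluded
// areas, in case those nodes go down.
func numPiecesToRepair(optimal, healthy, inExcludedAreas int) int {
    return optimal - healthy + inExcludedAreas
}
```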
Change-Id: I0424f1bcd7e93f33eb3eeeec79dbada3b3ea1f3a
Copy object functionality should support setting new metadata for
the copy. This change adjusts the FinishCopyObject method to set new
metadata when the OverrideMetadata field is set to true.
Fixes https://github.com/storj/storj/issues/4483
Change-Id: Ica37cb57e8edae301cdc483fbda4f3ddba5d2702
Added a new endpoint to get a project's single bucket usage rollup.
Extended the generation code to handle service method args.
Change-Id: Ief768632a801c047c66e0617056fbd7b30427b33
Getting a copied segment by GetLatestObjectLastSegment needs to retrieve inline_data or remote_alias_pieces and other information from the original segment.
Resolves https://github.com/storj/storj/issues/4478
Change-Id: I8c7822c343b1ec3e04683f31a20f71e3097b4b4a
We decided that we want the segment limit for paying users to be high
enough that we don't have to change it too often.
Fixes https://github.com/storj/storj/issues/4590
Change-Id: Ic1c38bf3e2fcc000548ff4c7e7004647b39fbecf
There are two events in
web/satellite/src/utils/constants/analyticsEventNames.ts which did not
have corresponding entries in the backend analytics service.
Change-Id: If0f67cef2ed312953e580d855d63366e7c12786a
Users will be required to enter a MFA passcode or recovery code
upon attempting a password reset for an account with MFA enabled.
Change-Id: I08d07597035d5a25849dbc70f7fd686753530610
Create global config to specify a list of country codes that should be
excluded from node selection during uploads.
This exclusion is not implemented when the upload selection cache is
disabled.
Change-Id: Ic41e8b4f18857a11045668eac23107da99668a72
This change allows us to send newly registered users to a configured URL
to help us track user conversions for marketing campaigns.
Brave conversions continue to be tracked using the /signup-success page
within the satellite app.
Change-Id: I9b451947ce0f39d3c99b233cb4b806d361151823
Added a new projectaccounting query to get a project's single bucket usage rollup.
Added a new service method to call the new query.
Added an implementation for the IsAuthenticated method, which is used by the new generated API.
Change-Id: I7cde5656d489953b6c7d109f236362eb465fa64a
Add a RepairExcludedCountryCodes config flag for overlay for providing a list of country codes to exclude nodes from target repair selection.
Mark segments with fewer than repairThreshold pieces in countries not in RepairExcludedCountryCodes as unhealthy.
With this change, the repair process is not yet affected; such segments will be removed from the repair queue by the repairer.
Another change will handle the logic at the repairer level.
Fixes https://github.com/storj/team-metainfo/issues/95
Change-Id: I9231b32de117a116488de055a3e94efcabb46e81
Added a feature flag which will be used to indicate whether the new generated console API is used.
Fixed some comments from the previous PR.
Change-Id: Ice31c998b0b347028a491c971a648fd1269bfd49
Return segments when creating a test object so that it can be checked
that the correct segments remain after a delete action.
Change-Id: Ifc245948935ba278806e887672c03abc5f2c2654
There was a defined type (`validationErrors`) for gathering several
validation errors and classifying them with the `ErrValidation errs.Class`.
`errs.Combine` doesn't maintain the classes of the errors it combines,
for example
```
var myClass errs.Class = "My error class"
err1 := myClass.Wrap(errors.New("error 1"))
err2 := myClass.Wrap(errors.New("error 2"))
err3 := errors.New("error 3")
combinedErr := errs.Combine(err1, err2, err3)
myClass.Has(combinedErr) // It returns false

// Even when only passing errors with a class, and with the same one
// for all of them
combinedErr = errs.Combine(err1, err2)
myClass.Has(combinedErr) // It still returns false
```
Hence `validationErrors` didn't return what we expected when
calling its `Combine` method.
This commit deletes the type and replaces it with `errs.Group` when
there is more than one error, wrapping the error returned by
`errs.Group.Err` with the `ErrValidation` error class.
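A minimal sketch of the replacement, assuming the zeebo/errs package;
the validation checks shown are illustrative:
```
var ErrValidation = errs.Class("validation")

func validateUser(fullName, password string) error {
    var group errs.Group
    if fullName == "" {
        group.Add(errs.New("full name can not be empty"))
    }
    if len(password) < 6 {
        group.Add(errs.New("password must be at least 6 characters long"))
    }
    // group.Err() combines the gathered errors (or returns nil), and
    // wrapping applies the class, so ErrValidation.Has(err) is true.
    return ErrValidation.Wrap(group.Err())
}
```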
The bug caused the HTTP API server to return a 500 status code, as you
can see in the following log message extracted from the satellite
production logs:
```
code: 500
error: "console service: validation: full name can not be empty; validation: Your password needs at least 6 characters long; validation: mail: no address"
errorVerbose: "console service: validation: full name can not be empty; validation: Your password needs at least 6 characters long; validation: mail: no address
storj.io/storj/satellite/console.(*Service).CreateUser:593
storj.io/storj/satellite/console/consoleweb/consoleapi.(*Auth).Register:250
net/http.HandlerFunc.ServeHTTP:2047
storj.io/storj/private/web.(*RateLimiter).Limit.func1:90
net/http.HandlerFunc.ServeHTTP:2047
github.com/gorilla/mux.(*Router).ServeHTTP:210
storj.io/storj/satellite/console/consoleweb.(*Server).withRequest.func1:464
net/http.HandlerFunc.ServeHTTP:2047
net/http.serverHandler.ServeHTTP:2879
net/http.(*conn).serve:1930"
message: "There was an error processing your request"
```
The issue was that, not being classified with the `ErrValidation`
class, the error was not picked up by the correct switch branch of the
`consoleapi.Auth.getStatusCode` method, which is in the call chain of
the `consoleapi.Auth.Register` method when it calls
`console.Service.CreateUser` and returns an error.
These changes should return the appropriate HTTP status code (Bad
Request) when `console.Service.CreateUser` returns a validation error.
Returning the appropriate HTTP status code also means this no longer
shows up as an error in the server logs, because the Bad Request status
code gets logged at debug level.
Change-Id: I869ea85788992ae0865c373860fbf93a40d2d387
Updates metadata and metainfo to return object metadata with
FinishCopyObject request.
https://github.com/storj/storj/issues/4474
Change-Id: I32cba5c20a943272e9b5964df1b3d6463ad212dc
We would like to disable in production those parts of the code
which are now mixed with the new server-side copy logic.
Change-Id: Iff50682bc9545207330f58dd19b5eee53d404d7f
Update the Content Security Policy to whitelist `blob:` for the img-src
and media-src directives. This is necessary to prevent CSP errors in the
object browser while loading previews and object maps.
Change-Id: Ic32bf0954f300c77ec4f0fe11fae63f0c7b622da
Remove special characters from the test database name to improve
compatibility with external tools like the Cockroach web gui.
Change-Id: I3a6a368a5051e3064e7279b14eec4f2d4ff3c435
Part of server-side copy.
For getting a copied segment GetSegmentByPosition needs to retrieve inline_data or remote_alias_pieces and other information from the original segment.
Fixes https://github.com/storj/storj/issues/4477
Change-Id: I283cbc546df6e1e2fa34a803c7ef280ecd0f65d7
Currently the metainfo/metabase DB connections are missing the proper
application_name in order to differentiate and filter queries on the DB
side for analytics.
Without it, it is very time-consuming to correlate processes and their load.
This change adds the "check" on DB connection init and passes the fallbacks
in all places, to catch connection strings that do not set it.
Change-Id: Iea5cea8658bc63778ff89038e5c1c352bf482cfd
It can be useful to compare object copies created during unit tests.
They appear in the segment_copies table as the pair (stream_id, ancestor_stream_id).
Change-Id: Id335c3ff7084fe30346456d27e670aff329154ea
The "satellite fetch-pieces" command allows a satellite operator to
fetch as many pieces of a segment as possible, along with their
original order limits and hashes as provided by the storage nodes. The
fetched pieces and associated info will be stored in a specified
folder as they are, rather than being RS-decoded or decrypted.
It is hoped that this will allow easier debugging of certain one-off
problems we've observed in the wild.
Change-Id: I42ae0e9ef0023538e42473a9be5a2460a3ac0f3a
Part of server-side copy implementation.
Creates the methods BeginCopyObject and FinishCopyObject in satellite/metabase/copy_object.go.
This commit makes it possible to copy objects and their segments in metabase.
https://github.com/storj/team-metainfo/issues/79
Change-Id: Icc64afce16e5e0e15b83c43cff36797bb9bd39bc
All code on known satellites at this moment in time should know how to
populate and use the new numeric columns on the
stripecoinpayments_tx_conversion_rates and coinpayments_transactions
tables in the satellite db. However, there are still gob-encoded
big.Float values in the database from before these columns existed. To
get rid of those values, so that we can excise the gob-decoding code
from the relevant sections, we need something to read the gob
bytestrings and convert them to numeric values, a few at a time, until
they're all gone.
To accomplish that, this change adds two chores to be run in the
satellite core process: one for the coinpayments_transactions table, and
one for the stripecoinpayments_tx_conversion_rates table. They should
run relatively infrequently, so that we do not impose any undue load on
processing resources or the db.
Both of these chores work without using explicit sql transactions, but
should still be concurrent-safe, since they work by way of
compare-and-swap type operations.
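A sketch of one such compare-and-swap step; the table and column names
are assumptions:
```
import (
    "context"
    "database/sql"
)

// migrateOne rewrites the row only if the gob bytes are unchanged
// since we read them, so concurrent writers cannot clobber each other
// and no explicit transaction is needed.
func migrateOne(ctx context.Context, db *sql.DB, id, gobBytes []byte, numeric string) error {
    _, err := db.ExecContext(ctx, `
        UPDATE coinpayments_transactions
        SET amount_numeric = $3
        WHERE id = $1 AND amount_gob = $2`,
        id, gobBytes, numeric)
    return err
}
```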
If the satellite core process needs to be restarted, both of these
chores will start scanning for migratable rows from the beginning of
the id space again. This is not ideal, but shouldn't be a problem (as
far as I can tell, there are only a few thousand rows at most in either
of these tables on any production satellite).
Change-Id: I733b7cd96760d506a1cf52735f598c6c3aa19735
In addition to upgrading the storj.io/common library, this change
moves off the TCPConnector in favor of the HybridConnector per
the deprecation warning.
Change-Id: I7e7e1e7568e8b95e4a99ad9caa158a799e68e1e3
Test refactoring to reuse the testplanet instance between some
of the tests. This should decrease the resources needed to run those
tests.
Change-Id: I98f3041ec23085d3903b19acd339904973319ec1
Added an InactivityTimerEnabled flag to enable/disable the feature.
Added InactivityTimerDelay to configure the delay time in seconds.
The default timer is set to 10 minutes.
The timer resets on DOM events: keypress, mouseover, mousedown, touchmove.
Change-Id: Idb66067c2902b2cdbe1a972225319c8abff97927
For some reason a very small interval for the chore was causing
a very long execution time for the Pause() method, and that was
the reason why a simple test was slow.
Change-Id: Iebfdea16e31f9865493de3e1bd7d69c6f9116fc0
The Update method of the usersDB takes a console.User as an argument.
To update the columns in the DB, we have to migrate the fields from the
console.Users struct into a special dbx struct. If one of these fields
is left empty, then the zero value of that field's type will be used to
update the respective column.
In most cases where the users.Update method is called, the entire
console.User is apparently retrieved first, fields are updated, then it
is passed to users.Update. This is not the case for
service.UpdateAccount. Because these fields are not populated in the
user struct in UpdateAccount before it is passed into users.Update,
their respective columns in the database are overwritten with zero:
ProjectLimit, ProjectStorageLimit, ProjectBandwidthLimit,
ProjectSegmentLimit, PaidTier, MfaEnabled, MfaRecoveryCodes, MfaSecretKey
Solution: Do what is done in other places which call users.Update. Take
the console.User from the auth context, update the relevant fields on
that, then pass that in.
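In pattern form, the fix looks roughly like this; the helper and field
names only approximate the console service and are not exact:
```
func (s *Service) UpdateAccount(ctx context.Context, fullName, shortName string) error {
    // Load the complete user tied to the request's auth context.
    user, err := console.GetUser(ctx)
    if err != nil {
        return err
    }
    // Change only the fields being updated; every other field keeps
    // its stored value, so no column is overwritten with a zero value.
    user.FullName = fullName
    user.ShortName = shortName
    return s.store.Users().Update(ctx, *user)
}
```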
Change-Id: I3cbd560e8ea5397e5c27711fb40bb3907d987028
The segment_copies table has (stream_id, ancestor_stream_id) as its primary key,
but it should have only stream_id as primary key, as a stream_id can only have one ancestor_stream_id.
https://github.com/storj/team-metainfo/issues/84
Change-Id: I0218295e86e449c9dfefa5a21319f3083bf80afd
A user-agent string can contain multiple "products", in the case of
Gateway-MT at least this includes the HTTP client's full user agent.
This means that "other" is often logged even when we know the Storj
product, and sometimes logged more than once per call to "collect".
This makes sure that "other" is only logged if a product isn't
identified, and only logged once.
Change-Id: I8536f7eb32877e36fec97dab7b8d477ccb10f92e
For a thorough explanation of the overall transition, see the message on
commit c053bdbd70.
This change will rename the columns containing gob-encoded big.Floats
and add new columns which will contain the equivalent data in a more
sql-friendly format.
The change should *not* break already-running satellite processes,
because all functionality touching these tables has already been taught
to work with these new columns if it sees any "undefined column" errors.
Change-Id: I229324376533e383c5d05064b8aedad149cf825b
transfer-sh will be set as of https://github.com/dutchcoders/transfer.sh/pull/467.
filezilla needs to be verified, and duplicati is set per info from @TopperDEL.
comet and orbiter are added as preparation.
Change-Id: I44d730a7b3ba1969068e48c2477b478831799cd1
Part of the server-side copy implementation.
Adds a new table to metabase for keeping references between
copied segments and their ancestors.
Fixes https://github.com/storj/team-metainfo/issues/84
Change-Id: I436d4b533a3d951bcd752e222e3b721cee7f0844
This change adds some more checks to the deletion process for projects and
users, since we ran into a race condition during invoicing where projects
were deleted before the invoicing was finished, leading to missing
references.
This PR changes the logic to block user deletion if we are in exactly that period,
while still allowing the deletion of free-tier projects/users during the month.
Change-Id: Ic0735205e6633762fb7e3c2fa13e744cdfa5ec32
Finished implementing queries for both bandwidth and storage using pgx.Batch.
Fixed a CSP styling issue.
Change-Id: I5f9e10abe8096be3115b4e1f6ed3b13f1e7232df
There is a sev-2 issue to add more browser caching.
In this PR I made the object map and object preview be fetched by a signed request with non-public credentials, using the AWS SignatureV4 package.
Change-Id: Ib5013fa6d6af3faa97eed5168c11a13f9629cd87
Before this change we were returning the full DB error message.
That can be very confusing for the end user. This change translates the
error message into a more user-friendly version and also fixes the DRPC
error status code.
Fixes https://github.com/storj/team-metainfo/issues/76
Change-Id: I29b06ab4ba50a0d14db7a822a2906d95d65ab524
We don't have any object version other than 1, so at the moment
this method is not needed. Also, using GetObjectExactVersion
should be slightly more performant.
Change-Id: I78235d8ae22594cc1d6345dabcc915f41cd7797b
We already split the main code base; now we need to split the tests
to reflect the new file structure (bucket/object/segment/other).
Fixes https://github.com/storj/team-metainfo/issues/12
Change-Id: Ica1054c4fc7df764483b03f204b4beba094df8e1
We have two migrations here, in fact. One is for existing users:
we need to check if a user is a paying user (paid_tier) and set 1M for
them and 150K for the others.
The second migration sets limits for projects depending on the owner.
If the owner is a paying user (paid_tier) then the project should have
a 1M limit, otherwise it should be 150K. To make this migration faster,
the projects table segment_limit was initially set to 1M by default.
With the migration we select all paying users and set the 150K limit
for all projects whose owners are not in the paying users set.
Initially we had a concern whether that query would be quick enough to
be executed during deployment, but after investigation the CRDB
team confirmed that this should take seconds for our DBs.
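A sketch of the second migration's UPDATE; the column names are
assumptions:
```
// Projects default to the 1M segment limit, so only projects whose
// owners are not paying users need to be set back to 150K.
const setFreeTierSegmentLimits = `
    UPDATE projects SET segment_limit = 150000
    WHERE owner_id NOT IN (
        SELECT id FROM users WHERE paid_tier = true
    )`
```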
Fixes https://github.com/storj/team-metainfo/issues/70
Change-Id: I8be06e9f949b68b993e043cc15525e8483bf49ea
Existing tests check errors by comparing error messages.
We plan to expose storage/segment limit errors as a public uplink
API, but before that happens we need to update the tests to handle
both cases, as the new uplink error will have a slightly different
error message than the currently tested error.
https://github.com/storj/uplink/issues/77
Change-Id: Id2706323c60d050d96752e66e859d4ec051a69b9
Implemented endpoint and query to get bandwidth chart data for new project dashboard.
Connected backend with frontend.
Storage chart data is mocked right now.
Change-Id: Ib24d28614dc74bcc31b81ee3b8aa68b9898fa87b
This is part of change to split metainfo endpoint into smaller
files. It will be grouped by bucket/object/segment/other requests.
Tests will be split as a separate set of changes.
Updates https://github.com/storj/team-metainfo/issues/12
Change-Id: I6c691e4d0e192fe3ad7974d2d0ab5ced0d272f3c
For better performance we should use AOST for getting
data where being up to date is not very crucial. We don't
care about small differences for the segment limit calculation.
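For example, a minimal sketch of such a read; the staleness interval
and column names are assumptions:
```
import (
    "context"
    "database/sql"
)

// AS OF SYSTEM TIME lets CockroachDB serve slightly stale data without
// contending with concurrent writers, which is fine for limit checks.
func segmentUsage(ctx context.Context, db *sql.DB, projectID []byte) (int64, error) {
    var usage int64
    err := db.QueryRowContext(ctx, `
        SELECT segment_usage FROM projects
        AS OF SYSTEM TIME '-10s'
        WHERE id = $1`, projectID).Scan(&usage)
    return usage, err
}
```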
Change-Id: I9b2d4f2bd15ebc9d1c46bc84dd51a2e9d9231506
This is part of change to split metainfo endpoint into smaller
files. It will be grouped by bucket/object/segment/other requests.
Tests will be split as a separate set of changes.
Change-Id: I9b097dcc8fa889f985b7f4ef5f8f435a1ff0ef95
Additional test case to cover the situation when a cache value
has expired and we need to get the value directly from the DB.
Change-Id: I07bda67e19333e09a567104ce70f112fd47a7845
The database table got invalid input and the resulting error
was not checked. This adds updates that contain invalid fields
to trigger different errors.
Change-Id: Iacea32cbef5599aab562c88e4113073596cc9996
We had two problems here. The first was how we were handling
error messages from GetProjectSegmentUsage. We were always
returning an error, even for ErrKeyNotFound, which should be used
to refresh the value in the cache and not be propagated out of the method.
Second, we were using the wrong value to update the cache. We used
the current value from the cache, which is obviously not what we intended.
Fixes https://github.com/storj/storj/issues/4389
Change-Id: I4d7ca7a1a0a119587db6a5041b44319102ef64f8
This is part of change to split metainfo endpoint into smaller
files. It will be grouped by bucket/object/segment/other requests.
Tests will be split as a separate set of changes.
Change-Id: I5128c84e06c82777fe71460bf5f9a6e26e52a243
Currently the rate limit is kept per satellite API endpoint.
Since we run 9+ API endpoints in production, we do not need
a limit of 1000 per endpoint, since the intention was to allow 1000 total.
This change reduces the effective limit, given 9 instances,
down to 900, which should be close enough.
Change-Id: Ia579149ccc3a12e8febe0cfd5586b8a39de40f55
We were returning pure non-RPC errors in two cases.
This change adds logging and returns the correct RPC
error.
Change-Id: I581ceb17dcdc00921dfa3c1057015c3b4d04308d
Users signing up through a url containing a promo code will have that code applied to their stripe account instead of the free tier coupon.
Change-Id: I071041b0934648ef3f5bdb05b6ec97c400f89ae4
A few months ago we removed all references to the contained
column in the nodes and reputations tables in
bb21551a9c
and
56fe636123,
but we never did the migration to drop the columns.
This commit finally does that.
Change-Id: I82aa2f257b1fb14a2f1c4c4a1589f80895360ae4
The previous change (59648dc272) ended up removing a lot of characters
from valid non-English names. Instead, only replace URL characters such
as slashes, colons, and periods. Since someone may use these characters
to separate two parts of a name, e.g. Name1/Name2, replace these
characters with a hyphen.
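A minimal sketch of the narrower replacement; the character set mirrors
the ones named above, and the function names are illustrative:
```
import "strings"

// replaceURLChars swaps URL-meaningful characters for hyphens, so
// "Name1/Name2" becomes "Name1-Name2" instead of losing a separator.
var replaceURLChars = strings.NewReplacer("/", "-", ":", "-", ".", "-")

func sanitizeName(name string) string {
    return replaceURLChars.Replace(name)
}
```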
Change-Id: I4cc3d1bdb05d525a83970cf1b42479414c9678e7
When a user is created, but before verification or forgot password email
is sent, remove any special characters in the provided name. This
protects us against certain phishing attacks.
Change-Id: Ieddd3479da20eb80b9f1b56eb86c8f46bca2642c
We need to combine the accounting.Service methods ExceedsStorageUsage and ExceedsSegmentUsage
to run the checks concurrently, as sketched below.
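A minimal sketch using errgroup; the function-valued parameters are a
hypothetical simplification of the accounting.Service methods:
```
import (
    "context"

    "golang.org/x/sync/errgroup"
)

// checkLimits runs both usage checks at the same time and returns the
// first failure, cancelling the sibling check via the shared context.
func checkLimits(ctx context.Context, checkStorage, checkSegments func(context.Context) error) error {
    group, ctx := errgroup.WithContext(ctx)
    group.Go(func() error { return checkStorage(ctx) })
    group.Go(func() error { return checkSegments(ctx) })
    return group.Wait()
}
```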
Resolves https://github.com/storj/team-metainfo/issues/73
Change-Id: I47831bca92457f16cfda789da89dbd460738ac97
The free-tier segment usage limit was defined as 150k instead of 140k.
This change corrects that.
https://github.com/storj/team-metainfo/issues/8
Change-Id: I71ec0961930b19fd09b2b996e01acd406a8dcf8f
Refactoring to do a few things:
* move simple validation before validations with DB calls
* combine the validation check/update for storage and segment
limits together
Change-Id: I6c2431ba236d4e388791d2e2d01ca7e0dd4439fc
Updating the project segment usage entry in redis is enabled only if
segment validation is enabled in the config, so no incompatible
changes will be introduced during deployment.
Change-Id: I1288cb9ff0a8a00f095dc94e20d2f14393e9a613
The following 2 commits added 2 new query parameters to set the `burst`
and `segments` limits for a project, and two new fields to the
response JSON object body of the "get project limits" endpoint:
* c911360eb5
* b7b010adc9
However, the API documentation and the Typescript client API (used by
the UI) weren't updated with them.
Later, the commit dc6128e9e2 updated the
Typescript client API with the `segments` limit, but it didn't update
the documentation to reflect it.
This commit adds everything that was missed in those previous
commits.
Change-Id: Iff12cdd4a0d3c448cd73b57a98d171ba468d2c98
We want to know usage statistics for our main tools
like uplink-cli or rclone. Initially we will collect
only usage stats, without relation to a specific operation,
e.g. download or upload.
Change-Id: I203b1a6c07ae014e710368f77163f13fdf10763c
Use the overlay DB node as the primary source of truth for reputation status when a node
queries its status via Endpoint.Get. If a node was disqualified in overlay, the reputation
status may be inconsistent with the reputation table.
Change-Id: I1bf858a4b020324035847a9215ef29cf5432f6a4
Comment out tests containing fields that need to be renamed
on the uplink side without breaking compatibility.
After the rename, the tests will be moved back from comments.
Change-Id: I3bc4aff6ae7f6711ade956ac389f0d7e1a1ab91a
Comment out tests containing fields that need to be renamed
on the uplink side without breaking compatibility.
After the rename, the tests will be moved back from comments.
Change-Id: I439783c62678c32805a85aa52bef1d2b767543a1
We want to be able to limit the number of segments per project for users.
To do this, we need to check the limit value associated with the project
and the number of segments already used in BeginMoveObject and BeginMoveSegment,
and increment the cached segment usage after each CommitSegment call.
Resolves https://github.com/storj/team-metainfo/issues/1
Change-Id: I6290e67c095a174b9d101c4521802d9bfe0453b8
The original design had a flaw which can potentially cause a discrepancy
in a node's reputation status between the reputations table and the nodes table.
If a failure (network issue, db failure, satellite failure, etc.)
happens between the update to the reputations table and the update to
the nodes table, the data can be out of sync.
This PR tries to fix the above issue by passing the node's reputation from
the beginning of an audit/repair (this data is from the nodes table) to the next
update in the reputation service. If the updated reputation status from the service
is different from the existing node status, the service will try to update the nodes
table. In the case of a failure, the service will be able to try to update the nodes
table again, since it can see the discrepancy in the data. This will allow
both tables to be in sync eventually.
Change-Id: Ic22130b4503a594b7177237b18f7e68305c2f122
Batching of the order submissions can lead to combining the allocated
traffic totals for two completely different time windows, resulting
in incorrect customer accounting. This change will group the batched
order submissions by projectID as well as by time window, leading to
distinct updates of a bucket's bandwidth rollup based on the hour
window in which the order was created.
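A sketch of the grouping key; the field types are simplified, and the
real code keys on the bucket bandwidth rollup identifiers:
```
import "time"

// rollupKey decides which batched order submissions may be combined
// into a single bucket bandwidth rollup update.
type rollupKey struct {
    projectID  string
    bucketName string
    hour       time.Time
}

func keyFor(projectID, bucket string, createdAt time.Time) rollupKey {
    // Truncating to the hour keeps orders from two different time
    // windows out of the same allocated-traffic total.
    return rollupKey{projectID, bucket, createdAt.Truncate(time.Hour)}
}
```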
Change-Id: Ifb4d67923eec8a533b9758379914f17ff7abea32
We want to issue a reminder to users when they don't verify their email within 24hrs of registering. This change only adds a column to the users table.
Change-Id: I92e2baeabf179338ffec01574d4752c0ccdba88b
This allows us to distinguish between accounts created from the signup
page vs. from www.storj.io.
Also set a field `account_created=true` when we send the data, so
that we can see when existing leads have created an account.
Change-Id: Ibef34825a08b6c68b8f2869625e576bb837520e5
At some point we missed adding the metadata key to the list objects
response. Because of that, uplink is taking the key from pb.StreamMeta.
We need to clean this up.
Tests will be added on the uplink side.
https://github.com/storj/uplink/issues/71
Change-Id: I3328e2f1b86bca15aeaf89f8e59cdca3c8e97742
We enabled this chore on different satellites
for testing. It works fine, so we can enable it
by default for all configurations.
Change-Id: I987639685d8de5c7e5798adca30fe26bdac9e1d1
Currently most of the satellite log is made up of this one specific log
message, and thus we should increase the log level. We already have
a monkit event tracking this case. In case we need to look into this
more, we should just increase the satellite log level.
Change-Id: I27ebed3e6745701c66c83e8c52ddc836ad9d5f4e
I introduced a change in the UI that was a better way to dynamically
render Svelte components
(https://review.dev.storj.io/c/storj/storj/+/5931/4..5) during the
review of the first version of it. However, while it works perfectly
in development mode, it doesn't work when the assets are built for
production, failing silently, because the constructor name gets renamed
due to the name mangling caused by the minifier, as stated in the
following issue: https://github.com/sveltejs/svelte/issues/6980
This commit uses a different alternative not based on the constructor
name, and it works fine with the production build.
Change-Id: I643c405f877a9206cf0e51b44d5138e5a9756a79
Migrate the satellite admin UI web app from the Svelte template used to
generate a Svelte App scaffolding to SvelteKit.
There aren't any functional changes in the application; however, the
commit is large because:
1. SvelteKit uses a different directory layout and constraints on it, so
the files have been moved.
2. The files have changed their formatting due to the new default linter
configurations that SvelteKit uses.
3. The linter detected some issues with using `object` and `any` types
in Typescript, so they have been replaced by better general types
(e.g. Record).
The migration allows us to use the new tooling rather than Rollup
directly, and it will empower the future of the app when it needs more
features (e.g. different routes, etc.).
Change-Id: Ifa6736c13585708337f6c5a59388077b784eaddd
All limits we have for projects also have parent limits stored
with the user data. A newly created project first takes its limits
from the owner's (user's) limits.
This change extends the users table with a project_segment_limit
column and adds functionality to get and set the value of this
column.
Change-Id: Iff5e36c62b517652390b649fc05992475916ecff
This change disallows creation of users possessing the same email.
If a user attempts to create an account with an email address
that's already used - whether it belongs to an active account or not -
he will be notified of unsuccessful account creation. If he attempts to
log in using an email address belonging to an inactive account,
he will be presented with a link allowing him to re-send the
verification email. Attempting to register with an email address
belonging to an existing account triggers a password reset email.
Change-Id: Iefd8c3bef00ecb1dd9e8504594607aa0dca7d82e
When we switched to the segments loop, we stopped
adding the bucket name and project ID to the order limit
metadata. This is causing lots of logging that we don't need.
Change-Id: Id3f878a170b7a6b801e8a838ee69165715985d60
For backward compatibility we are overriding pb.StreamMeta
values returned as encrypted metadata. It turns out that we
should do it not when the target values are missing, but when the
values to override exist.
This was causing problems after a move operation. Details can be
found here: https://github.com/storj/uplink/issues/70
A backward compatibility test will be added to the storj/uplink testsuite.
Change-Id: I72e7a01226b1dd62902cb0d6ebb1ff91a4693005
We want to set a maximum number of segments per project. This change
adds functionality to get the number of segments currently used by a project.
To avoid frequent DB calls, the segment count will be cached and
refreshed every few minutes.
Change-Id: I2ecb6484f5afc3875c0e0dfaea360e8872f9d196
Exposes functionality to get and update the project segment
limit. It will be used to limit the number of segments per project
while uploading objects.
Change-Id: I971d48eebb4e7db8b01535c3091829e73437f48d
Previously, only valid partner IDs could be used for bucket-level value attribution. Now that any user agent byte slice can be used, we should allow empty user agent strings to be stored, rather than throwing an error or leaving the bucket with no attribution.
Change-Id: I7043f835588dab1c401a27e31afd74b6b5a3e44b
Currently, detecting whether a NotFound error is for an object
or a bucket is very fragile on the uplink side. We cannot
change these messages for now.
Change-Id: I6ec4d98116477812f031134e4f1c9e73bdce8b27
We want to set a maximum number of segments per
project. This change adds only a column to the projects table.
The default value of 1M is set to make the later migration easier, as
we need to set 1M for paid tier users and 140K for free
tier users.
Change-Id: I8e83712e08c5bd91dfa59f652d17e45c14240a36
Added a new query to get project object and segment counts.
Added an appropriate object and segment count view for the new project dashboard.
Change-Id: I69a2e55442f318c51dc365c0c578b964f2f06c7f
Move an endpoint that was classified as returning the geofence
configuration of a bucket to return the bucket information, which
also includes the geofence configuration, because that is what it was
returning.
Change-Id: I0e0a6aac330296383a50a92d2352df9088df77d5
The satellite admin server was augmented with 3 new endpoints to manage
bucket geofencing configurations; however, they weren't added to the
admin UI API class that makes them available in the web interface.
This commit adds these new endpoints to the admin UI.
Change-Id: If060d1f10a3bc6c365e16a891673d4ffc89e4b41
This reverts commit 2c0a360a14.
Avoid big transactions. We'll do it outside of the migration pipeline.
Change-Id: Iade810d81bb2453c9e351149cb84662b207ee527
Value attribution codes were converted into UUIDs and stored in the users, projects, api_keys, bucket_metainfos, and value_attributions tables in the partner_id column. This migration will lookup the appropriate partner name associated with each of these UUIDs, and store the partner name directly in the user_agent column within each table. If an error occurs during the partner ID to partner name conversion, the partner ID value will be migrated to user_agent.
A note on the migration test data, postgres.v182.sql:
With one exception, all preexisting rows in the relevant tables had a NULL partner_id. Therefore, we needed to insert new rows with partner_id set under the OLD DATA section in order to test that the migration works. For each affected table, we insert one row with a valid partner ID which has a corresponding partner name, and one row with a partner ID which would return an error during the conversion to the partner name.
Change-Id: Iad977d72df0ce95a0c5ca80a065c4276ec1f2354
This new method will be used with the mechanism to limit the number
of segments per project, similar to limiting buckets or bandwidth.
This is only one of multiple changes we will make to implement this
limitation.
Change-Id: Ia516c2a006ad1d7b4431d780679be9d809848f4b
Check if the context is done at the beginning of the download object and
segment operations, and return right away if that's the case.
Check if Redis returned an error due to context cancellation, so as not
to log the error.
Change some logging messages of Redis errors to reflect whether the
error happened while downloading an object or a segment.
Change-Id: I8ed8ff9ff7bb170b560f41356ea06820ce6c4e12
If the satellite can't find enough nodes to successfully download a segment,
it is probably not the fault of the storage nodes.
Change-Id: I681f66056df0bb940da9edb3a7dbb3658c0a56cb
Tests for checking different scenarios for enabling and disabling
geofence config on empty and non-empty buckets.
Change-Id: I0fe9abb1008d2daee660f22ffe4defe6226b9aa7
The main motivation is to wrap the bucket DB and metainfo DB, so we
could check if a bucket is empty before applying geofencing config.
Change-Id: I8bac21555e01d51a663fb557bc1acfc8106bc2e1
Add the project ID to error log messages about not being able to
retrieve and track storage or bandwidth usage.
Some error logs related to these errors already contained the project
ID, but others didn't; having it in the ones that didn't makes them
more consistent and provides more info, which may be helpful for
troubleshooting.
Change-Id: Ia9fc707a7f3aff0867645bb941badc199c2bf832
We implemented filtering of system metadata to
reduce DB load in case the user needs only a list of
objects, but we missed that the encryption configuration
is needed to decrypt the object key.
This change always includes the encryption configuration in
list objects results.
Change-Id: Iaf7b3d8441480e3ad787f19a28f1b648584d5f7d
Don't log an Error when users make requests that cannot be fulfilled
because they exceed their storage or bandwidth limits.
Those are not system errors, the service is handling them correctly,
and showing them in the logs as "ERROR" is misleading.
Change-Id: Iac642b7e8ba92840bb943192ad0694b5f4930258
To allow changing limits for new users while leaving existing users' limits as they are, we must store the project limits for each user. We currently store the limit for the number of projects a user can create in the user DB table. This change also stores the project bandwidth and storage limits in the same table.
Change-Id: If8d79b39de020b969f3445ef2fcc370e51d706c6
Change the satellite Admin HTTP server for:
* Embedding the UI assets into the Go binary.
* Serve the UI assets from the embedded file system or from a specific
directory path through a configuration flag, without requiring
authentication but keeping the authentication verification for the API
endpoints.
* Add tests to verify that the UI assets are served without
authentication.
Change-Id: I9003ac96f1ec585a189b67fc1cb315905403d557
Alter the satellite DB nodes table to add a `disqualification_reason` int column
which contains a disqualification type enum indicating why the node was disqualified.
Change-Id: Ia514557018ca27e1984216dc5004346d59869d16