Set the disqualification reason when reputation stats are updated on DB.Update.
Added tests for DisqualifyNode and for disqualification cases that happen during Update.
Change-Id: I00130ab5d9722422805159ad2f183c205de60f7e
When an api server is processing a graceful exit (node is connected and
getting lists of pieces to transfer), and the api server is shut down,
it was incorrectly marking all pending graceful exits as complete. The
GE then either passed or failed depending on the ratio of successfully
transferred pieces to unsuccessful pieces. In at least one case, _no_
pieces were transferred at all before the GE was marked a success.
Change-Id: I62cfab54a2296572c2e654eb460b62f772b7a60b
Attribution is attached to bucket usage, but that's more granular than
necessary for the attribution report. This change iterates over the
bucket attributions, parses the user agent, converts the first entry
to lower case, and uses that as the key to a map which holds the
attribution totals for each unique user agent.
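A minimal sketch of that aggregation, assuming hypothetical row/total types and the common useragent parsing helper (not the exact report code):
```
// Hedged sketch: aggregate totals per normalized user agent.
// attributionRow and its fields are assumptions; useragent is
// storj.io/common/useragent, strings is the standard library.
type attributionRow struct {
	UserAgent    []byte
	ByteHours    float64
	SegmentHours float64
	ObjectHours  float64
}

func totalsByUserAgent(rows []attributionRow) map[string]attributionRow {
	totals := make(map[string]attributionRow) // keyed by lower-cased first entry
	for _, row := range rows {
		entries, err := useragent.ParseEntries(row.UserAgent)
		if err != nil || len(entries) == 0 {
			continue // skip rows whose user agent cannot be parsed
		}
		key := strings.ToLower(entries[0].Product)

		t := totals[key]
		t.ByteHours += row.ByteHours
		t.SegmentHours += row.SegmentHours
		t.ObjectHours += row.ObjectHours
		totals[key] = t
	}
	return totals
}
```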
Change-Id: Ib2962ba0f57daa8a7298f11fcb1ac44a8bb97875
Implemented new endpoint for project update using apigen.
Implemented new service method compatible with new generated api.
Change-Id: Ic0a7e0bbf3ea942275bd927d6e30cfb7e721e9c1
Migrate free-tier users to the default limits if their limits were set to 0.
They were affected by the incorrect behavior of the Update user query.
Change-Id: I4c49c8d99b12dba2b9b0ab61b2175085976dcc95
We implemented the server-side copy feature and we would like to
confirm that it does not affect the accounting/tally service.
Resolves https://github.com/storj/storj/issues/4697
Change-Id: I3944ea52c0acc68107ec15c1911750dc7d947501
Test case to verify that server-side copy doesn't affect
graceful exit (GE) in any negative way.
Fixes https://github.com/storj/storj/issues/4699
Change-Id: I8c385767cca61499d46d9cb8de7318c56e5d7397
Added failed_login_count and login_lockout_expiration columns to the users table to track users' failed login attempts.
We want to prevent brute-forcing of user logins, so this is the first step.
Change-Id: I06b0b9f5415a1922e08cd9908893b2fd3c26bca0
Use the same query when deleting a single object or multiple objects.
I have chosen not to deduplicate the row "scan" logic because the code
is less complicated this way and deduplicating would expand this change
into other parts of the codebase.
Part of https://github.com/storj/storj/issues/4700
Change-Id: I7a958c78c903b2bddd72ca217971f7e8e02a0d0c
The initial space used for pieces is calculated rather than retrieved
from storage nodes, and at the end of the test we also delete
copies that became ancestors, to verify that all data
was removed from the storage nodes.
Change-Id: I9804adb9fa488dc0094a67a6e258c144977e7f5d
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage with byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees, so we return segment-hours and object-hours too.
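A toy worked example of the difference, with made-up numbers (not taken from any real satellite data):
```
package main

import "fmt"

func main() {
	// Made-up numbers: bytes attributed during each of three hourly rollup rows.
	hourlyBytes := []int64{100, 100, 100}

	var total int64
	for _, b := range hourlyBytes {
		total += b
	}

	averagePerHour := total / int64(len(hourlyBytes)) // 100 bytes: the old result
	byteHours := total                                // 300 byte-hours: matches how usage is billed

	fmt.Println(averagePerHour, byteHours)
}
```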
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
We implemented the server-side copy feature and we would like to
confirm that it does not affect GC.
Fixes https://github.com/storj/storj/issues/4696
Change-Id: Id391f0badf5fce51f9910f0df732d477b07fa7ac
S3 allows overwriting an object when using server-side copy.
This change makes overwriting the destination part of the atomic server-side copy operation,
so that if the copy fails, the old object is still available.
All segments of the existing destination are deleted. If the destination object is an ancestor of another object, a new ancestor is promoted.
Fixes https://github.com/storj/storj/issues/4607
Change-Id: I85d1250850bb71867586ac230c8275a0cc1b63c3
Implemented new endpoint for project creation using apigen.
Implemented new service method compatible with new generated api.
Change-Id: I2bae22c8b046f21ec5bb6522f09b9c4e74bdba0c
When deleting a bucket, make sure that object copies in other buckets are
promoted to a new ancestor and left in a working state.
Closes https://github.com/storj/storj/issues/4591
Change-Id: I019d916cd6de5ed51dd0dd25f47c35d0ec666af6
To save load on DNS servers, the repair code first tries to dial the
last known good ip and port for a node, and then falls back to a DNS
lookup only if we fail to connect to the last known good ip and port.
However, it looks like we are seeing errors during the client stream
Close() call (probably due to quic-go code), and those are classified
the same as errors encountered during Dial. The repairer code sees this
error, assumes that we failed to contact the node, and retries, but
since we did actually succeed in connecting the first time around, this
results in submitting the same order limit (with the same serial number)
to the storage node, which (rightfully) rejects it.
So together with change I055c186d5fd4e79560f67763175bc3130b9bc7d2 in
storj/uplink, this should avoid the double submission and avoid dinging
nodes' suspension scores unfairly.
See https://github.com/storj/storj/issues/4687.
Also, moving the testsuite directory check up above check-monkit in the
Jenkins Lint task, so that a non-tidy testsuite/go.mod can be recognized
and handled before everything breaks weirdly and seemingly randomly
later on.
Change-Id: Icb2b05aaff921d0af6aba10e450ac7e0a7bb2655
Moved invalid email testing to a separate test.
Made all of the emails used have a .test domain.
Added links to regex resources.
Change-Id: I26920ba7360064528256a6aeaea947bbe56ef618
Implemented account management api key authentication.
Extended IsAuthenticated service method to include both cookie and api key authorization.
Change-Id: I6f2d01fdc6115cb860f2e49c74980a39155afe7e
This change has two purposes. The first is to avoid a DB call when the
source and destination buckets are the same.
The second is to return the bucket-not-found error in the correct order.
If the source and destination buckets are different, we will first check
the source and later the destination. Currently we get the first error
about the non-existent destination bucket.
Because of this change we stop putting the bucket placement into the
satellite stream ID, but it's not needed, as we don't use this value
with the finish move/copy object methods.
Change-Id: I0f7b3ba604d53c722e8fa4d7a37843a69d02bebd
Uplink is fixed, and now we should always get either both the key and
the nonce, or both empty.
Fixes https://github.com/storj/storj/issues/4646
Change-Id: I65dca2d4d5a10787c2fecad39e301121f1ae242a
The latest CRDB version didn't work with our server-side copy deletion
query, so we had to rewrite it.
Fixes https://github.com/storj/storj/issues/4655
Change-Id: I66d5c4abfc3c3f53dea8bc01ae11cd2b020e4289
The method was never used in production and it's not certain that
it will be used at all. Let's drop it and restore it if needed.
Fixes https://github.com/storj/storj/issues/4480
Change-Id: Ifd780d0096b67be7e72dff84bdcf1d957e0b48b5
This sets the corresponding _numeric columns to be NOT NULL (it has been
verified manually that there are no more NULL _numeric values on any
known satellites, and it should be impossible with current code to get
new NULL values in the _numeric columns).
We can't drop the _gob columns immediately, as there will still be code
running that expects them, but once this version is deployed we can
finally drop them and be totally done with this crazy 5-step migration.
Change-Id: I518302528d972090d56b3eedc815656610ac8e73
If a visitor has accepted cookies on www.storj.io, there might be a
"hubspotutk" cookie in their browser upon account creation. This allows
Hubspot to link website activity with a newly created user.
Change-Id: If06c67fb4d2e5dd3cf46c1fe80a0e9d7f25d6e58
We don't need to have every single test for both; one for
each should be sufficient. For all other tests it doesn't matter
which one we use.
Change-Id: I9962206a4ee025d367332c29ea3e6bc9f0f9a1de
Embedded files significantly increase the binary size for linking.
Add a build tag that allows disabling embedding of the built npm code.
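A minimal sketch of the mechanism; the tag name, package, and asset path are assumptions rather than the names used in this change:
```
// web_embed.go (hypothetical): compiled by default, embeds the built npm output.
//go:build !noembed

package consoleassets

import "embed"

// Assets holds the built web UI, assuming it lives under dist/.
//go:embed dist
var Assets embed.FS
```
```
// web_noembed.go (hypothetical): built with -tags noembed to keep the binary small.
//go:build noembed

package consoleassets

import "embed"

// Assets stays empty; callers fall back to serving files from disk.
var Assets embed.FS
```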
Change-Id: I9d1fd7376d1fa035965c33d259faaa6c4770dfe1
So far we assumed that the metadata key/nonce cannot be empty at all,
but at some point we adjusted the code to accept an empty
metadata/key/nonce to save DB space.
This change adjusts how we process the nonce in
FinishMoveObject/FinishCopyObject. We can use storj.Nonce directly,
which makes the code cleaner. It also fixes an issue in FinishMoveObject
where we didn't convert the nonce correctly to []byte.
Part of this change is disabling validation for the key and nonce until
uplink is adjusted. We need to change uplink to always send both the
key and the nonce, or neither of them. Validation will be restored as
soon as the uplink change is merged.
https://github.com/storj/storj/issues/4646
Change-Id: Ia1772bc430ae591f54c6a9ae0308a4968aa30bed
This also fixes the build order. Unfortunately we need
to ensure that the web frontends are built before installing
Go binaries.
Fixes https://github.com/storj/storj/issues/4654
Change-Id: I5d1c83125fd3d1a454d3400b2cbdd44bd3f2250c
Add uplink-php and nextcloud as user agents. Sending of these
user agents was added in recent releases of these clients.
Change-Id: Ia2732ade1d9e5cf8d4e41fe246faec3feaa58c25
We are in the process of creating an api to allow users to manage their
accounts programmatically. We would like to use api keys for
authorization. We were originally going to create an entirely new table
for these api keys, but seeing as we already have 2 other tables for
keys/tokens, api_keys and oauth_tokens, we thought it might be better to
use one of these. We're using oauth_tokens.
We create a new oidc.OAuthTokenKind for account management api keys:
KindAccountManagementTokenV0. We made the key versioned because we
likely want to improve the implementation in the future, but we want to
get something functional out the door ASAP because the account management
api feature is highly desired.
Add a new method to oidc.OAuthTokens interface for revoking v0 account
management api keys, RevokeAccountManagementTokenV0. Add an update method
to the dbx implementation to allow updating the expiration. We will revoke
these keys by setting the expiration to 0 so they are expired.
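A rough sketch of the shape described above; the kind's numeric value and the method signature are assumptions, not the actual oidc package code:
```
package oidc // sketch only

import "context"

// OAuthTokenKind distinguishes what a row in the oauth_tokens table represents.
type OAuthTokenKind int8

// KindAccountManagementTokenV0 marks v0 account management api keys stored
// in the oauth_tokens table (the concrete numeric value here is assumed).
const KindAccountManagementTokenV0 OAuthTokenKind = 3

// OAuthTokens is the token store; only the new method is shown here.
type OAuthTokens interface {
	// RevokeAccountManagementTokenV0 revokes a v0 account management api key
	// by updating its row so that the expiration is in the past.
	RevokeAccountManagementTokenV0(ctx context.Context, token string) error
}
```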
Change-Id: Ideb8ae04b23aa55d5825b064b5e43e32eadc1fba
Uplink has some types aliased from the storj/common repo. It's like
that to make type replacement easier if we decide to use a custom type
instead of an alias. Because we are not using the aliases in storj/storj,
it's impossible to do the refactoring on the uplink side. This change
cleans up this situation.
Change-Id: I20c8e31b9a821983483af1c67b2e7bb91397fd9d
Added new endpoint to get all bucket rollups by bucket ID.
Example of response:
vitalii:~/Documents$ ./testapi.sh
HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 07 Mar 2022 11:18:55 GMT
Content-Length: 671
[{"projectID":"a9b2b1b6-714a-4c49-99f1-6a53d0852525","bucketName":"demo-bucket","totalStoredData":0.0026272243089674662,"totalSegments":0.05000107166666666,"objectCount":0.03333373083333333,"metadataSize":1.6750359008333334e-9,"repairEgress":0,"getEgress":0,"auditEgress":0,"since":"2022-03-01T11:00:00Z","before":"2022-03-07T11:17:07Z"},{"projectID":"a9b2b1b6-714a-4c49-99f1-6a53d0852525","bucketName":"qwe","totalStoredData":0.000018436725422435552,"totalSegments":0.016667081388888887,"objectCount":0.016667081388888887,"metadataSize":1.933381441111111e-9,"repairEgress":0,"getEgress":0,"auditEgress":0,"since":"2022-03-01T11:00:00Z","before":"2022-03-07T11:17:07Z"}]
Change-Id: I8b04b24dbc67b78be5c309ce542bf03d6f67e65d
Add a missing instruction step for allowing Go to embed the files
generated by the UI build process into the satellite binary.
Change-Id: Ie9223b8bb5317e53e692e3aa1d1086977daa17c9
We have an issue with the latest CRDB: a single query cannot modify
the same table multiple times. The build is now blocked.
This change unblocks the build by:
* adjusting the query for inserting into the repair queue
* temporarily removing the deletion code for server-side copy
* temporarily disabling backward compatibility tests for CRDB
Change-Id: Idd9744ebd228e5dc05bdaf65cfc8f779472a975d
Chronograph statistics indicate that much of our Gateway-MT traffic may
originate from, and is also measured as, rclone traffic. This makes it
difficult to understand what our users are doing. This solution makes
it clear what products are actually being used, likely without
increasing the cardinality of our metrics by more than one.
Change-Id: I5d5e2af3715fa0864f69f1145fd78caf7e4a4224
Remove the redundant suspension timestamp column from the nodes and reputation tables.
The suspended timestamp was moved to unknown_audit_suspended, and the suspended column is
no longer used, so there is no point in keeping both.
Change-Id: Ieea3f12141b33ec9efe7594f4c9dbc7e10675b0e
If B is a copy of A, and C is a copy of B, then in the segment_copies table, it should appear that C is a copy of A.
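A small runnable sketch of that rule, using a map in place of the segment_copies table:
```
package main

import "fmt"

func main() {
	// stream_id -> ancestor_stream_id, standing in for the segment_copies table.
	ancestorOf := map[string]string{}

	copyObject := func(source, dest string) {
		ancestor := source
		if a, ok := ancestorOf[source]; ok {
			// The source is itself a copy: point the new copy at the original.
			ancestor = a
		}
		ancestorOf[dest] = ancestor
	}

	copyObject("A", "B") // B is a copy of A
	copyObject("B", "C") // C records A, not B, as its ancestor

	fmt.Println(ancestorOf["C"]) // prints A
}
```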
Fixes https://github.com/storj/storj/issues/4538
Change-Id: I7e6b03f7584597cf616cd1e0cd0156386771d207
This change adds an integration test that performs an OAuth
workflow and verifies the OIDC endpoints are functioning as
expected.
Change-Id: I18a8968b4f0385a1e4de6784dee68e1b51df86f7
In the initial server-side copy implementation, we insert segments one by one. This PR inserts them all at once.
Fixes https://github.com/storj/storj/issues/4476
Change-Id: I776dba99be38a0eef73366e8e9287cbb794003dc
For server-side copy we adjusted one method, DeleteObjectExactVersion.
Other deletion methods won't be used directly in code at the moment.
We will adjust the other methods later, or decide whether we need them
at all.
To correctly handle deletion of objects with copies, or of just copies,
we need to use the DeleteObjectExactVersion method in two places:
* removing an object before upload
* explicit object deletion
This change also modifies the DeleteObjectExactVersion method to
delete pending objects, because we need this functionality to
delete an object before a new upload.
https://github.com/storj/storj/issues/4481
Change-Id: Ieff5cc95732bb70ed8cc0ecdd62e03c929857c02
We were not checking whether we were provided an empty StreamID.
Furthermore, this change returns the object copy with the correct createdAt field.
Change-Id: Iefc563c34ae9d8c1e233895155c1718bf905df91
This change adds endpoints for supporting OpenID Connect (OIDC) and
OAuth requests. This allows application developers to easily
develop apps with Storj using common mechanisms for authentication
and authorization.
Change-Id: I2a76d48bd1241367aa2d1e3309f6f65d6d6ea4dc
Reworked email validation for new users (for old users trying to log in
or reset their password, validation remains the same).
The regular expression was built according to RFC 5322 and then extended to include international characters.
Change-Id: Id0224fee21a1ec0f8a2dcca5b8431197dee6b9d3
When performing re-authorizations for OAuth, we need to pull up an
APIKey using its project ID and name. This change also updates the
APIKeyInfo struct to return the head value associated with an API
key.
Change-Id: I4b40f7f13fb9b58a1927dd283b42a39015ea550e
Update the user to the default paid tier project limit, which is currently 3 projects, when the user upgrades to a paid account.
Change-Id: I95b19d62cebc7d878b716355f2ebcaf0b51ca3f7
For nodes in excluded areas, we don't necessarily want to remove them
from the pointer, but we do want to increase the number of pieces in the
segment in case those excluded area nodes go down. To do that, we
increase the number of pieces repaired by the number of pieces in
excluded areas.
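A hedged arithmetic sketch of the adjustment; the function and parameter names are assumptions, not the repairer's actual code:
```
// piecesToRequest keeps excluded-area pieces in the segment but repairs as if
// they may disappear, so the target piece count is raised by their number.
func piecesToRequest(optimal, numHealthy, numInExcludedCountries int) int {
	target := optimal + numInExcludedCountries
	return target - numHealthy // how many new pieces this repair should create
}
```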
Change-Id: I0424f1bcd7e93f33eb3eeeec79dbada3b3ea1f3a
The copy object functionality should support setting new metadata for
the copy. This change adjusts the FinishCopyObject method to set new
metadata when the OverrideMetadata field is set to true.
Fixes https://github.com/storj/storj/issues/4483
Change-Id: Ica37cb57e8edae301cdc483fbda4f3ddba5d2702
Added new endpoint to get project's single bucket usage rollup.
Extended generation code to handle service method args.
Change-Id: Ief768632a801c047c66e0617056fbd7b30427b33
Getting a copied segment by GetLatestObjectLastSegment needs to retrieve inline_data or remote_alias_pieces and other information from the original segment.
Resolves https://github.com/storj/storj/issues/4478
Change-Id: I8c7822c343b1ec3e04683f31a20f71e3097b4b4a
We decided that we want the segment limit for paying users to be high
enough that we don't have to change it too often.
Fixes https://github.com/storj/storj/issues/4590
Change-Id: Ic1c38bf3e2fcc000548ff4c7e7004647b39fbecf
There are two events in
web/satellite/src/utils/constants/analyticsEventNames.ts which did not
have corresponding entries in the backend analytics service.
Change-Id: If0f67cef2ed312953e580d855d63366e7c12786a
Users will be required to enter a MFA passcode or recovery code
upon attempting a password reset for an account with MFA enabled.
Change-Id: I08d07597035d5a25849dbc70f7fd686753530610
Create a global config to specify a list of country codes that should be
excluded from node selection during uploads.
This exclusion is not implemented when the upload selection cache is
disabled.
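A hedged sketch of the kind of config flag described; the struct, field name, and tag text are assumptions:
```
// NodeSelectionConfig is a sketch of where such a global option could live.
type NodeSelectionConfig struct {
	// UploadExcludedCountryCodes lists country codes whose nodes are skipped
	// during upload selection (only when the selection cache is enabled).
	UploadExcludedCountryCodes []string `help:"list of country codes to exclude from node selection for uploads" default:""`
}
```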
Change-Id: Ic41e8b4f18857a11045668eac23107da99668a72
This change allows us to send newly registered users to a configured URL
to help us track user conversions for marketing campaigns.
Brave conversions continue to be tracked using the /signup-success page
within the satellite app.
Change-Id: I9b451947ce0f39d3c99b233cb4b806d361151823
Added new projectaccounting query to get project's single bucket usage rollup.
Added new service method to call new query.
Added implementation for IsAuthenticated method which is used by new generated API.
Change-Id: I7cde5656d489953b6c7d109f236362eb465fa64a
Add a RepairExcludedCountryCodes config flag for overlay for providing a list of country codes to exclude nodes from target repair selection.
Mark segments with fewer than repairThreshold pieces in countries not in
With this change, the repair process is not affected. The segment will be removed from the repair queue by the repairer.
Another change will handle the logic at the repairer level.
Fixes https://github.com/storj/team-metainfo/issues/95
Change-Id: I9231b32de117a116488de055a3e94efcabb46e81
Added a feature flag which will be used to indicate whether the new generated console api is used.
Fixed some comments from the previous PR.
Change-Id: Ice31c998b0b347028a491c971a648fd1269bfd49
Return segments when creating a test object so that it can be checked
that the correct segments are remaining after a delete action.
Change-Id: Ifc245948935ba278806e887672c03abc5f2c2654
There was a defined type (`validationErrors`) for gathering several
validation errors and classifying them with the `ErrValidation errs.Class`.
`errs.Combine` doesn't maintain the classes of the errors it combines,
for example:
```
var myClass errs.Class = "My error class"
err1 := myClass.Wrap(errors.New("error 1"))
err2 := myClass.Wrap(errors.New("error 2"))
err3 := errors.New("error 3")
combinedErr := errs.Combine(err1, err2, err3)
myClass.Has(combinedErr) // It returns false
// Even when only passing errors with a class, and with the same class
// for all of them:
combinedErr = errs.Combine(err1, err2)
myClass.Has(combinedErr) // It still returns false
```
Hence `validationErrors` didn't return what we expected when its
`Combine` method was called.
This commit deletes the type and replaces it with `errs.Group` when
there is more than one error, wrapping the error returned by
`errs.Group.Err` with the `ErrValidation` error class, as sketched below.
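A minimal sketch of the replacement pattern, assuming the zeebo/errs package and the existing ErrValidation class; the checks and messages are illustrative only:
```
import "github.com/zeebo/errs"

func validateNewUser(fullName, password string) error {
	var group errs.Group
	if fullName == "" {
		group.Add(errs.New("full name can not be empty"))
	}
	if len(password) < 6 {
		group.Add(errs.New("password must be at least 6 characters long"))
	}
	if err := group.Err(); err != nil {
		// Wrapping keeps the ErrValidation class on the combined error, so
		// getStatusCode can map it to 400 Bad Request.
		return ErrValidation.Wrap(err)
	}
	return nil
}
```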
The bug caused the HTTP API server to return a 500 status code, as you
can see in the following log message extracted from the satellite
production logs:
```
code: 500
error: "console service: validation: full name can not be empty; validation: Your password needs at least 6 characters long; validation: mail: no address"
errorVerbose: "console service: validation: full name can not be empty; validation: Your password needs at least 6 characters long; validation: mail: no address
storj.io/storj/satellite/console.(*Service).CreateUser:593
storj.io/storj/satellite/console/consoleweb/consoleapi.(*Auth).Register:250
net/http.HandlerFunc.ServeHTTP:2047
storj.io/storj/private/web.(*RateLimiter).Limit.func1:90
net/http.HandlerFunc.ServeHTTP:2047
github.com/gorilla/mux.(*Router).ServeHTTP:210
storj.io/storj/satellite/console/consoleweb.(*Server).withRequest.func1:464
net/http.HandlerFunc.ServeHTTP:2047
net/http.serverHandler.ServeHTTP:2879
net/http.(*conn).serve:1930"
message: "There was an error processing your request"
```
The issue was that, not being classified with the `ErrValidation` class,
the error was not picked up by the correct switch branch of the
`consoleapi.Auth.getStatusCode` method, which is in the call chain of
the `consoleapi.Auth.Register` method when it calls
`console.Service.CreateUser` and that returns an error.
These changes should return the appropriate HTTP status code (Bad
Request) when `console.Service.CreateUser` returns a validation error.
Returning the appropriate HTTP status code also means this no longer
shows up as an error in the server logs, because the Bad Request status
code gets logged at debug level.
Change-Id: I869ea85788992ae0865c373860fbf93a40d2d387
Updates metadata and metainfo to return object metadata with
FinishCopyObject request.
https://github.com/storj/storj/issues/4474
Change-Id: I32cba5c20a943272e9b5964df1b3d6463ad212dc
We would like to disable in production those parts of the code
which are now mixed with the new server-side copy logic.
Change-Id: Iff50682bc9545207330f58dd19b5eee53d404d7f
Update the Content Security Policy to whitelist `blob:` for the img-src
and media-src directives. This is necessary to prevent CSP errors in the
object browser while loading previews and object maps.
Change-Id: Ic32bf0954f300c77ec4f0fe11fae63f0c7b622da
Remove special characters from the test database name to improve
compatibility with external tools like the Cockroach web gui.
Change-Id: I3a6a368a5051e3064e7279b14eec4f2d4ff3c435
Part of server-side copy.
For getting a copied segment GetSegmentByPosition needs to retrieve inline_data or remote_alias_pieces and other information from the original segment.
Fixes https://github.com/storj/storj/issues/4477
Change-Id: I283cbc546df6e1e2fa34a803c7ef280ecd0f65d7
Currently the metainfo/metabase DB connections are missing the proper
application_name needed to differentiate and filter queries on the DB
side for analytics.
Without it, it is very time-consuming to correlate processes and their load.
This change adds the check on DB connection init and passes fallbacks
in all places to catch connection strings that do not set it.
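A hedged sketch of such a check; the function name and the URL-style connection string handling are assumptions, not the actual code in this change:
```
import "net/url"

// ensureApplicationName adds application_name to a connection string when it
// is missing, falling back to the given process name.
func ensureApplicationName(connstr, fallback string) (string, error) {
	u, err := url.Parse(connstr)
	if err != nil {
		return "", err
	}
	q := u.Query()
	if q.Get("application_name") == "" {
		q.Set("application_name", fallback) // e.g. "satellite-metainfo"
		u.RawQuery = q.Encode()
	}
	return u.String(), nil
}
```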
Change-Id: Iea5cea8658bc63778ff89038e5c1c352bf482cfd
It can be useful to compare object copies created during unit tests.
They appear in the segment_copies table as the pair (stream_id, ancestor_stream_id).
Change-Id: Id335c3ff7084fe30346456d27e670aff329154ea
The "satellite fetch-pieces" command allows a satellite operator to
fetch as many pieces of a segment as possible, along with their
original order limits and hashes as provided by the storage nodes. The
fetched pieces and associated info will be stored in a specified
folder as they are, rather than being RS-decoded or decrypted.
It is hoped that this will allow easier debugging of certain one-off
problems we've observed in the wild.
Change-Id: I42ae0e9ef0023538e42473a9be5a2460a3ac0f3a
Part of server-side copy implementation.
Creates the methods BeginCopyObject and FinishCopyObject in satellite/metabase/copy_object.go.
This commit makes it possible to copy objects and their segments in metabase.
https://github.com/storj/team-metainfo/issues/79
Change-Id: Icc64afce16e5e0e15b83c43cff36797bb9bd39bc
All code on known satellites at this moment in time should know how to
populate and use the new numeric columns on the
stripecoinpayments_tx_conversion_rates and coinpayments_transactions
tables in the satellite db. However, there are still gob-encoded
big.Float values in the database from before these columns existed. To
get rid of those values, so that we can excise the gob-decoding code
from the relevant sections, however, we need something to read the gob
bytestrings and convert them to numeric values, a few at a time, until
they're all gone.
To accomplish that, this change adds two chores to be run in the
satellite core process: one for the coinpayments_transactions table, and
one for the stripecoinpayments_tx_conversion_rates table. They should
run relatively infrequently, so that we do not impose any undue load on
processing resources or the db.
Both of these chores work without using explicit sql transactions, but
should still be concurrent-safe, since they work by way of
compare-and-swap type operations.
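A hedged sketch of one such compare-and-swap update; the column names are assumptions based on the _gob/_numeric naming, not the chore's actual SQL:
```
import (
	"context"
	"database/sql"
)

// migrateRow converts one gob-encoded rate to its numeric form only if the row
// still holds the same gob bytes, so concurrent runs cannot clobber each other
// and no explicit transaction is needed.
func migrateRow(ctx context.Context, db *sql.DB, id string, gobBytes []byte, numeric string) (bool, error) {
	res, err := db.ExecContext(ctx, `
		UPDATE coinpayments_transactions
		SET rate_numeric = $1
		WHERE id = $2 AND rate_gob = $3 AND rate_numeric IS NULL`,
		numeric, id, gobBytes)
	if err != nil {
		return false, err
	}
	affected, err := res.RowsAffected()
	return affected > 0, err // false: someone else already handled this row
}
```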
If the satellite core process needs to be restarted, both of these
chores will start scanning for migrateable rows from the beginning of
the id space again. This is not ideal, but shouldn't be a problem (as
far as I can tell, there are only a few thousand rows at most in either
of these tables on any production satellite).
Change-Id: I733b7cd96760d506a1cf52735f598c6c3aa19735
In addition to upgrading the storj.io/common library, this change
moves off the TCPConnector in favor of the HybridConnector per
the deprecation warning.
Change-Id: I7e7e1e7568e8b95e4a99ad9caa158a799e68e1e3
Refactored tests to reuse a testplanet instance across some of the
tests. This should decrease the resources needed to run those
tests.
Change-Id: I98f3041ec23085d3903b19acd339904973319ec1
Added InactivityTimerEnabled flag to enable/disable the feature.
Added InactivityTimerDelay to configure the delay time in seconds.
The default timer is set to 10 minutes.
DOM events that reset the timer: keypress, mouseover, mousedown, touchmove.
Change-Id: Idb66067c2902b2cdbe1a972225319c8abff97927
For some reason a very small interval for the chore causes a very
long execution time for the Pause() method, and that was the reason
why a simple test was slow.
Change-Id: Iebfdea16e31f9865493de3e1bd7d69c6f9116fc0
The Update method of the usersDB takes a console.User as an argument.
To update the columns in the DB, we have to migrate the fields from the
console.Users struct into a special dbx struct. If one of these fields
is left empty, then the zero value of that field's type will be used to
update the respective column.
In most cases where the users.Update method is called, the entire
console.User is apparently retrieved first, fields are updated, then it
is passed to users.Update. This is not the case for
service.UpdateAccount. Because these fields are not populated in the
user struct in UpdateAccount before it is passed into users.Update,
their respective columns in the database are overwritten with zero:
ProjectLimit, ProjectStorageLimit, ProjectBandwidthLimit,
ProjectSegmentLimit, PaidTier, MfaEnabled, MfaRecoveryCodes, MfaSecretKey
Solution: Do what is done in other places which call users.Update. Take
the console.User from the auth context, update the relevant fields on
that, then pass that in.
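A hedged sketch of the fix; the helper, receiver, and store accessors are assumptions, not the actual service code:
```
func (s *Service) UpdateAccount(ctx context.Context, fullName, shortName string) error {
	auth, err := GetAuth(ctx) // the authenticated console.User from the request context
	if err != nil {
		return err
	}

	user := auth.User        // start from the fully populated user...
	user.FullName = fullName // ...and change only the fields this call should touch
	user.ShortName = shortName

	// Limits, PaidTier and MFA fields keep their current values instead of
	// being overwritten with zero values.
	return s.store.Users().Update(ctx, user)
}
```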
Change-Id: I3cbd560e8ea5397e5c27711fb40bb3907d987028
The segment_copies table has (stream_id, ancestor_stream_id) as its primary key,
but it should have only stream_id as the primary key, since a stream_id can only have one ancestor_stream_id.
https://github.com/storj/team-metainfo/issues/84
Change-Id: I0218295e86e449c9dfefa5a21319f3083bf80afd