Previously, after I copied the passphrase and checked the acknowledgment,
I was able to delete the passphrase and still create the access grant.
Now the "copied" and "acknowledged" statuses are reset if the passphrase
is changed.
Change-Id: I57199a476c802eb3c44e4dddc43ef40fcedcee2f
We retry a GET_REPAIR operation in one case, and one case only (as far
as I can determine): when we are trying to connect to a node using its
last known working IP and port combination rather than its supplied
hostname, and we think the operation failed the first time because of a
Dial failure.
However, logs collected from storage node operators, along with logs
collected from satellites, strongly indicate that we are retrying
GET_REPAIR operations in some cases even when we succeeded in connecting
to the node the first time. This results in the node complaining loudly
about being given a duplicate order limit (as it should), whereupon the
satellite counts that as an unknown error and potentially penalizes the
node.
See discussion at
https://forum.storj.io/t/get-repair-error-used-serial-already-exists-in-store/17922/36
.
Investigation into this problem has revealed that
`!piecestore.CloseError.Has(err)` may not be the best way of determining
whether a problem occurred during Dial. In fact, it is probably
downright Wrong. Handling of errors on a stream is somewhat complicated,
but it would appear that there are several paths by which an RPC error
originating on the remote side might show up during the Close() call,
and would thus be labeled as a "CloseError".
This change creates a new error class, repairer.ErrDialFailed, with
which we will now wrap errors that _really definitely_ occurred during
a Dial call. We will use this class to determine whether or not to retry
a GET_REPAIR operation. The error will still also be wrapped with
whatever wrapper classes it used to be wrapped with, so the potential
for breakage here should be minimal.
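A minimal sketch of the intended pattern, assuming the zeebo/errs
error-class package already used in this codebase; the dial helper and
the piecestore call signature below are illustrative, not the actual
repairer code:

    package repairer

    import (
        "context"

        "github.com/zeebo/errs"

        "storj.io/common/rpc"
        "storj.io/common/storj"
        "storj.io/uplink/private/piecestore"
    )

    // ErrDialFailed wraps only errors that definitely occurred during Dial.
    var ErrDialFailed = errs.Class("dial failed")

    // dialPiecestore is an illustrative helper: an error returned by the
    // Dial call itself is wrapped with ErrDialFailed, so a remote RPC error
    // that only surfaces later (e.g. during Close) can no longer be
    // mistaken for a dial failure.
    func dialPiecestore(ctx context.Context, dialer rpc.Dialer, node storj.NodeURL, config piecestore.Config) (*piecestore.Client, error) {
        client, err := piecestore.Dial(ctx, dialer, node, config)
        if err != nil {
            return nil, ErrDialFailed.Wrap(err)
        }
        return client, nil
    }

    // shouldRetryWithLastIPPort decides whether to retry a GET_REPAIR
    // download using the node's last known IP:port instead of its hostname.
    func shouldRetryWithLastIPPort(err error) bool {
        return err != nil && ErrDialFailed.Has(err)
    }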
Refs: https://github.com/storj/storj/issues/4687
Change-Id: Ifdd3deadc8258f34cf3fbc42aff393fa545794eb
Added a new email HTML template.
It is sent when a user tries to reset the password for an unknown or unverified account.
Made a couple of minor config changes.
Issue: https://github.com/storj/storj/issues/4913
Change-Id: I730f48b3478e302d1e38e1f8a27c75f66a8ba6fd
Some nodes were added to the nodes table due to a bug in the QUIC-based
storagenode contact code. This is a tool to clean up these nodes.
Deleting with a batch size of 1k seems to take ~400ms on local CockroachDB.
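A rough sketch of the batched delete loop, assuming database/sql; the
query that selects the bogus nodes is left to the caller and not shown
here:

    package main

    import (
        "context"
        "database/sql"
    )

    // deleteInBatches repeatedly runs deleteBatchQuery, a DELETE statement
    // expected to remove at most batchSize rows per call (e.g. via a
    // LIMITed subselect), until a call removes fewer rows than batchSize.
    func deleteInBatches(ctx context.Context, db *sql.DB, deleteBatchQuery string, batchSize int) error {
        for {
            res, err := db.ExecContext(ctx, deleteBatchQuery, batchSize)
            if err != nil {
                return err
            }
            affected, err := res.RowsAffected()
            if err != nil {
                return err
            }
            if affected < int64(batchSize) {
                return nil
            }
        }
    }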
Change-Id: Ic0c1180528c27952e19c431fc9cc327292a10a5f
Add a Payments method to the payments DepositWallets interface.
Exposes a payments retrieval API for a particular wallet to
other systems such as the console billing API.
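A sketch of the interface shape; the Address and WalletPayment types and
the parameter list are placeholders, not the actual payments package
definitions:

    package payments

    import "context"

    // Address and WalletPayment stand in for the real types; only the
    // newly added Payments method is shown.
    type Address []byte

    type WalletPayment struct {
        Wallet Address
        Amount int64
    }

    // DepositWallets exposes deposit wallet operations.
    type DepositWallets interface {
        // ... existing wallet methods elided ...

        // Payments returns payments received by a particular deposit
        // wallet, so other systems (e.g. the console billing API) can
        // list them.
        Payments(ctx context.Context, wallet Address, limit int, offset int64) ([]WalletPayment, error)
    }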
Change-Id: Ifcb3a35514aab50be00f6360007954980b5d8b38
Use DownloadSelectionCache to avoid querying the database for every
download.
This change only addresses downloads from users. The download selection
cache is not currently used for audit and repair.
Change-Id: I96a49e121dac0b4204f97592a63131edabd73fb5
- Created screen for billing overview tab
- Added total estimated charges and balance
- Added breakdown of individual projects
- Added links to billing history and payment methods
- Made the screen mobile responsive
Committing go.mod into node_modules is not sufficient, because npm install
quite often wipes it out. Similarly, running npm install locally
removes it, leaving the git state dirty.
Rather than committing these files, add them after running npm install.
Change-Id: Iaf21a9c6e198dc31fe50345ec5dee85b44617176
This is just a cleanup change to unblock libuplink so it can reorganize
types which are aliases of storj types.
Change-Id: Id3edf13f1b0aef52d7606d545aa7a6594cf8d13f
Add a timeout to npm install and increase the logging level.
npm install still sometimes takes too long and it's not clear why;
verbose logging is not sufficient.
Change-Id: Ib72f9823f30c9744562e279c2a5481f096e38128
* Implemented tabs and routing
* Added feature flag
* Added billing history buildout
* Updated table population to use real data
Story: https://github.com/storj/storj/issues/4633
Co-authored-by: cl-mitch <mitch.george@compozelabs.com>
The npm builds still fail; however, `--timing` does not provide
sufficient data to debug the situation.
Change-Id: I7e618ba8cac775748ebea6145cd5c180d2dc7883
Don't abbreviate multinode in the command help message because there
isn't a need for it and the abbreviation isn't clear at all.
Change-Id: I7a1f2be6ae1f7d4b287c18c48b22c630549b731f
`npm ci` deletes the node_modules directory, which also removes go.mod
from that folder.
Add `--loglevel timing` so we can debug install slowness whenever it
happens.
Change-Id: Ide613c4124bfdca9ae978876b2deed8abf86f987
Ask for the encryption passphrase when the user tries to navigate inside a bucket using the following flow: Buckets -> BucketDetails.
We don't ask for the passphrase if the user uses this flow: FileBrowser -> BucketDetails.
Don't allow an empty passphrase in the open bucket modal.
Fetch buckets using our API after the new bucket creation flow (instead of an S3 fetch).
Change-Id: Ia0894d6bb4a764c4ff0974fb16ed89bb82699807
Add go.mod to the node_modules folder so the Go compiler doesn't
need to scan the node_modules directories for any Go code.
Change-Id: I747909416490c847d6b4bfa3438fea66660fcd53
It seems the tests relied on time.Now(), which might cause some
discrepancies in calculations. Use a single fixed time rather than
recalculating time.Now().
As a side fix, remove the "Test" prefix from t.Run names; it is unnecessary.
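A minimal sketch of the idea, with an invented calculation under test;
note the t.Run names also drop the "Test" prefix:

    package example_test

    import (
        "testing"
        "time"
    )

    // expiresWithin is a made-up function under test that depends on "now".
    func expiresWithin(now, expiration time.Time, d time.Duration) bool {
        return expiration.Sub(now) <= d
    }

    func TestExpiresWithin(t *testing.T) {
        // Capture one fixed "now" instead of calling time.Now() inside
        // every assertion, so all calculations use exactly the same instant.
        now := time.Now()

        t.Run("inside window", func(t *testing.T) {
            if !expiresWithin(now, now.Add(time.Hour), 2*time.Hour) {
                t.Fatal("expected expiration within window")
            }
        })
        t.Run("outside window", func(t *testing.T) {
            if expiresWithin(now, now.Add(3*time.Hour), 2*time.Hour) {
                t.Fatal("expected expiration outside window")
            }
        })
    }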
Change-Id: I1de903fcf0fcf46fc8e3acf2463e17239b8e3cc6
The MinDownloadTimeout of 950ms and the delay of 1s were quite close, possibly
causing flaky behavior in TestVerifierSlowDownload.
Change-Id: I4f6c1554a118b21427357642abe39986fd0af38d
Classify errors related to invalid tokens for activating user accounts
so they return a 400 status code rather than a 500 status code.
Don't log all the errors at "error" level; only internal server errors
are logged at "error" level, while the rest are logged at "debug" level,
because they pollute the production satellite logs with misleading
errors.
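A rough sketch of the intended split, assuming a zeebo/errs class for the
token errors and zap for logging; the names are illustrative, not the
actual console API code:

    package consoleapi

    import (
        "net/http"

        "github.com/zeebo/errs"
        "go.uber.org/zap"
    )

    // ErrInvalidActivationToken marks errors caused by a bad or expired
    // account-activation token (a client error, not a server fault).
    var ErrInvalidActivationToken = errs.Class("invalid activation token")

    // respondWithError is an illustrative helper: token errors become 400
    // and are logged at debug level; everything else is a 500 logged at
    // error level.
    func respondWithError(log *zap.Logger, w http.ResponseWriter, err error) {
        if ErrInvalidActivationToken.Has(err) {
            log.Debug("account activation failed", zap.Error(err))
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        log.Error("internal server error", zap.Error(err))
        http.Error(w, "internal server error", http.StatusInternalServerError)
    }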
Change-Id: Id2bd737edba8550ce08965b51b8bf2540bd13ca4
Previously, copying an object to its ancestor location (copy of copy)
broke the object and all copies.
This change fixes that by calling the existing delete method rather than
a custom one when there is an existing object at the copy destination.
The check for an existing object at the destination has been moved to an
earlier point in FinishCopy.
metabase.DeleteObject exposes a transaction parameter so that it can be
reused within metabase.
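A sketch of what exposing the transaction parameter looks like; the
signature and the statement are placeholders, not metabase's actual API:

    package metabase

    import (
        "context"
        "database/sql"
    )

    // deleteObjectInTx is illustrative: because the transaction is a
    // parameter, FinishCopy can reuse its own transaction to remove an
    // existing object at the copy destination instead of issuing a
    // separate custom delete.
    func deleteObjectInTx(ctx context.Context, tx *sql.Tx, objectKey string) error {
        // placeholder statement; the real metabase query is more involved
        _, err := tx.ExecContext(ctx, `DELETE FROM objects WHERE object_key = $1`, objectKey)
        return err
    }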
Closes https://github.com/storj/storj/issues/4707
Uplink test at https://review.dev.storj.io/c/storj/uplink/+/7557
Change-Id: I418fc3337fa9f30146ccc1db456af168ae41c326
- Instead of closing over the outer err variable, potentially
overwriting some errors, declare local variables (see the sketch below).
- Double check that we got the number of rows we expected, and return an
error otherwise. This prevents a possible source of inserting
bogus rows into the database.
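A minimal sketch of both points, using database/sql; table and column
names are placeholders:

    package example

    import (
        "context"
        "database/sql"
        "errors"
    )

    // insertRows declares fresh local err/res variables for each statement
    // (nothing writes to a shared outer err) and verifies the affected row
    // count before continuing.
    func insertRows(ctx context.Context, tx *sql.Tx, keys []string) error {
        for _, key := range keys {
            res, err := tx.ExecContext(ctx, `INSERT INTO items (key) VALUES ($1)`, key)
            if err != nil {
                return err
            }
            affected, err := res.RowsAffected()
            if err != nil {
                return err
            }
            if affected != 1 {
                // bail out rather than risk committing bogus rows
                return errors.New("unexpected number of rows inserted")
            }
        }
        return nil
    }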
Change-Id: I30662be2727afe0a90e4215a182fedc2648d1169
Part of the delete query caused a full table scan of segment_copies, which
slowed down the system. This change should have the same semantics but
improved performance.
Part of https://github.com/storj/storj/issues/4898
Change-Id: I4afe23df05467eafc9c91591f47a7251a0f3dd31
Show the free/pro badge in the collapsed nav sidebar.
Use a getter to decide which badge to show instead of calculating it once after the first render.
Fix the project overview label for smaller screens. The problem was with the new project dashboard route name.
Change-Id: I6ec340b14fe7cf11ba96a0d4ae6771e830c2ed94
Read the source object and write the destination object in the same
transaction, to prevent the object from being broken by a simultaneous
delete.
This is probably the root cause of the metainfo loop halting from
2022-06-21 onwards, where 2 objects lost their root_piece_id during
copying.
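A minimal sketch of the shape of the fix, assuming database/sql; the
queries are placeholders, not metabase's actual statements:

    package metabase

    import (
        "context"
        "database/sql"
    )

    // copyObject is illustrative: the source read and the destination
    // write share one transaction, so a concurrent delete of the source
    // cannot leave the destination copy without its root_piece_id.
    func copyObject(ctx context.Context, db *sql.DB, srcKey, dstKey string) error {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer func() { _ = tx.Rollback() }()

        var rootPieceID []byte
        // placeholder query: read the source object within the transaction
        err = tx.QueryRowContext(ctx,
            `SELECT root_piece_id FROM segments WHERE object_key = $1`, srcKey).Scan(&rootPieceID)
        if err != nil {
            return err
        }

        // placeholder query: write the destination in the same transaction
        _, err = tx.ExecContext(ctx,
            `INSERT INTO segments (object_key, root_piece_id) VALUES ($1, $2)`, dstKey, rootPieceID)
        if err != nil {
            return err
        }
        return tx.Commit()
    }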
Part of https://github.com/storj/storj/issues/4930
Change-Id: I9c45d56a7bfb48ecd5f4906ee1cca42922901e90