I don't think it should matter for correctness whether this matches the
segment size or not, so I think there is something else wrong. However,
making this change seems to eliminate the "corruption when ulimit -n is
too low" problem we're seeing right now.
Change-Id: I232fe0d0a371b86ddf902e8c2d4778e140b2f1fc
Store the selected OS tab during the onboarding flow.
Use the stored value to show the correct tab during all the CLI steps.
Change-Id: I82e9af7e5dd3a7eaf503ed4f38789b7b0fb4aa84
When an API server was processing a graceful exit (the node was connected
and receiving lists of pieces to transfer) and the API server was shut
down, it incorrectly marked all pending graceful exits as complete. The
GE then either passed or failed depending on the ratio of successfully
transferred pieces to unsuccessful ones. In at least one case, _no_
pieces were transferred at all before the GE was marked a success.
Change-Id: I62cfab54a2296572c2e654eb460b62f772b7a60b
Attribution is attached to bucket usage, but that's more granular than
necessary for the attribution report. This change iterates over the
bucket attributions, parses the user agent, converts the first entry
to lower case, and uses that as the key to a map which holds the
attribution totals for each unique user agent.
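A minimal sketch of that aggregation; the row type and user-agent parsing
below are simplified stand-ins, not the satellite's real code:

```go
package main

import (
	"fmt"
	"strings"
)

// bucketAttribution is a stand-in for one row of bucket usage with attribution.
type bucketAttribution struct {
	UserAgent string
	ByteHours float64
}

func main() {
	rows := []bucketAttribution{
		{UserAgent: "Gateway-MT/1.0 uplink/1.50", ByteHours: 100},
		{UserAgent: "gateway-mt/1.1", ByteHours: 50},
	}

	// Totals keyed by the lower-cased first user-agent entry, so
	// "Gateway-MT/1.0" and "gateway-mt/1.1" aggregate under one key.
	totals := make(map[string]float64)
	for _, row := range rows {
		fields := strings.Fields(row.UserAgent)
		if len(fields) == 0 {
			continue
		}
		product := strings.ToLower(strings.SplitN(fields[0], "/", 2)[0])
		totals[product] += row.ByteHours
	}

	fmt.Println(totals) // map[gateway-mt:150]
}
```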
Change-Id: Ib2962ba0f57daa8a7298f11fcb1ac44a8bb97875
Implemented a new endpoint for project update using apigen.
Implemented a new service method compatible with the new generated API.
Change-Id: Ic0a7e0bbf3ea942275bd927d6e30cfb7e721e9c1
Migrate free-tier users to have default limits if their limits were set to 0.
They were affected by a bug in the Update user query.
Change-Id: I4c49c8d99b12dba2b9b0ab61b2175085976dcc95
We implemented the server-side copy feature and would like to
confirm that it is not affecting the accounting/tally service.
Resolves https://github.com/storj/storj/issues/4697
Change-Id: I3944ea52c0acc68107ec15c1911750dc7d947501
Now that we have both the storagenode and updater processes running
in a single docker container, we need a way to know which process
produced a given log entry.
This change adds a Process field to the log entries.
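A minimal sketch of what the tagging could look like with a zap logger;
the field name and wiring here are illustrative, not the exact code:

```go
package main

import "go.uber.org/zap"

func main() {
	base, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer base.Sync()

	// Each process gets a logger carrying a Process field, so every
	// entry in the shared container log identifies its origin.
	storagenodeLog := base.With(zap.String("Process", "storagenode"))
	updaterLog := base.With(zap.String("Process", "storagenode-updater"))

	storagenodeLog.Info("piece stored")
	updaterLog.Info("checking for a new version")
}
```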
Resolves https://github.com/storj/storj/issues/4648
Change-Id: I167b9ab65728a41136d264b5fe2c41bb64ed1785
Test case to verify that server-side copy doesn't negatively
affect graceful exit (GE).
Fixes https://github.com/storj/storj/issues/4699
Change-Id: I8c385767cca61499d46d9cb8de7318c56e5d7397
We had an issue with CRDB where a newer version forbids the type
of SQL query we were using. We disabled the CRDB bc tests until we
had a release without the problematic query. Now we can re-enable the
tests.
Change-Id: I275cbecebdcbfef587281f2daaf677d01860b23d
Added failed_login_count and login_lockout_expiration columns to the users table to track users' failed login attempts.
We want to prevent brute-forcing of user logins, and this is the first step.
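A hedged sketch of how these columns could drive a lockout check; the User
fields mirror the new columns, but the thresholds and surrounding logic are
hypothetical:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// User mirrors the two new columns.
type User struct {
	FailedLoginCount       int
	LoginLockoutExpiration time.Time
}

const maxFailedLogins = 3

func checkLogin(u *User, now time.Time, passwordOK bool) error {
	if now.Before(u.LoginLockoutExpiration) {
		return errors.New("account temporarily locked")
	}
	if !passwordOK {
		u.FailedLoginCount++
		if u.FailedLoginCount >= maxFailedLogins {
			// Too many failures: lock the account for a while.
			u.LoginLockoutExpiration = now.Add(5 * time.Minute)
		}
		return errors.New("invalid credentials")
	}
	// Successful login resets both columns.
	u.FailedLoginCount = 0
	u.LoginLockoutExpiration = time.Time{}
	return nil
}

func main() {
	u := &User{}
	now := time.Now()
	for i := 0; i < 4; i++ {
		fmt.Println(checkLogin(u, now, false))
	}
}
```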
Change-Id: I06b0b9f5415a1922e08cd9908893b2fd3c26bca0
Store the demo-bucket creation status in the browser's local storage so that the bucket isn't created every time the user enters the Buckets screen.
This only takes effect after the first use in a given browser.
So if the user removes the bucket in Chrome and opens the Buckets screen in Firefox, the demo bucket will be created there, but only once.
Change-Id: I9f5811d97ab6208c5f757ededcd7c36cd864795c
Use the same query when deleting a single object or multiple objects.
I have chosen not to deduplicate the row "scan" logic because keeping
it separate is less complicated and deduplicating would expand this
change into other parts of the codebase.
Part of https://github.com/storj/storj/issues/4700
Change-Id: I7a958c78c903b2bddd72ca217971f7e8e02a0d0c
Testplanet limits the execution of a single test case to 3 minutes.
This change adds a Timeout field to Testplanet's config, so test
cases can configure their own timeout. This is helpful when executing a
larger 3rd-party test suite on top of Testplanet.
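A hedged sketch of a test that raises the limit via the new field; the
Timeout field follows this change, while the surrounding usage mirrors
typical Testplanet tests and may differ in detail:

```go
package example

import (
	"testing"
	"time"

	"storj.io/common/testcontext"
	"storj.io/storj/private/testplanet"
)

func TestLargeSuite(t *testing.T) {
	testplanet.Run(t, testplanet.Config{
		SatelliteCount:   1,
		StorageNodeCount: 4,
		UplinkCount:      1,
		// Allow more than the default 3 minutes for a larger suite.
		Timeout: 10 * time.Minute,
	}, func(t *testing.T, ctx *testcontext.Context, planet *testplanet.Planet) {
		// run the 3rd-party suite against planet here
	})
}
```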
Change-Id: Ibbf7c5ffdc0a9e723e7e28b885eac084f04c6ca1
The initial space used for pieces is calculated rather than retrieved
from storage nodes, and at the end of the test we also delete the
copies that became ancestors, to verify that all data was removed
from the storage nodes.
Change-Id: I9804adb9fa488dc0094a67a6e258c144977e7f5d
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage with byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees, so return segment-hours and object-hours too.
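A small worked example of the difference, with illustrative numbers:

```go
package main

import "fmt"

func main() {
	// Three hourly rollup rows for one bucket: bytes stored in each hour.
	rows := []float64{100, 100, 400}

	var byteHours float64
	for _, bytes := range rows {
		// Each row covers one hour, so summing gives byte-hours.
		byteHours += bytes
	}

	fmt.Println("average bytes per hour:", byteHours/float64(len(rows))) // 200
	fmt.Println("byte-hours (what we charge for):", byteHours)           // 600
}
```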
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
We implemented the server-side copy feature and would like to
confirm that it is not affecting GC.
Fixes https://github.com/storj/storj/issues/4696
Change-Id: Id391f0badf5fce51f9910f0df732d477b07fa7ac
S3 allows overwriting an object when using server-side copy.
This change makes overwriting the destination part of the atomic
server-side copy operation, so that if the copy fails, the old object
is still available.
All segments of the existing destination are deleted. If the destination
object is an ancestor of another object, a new ancestor is promoted.
Fixes https://github.com/storj/storj/issues/4607
Change-Id: I85d1250850bb71867586ac230c8275a0cc1b63c3
The new blueprint describes a design that gives the satellite greater
control over sessions authorized to use the web app.
Change-Id: I5af227aef6d6b096167e2e8a60f1e8214c2cd71f
Implemented a new endpoint for project creation using apigen.
Implemented a new service method compatible with the new generated API.
Change-Id: I2bae22c8b046f21ec5bb6522f09b9c4e74bdba0c
We've had a lot of issues with alpine, and there's currently a broken
network issue on alpine for users running on the RPI arm32 architecture
which requires a workaround before docker is able to sync time between
the host and the container: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#time64_requirements.
Since we're switching the base image of the storagenode to debian,
it's best to switch the base image of all our docker images to
debian as well for consistency; this reduces drift across them and keeps
the push target consistent.
Change-Id: If3adf7a57dc59f19ef2221b892f340d919798fc5
Indentation is off in the push-storagenode-images target in the Makefile, which is possibly causing the build to fail.
Change-Id: Ia25b8f700f49c551e3f201c988747a83e04ad83c
When deleting a bucket, make sure that object copies in other buckets are
promoted to a new ancestor and left in a working state.
Closes https://github.com/storj/storj/issues/4591
Change-Id: I019d916cd6de5ed51dd0dd25f47c35d0ec666af6
This change introduces a DEVELOPING.md file, commonly seen across
open source repositories as a way to show newcomers how to get started
contributing code.
Change-Id: Idb92231e025250a4c6d2fc789cab5f78ca87086a
To save load on DNS servers, the repair code first tries to dial the
last known good ip and port for a node, and then falls back to a DNS
lookup only if we fail to connect to the last known good ip and port.
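A hedged sketch of that fallback; dialNode and the node record fields are
stand-ins, not the repairer's actual code:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

type nodeRecord struct {
	LastIPPort string // last known good address, e.g. "192.0.2.10:28967"
	Address    string // DNS name, e.g. "node.example.test:28967"
}

func dialNode(ctx context.Context, node nodeRecord) (net.Conn, error) {
	dialer := &net.Dialer{Timeout: 10 * time.Second}

	// Try the last known good ip:port first to avoid a DNS lookup.
	if node.LastIPPort != "" {
		if conn, err := dialer.DialContext(ctx, "tcp", node.LastIPPort); err == nil {
			return conn, nil
		}
	}

	// Fall back to the DNS name only if the direct dial failed.
	return dialer.DialContext(ctx, "tcp", node.Address)
}

func main() {
	conn, err := dialNode(context.Background(), nodeRecord{
		LastIPPort: "192.0.2.10:28967",
		Address:    "node.example.test:28967",
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
}
```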
However, it looks like we are seeing errors during the client stream
Close() call (probably due to quic-go code), and those are classified
the same as errors encountered during Dial. The repairer code sees this
error, assumes that we failed to contact the node, and retries, but
since we did actually succeed in connecting the first time around, this
results in submitting the same order limit (with the same serial number)
to the storage node, which (rightfully) rejects it.
So together with change I055c186d5fd4e79560f67763175bc3130b9bc7d2 in
storj/uplink, this should avoid the double submission and avoid dinging
nodes' suspension scores unfairly.
See https://github.com/storj/storj/issues/4687.
Also, moving the testsuite directory check up above check-monkit in the
Jenkins Lint task, so that a non-tidy testsuite/go.mod can be recognized
and handled before everything breaks weirdly and seemingly randomly
later on.
Change-Id: Icb2b05aaff921d0af6aba10e450ac7e0a7bb2655
Moved invalid email testing to a separate test.
Made all the emails used have a .test domain.
Added links to regex resources.
Change-Id: I26920ba7360064528256a6aeaea947bbe56ef618
Implemented account management API key authentication.
Extended the IsAuthenticated service method to support both cookie and API key authorization.
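A hedged sketch of accepting either credential; the cookie name, header
format, and validation are illustrative, not the console service's actual
implementation:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"strings"
)

func isAuthenticated(r *http.Request) error {
	// First preference: an existing web session cookie.
	if cookie, err := r.Cookie("session"); err == nil && cookie.Value != "" {
		return nil // validate the session token here
	}

	// Otherwise accept an account management API key.
	auth := r.Header.Get("Authorization")
	if key := strings.TrimPrefix(auth, "Bearer "); key != auth && key != "" {
		return nil // validate the API key here
	}

	return errors.New("unauthorized")
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "https://satellite.example.test", nil)
	req.Header.Set("Authorization", "Bearer my-api-key")
	fmt.Println(isAuthenticated(req)) // <nil>
}
```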
Change-Id: I6f2d01fdc6115cb860f2e49c74980a39155afe7e
This change has two purposes. The first is to avoid a DB call when the
source and destination buckets are the same.
The second is to return the bucket-not-found error in the correct order:
if the source and destination buckets are different, we now check the
source first and the destination later. Currently the first error we
get is about the non-existent destination bucket.
Because of this change we stop putting the bucket placement
into the satellite stream id, but it's not needed anyway, as we don't use
this value with the finish move/copy object methods.
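A hedged sketch of the two behaviors; the bucket lookup and types are
stand-ins for the real metainfo code:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

var errBucketNotFound = errors.New("bucket not found")

// buckets is a stand-in for the bucket DB.
type buckets map[string]bool

func (b buckets) exists(ctx context.Context, name string) bool { return b[name] }

func checkBuckets(ctx context.Context, db buckets, source, destination string) error {
	// Check the source first so a missing source is reported
	// before a missing destination.
	if !db.exists(ctx, source) {
		return fmt.Errorf("source %q: %w", source, errBucketNotFound)
	}
	// Same bucket: skip the second DB call entirely.
	if source == destination {
		return nil
	}
	if !db.exists(ctx, destination) {
		return fmt.Errorf("destination %q: %w", destination, errBucketNotFound)
	}
	return nil
}

func main() {
	db := buckets{"photos": true}
	fmt.Println(checkBuckets(context.Background(), db, "photos", "photos")) // <nil>
	fmt.Println(checkBuckets(context.Background(), db, "missing", "photos"))
}
```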
Change-Id: I0f7b3ba604d53c722e8fa4d7a37843a69d02bebd
Uplink is fixed, and now we should always get both the key and the nonce
set, or both empty.
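A minimal sketch of the invariant being relied on; names are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// validateKeyNonce enforces that the encrypted key and its nonce are
// either both present or both empty.
func validateKeyNonce(encryptedKey, keyNonce []byte) error {
	if (len(encryptedKey) == 0) != (len(keyNonce) == 0) {
		return errors.New("encrypted key and nonce must both be set or both be empty")
	}
	return nil
}

func main() {
	fmt.Println(validateKeyNonce([]byte("key"), []byte("nonce"))) // <nil>
	fmt.Println(validateKeyNonce([]byte("key"), nil))             // error
}
```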
Fixes https://github.com/storj/storj/issues/4646
Change-Id: I65dca2d4d5a10787c2fecad39e301121f1ae242a
In the migration that copies the name corresponding to the partner id
into the user agent, part of the requirement was to migrate the partner
id itself if there was no associated partner name. This turned out to
not be so good: when we parse the user_agent column later, it returns an
error if the user agent is one of these UUIDs.
Change-Id: I776ea458b82e1f99345005e5ba73d92264297bec
The latest CRDB version didn't work with our server-side copy deletion
query, and we had to rewrite it.
Fixes https://github.com/storj/storj/issues/4655
Change-Id: I66d5c4abfc3c3f53dea8bc01ae11cd2b020e4289