* better error handling when the Exists method is not available on the SN
* more efficient processing of the response from the Exists method
Change-Id: I6d61c09473e9f5ab76a4601720e8bd520767f4c2
This change creates a new independent process, the 'auditor', comparable
to the repairer, gc, and api processes. This will allow auditors to be
scaled independently of the core.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I8a29eeb0a6e35753dfa0eab5c1246048065d1e91
Now that all the reverification changes have been made and the old code
is out of the way, this commit renames the new things back to the old
names. Mostly, this involves renaming "newContainment" to "containment"
or "NewContainment" to "Containment", but there are a few other renames
that have been promised and are carried out here.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: I34e2b857ea338acbb8421cdac18b17f2974f233c
Now that we are doing scalable piecewise reverifications, the code for
handling the old way of doing things (containment, pending audits,
reporting, testing) can now be removed.
Refs: https://github.com/storj/storj/issues/5230
Change-Id: Ief1a75f423eff682e8f3d57804e343b3409a6631
This adds the capability to the segment-verify tool of checking all
pieces of every indicated segment.
Pieces which could not be accessed (i.e. we couldn't get a single
byte from them) are recorded in a csv file.
I haven't been able to test this in any very meaningful way, yet, but I
am comforted by the fact that the worst things it could possibly do are
(a) download pieces too many times, and (b) miss downloading some
pieces.
Change-Id: I3aba30921572c974993363eb36d0fd5b3ae97907
Provides the `segment-verify run buckets` command for verifying segments within a list of buckets.
The bucket list is a csv file with `project_id,bucket_name` rows to be checked.
https://github.com/storj/storj-private/issues/101
Change-Id: I3d25c27b56fcab4a6a1aebb6f87514d6c97de3ff
Because --readonly defaults to true, passing something like
--disallow-deletes=false would not actually update that
value, because the readonly flag would override it. This makes it
so that the --disallow-* flags override the --readonly and
--writeonly flags.
Also fixes some minor formatting issues with share like an
extra space after the "Public Access:" entry.
Simplifies the handling of the explicit "none" by making the
flags for the dates optional and using nil to signify that
the value was left unset.
Bump the go.mod to go1.18 to enable the use of generics and
add a small generic function. This can easily be backed out
if it causes problems.
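A sketch of both pieces, with illustrative names rather than the actual
uplink flag wiring:

```go
// Hypothetical sketch of the intended precedence: the specific
// --disallow-* flags win over the coarse --readonly/--writeonly
// flags. Names are illustrative, not the actual uplink code.
func disallowDeletes(readonly bool, disallowDeletes *bool) bool {
	if disallowDeletes != nil { // explicit flag always wins
		return *disallowDeletes
	}
	return readonly // otherwise fall back to the coarse flag
}

// The kind of small generic helper go1.18 enables: turning an
// optional flag into a pointer, where nil signifies "left unset".
func optional[T any](set bool, value T) *T {
	if !set {
		return nil
	}
	return &value
}
```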
Change-Id: I1c5f1321ad17b8ace778ce55561cbbfc24321a68
NewContainment will replace Containment later in this commit chain, but
for now it is not yet being used.
NewContainment will allow a node to be contained for multiple pending
reverify jobs at a time. It is implemented by way of the reverify queue.
Refs: https://github.com/storj/storj/issues/5231
Change-Id: I126eda0b3dfc4710a88fe4a5f41780618ec19101
The default 'info' level for the storagenode will dump dozens of
lines every second. This change adds the ability to configure
the log.level argument at run time using the LOG_LEVEL env variable.
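A sketch of how such a run-time override could look with zap (which the
storagenode uses); the actual process wiring differs:

```go
package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// logLevel returns the level from the LOG_LEVEL environment variable,
// falling back to the given default when unset or unparsable.
func logLevel(def zapcore.Level) zapcore.Level {
	s := os.Getenv("LOG_LEVEL")
	if s == "" {
		return def
	}
	var lvl zapcore.Level
	if err := lvl.UnmarshalText([]byte(s)); err != nil {
		return def
	}
	return lvl
}

func main() {
	cfg := zap.NewProductionConfig()
	cfg.Level = zap.NewAtomicLevelAt(logLevel(zapcore.InfoLevel))
	logger, _ := cfg.Build()
	defer logger.Sync()
	logger.Info("storagenode starting")
}
```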
Co-authored-by: Clement Sam <clementsam75@gmail.com>
Uplink doesn't have a `save` command; however, it's referred to in an
error message that's returned when the `access register` command is
executed without having any default access configured.
The correct command to mention is `import`.
Change-Id: Ia2092d02965737f421683fc98c52a51c9529b86e
Reputation updates during repair currently consume a lot of database
resources. Sometimes increasing the rate of repair is more important
than auditing a node based on whether or not it has the correct piece
during repair. That is the job of the audit service.
This commit is to implement an intermediate solution from this issue: https://github.com/storj/storj/issues/5089
This commit does not address the more in-depth fix discussed here: https://github.com/storj/storj/issues/4939
Change-Id: I4163b18d78a96fadf5265789fd73c8aa8def0e9f
If we are processing a list of segments (csv), we should not stop if one
of the segments is not found in the DB.
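Roughly, the processing loop goes from failing fast to logging and
continuing; `ErrSegmentNotFound` and the types here are stand-ins, not
the metabase API:

```go
import (
	"errors"
	"log"
)

// entry is a simplified csv row; ErrSegmentNotFound stands in for the
// metabase's "not found" error.
type entry struct {
	StreamID string
	Position uint64
}

var ErrSegmentNotFound = errors.New("segment not found")

// processAll skips entries whose segment is missing from the DB
// instead of aborting the whole csv run; other errors still abort.
func processAll(entries []entry, repair func(entry) error) error {
	for _, e := range entries {
		switch err := repair(e); {
		case errors.Is(err, ErrSegmentNotFound):
			log.Printf("segment %s/%d not found, skipping", e.StreamID, e.Position)
		case err != nil:
			return err
		}
	}
	return nil
}
```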
Change-Id: I720f85dc7601c2ca77032e20c1577de55092bd9b
The current option is to pass a stream id and position as input, but
that's not very efficient when we have a long list of segments to
repair. This change adds an option to read a whole csv file and process
each entry one by one.
If the command has a single argument, it is treated as the csv file
location; if it has two arguments, they are parsed as stream id and
position.
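A sketch of that dispatch, with `repairFromCSV` and `repairOne` as
hypothetical helpers:

```go
import (
	"errors"
	"strconv"

	"storj.io/common/uuid"
)

// runRepairSegment dispatches on the argument count.
func runRepairSegment(args []string) error {
	switch len(args) {
	case 1: // single argument: treat it as a csv file location
		return repairFromCSV(args[0])
	case 2: // two arguments: parse stream id and position
		streamID, err := uuid.FromString(args[0])
		if err != nil {
			return err
		}
		position, err := strconv.ParseUint(args[1], 10, 64)
		if err != nil {
			return err
		}
		return repairOne(streamID, position)
	default:
		return errors.New("expected <csv-file> or <streamid> <position>")
	}
}
```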
Change-Id: I1e91cf57a794d81d74af0091c24a2e7d00d1fab9
Implements logic for the satellite command to repair a single segment.
The segment will be repaired even if it's healthy; this is not checked
during the process. As part of the repair, the command will download
the whole segment into memory and try to reupload the segment to a
number of new nodes equal to the existing number of pieces. After a
successful upload, the new pieces completely replace the existing pieces.
Command:
satellite repair-segment <streamid> <position>
https://github.com/storj/storj/issues/5254
Change-Id: I8e329718ecf8e457dbee3434e1c68951699007d9
This patch makes it possible to use `uplink share` in a test environment (like storj-up) where the authservice doesn't have a full secure endpoint.
This is supposed to be an undocumented feature (no flag, just a custom prefix) to avoid any confusion for regular users.
Change-Id: I256aefc944066e52c72224e7b6f1a593b5bc57f7
Add nodeevents.DB to satellite overlay service so we can insert node
events into the nodeevents DB.
Change-Id: I642c0ccc9941ecdb08cb22d5c8cf701959a55156
The new flag 'MultipleVersions' was not correctly passed from the
metainfo configuration to the metabase configuration. Because the
configuration was set correctly for unit tests, we didn't catch it, and
the issue was found while testing on the QA satellite.
This change reduces the number of places where new metabase flags need
to be propagated from the metainfo configuration, to avoid problems
with setting new flags in the future.
Fixes https://github.com/storj/storj/issues/5274
Change-Id: I74bc122649febefd87f665be2fba628f6bfd9044
The current deployment strategy requires that the GC bloomfilter generation process executes only once and exits.
Change-Id: I952991f126596aa165d1f2e9fce6f8548c21bdba
It's quite likely to hit some time limit; rather than giving up
immediately, let's retry after the throttle amount.
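The retry shape, sketched with an assumed fixed throttle duration and
attempt bound:

```go
import (
	"context"
	"time"
)

// withRetry retries do after waiting the throttle amount, instead of
// giving up on the first time-limit error; a bounded, illustrative shape.
func withRetry(ctx context.Context, throttle time.Duration, attempts int, do func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = do(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(throttle):
		}
	}
	return err
}
```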
Change-Id: I20944b058d771f5d4bfa0eea7a2c26cefcd74739
We want to send emails to SNOs. Node status changes go through the
overlay service, so it's a good place to add the mail service.
Add the mailservice.Service, satellite address, and satellite name to
overlay service. Also add feature flag --overlay.send-node-emails
Change-Id: I3bd2cb3bf22f9724954ce2374f8b651b902b3a24
Add getSalt to projects api. Add action, GET_SALT, on Store
Projects module to make the api request and return the salt
string everywhere in the web app that generates an access grant.
The Wasm code which is used to create the access grant has been
changed to decode the salt as a base64 encoded string. The names
of the function calls in the changed Wasm code have also been
changed to ensure that access grant creation fails if JS access
grant worker code and Wasm code are not the same version.
https://github.com/storj/storj-private/issues/64
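The wasm-side decode step, sketched; whether StdEncoding or URLEncoding
is used is an assumption:

```go
import "encoding/base64"

// decodeSalt shows the new explicit decode step: the salt now arrives
// as a base64 string rather than raw bytes.
func decodeSalt(salt string) ([]byte, error) {
	return base64.StdEncoding.DecodeString(salt)
}
```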
Change-Id: Ia2bc4cbadad84b066ca1882b042a3f0bb13c783a
there's not really anything better to send. uplinks have
rotating node ids on each startup, so that's not right
here. i don't think anyone will use instance id for
uplinks so let's just fold and send nothing.
Change-Id: I2511605e95eba1816d662d385b28d5feab8c4eb0
The flags weren't properly loading from config.
The code assumed that every node that's online for downloading also has
data uploaded to it -- which is not true.
Change-Id: Ifd65a47b9eca5b4841231928244fab17acbde6fb
This patch addresses the following issues:
1. Running the full migration in cockroachdb is quite slow. We already have an approach for unit tests to start from the latest snapshot. This patch makes it possible to use it for integration tests.
2. Migration requires executing a separate command, which makes it hard to run the application in containerized test environments (like storj-up) or from an IDE. This patch introduces a hidden flag to run the migration.
3. Test user creation is painful. We do it by calling GraphQL + the admin API. Providing an option to create a test user makes the integration tests significantly simpler (especially as the projectID -> access grant can be predictable)
Change-Id: I61010728727b490ea6aac32620f2da0484966727
Add an extra parameter to the pay-invoices command that can be used to restrict which invoices will have a payment attempted in stripe. The parameter should be of the form MM/DD/YYYY, and any invoices created on or after that date will have token balances applied and be processed for payment according to stripe subscription settings.
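For reference, a minimal sketch of parsing that format with Go's
reference-time layout (the helper name is illustrative):

```go
import "time"

// parseCutoff parses the MM/DD/YYYY flag value.
func parseCutoff(s string) (time.Time, error) {
	return time.Parse("01/02/2006", s)
}
```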
Change-Id: I5da5070d3ac97f45c05c02f2849254bdc44413c3
This change introduces the generate-invoices satellite billing
command whose functionality is equivalent to running
apply-free-coupons, prepare-invoice-records,
create-project-invoice-items, and create-invoices in order.
Invoice finalization must still be performed separately.
Change-Id: Ia3d80b95eef1f2776c38bd730ed731e42ec4c35e
allow multiple source paths and a single destination path.
this makes commands like `uplink cp foo* sj://bucket` work
as expected.
require at least one remote path when copying. this ensures
that users don't accidentally overwrite their local files
with other local files, which is almost never what they wanted
because they would just use cp.
Change-Id: I28948f4ff735d29db06de81fc8c2a15b9f4ee3f5
Due to a programming error it was possible to "share" without an expiry
implicitly. This pollutes the auth service database.
fixes https://github.com/storj/storj/issues/5188
Change-Id: I04a345662c26948c6be6c1ae6bee3b5a583bebc4
* Disallow too large a listing limit, which would cause a lot of memory to
be consumed.
* Fix throttling logic and add a test.
* Fix read error handling; depending on the concurrency it can return
the NotFound status either in the Read or Close.
Change-Id: I778f04a5961988b2480df5c7faaa22393fc5d760
Rather than using Invoice Items to account for storjscan token payments, credit notes will be used and applied to the user's finalized invoice. This credit note will reduce the amount due on the user's invoice based on the amount of storj token balance the user has on the satellite. Applying credit notes to a finalized invoice also requires that the invoice not be automatically paid when finalized. Therefore, a new command (pay-invoices) was added to initiate payment for users' invoices.
Change-Id: Ie539375a10e842e3cb64bf0140834bbab0774f54
They are needed for the segment-verify tool.
Also rename some of the conversion methods to make clear
which of them have side effects.
Change-Id: Ie9a0952548e9ed5068c7a30c2fd2134b07139bca
This adds parts for:
1. iterating over the segments
2. using an interface for writing the segments
3. stubs for handling deleted segments
Change-Id: I76a17cac6deb0b6c042a8ab7c4155a890db9da84
This patch fixes the behavior of the distributed tracing reporter.
1. For developer build we don't append the date
* We don't need to separate service instances in jaeger (search by trace ID)
* It's usually 0000-00-000 anyway as release.sh is not used for dev builds
2. Tracing ID MUST be unique
* Instead of trusting the user to set a unique value (how can they do it?), we generate a random number
3. To make it possible to find the trace, there is a new flag to print out the generated tracing ID
4. Monkit `remoteTrace` call is replaced with normal monkit Task.
* remoteTrace call assumes that we have a parent span in another service (which is already sent to the server)
* Here we must send out the parent span, as this is the beginning of the trace
5. We properly close the Jaeger UDP collector, and we wait until remaining messages are sent out
Change-Id: Iabf5abf25f4f20881188f88edcbadca95ac74927
Implements creating a roughly load-balanced set of batches
that can be used to make multiple requests.
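A sketch of one way to build such batches greedily; integer weights and
the fixed batch count are simplifications, not the actual implementation:

```go
// balance appends each item to the currently lightest batch, yielding
// roughly load-balanced batches of item indexes.
func balance(weights []int, batches int) [][]int {
	out := make([][]int, batches)
	load := make([]int, batches)
	for item, w := range weights {
		lightest := 0
		for b := 1; b < batches; b++ {
			if load[b] < load[lightest] {
				lightest = b
			}
		}
		out[lightest] = append(out[lightest], item)
		load[lightest] += w
	}
	return out
}
```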
Change-Id: I349b276176dcb8ba9163e7e06a94509d73fa5ddc
This is to update all projects to have a public_id if they do not have
one.
github issue: https://github.com/storj/storj/issues/4861
Change-Id: Icfa42b62e15ca75d3c04a0aab48a3c3b0f3a9d6e
We would like to have a separate process/command to collect bloom
filters from a source different than the production DBs. Such a process
will use the segment loop to build bloom filters for all storage nodes
and will send them to a Storj bucket. This change extends the satellite
binary with an appropriate command.
A new GC service for collecting bloom filters will be a subsequent
change.
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: Ibc03e119c340919cf468fc1f5a4f3d187bb3a5a1
As a reminder: the latest clingy removed the requirement of having a custom context (which made the usage of context.WithValue harder) and uses a simple context instead.
Clingy saves stdin/stdout/stderr to the context (earlier to a separate context type) to make them available for unit testing.
Change-Id: I8896574f4670721de43a577cd4b35952e3b5d00e
Decode the JSON input string from its corresponding Unicode
encoding when the special byte order mark is present.
Fixes https://github.com/storj/storj/issues/4950
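A minimal sketch of BOM handling before JSON decoding, assuming only
UTF-8 and UTF-16 inputs; the actual change may differ:

```go
import (
	"bytes"
	"encoding/binary"
	"unicode/utf16"
)

// normalizeBOM strips a UTF-8 BOM and converts UTF-16 (LE/BE) input,
// detected via its BOM, to UTF-8 before JSON decoding.
func normalizeBOM(data []byte) []byte {
	switch {
	case bytes.HasPrefix(data, []byte{0xEF, 0xBB, 0xBF}): // UTF-8 BOM
		return data[3:]
	case bytes.HasPrefix(data, []byte{0xFF, 0xFE}): // UTF-16 LE BOM
		return utf16ToUTF8(data[2:], binary.LittleEndian)
	case bytes.HasPrefix(data, []byte{0xFE, 0xFF}): // UTF-16 BE BOM
		return utf16ToUTF8(data[2:], binary.BigEndian)
	}
	return data
}

func utf16ToUTF8(b []byte, order binary.ByteOrder) []byte {
	u := make([]uint16, 0, len(b)/2)
	for i := 0; i+1 < len(b); i += 2 {
		u = append(u, order.Uint16(b[i:]))
	}
	return []byte(string(utf16.Decode(u)))
}
```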
Change-Id: If91bac22590c89b35c58bf54f6d3bdb8a67d7a4f
We couldn't use environment variables safely to configure the storagenode since we introduced the embedded updater.
For example, STORJ_DEBUG_ADDR=localhost:11111 would try to set debug port 11111 for both the storagenode and storagenode-updater, causing a port conflict.
This small change makes it possible to configure storagenode-updater with STORJUPDATER_... environment variables.
Tested by creating a custom image and installing it on my own storage node.
Change-Id: I6b0a601a4dc63d2d1ff3c191ae89981434e55c30
Sessions now expire after a much shorter amount of time, requiring
clients to issue API requests for session extension. This is handled
behind the scenes as the user interacts with the page, but once session
expiration is imminent, a modal appears which informs the user of his
inactivity and presents him with the choice of logging out or preserving
his session.
Change-Id: I68008d45859c814a835d65d882ad5ad2199d618e
When encoding structs into JSON, byte slices are marshalled as base64
encoded strings using base64.StdEncoding.Encode():
ea9c3fd42d/src/encoding/json/encode.go (L833-L861)
We, however, expect API Secrets to be encoded as base64URL, so when
a marshalled secret (with byte slice type) is added to the multinode
dashboard, it fails with `illegal base64 data at input byte XX`.
This change switches the type of the APISecret field in the
multinode/nodes.Nodes struct to the multinodeauth.Secret type instead
of []byte.
multinodeauth.Secret is extended with custom MarshalJSON and
UnmarshalJSON methods which implement the json.Marshaler and
json.Unmarshaler interfaces, respectively.
Resolves https://github.com/storj/storj/issues/4949
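A simplified sketch of those methods, assuming a fixed-size secret;
details of multinodeauth.Secret are trimmed:

```go
import (
	"encoding/base64"
	"encoding/json"
)

// Secret round-trips through JSON as a base64url string instead of
// Go's default base64std encoding for []byte.
type Secret [32]byte

func (s Secret) MarshalJSON() ([]byte, error) {
	return json.Marshal(base64.URLEncoding.EncodeToString(s[:]))
}

func (s *Secret) UnmarshalJSON(data []byte) error {
	var encoded string
	if err := json.Unmarshal(data, &encoded); err != nil {
		return err
	}
	decoded, err := base64.URLEncoding.DecodeString(encoded)
	if err != nil {
		return err
	}
	copy(s[:], decoded)
	return nil
}
```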
Change-Id: Ib14b5f49ceaac109620c25d7ff83be865c698343
Added setup of the access maker call into cmd_access_setup so that flags are used during the cmd call
Closes https://github.com/storj/storj/issues/4766
Change-Id: I0c75f224414099573b021b18b87d9e17192cecc5
It's possible that content was not being flushed from processes.
For now, ignore other process failures under storj-sim network test.
Once we get other processes stable, we can repropagate the error.
Change-Id: I01ed572d7c779ab6451124f1e24e3d1168b3ea79
this allows one to specify a trace id and cause any remote
spans to be sent up to wherever. it doesn't collect any
local traces.
Change-Id: Ia87e294bb276d966f9f3dbfbaf6e7916b1ec7af9
Some nodes were added to the nodes table due to a bug in the quic-based
storagenode contact code. This is a tool to clean up these nodes.
Delete with batch-size 1k seems to take ~400ms on local CockroachDB.
Change-Id: Ic0c1180528c27952e19c431fc9cc327292a10a5f
Don't abbreviate multinode in the command help message because there
isn't a need for it and the abbreviation isn't clear at all.
Change-Id: I7a1f2be6ae1f7d4b287c18c48b22c630549b731f
It seems the tests relied on time.Now(), which might cause some
discrepancies in calculations. Use a fixed time.Now() rather than
recalculating.
As a side fix, remove the "Test" prefix from t.Run. These are unnecessary.
Change-Id: I1de903fcf0fcf46fc8e3acf2463e17239b8e3cc6
The current pipelining to stdout is synchronous, so we don't get any
advantage from using the --parallelism flag. This change adds buffering
while writing to stdout. Each part is first read into the buffer
and flushed only when all data has been read from that part.
https://github.com/storj/uplink/issues/105
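The per-part buffering, sketched without the ordering and error
handling of the real code:

```go
import (
	"bytes"
	"io"
)

// writePart drains a part into a buffer first; only a fully read part
// is flushed to stdout, so parallel part downloads don't block on the
// synchronous writer.
func writePart(dst io.Writer, part io.Reader) error {
	var buf bytes.Buffer
	if _, err := io.Copy(&buf, part); err != nil {
		return err
	}
	_, err := buf.WriteTo(dst) // flush only when the part is complete
	return err
}
```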
Change-Id: I07bec0f4864dc4fccb42224e450d85d4d196f2ee
This change adds project ID and bucket name columns to the generated
partner attribution report. Attribution values are now summed based on
their project ID and bucket name in addition to their user agent.
Additionally, the command to generate the attribution report has been
modified to optionally include only certain user agents.
Change-Id: I61a1d854379134f26b31467d9e83a787beb451dd
The admin UI assets aren't inside of the `web` directory; they are
directly in the `satellite` one.
The invalid path caused storj-sim to generate a satellite
configuration with an invalid path to the Admin UI static assets,
so it didn't load the UI by default.
Change-Id: I49fb289377f51634057173690fbd8cf863ca9a9d
The main issue was that when one part copy failed while inside the
goroutine (limiter) and another part was still collecting src/dst parts,
it was possible to drop errors from the failed part copy. That was
possible because on failure the context was canceled, and if we were
still getting a part's src/dst, an error was returned immediately and
the error group with the errors from the goroutine was ignored.
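A sketch of the shape of the fix: always wait for the error group so
worker errors take precedence over the 'context canceled' seen while
collecting parts. The `next` callback is a stand-in for collecting
src/dst pairs:

```go
import (
	"context"
	"errors"

	"golang.org/x/sync/errgroup"
)

// copyParts keeps collecting work until next fails or runs out, then
// waits for the group so the workers' real errors aren't dropped.
func copyParts(ctx context.Context, next func(context.Context) (func() error, error)) error {
	group, gctx := errgroup.WithContext(ctx)
	var collectErr error
	for {
		work, err := next(gctx)
		if err != nil {
			collectErr = err // may just be "context canceled"
			break
		}
		if work == nil { // no more parts
			break
		}
		group.Go(work)
	}
	// worker errors take precedence over the collection error
	if err := group.Wait(); err != nil {
		return err
	}
	if collectErr != nil && !errors.Is(collectErr, context.Canceled) {
		return collectErr
	}
	return nil
}
```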
Change-Id: I75c6799eba358741629795f2971c7a964cb2c9ce
A few improvements were made to how we are handling errors
while doing parallel upload/download for a single object:
* unhide errors that were shown as 'context canceled' in most cases
* add the part number to the error message
* don't try to commit if any error occurs during the operation
* combine errors into a more readable form, for example:
---
failed to download part 3: uplink: eestream: failed to download stripe 0:
error retrieving piece 00: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 97.119.158.36:28967: i/o timeout
...
error retrieving piece 89: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 161.129.152.194:28967: i/o timeout
failed to download part 1: uplink: eestream: failed to download stripe 0:
error retrieving piece 01: io: read/write on closed pipe
...
error retrieving piece 97: io: read/write on closed pipe
failed to download part 2: uplink: eestream: failed to download stripe 0:
error retrieving piece 00: io: read/write on closed pipe
...
error retrieving piece 01: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 180.183.132.234:28967: operation was canceled
error retrieving piece 96: io: read/write on closed pipe
main.(*cmdCp).parallelCopy:418
main.(*cmdCp).copyFile:262
main.(*cmdCp).Execute:156
main.(*external).Wrap:123
github.com/zeebo/clingy.(*Environment).dispatchDesc:126
github.com/zeebo/clingy.(*Environment).dispatch:53
github.com/zeebo/clingy.Environment.Run:34
main.main:26
runtime.main:250
---
Change-Id: I9bb70b3f754567761fa8d17bef8ef59b0709e33b
At some point the uplink cli lost the ability to set metadata. This
change brings back this functionality for the 'cp' operation.
https://github.com/storj/storj/issues/3848
Change-Id: Ia5f60eb577fcab8a38d94730d8cdc6e0338d3b46
Uplink can upload from stdin and download to stdout. We had
such tests for the old binary, but they were missing for the new one.
Change-Id: I5110a9f531f5cc21277fa53611995fb5b556ff16
if you somehow get an invalid access grant in your
config file, it'd be nice to be able to list it
and delete it and stuff.
Change-Id: I7e335bf32353f294d5abb6a7c5f8f3aa18f2f6a7
The current supervisord configuration sets up the HTTP server
to listen on a tcp socket which is private, i.e. available only
on localhost. This poses a regression where multiple containers
cannot be run if the host network interface is used, i.e. when the
docker container is run with the `--network host` option.
This change adds a new env variable `SUPERVISOR_SERVER`, with
potential values `unix | private_port | public_port`, where
`unix` is set as the default value.
By default, the HTTP server is now set to listen on a UNIX
domain socket.
The file path is set to `/etc/supervisor/supervisor.sock`
instead of the /tmp directory since some systems
periodically delete older files in /tmp. If the socket file is
deleted, supervisorctl will be unable to connect to supervisord.
When SUPERVISOR_SERVER is set to `public_port` or `private_port`,
the HTTP server is set to listen on a TCP socket.
Resolves https://github.com/storj/storj/issues/4661
Change-Id: I224836dcae0293bcfe49874f2748be7723944687
This change allows fetching the file size more easily (for supported
files) in order to afterwards calculate the multipart part size
accordingly.
Change-Id: Idabba4c2ee794ee471973889f5843174a7acad35
This change allows the uplink to bump the part size based on the
content length that is being copied. This ensures we are staying
below the 10k part limit currently enforced on the satellites.
If the user specifies the flag, it will error out if the value
chosen by the user is too low. Otherwise it will use it.
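The sizing rule, sketched with an assumed helper; the 10k limit is from
the commit, everything else is illustrative:

```go
import "fmt"

// choosePartSize enforces the rule: with at most maxParts parts per
// object, the part size must be at least ceil(contentLength/maxParts).
// A user-supplied value below that is rejected; otherwise it's used.
func choosePartSize(contentLength, userPartSize int64) (int64, error) {
	const maxParts = 10000
	required := (contentLength + maxParts - 1) / maxParts
	if userPartSize > 0 {
		if userPartSize < required {
			return 0, fmt.Errorf("part size %d is too small, need at least %d", userPartSize, required)
		}
		return userPartSize, nil
	}
	return required, nil
}
```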
Change-Id: I00d30f603d941c2f7703ba19d5923e668629a7b9
Things that make debugging easier.
* Added logging to automatic link clicking to make it obvious when it
fails.
* Added monitoring to oidc.
* Made dbx create calls noreturn for oauth_*
Change-Id: I37397b4e84ce5bfd82954aed9c38fdfd52595f24
recursive copy had a bug with relative local paths.
this fixes that bug and changes the test framework
to use more of the code that actually runs in uplink
and only mocks out the direct interaction with the
operating system.
Change-Id: I9da2a80bfda8f86a8d05879b87171f299f759c7e
Implement a buffer for inserting repair items into the queue in a batch.
Part of https://github.com/storj/storj/issues/4727
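A sketch of such a buffer, with simplified stand-ins for the repair
queue's types:

```go
import "context"

// InjuredSegment and InsertBatcher are simplified stand-ins.
type InjuredSegment struct {
	StreamID string
	Position uint64
}

type InsertBatcher interface {
	InsertBatch(ctx context.Context, segments []InjuredSegment) error
}

// insertBuffer accumulates segments in memory and writes them to the
// queue in one call when the buffer fills (or on an explicit Flush).
type insertBuffer struct {
	queue InsertBatcher
	batch []InjuredSegment
	size  int
}

func (b *insertBuffer) Insert(ctx context.Context, seg InjuredSegment) error {
	b.batch = append(b.batch, seg)
	if len(b.batch) < b.size {
		return nil
	}
	return b.Flush(ctx)
}

func (b *insertBuffer) Flush(ctx context.Context) error {
	if len(b.batch) == 0 {
		return nil
	}
	err := b.queue.InsertBatch(ctx, b.batch)
	b.batch = b.batch[:0]
	return err
}
```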
Change-Id: I718472b2f2b1f4993c3d6f15c44923776407155a
The new storagenode base image version contains the fix for the
failing "processes" supervisord event listener.
Resolves https://github.com/storj/storj/issues/4772
Change-Id: I6d67aa6f85ee33cd9abe6a663e4f9a84ea57fdbf
/bin/stop-supervisor fails in a posix shell since the standard read utility
takes at least one variable name as an argument.
Changing the header from #!/bin/sh to #!/bin/bash fixes this issue;
`read` with no variable name works in bash.
Looks like the shell in alpine isn't POSIX-compliant so we didn't
encounter this issue on alpine.
Also, I changed the name from "processes" to "processes-exit-eventlistener"
to make it clearer in the logs since supervisord spawns event listeners as
separate processes.
Change-Id: Ife9378c2013e2eb54f2adcd52a163d64eaacbbab
When running the docker auto-updater image as non-root user,
supervisord logs a "CRIT could not write pidfile /run/supervisord.pid"
since the user does not have permission to the /run directory.
Changing the location to /etc/supervisor fixes it because permissions
are set for non-root access of the /etc/supervisor directory.
Closes https://github.com/storj/storj/issues/4730
Change-Id: Id463f3a08db44dd9283921ece4575abdad9bd7f2
With this change users can use the uplink cli in
scripts (i.e. bash) more easily, since the output
can be switched to a more easily processed json format.
It keeps tabbed output as the default.
Change-Id: I37e2c55f75c2250c3119fd8df8b66a766ff9096b
When ctx is cancelled, the limiter won't start a new goroutine.
The code didn't immediately return an error in that case.
The dst.Commit(ctx) would fail anyway due to the cancelled ctx.
However, we can make the behavior clearer by returning immediately.
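A sketch of the clearer behavior; the `sync2.Limiter.Go` signature is
assumed from storj's common library:

```go
import (
	"context"

	"storj.io/common/sync2"
)

// enqueue returns the cancellation error as soon as the limiter
// declines to start a goroutine.
func enqueue(ctx context.Context, limiter *sync2.Limiter, work func()) error {
	if !limiter.Go(ctx, work) {
		return ctx.Err() // previously surfaced only later, in dst.Commit
	}
	return nil
}
```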
Change-Id: I65df7ca85de55813f3200a50db2eaaa7a297ba2c
It was possible for a previous write / part to fail or be aborted
while the next part write still happened. This causes a data ordering
corruption.
The whole write to parallel stdout fails, so there shouldn't be
confusion with regards to the output's acceptability. However, it would
be clearer if we avoided writing out-of-order data... mainly to be
Change-Id: I97b0d14404f29e8615e7d29b10cbd61ccb861e40
Also ensure that abort is given at least 5 seconds to clear up any
pending uploads on cancellation.
Change-Id: I814aa407ee5783f2609a76b54de2879dcd5f89bb
If the cp command is executed with a higher level of parallelism, it
opens more connections to storage nodes at the same time. Therefore, the
connection pool capacity should be expanded accordingly.
The pool capacity is set to 100 * parallelism.
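As a fragment, assuming storj's rpcpool with simplified option names:

```go
// The capacity scales linearly so each parallel transfer can keep
// connections to enough distinct nodes alive.
poolCapacity := 100 * parallelism
dialer.Pool = rpcpool.New(rpcpool.Options{Capacity: poolCapacity})
```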
Change-Id: Ia8b3ab6a99340d8cbb87a7b80c3354b2b21c1958
I don't think it should matter for correctness whether this matches the
segment size or not, so I think there is something else wrong. However,
making this change seems to eliminate the "corruption when ulimit -n is
too low" problem we're seeing right now.
Change-Id: I232fe0d0a371b86ddf902e8c2d4778e140b2f1fc
Attribution is attached to bucket usage, but that's more granular than
necessary for the attribution report. This change iterates over the
bucket attributions, parses the user agent, converts the first entry
to lower case, and uses that as the key to a map which holds the
attribution totals for each unique user agent.
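A sketch of that aggregation with simplified row fields:

```go
import "strings"

type attributionRow struct {
	UserAgent string
	ByteHours float64
}

// totalsByUserAgent lowercases the first entry of each row's user
// agent and sums totals under that key.
func totalsByUserAgent(rows []attributionRow) map[string]float64 {
	totals := make(map[string]float64)
	for _, r := range rows {
		fields := strings.Fields(r.UserAgent)
		if len(fields) == 0 {
			continue
		}
		totals[strings.ToLower(fields[0])] += r.ByteHours
	}
	return totals
}
```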
Change-Id: Ib2962ba0f57daa8a7298f11fcb1ac44a8bb97875
Now that we have both the storagenode and updater processes running
in a single docker container, we need a way to know which log entry
is logged by any of the processes.
This change includes a Process field in the log entries.
Resolves https://github.com/storj/storj/issues/4648
Change-Id: I167b9ab65728a41136d264b5fe2c41bb64ed1785
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage with byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees so return segment-hours and object-hours too.
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
We've had a lot of issues with alpine, and currently there's a broken
network issue on alpine for users running on the RPI arm32 architecture
which requires a workaround before docker is able to sync time between
the host and the container: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0\#time64_requirements.
Since we're switching the base image of the storagenode to debian,
it's best to switch the base image of all our docker images to
debian as well for consistency; less drift across them and keeps
the push target consistent.
Change-Id: If3adf7a57dc59f19ef2221b892f340d919798fc5
In the migration to migrate the corresponding name of the partner id to
user agent, part of the requirement was to migrate the partner id
itself if there was no partner name associated. This turned out not to
be so good. When we parse the user_agent column later, it returns an
error if the user agent is one of these UUIDs.
Change-Id: I776ea458b82e1f99345005e5ba73d92264297bec
We are switching from alpine to debian due to a network issue
introduced in alpine 3.13+, which fails to verify certificates
because not all armhf boards meet the time64 requirement:
https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0\#time64_requirements
Also, Debian does not have official images for the arm32v6 architecture,
so we are building with the arm32v5 arch in the Makefile.
Change-Id: I3660c3f64b7c2b342dd4ccb876af5f4e3036ea9d
When there is an error fetching a piece, the reader might be present or
it might not, depending on how far the fetch operation got. The
fetch-pieces code did not handle the "reader-not-present" case. Now it
should.
Change-Id: I263657d544d0ab8ba5d307a34ffc76bbf56835d0
Updating the version of the base image for the storagenode docker image.
Also fixes the non-root permission issue with the /app directory.
Change-Id: I8b55a1e3062f55ce6fc52e126ec1a18bfa24e669
This change fixes the following issues:
wget: the Alpine docker image by default uses the builtin BusyBox wget, which is not capable of handling SSL traffic via proxy, unlike GNU wget. We have to replace BusyBox wget with GNU wget.
updater failing to restart the node: supervisorctl was pointing to the wrong config file. We remove the default configuration file and point supervisorctl to the custom config in systemctl
updates https://github.com/storj/storj/issues/4489
Change-Id: I24a7f18377ba723bbc377bb5d25aaa14f37021b1
Add ability to limit updates in migrations.
To make sure things are looking okay in the migration, we can run it
with a limit of something like 10 or 30. We can look at the output of
the migrated columns to see if they are correct. This should have no
effect on subsequently running the full migration.
Change-Id: I2c74879c8909c7938f994e1bd972d19325bc01f0
This change fixes the `sed: can't create temp file '/etc/supervisor/supervisord.confXXXXXX': Permission denied` issue when editing the supervisord.conf file at runtime as a non-root user.
While editing the config file, sed creates a temporary file, saves the result, and then finally replaces the original file with the temporary one via mv. So we need to set the permission for /etc/supervisor, where the temporary file is created.
Change-Id: Ic9c147a9cf0a6ef94adf702e33054edce1828806