the progress bar was being set to inconsistent lengths
multiple times, causing a crash. this fixes that by
only setting the progress bar length once to the length
of the full object. it avoids a round trip by doing so
only after the first read handle has been fetched from the
source, at which point the length information is already cached.
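a minimal sketch of the shape of the fix; the source and
progress bar types here are stand-ins, not the real uplink types:

    import "context"

    type readHandle interface{ Length() int64 }

    type source interface {
        FirstReadHandle(context.Context) (readHandle, error)
    }

    // set the progress bar length exactly once, and only after the
    // first read handle has been fetched, so the length comes from
    // cached object info instead of a separate stat round trip
    func setBarLength(ctx context.Context, src source, setTotal func(int64)) error {
        rh, err := src.FirstReadHandle(ctx)
        if err != nil {
            return err
        }
        setTotal(rh.Length())
        return nil
    }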
Change-Id: I112d7c79016e54ba3794e96c6174cc01b8baedb4
for machines with very fast (>10Gbit) connections it is
still useful to have parallelism for uploads because we're
actually bound by getting new pieces from the satellite, so
doing that in parallel provides a big win.
this change adds that flag back for uploads, and removes
the backwards compatibility code tying the flag to
maximum-concurrent-pieces, as they are now independent.
the upload code parallelism story is now this:
- each object is a transfer
- each transfer happens in N parts (size dynamically
chosen to avoid having >10000 parts; see the sketch
after this list)
- each part can happen in parallel up to the limit
specified
- each parallel part can have up to the limit of
max concurrent pieces and segments
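a rough sketch of the dynamic part sizing mentioned above; the
function name and the exact rounding are assumptions, only the
>10000 cap comes from this change:

    // pick the smallest multiple of minPartSize that keeps the
    // number of parts for this object at or below 10000
    func partSize(objectSize, minPartSize int64) int64 {
        const maxParts = 10000
        size := minPartSize
        for (objectSize+size-1)/size > maxParts {
            size += minPartSize
        }
        return size
    }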
this change also updates some defaults to be better:
- the connection pool capacity now takes into account
transfers, parallelism, and max concurrent pieces
- the default smallest part size is 1GiB to allow the
new upload code path to upload multiple segments
Change-Id: Iff6709ae73425fbc2858ed360faa2d3ece297c2d
downloads still need the old copy code because they aren't
parallel in the same way uploads are. revert all the code
that removed the parallel copy, use the non-parallel copy
only for uploads, and add back the parallelism and chunk
size flags, having them set the maximum concurrent pieces
flags to values derived from each other when only one is
set, for backwards compatibility.
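a sketch of that backwards compatibility; the shape (fill the
unset flag from the set one) is from this change, but the
conversion factor is a placeholder:

    // if only one of the flags was provided, derive the other so
    // old invocations keep behaving sensibly
    func reconcileFlags(parallelism, maxConcurrentPieces int) (int, int) {
        const piecesPerPart = 130 // placeholder factor, not the real one
        switch {
        case parallelism > 0 && maxConcurrentPieces == 0:
            maxConcurrentPieces = parallelism * piecesPerPart
        case maxConcurrentPieces > 0 && parallelism == 0:
            parallelism = maxConcurrentPieces / piecesPerPart
        }
        return parallelism, maxConcurrentPieces
    }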
mostly reverts 54ef1c8ca2
Change-Id: I8b5f62bf18a6548fa60865c6c61b5f34fbcec14c
We have a special method to exclude methods (which are called too frequently) from distributed traces.
https://github.com/storj/common/blob/main/tracing/excluded.go
But this only works if we define the exclusion function during initialization.
Let's do that in this patch.
Change-Id: Icf12202bd7213b5c0009332ce2755b267f2bdbae
also change the config creation to be more robust to
changes that add defaults in the future by not fully
reconstructing the config value passed in to the
project.
Change-Id: I673e8b54ce0b951ae735bf4658525c477c26ac5a
the parallelism and parallelism-chunk-size flags,
which used to control how many parts to split a
segment into and how many to perform in parallel,
are now deprecated and replaced by
maximum-concurrent-pieces and long-tail-margin.
now, for an individual transfer, the total number
of in-flight piece uploads for that transfer is
controlled by maximum-concurrent-pieces, and
segments within that transfer will automatically
be performed in parallel. so if you used to set
your parallelism to n, a good value for the pieces
might be something approximately like 130*n, and
the parallelism-chunk-size is unnecessary.
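for example, if you used to run with --parallelism 4, then
--maximum-concurrent-pieces 520 (130*4) is a reasonable
starting point.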
Change-Id: Ibe724ca70b07eba89dad551eb612a1db988b18b9
updates flag descriptions with correct punctuation, and fixes
error messages so they are not capitalized.
Updates #5623
Change-Id: I9c6ef6d9888b2fb90b17db8775cc6abe803e102f
adds a flag to return an additional TXT record that will
enable TLS on custom domains with Linksharing.
Closes #5623
Change-Id: I941616362d7dcd9aec20dfd10346e483021516a4
Option added to `uplink access setup` and `uplink access create`
commands to disable object key encryption.
Related to https://github.com/storj/storj/issues/5678
Change-Id: I4789a94143742ff4b232fd60decc029ad2883c2a
We do regular testing by executing uplink. But sometimes the recorded execution time showed spikes.
It would be nice to know the reason for the spikes (just an internet blip, or something we should be worried about).
We can collect distributed traces, but it's not easy to find the right trace in Jaeger.
* We can provide a random trace-id, but it should be persisted / processed
* We can also save standard output and use `--trace-verbose` which prints out the used trace id, but it's also complicated to collect all of them in a DB
It would be nice to attach additional metadata to traces to make sure that we can filter all traces of one specific kind of test.
This patch provides this feature:
* It always adds the hostname to the trace (if you opt in to distributed tracing, which is turned off by default)
* Additional tags can be defined with a CLI flag
Tags can be used to find the right trace in Jaeger (or in the Elasticsearch backend of Jaeger).
Change-Id: I08f10023bbebd783f812cfca95ac6237360ac2b0
quic is still configurable based on the quic rollout
environment variables in storj.io/common. this stops
using a method removed in:
https://review.dev.storj.io/c/storj/uplink/+/9815
Change-Id: Ibfe28cfb19e5672630970b9e2c8c6ac0c98d4822
I use the `uplink share` command but I always fail to set the --not-before parameter.
* Usually I try +2d when I see in the help that +2h is possible --> fail
* When it fails, I try to set explicit date, like 2012-12-23 --> fail
This patch makes it possible to use:
* day duration (like +3d)
* shorter date definition (like `2023-12-12` or `2023-12-12T12:40`)
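A minimal sketch of the extended parsing, with assumed function
and layout names; the real implementation may accept more formats:

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func parseHumanDate(s string, now time.Time) (time.Time, error) {
        if strings.HasPrefix(s, "+") || strings.HasPrefix(s, "-") {
            // time.ParseDuration has no day unit, so handle +Nd here
            if strings.HasSuffix(s, "d") {
                days, err := strconv.Atoi(strings.TrimSuffix(s, "d"))
                if err != nil {
                    return time.Time{}, err
                }
                return now.AddDate(0, 0, days), nil
            }
            d, err := time.ParseDuration(s)
            return now.Add(d), err
        }
        // try progressively shorter absolute layouts
        for _, layout := range []string{time.RFC3339, "2006-01-02T15:04", "2006-01-02"} {
            if t, err := time.Parse(layout, s); err == nil {
                return t, nil
            }
        }
        return time.Time{}, fmt.Errorf("unsupported date: %q", s)
    }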
Change-Id: I2243b36f59c8929eb0473c4bb4fed19220890c71
listing "/" on windows was not returning files from
the root because it was adding an extra separator
unconditionally. the docs for filepath.Clean say
The returned path ends in a slash only if it represents
a root directory, such as "/" on Unix or `C:\` on Windows.
so we need to add the slash only if it doesn't already have
one to avoid the double slash problem while still ensuring
the path ends with a slash.
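the fix, roughly:

    import (
        "path/filepath"
        "strings"
    )

    // filepath.Clean keeps a trailing separator only for a root such
    // as "/" on Unix or `C:\` on Windows, so only append a separator
    // when it is missing, avoiding the double-slash problem
    func ensureTrailingSeparator(dir string) string {
        p := filepath.Clean(dir)
        if !strings.HasSuffix(p, string(filepath.Separator)) {
            p += string(filepath.Separator)
        }
        return p
    }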
Change-Id: I98afc1f1a06bb06035c7647ecb0da3214080162d
Add a flag to enable/disable analytics so uplink can be run
non-interactively. Also, when run non-interactively for the first
time, it will no longer error but instead default to disabling
analytics.
Part of https://github.com/storj/storj/issues/5126
Change-Id: I07ac8a040664334efcb4e2536f26c330c1751a6f
Add a docker image for uplink-cli and push it to docker hub.
We used to have this before the change to uplinkng. I'm not
sure if the pushing works; we'll see after merge.
To test, build an image with `make uplink-image`, read the tag from the
output and run normal uplink-cli commands using
`docker run -it storjlabs/uplink:df9bbceca-uplink-docker-go1.18.8-amd64 [command]`
Part of https://github.com/storj/uplink/issues/109
Change-Id: I8a10aab2b778951ff42a22ba2f252c581eb66b65
The copyFile method has some safeguards to ensure that the multipart
write is aborted. This is accomplished by always calling abort on the
MultiWriteHandle when the copy is finished, whether it failed
or was successfully committed. If the copy was committed,
then this RPC is a no-op on the metainfo server.
Regardless, the calls to abort constitute an additional RPC to the
satellite for no benefit. This is exacerbated by the fact that the code
currently ends up calling abort twice.
This change updates the libuplink-backed MultiWriteHandle implementation
to not call abort if the write is committed and vice-versa. This
eliminates the two wasteful RPC calls.
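A sketch of the tightened lifecycle; multiWriteHandle and
writeParts here are stand-ins for the real types:

    import "context"

    type multiWriteHandle interface {
        Commit(context.Context) error
        Abort(context.Context) error
    }

    func copyTo(ctx context.Context, w multiWriteHandle, writeParts func() error) error {
        committed := false
        defer func() {
            if !committed {
                _ = w.Abort(ctx) // abort only if we never committed
            }
        }()
        if err := writeParts(); err != nil {
            return err
        }
        if err := w.Commit(ctx); err != nil {
            return err
        }
        committed = true // a committed write no longer needs the abort RPC
        return nil
    }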
Change-Id: I13679234f6f473e9a93179e6791fb57eac512f25
This change is similar to
https://review.dev.storj.io/c/storj/storj/+/7687 but applied when
uploading from stdin with parallelism > 1.
Currently, the parallelism from stdin scales up to 3 or 4, but not
greater than that. If we buffer the content from stdin more aggressively
the parallelism scales to higher levels and reaches the performance of
reading directly from a file.
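Roughly the idea, with assumed names: read part-sized chunks from
stdin eagerly so more part uploads can be in flight at once:

    import "io"

    // nextPart reads one part-sized chunk from stdin into memory;
    // buffering full parts ahead of the uploads lets the parallelism
    // scale past the 3-4 in-flight parts seen before
    func nextPart(r io.Reader, partSize int) (part []byte, last bool, err error) {
        buf := make([]byte, partSize)
        n, err := io.ReadFull(r, buf)
        switch err {
        case nil:
            return buf, false, nil // full part, more may follow
        case io.EOF, io.ErrUnexpectedEOF:
            return buf[:n], true, nil // short or empty final part
        default:
            return nil, false, err
        }
    }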
Change-Id: I1f447686a88074882709992ee6d52dd262e220fb
This new advanced flag configures libuplink to store in-memory the
erasure-coded pieces that are temporarily created during upload.
By default, libuplink writes the erasure-coded pieces as temp files on
the disk, but this results in additional IOPS that affect the
performance in hot-rodded scenarios.
If the erasure-coded pieces are kept in memory and the system has enough
RAM, the upload speed may be boosted by 20-30%.
The flag is added as "advanced" as we don't recommend it by default.
Co-authored-by: Stefan Benten <mail@stefan-benten.de>
Change-Id: Icc54f03b6c0bc27c97126f6f1d22748d21a15959
Because --readonly is default true, passing something like
--disallow-deletes=false would not actually update that
value because the readonly flag would override. This makes it
so that the --disallow-* flags override the --readonly and
--writeonly flags.
Also fixes some minor formatting issues with share like an
extra space after the "Public Access:" entry.
Simplifies the handling of the explicit "none" by making the
flags for the dates optional and using nil to signify that
the value was left unset.
Bump the go.mod to go1.18 to enable the use of generics and
add a small generic function. This can easily be backed out
if it causes problems.
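The kind of small generic helper this enables (the actual
function in the change may differ):

    // ptr returns a pointer to v; with the date flags now optional,
    // nil signals "left unset" and this builds the non-nil case
    func ptr[T any](v T) *T { return &v }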
Change-Id: I1c5f1321ad17b8ace778ce55561cbbfc24321a68
Uplink doesn't have a `save` command; however, it's referred to in an error
message that's returned when the `access register` command is executed
without having any default access configured.
The correct command to mention is `import`.
Change-Id: Ia2092d02965737f421683fc98c52a51c9529b86e
This patch makes it possible to use `uplink share` in a test environment (like storj-up) where authservice doesn't have a full secure endpoint.
This is supposed to be an undocumented feature (no flag, just a custom prefix) to avoid any confusion for regular users.
Change-Id: I256aefc944066e52c72224e7b6f1a593b5bc57f7
there's not really anything better to send. uplinks have
rotating node ids on each startup, so that's not right
here. i don't think anyone will use instance id for
uplinks so let's just fold and send nothing.
Change-Id: I2511605e95eba1816d662d385b28d5feab8c4eb0
allow multiple source paths and a single destination path.
this makes commands like `uplink cp foo* sj://bucket` work
as expected.
require at least one remote path when copying. this ensures
that users don't accidentally overwrite their local files
with other local files, which is almost never what they wanted
because they would just use cp.
Change-Id: I28948f4ff735d29db06de81fc8c2a15b9f4ee3f5
Due to a programming error it was possible to "share" without an expiry
implicitly. This pollutes the auth service database.
fixes https://github.com/storj/storj/issues/5188
Change-Id: I04a345662c26948c6be6c1ae6bee3b5a583bebc4
This patch fixes the behavior of the distributed tracing reporter.
1. For developer builds we don't append the date
* We don't need to separate service instances in jaeger (search by trace ID)
* It's usually 0000-00-000 anyway as release.sh is not used for dev builds
2. Tracing ID MUST be unique
* Instead of trusting the user to set a unique value (how can they do it?), we generate a random number
3. To make it possible to find the trace, there is a new flag to print out the generated tracing ID
4. Monkit `remoteTrace` call is replaced with normal monkit Task.
* remoteTrace call assumes that we have a parent span in another service (which is already sent to the server)
* Here we must send out the parent span, as this is the beginning of the trace
5. We properly close the Jaeger UDP collector, and we wait until remaining messages are sent out
Change-Id: Iabf5abf25f4f20881188f88edcbadca95ac74927
As a reminder: latest clingy removed the requirement of having a custom context (which made the usage of context.WithValue harder) and uses a simple context instead.
Clingy saves the stdin/stdout/stderr to the context (earlier to a separate context type) to make them available for unit testing.
Change-Id: I8896574f4670721de43a577cd4b35952e3b5d00e
Added setup of the access maker call into cmd_access_setup so that flags can be used during the command call.
Closes https://github.com/storj/storj/issues/4766
Change-Id: I0c75f224414099573b021b18b87d9e17192cecc5
this allows one to specify a trace id and cause any remote
spans to be sent up to wherever. it doesn't collect any
local traces.
Change-Id: Ia87e294bb276d966f9f3dbfbaf6e7916b1ec7af9
It seems the tests relied on time.Now(), which might cause some
discrepancies in calculations. Use a fixed time.Now() rather than
recalculating.
As a side fix, remove the "Test" prefix from t.Run names. These are unnecessary.
Change-Id: I1de903fcf0fcf46fc8e3acf2463e17239b8e3cc6
Current pipelining to stdout is synchronous, so we don't get any
advantage from using the --parallelism flag. This change adds
buffering while writing to stdout. Each part is first read into a
buffer and flushed only when all data has been read from that part.
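A rough sketch with assumed names: each part is copied fully into
memory before anything is written to stdout, so parts can be
fetched in parallel while the stdout output stays in order:

    import (
        "bytes"
        "io"
        "os"
    )

    func writePart(part io.Reader) error {
        var buf bytes.Buffer
        // buffer the whole part first; only a fully read part is
        // flushed, so stdout never blocks the parallel downloads
        if _, err := io.Copy(&buf, part); err != nil {
            return err
        }
        _, err := buf.WriteTo(os.Stdout)
        return err
    }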
https://github.com/storj/uplink/issues/105
Change-Id: I07bec0f4864dc4fccb42224e450d85d4d196f2ee
The main issue was that when one part copy failed inside a goroutine
(limiter) while another part was still collecting src/dst parts, it
was possible to drop errors from the failed part copy. This was
possible because on failure the context was canceled, and if we were
still getting a part's src/dst, that returned an error immediately
and the error group holding the goroutines' errors was ignored.
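A sketch of the corrected shape; part, openPart, and copyPart are
stand-ins for the real types and helpers:

    import (
        "context"
        "io"

        "golang.org/x/sync/errgroup"
    )

    // stand-ins for the real types and helpers
    type part struct{}

    var (
        openPart func(ctx context.Context, p part) (io.Reader, io.Writer, error)
        copyPart func(ctx context.Context, src io.Reader, dst io.Writer) error
    )

    func copyParts(ctx context.Context, parts []part) error {
        group, ctx := errgroup.WithContext(ctx)
        var setupErr error
        for _, p := range parts {
            src, dst, err := openPart(ctx, p)
            if err != nil {
                // usually just context.Canceled triggered by a failing
                // part copy; remember it, but do not return before
                // waiting on the group
                setupErr = err
                break
            }
            group.Go(func() error { return copyPart(ctx, src, dst) })
        }
        // Wait surfaces the real failure from the goroutines instead
        // of the bare "context canceled" from the setup loop
        if err := group.Wait(); err != nil {
            return err
        }
        return setupErr
    }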
Change-Id: I75c6799eba358741629795f2971c7a964cb2c9ce
A few improvements were made to how we handle errors
while doing a parallel upload/download for a single object:
* unhide errors previously hidden behind 'context canceled', which
was shown in most cases
* add the part number to error messages
* don't try to commit if any error occurs during the operation
* combine errors into a more readable form, for example:
---
failed to download part 3: uplink: eestream: failed to download stripe 0:
error retrieving piece 00: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 97.119.158.36:28967: i/o timeout
...
error retrieving piece 89: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 161.129.152.194:28967: i/o timeout
failed to download part 1: uplink: eestream: failed to download stripe 0:
error retrieving piece 01: io: read/write on closed pipe
...
error retrieving piece 97: io: read/write on closed pipe
failed to download part 2: uplink: eestream: failed to download stripe 0:
error retrieving piece 00: io: read/write on closed pipe
...
error retrieving piece 01: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 180.183.132.234:28967: operation was canceled
error retrieving piece 96: io: read/write on closed pipe
main.(*cmdCp).parallelCopy:418
main.(*cmdCp).copyFile:262
main.(*cmdCp).Execute:156
main.(*external).Wrap:123
github.com/zeebo/clingy.(*Environment).dispatchDesc:126
github.com/zeebo/clingy.(*Environment).dispatch:53
github.com/zeebo/clingy.Environment.Run:34
main.main:26
runtime.main:250
---
Change-Id: I9bb70b3f754567761fa8d17bef8ef59b0709e33b