Rather than using invoice items to account for storjscan token payments, credit notes will be used and applied to the user's finalized invoice. The credit note reduces the amount due on the user's invoice based on the STORJ token balance the user has on the satellite. Applying credit notes to a finalized invoice also requires that the invoice not be paid automatically when finalized, so a new command (pay-invoices) was added to initiate payment of users' invoices.
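A minimal sketch of the flow using the stripe-go client; the function name, memo text, and balance handling are illustrative, not the actual satellite billing code:
---
package main

import (
	"github.com/stripe/stripe-go/v72"
	"github.com/stripe/stripe-go/v72/creditnote"
	"github.com/stripe/stripe-go/v72/invoice"
)

// applyTokenCredit reduces the amount due on a finalized invoice by the
// user's STORJ token balance (in cents), then initiates payment, as the
// pay-invoices command does.
func applyTokenCredit(invoiceID string, tokenBalanceCents int64) error {
	inv, err := invoice.Get(invoiceID, nil)
	if err != nil {
		return err
	}
	// Credit at most the amount still due.
	credit := tokenBalanceCents
	if credit > inv.AmountDue {
		credit = inv.AmountDue
	}
	if credit > 0 {
		_, err = creditnote.New(&stripe.CreditNoteParams{
			Invoice: stripe.String(invoiceID),
			Amount:  stripe.Int64(credit),
			Memo:    stripe.String("storjscan token payment"),
		})
		if err != nil {
			return err
		}
	}
	// Payment is initiated explicitly because the invoice was finalized
	// without automatic collection.
	_, err = invoice.Pay(invoiceID, nil)
	return err
}
---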
Change-Id: Ie539375a10e842e3cb64bf0140834bbab0774f54
They are needed for the segment-verify tool.
Also rename some of the conversion methods to make clear
which of them have side effects.
Change-Id: Ie9a0952548e9ed5068c7a30c2fd2134b07139bca
This adds parts for:
1. iterating over the segments
2. using an interface for writing the segments
3. stubs for handling deleted segments
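A rough sketch of that shape, with illustrative type names (Segment, SegmentSource, SegmentWriter) rather than the tool's actual ones:
---
package main

import "context"

type Segment struct {
	StreamID [16]byte
	Position uint64
}

// SegmentWriter abstracts the destination so tests can swap in a stub.
type SegmentWriter interface {
	Write(ctx context.Context, segments []Segment) error
}

// SegmentSource yields batches of segments; an empty batch means done.
type SegmentSource interface {
	Next(ctx context.Context, batchSize int) ([]Segment, error)
}

// deletedStub handles segments that turn out to be deleted: drop them.
type deletedStub struct{}

func (deletedStub) Write(ctx context.Context, segments []Segment) error { return nil }

func process(ctx context.Context, src SegmentSource, w SegmentWriter) error {
	for {
		batch, err := src.Next(ctx, 1000)
		if err != nil {
			return err
		}
		if len(batch) == 0 {
			return nil
		}
		if err := w.Write(ctx, batch); err != nil {
			return err
		}
	}
}
---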
Change-Id: I76a17cac6deb0b6c042a8ab7c4155a890db9da84
This patch fixes the behavior of the distributed tracing reporter.
1. For developer builds we don't append the date
* We don't need to separate service instances in jaeger (search by trace ID)
* It's usually 0000-00-000 anyway as release.sh is not used for dev builds
2. Tracing ID MUST be unique
* Instead of trusting the user to set a unique value (how can they do it?), we generate a random number
3. To make it possible to find the trace, there is a new flag to print out the generated tracing ID
4. Monkit `remoteTrace` call is replaced with normal monkit Task.
* remoteTrace call assumes that we have a parent span in another service (which is already sent to the server)
* Here we must send out the parent span, as this is the beginning of the trace
5. We properly close the Jaeger UDP collector, and we wait until remaining messages are sent out
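A minimal sketch of points 2 and 3; the flag name and wiring are illustrative:
---
package main

import (
	"flag"
	"fmt"
	"math/rand"
	"time"
)

func main() {
	printID := flag.Bool("trace-print", false, "print the generated trace id")
	flag.Parse()

	// Generate the trace ID instead of trusting the user to pick a unique one.
	rng := rand.New(rand.NewSource(time.Now().UnixNano()))
	traceID := rng.Int63()

	if *printID {
		fmt.Printf("trace id: %d\n", traceID)
	}
	// ... start the root monkit Task with this ID and register the Jaeger
	// UDP collector; on shutdown, close the collector and wait for the
	// remaining messages to be sent out (point 5).
}
---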
Change-Id: Iabf5abf25f4f20881188f88edcbadca95ac74927
Implements creating a roughly load-balanced set of batches
that can be used to make multiple requests.
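A sketch of one way to do this, greedily assigning the largest remaining item to the currently smallest batch; the real batching logic may differ:
---
package main

import "sort"

type Item struct {
	Key  string
	Cost int
}

func balanceBatches(items []Item, n int) [][]Item {
	// Largest items first, so late assignments only fine-tune the balance.
	sort.Slice(items, func(i, j int) bool { return items[i].Cost > items[j].Cost })
	batches := make([][]Item, n)
	totals := make([]int, n)
	for _, it := range items {
		// Find the batch with the smallest total cost so far.
		min := 0
		for i, t := range totals {
			if t < totals[min] {
				min = i
			}
		}
		batches[min] = append(batches[min], it)
		totals[min] += it.Cost
	}
	return batches
}
---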
Change-Id: I349b276176dcb8ba9163e7e06a94509d73fa5ddc
This is to update all projects to have a public_id if they do not have
one.
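A minimal sketch of the backfill, assuming a projects table with a nullable public_id column; the real migration goes through the satellite's DB layer:
---
package main

import (
	"context"
	"database/sql"

	"github.com/google/uuid"
)

func backfillPublicIDs(ctx context.Context, db *sql.DB) error {
	rows, err := db.QueryContext(ctx, `SELECT id FROM projects WHERE public_id IS NULL`)
	if err != nil {
		return err
	}
	defer func() { _ = rows.Close() }()

	var ids [][]byte
	for rows.Next() {
		var id []byte
		if err := rows.Scan(&id); err != nil {
			return err
		}
		ids = append(ids, id)
	}
	if err := rows.Err(); err != nil {
		return err
	}
	for _, id := range ids {
		publicID := uuid.New()
		if _, err := db.ExecContext(ctx,
			`UPDATE projects SET public_id = $1 WHERE id = $2`,
			publicID[:], id); err != nil {
			return err
		}
	}
	return nil
}
---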
github issue: https://github.com/storj/storj/issues/4861
Change-Id: Icfa42b62e15ca75d3c04a0aab48a3c3b0f3a9d6e
We would like to have a separate process/command to collect bloom
filters from a source different than the production DBs. Such a process
will use the segment loop to build bloom filters for all storage nodes
and will send them to a Storj bucket. This change extends the satellite
binary with an appropriate command.
A new GC service for collecting bloom filters will be a subsequent
change.
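A rough sketch of the idea; the types and method names are illustrative, not the satellite's actual APIs:
---
package main

type NodeID [32]byte
type PieceID [32]byte

type Piece struct {
	NodeID NodeID
	ID     PieceID
}

type Segment struct{ Pieces []Piece }

// BloomFilter stands in for whatever filter implementation GC uses.
type BloomFilter interface{ Add(id PieceID) }

// collect assumes filters already has an entry for every known node.
func collect(segments []Segment, filters map[NodeID]BloomFilter) {
	for _, seg := range segments {
		for _, piece := range seg.Pieces {
			if f, ok := filters[piece.NodeID]; ok {
				f.Add(piece.ID)
			}
		}
	}
	// ... then serialize each filter and upload it to the Storj bucket.
}
---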
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: Ibc03e119c340919cf468fc1f5a4f3d187bb3a5a1
As a reminder: the latest clingy removed the requirement of having a custom context (which made the usage of context.WithValue harder) and uses a plain context instead.
Clingy saves stdin/stdout/stderr to the context (earlier, to a separate context type) to make them available for unit testing.
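A minimal sketch of the resulting usage; the command is hypothetical, but the pattern of fetching stdout from the context follows the description above:
---
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/zeebo/clingy"
)

type cmdHello struct{}

func (c *cmdHello) Setup(params clingy.Parameters) {}

func (c *cmdHello) Execute(ctx context.Context) error {
	// stdout comes from the plain context, not a custom context type.
	_, err := fmt.Fprintln(clingy.Stdout(ctx), "hello")
	return err
}

func main() {
	_, _ = clingy.Environment{
		Name: "example",
		Args: os.Args[1:],
	}.Run(context.Background(), func(cmds clingy.Commands) {
		cmds.New("hello", "print a greeting", &cmdHello{})
	})
}
---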
Change-Id: I8896574f4670721de43a577cd4b35952e3b5d00e
Decode the JSON input string with its corresponding Unicode
encoding if the special byte order mark is present.
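A sketch of the decoding step, using x/text's BOMOverride to pick the right Unicode decoder when a byte order mark is present; reading from stdin is just for illustration:
---
package main

import (
	"io"
	"os"

	"golang.org/x/text/encoding/unicode"
	"golang.org/x/text/transform"
)

func main() {
	// Defaults to UTF-8, but a UTF-8 or UTF-16 BOM overrides the decoder.
	decoder := unicode.BOMOverride(unicode.UTF8.NewDecoder())
	_, _ = io.Copy(os.Stdout, transform.NewReader(os.Stdin, decoder))
}
---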
Fixes https://github.com/storj/storj/issues/4950
Change-Id: If91bac22590c89b35c58bf54f6d3bdb8a67d7a4f
We couldn't use environment variables safely to configure the storagenode since we introduced the embedded updater.
For example, STORJ_DEBUG_ADDR=localhost:11111 would try to set debug port 11111 for both the storagenode and the storagenode-updater, causing a port conflict.
This small change makes it possible to configure the storagenode-updater with STORJUPDATER_... environment variables.
Tested by creating a custom image and installing it on my own storage node.
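A minimal sketch of the idea, assuming viper-style environment binding (the actual updater wiring may differ):
---
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	v := viper.New()
	v.SetEnvPrefix("STORJUPDATER") // STORJUPDATER_DEBUG_ADDR -> debug_addr
	v.AutomaticEnv()

	fmt.Println(v.GetString("debug_addr"))
}
---
With that, STORJ_DEBUG_ADDR keeps configuring only the storagenode, while STORJUPDATER_DEBUG_ADDR configures only the updater.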
Change-Id: I6b0a601a4dc63d2d1ff3c191ae89981434e55c30
Sessions now expire after a much shorter amount of time, requiring
clients to issue API requests for session extension. This is handled
behind the scenes as the user interacts with the page, but once session
expiration is imminent, a modal appears which informs the user of his
inactivity and presents him with the choice of logging out or preserving
his session.
Change-Id: I68008d45859c814a835d65d882ad5ad2199d618e
When encoding structs into JSON, byte slices are marshalled as base64-encoded
strings using base64.StdEncoding.Encode():
ea9c3fd42d/src/encoding/json/encode.go (L833-L861)
We, however, expect API Secrets to be encoded as base64URL, so when
a marshalled secret (with byte slice type) is added to the multinode
dashboard, it fails with `illegal base64 data at input byte XX`.
This change switches the type of the APISecret field in the
multinode/nodes.Nodes struct to the multinodeauth.Secret type instead
of []byte.
multinodeauth.Secret is extended with custom MarshalJSON and
UnmarshalJSON methods which implement the json.Marshaler and
json.Unmarshaler interfaces, respectively.
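A minimal sketch of that marshalling, with an illustrative Secret type standing in for multinodeauth.Secret:
---
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

type Secret []byte

// MarshalJSON encodes the secret as base64url instead of encoding/json's
// default std-base64 for []byte.
func (s Secret) MarshalJSON() ([]byte, error) {
	return json.Marshal(base64.URLEncoding.EncodeToString(s))
}

func (s *Secret) UnmarshalJSON(data []byte) error {
	var encoded string
	if err := json.Unmarshal(data, &encoded); err != nil {
		return err
	}
	decoded, err := base64.URLEncoding.DecodeString(encoded)
	if err != nil {
		return err
	}
	*s = decoded
	return nil
}

func main() {
	out, _ := json.Marshal(Secret{0xfb, 0xff})
	fmt.Println(string(out)) // "-_8=" (base64url), not "+/8=" (std base64)
}
---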
Resolves https://github.com/storj/storj/issues/4949
Change-Id: Ib14b5f49ceaac109620c25d7ff83be865c698343
Added a call to the access maker's Setup in cmd_access_setup so that flags are used during the command call.
Closes https://github.com/storj/storj/issues/4766
Change-Id: I0c75f224414099573b021b18b87d9e17192cecc5
It's possible that content was not being flushed from processes.
For now, ignore other process failures under storj-sim network test.
Once we get other processes stable, we can repropagate the error.
Change-Id: I01ed572d7c779ab6451124f1e24e3d1168b3ea79
this allows one to specify a trace id and cause any remote
spans to be sent up to wherever. it doesn't collect any
local traces.
Change-Id: Ia87e294bb276d966f9f3dbfbaf6e7916b1ec7af9
Some nodes were added to the nodes table due to a bug in the QUIC-based
storagenode contact code. This is a tool to clean up these nodes.
Delete with batch-size 1k seems to take ~400ms on local CockroachDB.
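A sketch of the batched delete loop; the WHERE clause is hypothetical, since the real predicate identifying the bad rows lives in the tool:
---
package main

import (
	"context"
	"database/sql"
)

func deleteInBatches(ctx context.Context, db *sql.DB, batchSize int) error {
	for {
		// CockroachDB supports LIMIT on DELETE, keeping each statement small.
		res, err := db.ExecContext(ctx,
			`DELETE FROM nodes WHERE last_contact_success IS NULL LIMIT $1`,
			batchSize)
		if err != nil {
			return err
		}
		affected, err := res.RowsAffected()
		if err != nil {
			return err
		}
		if affected < int64(batchSize) {
			return nil // nothing left to delete
		}
	}
}
---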
Change-Id: Ic0c1180528c27952e19c431fc9cc327292a10a5f
Don't abbreviate multinode in the command help message because there
isn't a need for it and the abbreviation isn't clear at all.
Change-Id: I7a1f2be6ae1f7d4b287c18c48b22c630549b731f
It seems the tests relied on time.Now(), which might cause some
discrepancies in calculations. Use a fixed time.Now() rather than
recalculating.
As a side fix, remove the "Test" prefix from t.Run names. These are unnecessary.
Change-Id: I1de903fcf0fcf46fc8e3acf2463e17239b8e3cc6
Current pipelining to stdout is synchronous, so we don't gain any
advantage from using the --parallelism flag. This change adds buffering
while writing to stdout. Each part is first read into the buffer
and flushed only when all data has been read from that part.
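A minimal sketch of the buffering idea; the helpers are illustrative, and the caller coordinates flushing so part N is written only after parts 0..N-1:
---
package main

import (
	"bytes"
	"io"
	"os"
)

// bufferPart reads one part fully into memory instead of copying it to
// stdout directly, so parallel part downloads don't block each other.
func bufferPart(part io.Reader) (*bytes.Buffer, error) {
	var buf bytes.Buffer
	if _, err := io.Copy(&buf, part); err != nil {
		return nil, err
	}
	return &buf, nil
}

// flushPart writes a completed part's buffer to stdout.
func flushPart(buf *bytes.Buffer) error {
	_, err := io.Copy(os.Stdout, buf)
	return err
}
---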
https://github.com/storj/uplink/issues/105
Change-Id: I07bec0f4864dc4fccb42224e450d85d4d196f2ee
This change adds project ID and bucket name columns to the generated
partner attribution report. Attribution values are now summed based on
their project ID and bucket name in addition to their user agent.
Additionally, the command to generate the attribution report has been
modified to optionally include only certain user agents.
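A sketch of the aggregation; the table and column names are hypothetical, only the grouping and the optional user-agent filter mirror the description above:
---
package main

// An empty $1 array means "all user agents".
const attributionQuery = `
	SELECT user_agent, project_id, bucket_name, SUM(total_bytes)
	FROM attribution_rollups
	WHERE cardinality($1::text[]) = 0 OR user_agent = ANY($1::text[])
	GROUP BY user_agent, project_id, bucket_name
`
---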
Change-Id: I61a1d854379134f26b31467d9e83a787beb451dd
The admin UI assets aren't inside the `web` directory; they are
directly in the `satellite` one.
The invalid path caused storj-sim to generate a satellite
configuration with an invalid path to the Admin UI static assets,
so it didn't load the UI by default.
Change-Id: I49fb289377f51634057173690fbd8cf863ca9a9d
The main issue was that when one part copy failed inside a
goroutine (limiter) while another part was still collecting src/dst parts,
it was possible to drop the errors from the failed part copy. This could
happen because the context was canceled on failure, and if we were still
getting a part's src/dst, an error was returned immediately and the error
group with the errors from the goroutines was ignored.
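A sketch of the fix pattern using errgroup; the part-copy plumbing is simplified:
---
package main

import (
	"context"
	"errors"

	"golang.org/x/sync/errgroup"
)

func copyParts(ctx context.Context, parts []func(context.Context) error) error {
	group, ctx := errgroup.WithContext(ctx)
	var loopErr error
	for _, copyPart := range parts {
		if err := ctx.Err(); err != nil {
			// A part already failed and canceled the context; stop
			// scheduling new parts, but don't return yet.
			loopErr = err
			break
		}
		part := copyPart
		group.Go(func() error { return part(ctx) })
	}
	// Always wait on the group so the goroutines' real errors are not
	// dropped in favor of the bare cancellation error.
	return errors.Join(group.Wait(), loopErr)
}
---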
Change-Id: I75c6799eba358741629795f2971c7a964cb2c9ce
A few improvements were made to how we are handling errors
while doing parallel upload/download for a single object:
* unhide the error obscured by 'context canceled', which was shown in most
cases
* add the part number to the error message
* don't try to commit if any error occurs during the operation
* combine errors into a more readable form, for example:
---
failed to download part 3: uplink: eestream: failed to download stripe 0:
error retrieving piece 00: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 97.119.158.36:28967: i/o timeout
...
error retrieving piece 89: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 161.129.152.194:28967: i/o timeout
failed to download part 1: uplink: eestream: failed to download stripe 0:
error retrieving piece 01: io: read/write on closed pipe
...
error retrieving piece 97: io: read/write on closed pipe
failed to download part 2: uplink: eestream: failed to download stripe 0:
error retrieving piece 00: io: read/write on closed pipe
...
error retrieving piece 01: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 180.183.132.234:28967: operation was canceled
error retrieving piece 96: io: read/write on closed pipe
main.(*cmdCp).parallelCopy:418
main.(*cmdCp).copyFile:262
main.(*cmdCp).Execute:156
main.(*external).Wrap:123
github.com/zeebo/clingy.(*Environment).dispatchDesc:126
github.com/zeebo/clingy.(*Environment).dispatch:53
github.com/zeebo/clingy.Environment.Run:34
main.main:26
runtime.main:250
---
Change-Id: I9bb70b3f754567761fa8d17bef8ef59b0709e33b
At some point the uplink CLI lost the ability to set metadata. This change
brings back this functionality for the 'cp' operation.
https://github.com/storj/storj/issues/3848
Change-Id: Ia5f60eb577fcab8a38d94730d8cdc6e0338d3b46
Uplink can upload from stdin and download to stdout. We had
such tests for the old binary, but they were missing for the new one.
Change-Id: I5110a9f531f5cc21277fa53611995fb5b556ff16
if you somehow get an invalid access grant in your
config file, it'd be nice to be able to list it
and delete it and stuff.
Change-Id: I7e335bf32353f294d5abb6a7c5f8f3aa18f2f6a7
The current supervisord configuration sets up the HTTP server
to listen on a TCP socket which is private, i.e. available only
on localhost. This poses a regression where multiple containers
cannot be run if the host network interface is used, i.e. when the
docker container is run with the `--network host` option.
This change adds a new env variable `SUPERVISOR_SERVER`, with
potential values `unix | private_port | public_port`, where
`unix` is set as the default value.
By default, the HTTP server is now set to listen on a UNIX
domain socket.
The file path is set to `/etc/supervisor/supervisor.sock`
instead of the /tmp directory since some systems
periodically delete older files in /tmp. If the socket file is
deleted, supervisorctl will be unable to connect to supervisord.
When SUPERVISOR_SERVER is set to `public_port` or `private_port`,
the HTTP server is set to listen on a TCP socket.
Resolves https://github.com/storj/storj/issues/4661
Change-Id: I224836dcae0293bcfe49874f2748be7723944687
This change allows fetching the file size more easily (for supported
files) so that the multipart part size can be calculated
accordingly afterwards.
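A minimal sketch of the size probe; only regular files report a usable size:
---
package main

import (
	"fmt"
	"os"
)

func contentLength(path string) (int64, error) {
	info, err := os.Stat(path)
	if err != nil {
		return -1, err
	}
	if !info.Mode().IsRegular() {
		return -1, fmt.Errorf("size unknown for %q (pipe or device)", path)
	}
	return info.Size(), nil
}
---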
Change-Id: Idabba4c2ee794ee471973889f5843174a7acad35
This change allows the uplink to bump the part size based on the
content length that is being copied. This ensures we are staying
below the 10k part limit currently enforced on the satellites.
If the user specifies the flag, the command errors out if the chosen
value is too low; otherwise the user's value is used.
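A sketch of the arithmetic, with an illustrative 64 MiB rounding granularity:
---
package main

import "fmt"

const (
	maxParts    = 10000
	granularity = 64 << 20 // round part sizes up to a multiple of 64 MiB
)

// partSize picks the smallest part size that fits contentLength into
// maxParts parts; a user-requested size is honored only if large enough.
func partSize(contentLength, requested int64) (int64, error) {
	needed := (contentLength + maxParts - 1) / maxParts
	needed = (needed + granularity - 1) / granularity * granularity
	if requested > 0 {
		if requested < needed {
			return 0, fmt.Errorf("part size must be at least %d, got %d", needed, requested)
		}
		return requested, nil
	}
	return needed, nil
}
---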
Change-Id: I00d30f603d941c2f7703ba19d5923e668629a7b9