The tests relied on time.Now(), which could cause discrepancies in
calculations. Use a fixed reference time rather than recalculating it.
As a side fix, remove the "Test" prefix from t.Run names; it is unnecessary.
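To illustrate the fixed-time idea, here is a minimal sketch (the test name and values are hypothetical, not the actual test): compute the reference time once and reuse it, instead of calling time.Now() repeatedly and risking drift between calls.

package sketch

import (
	"testing"
	"time"
)

func TestUsageCalculation(t *testing.T) {
	// Hypothetical fixed reference time; the real test picks whatever moment
	// makes its expected values easy to reason about.
	now := time.Date(2022, time.March, 1, 12, 0, 0, 0, time.UTC)

	periodStart := now.Add(-time.Hour)
	periodEnd := now

	// Every calculation below uses the same `now`, so repeated evaluation
	// cannot introduce discrepancies the way repeated time.Now() calls can.
	if !periodEnd.After(periodStart) {
		t.Fatalf("expected %v to be after %v", periodEnd, periodStart)
	}
}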
Change-Id: I1de903fcf0fcf46fc8e3acf2463e17239b8e3cc6
Pipelining to stdout is currently synchronous, so the --parallelism flag
gives no advantage. This change adds buffering while writing to stdout:
each part is first read into a buffer and flushed only once all data
from that part has been read.
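A minimal sketch of the buffering idea (the helper name is made up; this is not the actual uplink code): each part is read fully into memory and only then written to stdout, so parts can be fetched concurrently while stdout still receives them sequentially.

package sketch

import (
	"bytes"
	"io"
	"os"
)

// writePartBuffered reads one part completely into a buffer and flushes it to
// stdout in one go, so a slow stdout never blocks the parallel part downloads.
func writePartBuffered(part io.Reader) error {
	var buf bytes.Buffer
	if _, err := io.Copy(&buf, part); err != nil {
		return err
	}
	_, err := buf.WriteTo(os.Stdout)
	return err
}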
https://github.com/storj/uplink/issues/105
Change-Id: I07bec0f4864dc4fccb42224e450d85d4d196f2ee
This change adds project ID and bucket name columns to the generated
partner attribution report. Attribution values are now summed based on
their project ID and bucket name in addition to their user agent.
Additionally, the command to generate the attribution report has been
modified to optionally include only certain user agents.
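A rough sketch of the aggregation described above (types and field names are illustrative, not the actual report code): totals are keyed by user agent plus project ID plus bucket name instead of by user agent alone.

package sketch

// attributionKey is an illustrative composite key for the report rows.
type attributionKey struct {
	UserAgent string
	ProjectID string
	Bucket    string
}

// attributionRow is an illustrative input row with the usage to be summed.
type attributionRow struct {
	Key       attributionKey
	ByteHours float64
}

// sumAttributions groups usage by (user agent, project ID, bucket name).
func sumAttributions(rows []attributionRow) map[attributionKey]float64 {
	totals := make(map[attributionKey]float64)
	for _, row := range rows {
		totals[row.Key] += row.ByteHours
	}
	return totals
}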
Change-Id: I61a1d854379134f26b31467d9e83a787beb451dd
The admin UI assets aren't inside the `web` directory; they are
directly in the `satellite` one.
The wrong path caused storj-sim to generate a satellite configuration
with an invalid path to the Admin UI static assets, so the UI didn't
load by default.
Change-Id: I49fb289377f51634057173690fbd8cf863ca9a9d
The main issue was that when one part copy failed inside a goroutine
(limiter) while another part was still collecting its src/dst, the
errors from the failed part copy could be dropped. This was possible
because the failure canceled the context; if we were still getting a
part's src/dst, that returned an error immediately and the error
group holding the goroutines' errors was ignored.
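A minimal sketch of the fix's intent (Part, copyPart and the overall shape are hypothetical stand-ins, not the actual cmd/uplink code): even if collecting the next src/dst fails fast because the context was already canceled, the errors gathered in the error group must still be waited for and reported.

package sketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

type Part struct{ Number int }

// copyPart is a placeholder for copying a single part.
func copyPart(ctx context.Context, p Part) error { return nil }

func copyParts(ctx context.Context, parts []Part) error {
	var group errgroup.Group
	for _, part := range parts {
		part := part
		group.Go(func() error { return copyPart(ctx, part) })
	}
	// Always wait for the goroutines: their errors explain *why* the context
	// was canceled, instead of being dropped in favor of a bare ctx.Err().
	if err := group.Wait(); err != nil {
		return err
	}
	return ctx.Err()
}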
Change-Id: I75c6799eba358741629795f2971c7a964cb2c9ce
A few improvements were made to how we handle errors while doing a
parallel upload/download of a single object:
* unhide the real error that was masked by 'context canceled' in most cases
* add the part number to the error message
* don't try to commit if any error occurs during the operation
* combine errors into a more readable form, for example:
---
failed to download part 3: uplink: eestream: failed to download stripe 0:
error retrieving piece 00: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 97.119.158.36:28967: i/o timeout
...
error retrieving piece 89: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 161.129.152.194:28967: i/o timeout
failed to download part 1: uplink: eestream: failed to download stripe 0:
error retrieving piece 01: io: read/write on closed pipe
...
error retrieving piece 97: io: read/write on closed pipe
failed to download part 2: uplink: eestream: failed to download stripe 0:
error retrieving piece 00: io: read/write on closed pipe
...
error retrieving piece 01: ecclient: piecestore: rpc: tcp connector failed: rpc: dial tcp 180.183.132.234:28967: operation was canceled
error retrieving piece 96: io: read/write on closed pipe
main.(*cmdCp).parallelCopy:418
main.(*cmdCp).copyFile:262
main.(*cmdCp).Execute:156
main.(*external).Wrap:123
github.com/zeebo/clingy.(*Environment).dispatchDesc:126
github.com/zeebo/clingy.(*Environment).dispatch:53
github.com/zeebo/clingy.Environment.Run:34
main.main:26
runtime.main:250
---
Change-Id: I9bb70b3f754567761fa8d17bef8ef59b0709e33b
At some point the uplink CLI lost the ability to set metadata. This
change brings that functionality back for the 'cp' operation.
https://github.com/storj/storj/issues/3848
Change-Id: Ia5f60eb577fcab8a38d94730d8cdc6e0338d3b46
Uplink can upload from stdin and download to stdout. We had such
tests for the old binary, but they were missing for the new one.
Change-Id: I5110a9f531f5cc21277fa53611995fb5b556ff16
if you somehow get an invalid access grant in your
config file, it'd be nice to be able to list it
and delete it and stuff.
Change-Id: I7e335bf32353f294d5abb6a7c5f8f3aa18f2f6a7
The current supervisord configuration sets up the HTTP server
to listen on a TCP socket which is private, i.e. available only
on localhost. This causes a regression: multiple containers
cannot be run on the same host when the container is run with
the `--network host` option.
This change adds a new env variable `SUPERVISOR_SERVER`, with
potential values `unix | private_port | public_port`, where
`unix` is set as the default value.
By default, the HTTP server is now set to listen on a UNIX
domain socket.
The file path is set to `/etc/supervisor/supervisor.sock`
instead of the /tmp directory since some systems
periodically delete older files in /tmp. If the socket file is
deleted, supervisorctl will be unable to connect to supervisord.
When SUPERVISOR_SERVER is set to `public_port` or `private_port`,
the HTTP server is set to listen on a TCP socket.
Resolves https://github.com/storj/storj/issues/4661
Change-Id: I224836dcae0293bcfe49874f2748be7723944687
This change allows fetching the file size more easily (for supported
files) so that the multipart part size can be calculated from it
afterwards.
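A small sketch of the idea, assuming a local file source (names are illustrative): for regular files the size is available via os.Stat, while streams such as stdin report no usable size and fall back to a default.

package sketch

import "os"

// sourceSize reports the file size for regular files; for pipes, devices, or
// other unsupported sources it reports false so the caller falls back to a
// default part size.
func sourceSize(path string) (int64, bool) {
	info, err := os.Stat(path)
	if err != nil || !info.Mode().IsRegular() {
		return 0, false
	}
	return info.Size(), true
}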
Change-Id: Idabba4c2ee794ee471973889f5843174a7acad35
This change allows the uplink to bump the part size based on the
content length that is being copied. This ensures we are staying
below the 10k part limit currently enforced on the satellites.
If the user specifies the part size flag, the command errors out if the
chosen value is too low; otherwise that value is used.
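A sketch of the sizing rule described above (the rounding and the 64 MiB floor are assumptions, not the exact uplink values): the part size is raised until the content length fits in at most 10000 parts, and a user-supplied value below that threshold is rejected.

package sketch

import "fmt"

const (
	maxParts        = 10000    // part limit currently enforced on the satellites
	defaultPartSize = 64 << 20 // assumed 64 MiB default, for illustration only
)

// choosePartSize returns the part size to use for an upload of contentLength
// bytes. requestedPartSize <= 0 means the user did not pass the flag.
func choosePartSize(contentLength, requestedPartSize int64) (int64, error) {
	// smallest part size that keeps the upload within maxParts parts
	needed := (contentLength + maxParts - 1) / maxParts
	if requestedPartSize > 0 {
		if requestedPartSize < needed {
			return 0, fmt.Errorf("part size must be at least %d bytes to stay under %d parts", needed, maxParts)
		}
		return requestedPartSize, nil
	}
	if needed < defaultPartSize {
		needed = defaultPartSize
	}
	return needed, nil
}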
Change-Id: I00d30f603d941c2f7703ba19d5923e668629a7b9
Things that make debugging easier.
* Added logging to automatic link clicking to make it obvious when it
fails.
* Added monitoring to oidc.
* Made dbx create calls noreturn for oauth_*
Change-Id: I37397b4e84ce5bfd82954aed9c38fdfd52595f24
recursive copy had a bug with relative local paths.
this fixes that bug and changes the test framework
to use more of the code that actually runs in uplink
and only mocks out the direct interaction with the
operating system.
Change-Id: I9da2a80bfda8f86a8d05879b87171f299f759c7e
Implement a buffer for inserting repair items into the queue in a batch.
Part of https://github.com/storj/storj/issues/4727
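A condensed sketch of the batching approach (RepairQueue and InjuredSegment here are illustrative stand-ins for the real satellite types): items accumulate in memory and are written to the queue in one batch once the buffer fills, plus a final Flush after the loop ends.

package sketch

import "context"

// InjuredSegment and RepairQueue are illustrative stand-ins.
type InjuredSegment struct {
	StreamID string
	Position uint64
}

type RepairQueue interface {
	InsertBatch(ctx context.Context, segments []InjuredSegment) error
}

// InsertBuffer batches repair-queue inserts instead of issuing one query per segment.
type InsertBuffer struct {
	queue     RepairQueue
	batchSize int
	batch     []InjuredSegment
}

func NewInsertBuffer(queue RepairQueue, batchSize int) *InsertBuffer {
	return &InsertBuffer{queue: queue, batchSize: batchSize}
}

func (b *InsertBuffer) Insert(ctx context.Context, seg InjuredSegment) error {
	b.batch = append(b.batch, seg)
	if len(b.batch) < b.batchSize {
		return nil
	}
	return b.Flush(ctx)
}

// Flush writes any buffered segments; call it once more after the loop ends.
func (b *InsertBuffer) Flush(ctx context.Context) error {
	if len(b.batch) == 0 {
		return nil
	}
	err := b.queue.InsertBatch(ctx, b.batch)
	b.batch = b.batch[:0]
	return err
}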
Change-Id: I718472b2f2b1f4993c3d6f15c44923776407155a
The new storagenode base image version contains the fix for the
failing "processes" supervisord event listener.
Resolves https://github.com/storj/storj/issues/4772
Change-Id: I6d67aa6f85ee33cd9abe6a663e4f9a84ea57fdbf
/bin/stop-supervisor fails in a POSIX shell since the standard read utility
takes at least one variable name as an argument.
Changing the shebang from #!/bin/sh to #!/bin/bash fixes this issue;
`read` with no variable name works in bash.
Looks like the shell in alpine isn't POSIX-compliant, so we didn't
encounter this issue on alpine.
Also, I changed the name from "processes" to "processes-exit-eventlistener"
to make it clearer in the logs since supervisord spawns event listeners as
separate processes.
Change-Id: Ife9378c2013e2eb54f2adcd52a163d64eaacbbab
When running the docker auto-updater image as non-root user,
supervisord logs "CRIT could not write pidfile /run/supervisord.pid"
since the user does not have write permission for the /run directory.
Changing the location to /etc/supervisor fixes it because permissions
are set for non-root access of the /etc/supervisor directory.
Closes https://github.com/storj/storj/issues/4730
Change-Id: Id463f3a08db44dd9283921ece4575abdad9bd7f2
With this change users can use the uplink CLI in
scripts (e.g. bash) more easily, since the output
can be switched to a more easily processable JSON format.
The default remains tabbed output.
Change-Id: I37e2c55f75c2250c3119fd8df8b66a766ff9096b
When ctx is cancelled, the limiter won't start a new goroutine.
The code didn't immediately return an error in that case.
The dst.Commit(ctx) would fail anyway due to the cancelled ctx,
but we can make the behavior clearer by returning immediately.
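A small sketch of the clarified behavior (the limiter interface here is a stand-in, assuming "Go returns false when ctx is done" semantics): stop the copy loop as soon as the limiter refuses to start a worker, rather than falling through to a Commit that will fail on the same cancelled context.

package sketch

import (
	"context"
	"errors"
)

// limiter is a stand-in for the concurrency limiter used by the copy code;
// Go is assumed to return false when the context is already canceled.
type limiter interface {
	Go(ctx context.Context, fn func()) (started bool)
}

func copyAllParts(ctx context.Context, lim limiter, parts []func()) error {
	for _, part := range parts {
		if !lim.Go(ctx, part) {
			// The context was canceled; report it right away instead of
			// letting a later dst.Commit(ctx) fail with a less clear error.
			if err := ctx.Err(); err != nil {
				return err
			}
			return errors.New("limiter refused to start worker")
		}
	}
	return nil
}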
Change-Id: I65df7ca85de55813f3200a50db2eaaa7a297ba2c
It was possible for a previous write / part to fail or be aborted
while the next part write still happened. This causes data ordering
corruption.
The whole write to parallel stdout fails, so there shouldn't be
confusion with regard to the output's acceptability. However, it would
be clearer if we avoided writing out-of-order data... mainly to be
clear that we didn't corrupt the data, just that it's incomplete.
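A minimal sketch of the guard (not the actual implementation; names are made up): once any part write fails or is aborted, later parts are refused, so stdout only ever receives a clean prefix of the object.

package sketch

import (
	"errors"
	"io"
	"sync"
)

// sequenceWriter refuses to write anything after an earlier part has failed,
// so the output can be truncated but never reordered or gap-ridden.
type sequenceWriter struct {
	mu     sync.Mutex
	failed bool
	dst    io.Writer
}

func (w *sequenceWriter) WritePart(p []byte) error {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.failed {
		return errors.New("a previous part failed; refusing to write out-of-order data")
	}
	if _, err := w.dst.Write(p); err != nil {
		w.failed = true
		return err
	}
	return nil
}

// Abort marks the stream as failed, e.g. when a part download is canceled.
func (w *sequenceWriter) Abort() {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.failed = true
}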
Change-Id: I97b0d14404f29e8615e7d29b10cbd61ccb861e40
Also ensure that abort is given at least 5 seconds to clear up any
pending uploads on cancellation.
Change-Id: I814aa407ee5783f2609a76b54de2879dcd5f89bb
If the cp command is executed with a higher level of parallelism, it
opens more connections to storage nodes at the same time. Therefore, the
connection pool capacity should be expanded accordingly.
The pool capacity is set to 100 * parallelism.
Change-Id: Ia8b3ab6a99340d8cbb87a7b80c3354b2b21c1958
I don't think it should matter for correctness whether this matches the
segment size or not, so I think there is something else wrong. However,
making this change seems to eliminate the "corruption when ulimit -n is
too low" problem we're seeing right now.
Change-Id: I232fe0d0a371b86ddf902e8c2d4778e140b2f1fc
Attribution is attached to bucket usage, but that's more granular than
necessary for the attribution report. This change iterates over the
bucket attributions, parses the user agent, converts the first entry
to lower case, and uses that as the key to a map which holds the
attribution totals for each unique user agent.
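A rough sketch of the key derivation (the real code uses a proper user-agent parser; splitting on whitespace and '/' here is only illustrative): take the product name from the first entry and lower-case it so differently-cased agents aggregate together.

package sketch

import "strings"

// attributionKey reduces a raw User-Agent header to a normalized key, e.g.
// "Zenko/1.0 (linux)" and "zenko" both map to "zenko".
func attributionKey(userAgent string) string {
	fields := strings.Fields(userAgent)
	if len(fields) == 0 {
		return ""
	}
	name, _, _ := strings.Cut(fields[0], "/")
	return strings.ToLower(name)
}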
Change-Id: Ib2962ba0f57daa8a7298f11fcb1ac44a8bb97875
Now that we have both the storagenode and updater processes running
in a single docker container, we need a way to know which of the
processes logged a given entry.
This change includes a Process field in the log entries.
Resolves https://github.com/storj/storj/issues/4648
Change-Id: I167b9ab65728a41136d264b5fe2c41bb64ed1785
Before, the VA query was summing the total and dividing by the number of
rows. This gives the average bytes stored per hour, but we charge for
usage in byte-hours. Why not do value attribution the same way?
To do that, we don't divide by the number of rows. We also have object
and segment fees so return segment-hours and object-hours too.
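A tiny sketch of the difference (the row fields are illustrative): each row is one hourly tally, so summing without dividing by the row count already yields unit-hours, for bytes, segments, and objects alike.

package sketch

// tallyRow is an illustrative hourly bucket-usage row.
type tallyRow struct {
	TotalBytes    int64
	TotalSegments int64
	TotalObjects  int64
}

type usageTotals struct {
	ByteHours    float64
	SegmentHours float64
	ObjectHours  float64
}

// sumUsage keeps the raw sums: one row per hour means the sum is already in
// unit-hours, so dividing by len(rows) would turn it back into an average.
func sumUsage(rows []tallyRow) usageTotals {
	var t usageTotals
	for _, r := range rows {
		t.ByteHours += float64(r.TotalBytes)
		t.SegmentHours += float64(r.TotalSegments)
		t.ObjectHours += float64(r.TotalObjects)
	}
	return t
}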
Change-Id: I1f18b7e1b2bae1d3fae1ca3b93bfc24db5b9b0e6
We've had a lot of issues with alpine, and currently there's a broken
network issue on alpine for users running on the RPI arm32 architecture
which requires a workaround before docker is able to sync time between
the host and the container: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#time64_requirements.
Since we're switching the base image of the storagenode to debian,
it's best to switch the base image of all our docker images to
debian as well for consistency; less drift across them and keeps
the push target consistent.
Change-Id: If3adf7a57dc59f19ef2221b892f340d919798fc5
In the migration to migrate the corresponding name of the partner id to
user agent, part of the requirement was to migrate the partner id
itself if there was no partner name associated. This turned out to not
be so good: when we parse the user_agent column later, it returns an
error if the user agent is one of these UUIDs.
Change-Id: I776ea458b82e1f99345005e5ba73d92264297bec
We are switching from alpine to debian due to a network issue
introduced in alpine 3.13+ which fails to verify certificates
because not all armhf boards meet the time64 requirement:
https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#time64_requirements
Also, Debian does not have official images for the arm32v6 architecture,
so we are building with the arm32v5 arch in the Makefile.
Change-Id: I3660c3f64b7c2b342dd4ccb876af5f4e3036ea9d
When there is an error fetching a piece, the reader might be present or
it might not, depending on how far the fetch operation got. The
fetch-pieces code did not handle the "reader-not-present" case. Now it
should.
Change-Id: I263657d544d0ab8ba5d307a34ffc76bbf56835d0
Updating the version of the base image for the storagenode docker image.
Also fixes the non-root permission issue for the /app directory.
Change-Id: I8b55a1e3062f55ce6fc52e126ec1a18bfa24e669
This change fixes the following issues:
* wget: the Alpine docker image by default uses the built-in BusyBox wget, which, unlike GNU wget, is not capable of handling SSL traffic via a proxy. We have to replace BusyBox wget with GNU wget.
* updater failing to restart the node: supervisorctl was pointing to the wrong config file. We remove the default configuration file and point supervisorctl to the custom config.
updates https://github.com/storj/storj/issues/4489
Change-Id: I24a7f18377ba723bbc377bb5d25aaa14f37021b1
Add ability to limit updates in migrations.
To make sure things are looking okay in the migration, we can run it
with a limit of something like 10 or 30. We can look at the output of
the migrated columns to see if they are correct. This should have no
effect on subsequently running the full migration.
Change-Id: I2c74879c8909c7938f994e1bd972d19325bc01f0
This change fixes the `sed: can't create temp file '/etc/supervisor/supervisord.confXXXXXX': Permission denied` issue when editing the supervisord.conf file at runtime as a non-root user.
While editing the config file, sed creates a temporary file, saves the result, and then finally replaces the original file with the temporary one. So we need to set the permissions for /etc/supervisor, where the temporary file is created.
Change-Id: Ic9c147a9cf0a6ef94adf702e33054edce1828806
The supervisord.conf file is edited to set the args for the storagenode and storagenode-updater binaries at runtime. This change moves the config file to the base image so we can set the permissions to allow non-root users to edit the config file.
Non-root user permission is also needed for the /app directory so we can install/update the binaries when running as a non-root user.
Updates https://github.com/storj/storj/issues/4489
Change-Id: If7a51a00ea171253e41923501174a43393f4638c
When copying an object from cli you can now set the expiry.
It uses the same datetime format as restricting access grants.
Closes https://github.com/storj/storj/issues/4595
Change-Id: Icab73a64a9589817d6bc6d702b765b166ca1350d
Having the storagenode and storagenode-updater processes in one container
requires a process manager to properly handle the individual processes.
Using a process manager like supervisord requires packaging
supervisord and its configuration in the image, along with the storagenode
and storagenode-updater binaries.
Installing supervisord requires running apk to install it and its
dependencies at build time, which makes it difficult to build multi-platform
images; executing apk requires the build system to run
foreign architectures.
This change adds a dockerfile which will be used to build the base image
for the storagenode and has supervisord packaged. The base image will be
built manually using docker buildx, with QEMU binfmt support.
Updates https://github.com/storj/storj/issues/4489
Change-Id: I33f8f01398a7207bca08d8a4a43f4ed56b6a2473
We would like to disable in production those parts of the code
which are now mixed with the new server-side copy logic.
Change-Id: Iff50682bc9545207330f58dd19b5eee53d404d7f
The text has been expanded a bit to clarify that it is necessary to create identity files (an example is included) before using the Docker image.
Changed the <identity-dir> placeholder to <multinode-identity-dir> so no one confuses it with the storagenode identity files.
Changed the <storage-dir> placeholder to <multinode-config-dir> so no one confuses it with the storagenode 'config' folder.
Fixes #4547
Closes #4547
We do not build a docker image for the multinode dashboard,
which makes monitoring in docker-focused environments harder.
This adds the basic image and ties it into CI/CD.
Change-Id: I14c01a7f1f0019f6f5c1b8fd75dc424fc362b18d
Currently the metainfo/metabase DB connections are missing the proper
application_name in order to differentiate and filter queries on the DB
side for analytics.
Without it, it is very time-consuming to correlate processes and their load.
This change adds the "check" on DB connection init and passes the fallbacks
in all places to catch connection strings that do not set it.
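A simplified sketch of the connection-string check (storj has its own DB helpers; plain net/url handling here is only illustrative): if application_name is missing, append the fallback so queries can be attributed on the DB side.

package sketch

import (
	"errors"
	"net/url"
)

// ensureApplicationName adds application_name=<fallback> to a postgres://
// connection URL when the caller did not set one.
func ensureApplicationName(connstr, fallback string) (string, error) {
	if fallback == "" {
		return "", errors.New("application name fallback must not be empty")
	}
	u, err := url.Parse(connstr)
	if err != nil {
		return "", err
	}
	q := u.Query()
	if q.Get("application_name") == "" {
		q.Set("application_name", fallback)
		u.RawQuery = q.Encode()
	}
	return u.String(), nil
}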
Change-Id: Iea5cea8658bc63778ff89038e5c1c352bf482cfd
The "satellite fetch-pieces" command allows a satellite operator to
fetch as many pieces of a segment as possible, along with their
original order limits and hashes as provided by the storage nodes. The
fetched pieces and associated info will be stored in a specified
folder as they are, rather than being RS-decoded or decrypted.
It is hoped that this will allow easier debugging of certain one-off
problems we've observed in the wild.
Change-Id: I42ae0e9ef0023538e42473a9be5a2460a3ac0f3a
some old configs had a value like
access: <data>
in the yaml. this would end up causing migration to
create a json file that had no access values and
a default name set to the data. that's not what the command
expects to operate on, so now we fix that during
migration and add a little mini migration for any
users that may have hit it.
Change-Id: I4c98ca5d09d043fe9338738ef6b4f930f933892c
In addition to upgrading the storj.io/common library, this change
moves off the TCPConnector in favor of the HybridConnector per
the deprecation warning.
Change-Id: I7e7e1e7568e8b95e4a99ad9caa158a799e68e1e3
Change the implementation of register and share so that it uses the
uplink method to contact the Auth Service. The network protocol switches
from HTTP to DRPC.
Closes https://github.com/storj/storj/issues/4324
Change-Id: Ib8fdb1665c6385bb39a546ba46a8df43a136df9c
Through `docker run storjlabs/storagenode:latest --help` we have always
made available around 100 command-line arguments.
However, if you now pass such an argument, it will be passed to
storagenode-updater and may no longer be recognized. This will cause
the storagenode not to start.
This was introduced in
https://review.dev.storj.io/c/storj/storj/+/5426
This change restores previous functionality.
Change-Id: I06823283ff82ffda12aee48c4d83717bddfbfdac
Change the order in which the storage node setup loads the identity,
to avoid writing anything to disk if there is an
error loading the identity.
This bug, and the specific changes to make, were reported by
GitHub user @onionjake.
Closes #4387 #4396
Change-Id: I360fff3c23b160c9e055203d3526d749edfd9129
Get the storagenode and storagenode-updater binaries during the
run of the container so as not to release a new docker image
on each new version of the storagenode binary.
Fixes https://github.com/storj/storj/issues/4176
Change-Id: I994c4942136a2cc7298eb0346238689eb406ae5b
Use satellite.DB method TestingMigrateToLatest instead of
MigrateToLatest. TestingMigrateToLatest is much faster.
Also, run package tests in parallel.
Change-Id: I18bc0926dcfb80ace30d0b401e64ed919bfb966f
Value attribution codes were converted into UUIDs and stored
in the users, projects, api_keys, bucket_metainfos, and
value_attributions tables in the partner_id column. This
migration will look up the appropriate partner name associated
with each of these UUIDs, and store the partner name directly
in the user_agent column within each table. If no corresponding
partner name exists for a partner_id, the partner_id value will
be stored instead.
Add migration for users table with tests.
Change-Id: I61254d9b81c474e76bcfc1c8cd863697c6ef44b6
Users signing up through a URL containing a promo code will have that code applied to their Stripe account instead of the free tier coupon.
Change-Id: I071041b0934648ef3f5bdb05b6ec97c400f89ae4
Currently the address being used is most of the time just :28967, which is not the correct address for reaching the node from the public internet.
This change uses the designated contact external address value, which contains the set and preferred way to reach the node.
Change-Id: I99e979c2541043755b81e65c36c4289bfa3f60f3
this makes the flags match rclone nomenclature
fixes test-uplinkng to use the temporary config dir
instead of the machine default, and clean up some.
bumps clingy so that the command errors when an unknown
command is specified.
also fixes some printfs in share to use clingy stdout.
it still does some external actions that should be
passed through a ulext.External for mocking, but
that's ok for now.
Change-Id: Icc231e7e26393541c312396fec907b640b97718e
The Add method on the multinode DB interface accepts
a set of parameters which are already fields in the nodes.Node struct
excluding the name field.
When adding a new node, you're forced to call the UpdateName() method
after calling the Add() method in order to save the node and set
its name.
This change allows passing the nodes.Node entity which includes
the name field. With this, a new node can be added together with
the name without invoking the UpdateName() method.
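A before/after sketch of the interface change (field and method shapes are approximations of the nodes package, not the exact code): the name travels with the rest of the node data, so no follow-up UpdateName call is needed.

package sketch

import "context"

// Node approximates the nodes.Node entity, including the name.
type Node struct {
	ID            string
	APISecret     []byte
	PublicAddress string
	Name          string
}

type DB interface {
	// Before (approximate): Add(ctx, id, apiSecret, publicAddress) followed by
	// UpdateName(ctx, id, name).
	// After: a single call that persists the node together with its name.
	Add(ctx context.Context, node Node) error
	UpdateName(ctx context.Context, id string, name string) error
}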
Change-Id: I281ec628dffaade35d6db4479a84f39636200072
The info command prints the details of the storagenode
to stdout.
It returns the storagenode info in JSON format
if the --json flag is specified, which can be piped
to the multinode add command.
Change-Id: I0163db8e02c4ec7346bfa69274d1772669357c6c
This change adds an add command to the multinode CLI.
The add command takes a JSON <file> as an argument.
If a dash (-) is specified, it reads data from stdin.
The <file> specified can be a JSON file containing an array of
node data or a single node object.
Change-Id: I44d68486dc9aea0bd0311a40e84d3262a0303aef
We want to monitor traffic and the tools that are used
to interact with our network, so we need to append
their user agent.
The same user agent is appended for the current uplink
and uplinkng, as eventually we will remove the former.
Change-Id: I116080d6c2c6c85d591771facf01356de02a9392
this allows commands like
uplinkng cp -r sj://foo sj://bar
to work correctly, rather than complain that sj://foo is
not a boolean.
Change-Id: I003e47aabb85566bc2b454851cf55043b17ee7ea
In uplink we have the command uplink share, and we need
to port it to uplinkng to have a command with the same functionality.
The command can be executed with or without parameters;
without any flags we share as read-only.
Change-Id: I973b11d00da237358834acf5a863ebab37e684cc
In uplink we have the command uplink access inspect, and we need
a command with the same functionality for uplinkng.
The command can be executed with or without a parameter.
Without a parameter, we show the default access.
If a parameter is given, it can be an access name or value.
If the access name or value is wrong, we show an error.
For example:
uplinkng access inspect
uplinkng access inspect accessName
uplinkng access inspect accessValue
https://storjlabs.atlassian.net/browse/PG-318
Change-Id: I85fd961283850feb8684db2d126441f6b9bf0270
The test external implementation doesn't support the OpenAccess
method. This makes it impossible to test the output of commands like
inspect or share. This implements only basic functionality.
Change-Id: I127ef0bb45a01634bd5265ed80840f8095c72794
The main motivation is to wrap the bucket DB and metainfo DB, so we
could check if a bucket is empty before applying geofencing config.
Change-Id: I8bac21555e01d51a663fb557bc1acfc8106bc2e1
The ordering of func init() calls isn't that well defined, and reordering
them could cause problems when starting the whole process from them.
Change-Id: I4088a0db156ece15354877011a481f6f91c9b332
This change adds the ability to download byte ranges
to uplinkng.
Extended the uplinkng Filesystem interface with Stat
method and an OpenOptions struct as parameter for the
Open method.
Also added a few tests for the ranged download.
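A sketch of the extended ulfs interface (names approximate the description above, not the exact cmd/uplinkng code): Stat exposes the object size and OpenOptions carries the requested byte range into Open.

package sketch

import (
	"context"
	"io"
)

// OpenOptions describes the byte range to download; Length < 0 means
// "until the end of the object".
type OpenOptions struct {
	Offset int64
	Length int64
}

// Filesystem is an approximation of the extended interface.
type Filesystem interface {
	Stat(ctx context.Context, location string) (size int64, err error)
	Open(ctx context.Context, location string, opts *OpenOptions) (io.ReadCloser, error)
}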
Change-Id: I89a7276a75c51a4b22d7a450f15b3eb18ba838d4
Add support for node URLs with PEER_URL to help configure
authservice and gateway.
storj-sim network env SATELLITE_0_URL
Change-Id: I8b595b398007730662f100a4e6b3529cc0a7512a
Currently, if the node is below the minimum version, it will immediately
update to the suggested version, regardless of whether it is eligible from
the seed or not. This change corrects the behaviour to update to the minimum
version only and then properly respect/wait for the rollout to include it.
Updates based on logic here: https://review.dev.storj.io/c/storj/private/+/6187
Change-Id: Ic6c91c48ae9b8a116378b2573fbfca7e7bd5cc32
If the directory doesn't exist, then the first run in a non-migration
setting will error because it cannot create the config file. This
change creates the directory.
Change-Id: I439159c00047e9ae20e139318dad5a047c853253
Usage: from a host for the affected satellite, issue this command
and supply the number of segments that have been permanently lost. For
example, to indicate 2 segments permanently lost:
satellite register-lost-segments 2
If the unthinkable happens and this is necessary to use, it is presumed
that we will also remove the non-recoverable segments from the metainfo
db, so they will not continue to appear as 'temporarily unavailable' to
the repair checker.
The influxQL query for this will probably look something like:
SELECT sum(total) AS "sum_total"
FROM "v3_stats_new"."autogen"."lost-segments"
WHERE time > :dashboardTime: AND "scope" = 'segment-durability'
GROUP BY time(:interval:), "application", "instance"
FILL(null)
Or use the Flux language in order to get the benefit of the
cumulativeSum function:
from(bucket: "v3_stats_new/autogen")
|> range(start: dashboardTime)
|> filter(fn: (r) => r._measurement == "lost-segments"
and r.scope == "segment-durability"
and r._field == "total")
|> group(columns: ["application", "instance"])
|> cumulativeSum()
Change-Id: I73d364937705aa815af7520ab79c00bc2aea09f6
Multipart upload limits added. Last part has no size limit.
Max number of parts: 10000, min part size: 5 MiB
Change-Id: Ic2262ce25f989b34d92f662bde720d4c4d0dc93d
https://storjlabs.atlassian.net/browse/PG-305
We should extend the move method of cmd/uplink/cmd/mv.go:
if both parameters end with a slash,
it should list all files and call the move method in a loop.
Example parameters:
uplink mv sj://bucket/a/prefix/ sj://new-bucket/a/new-prefix/
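A simplified sketch of the prefix handling (the mover interface is a stand-in, not the real uplink filesystem code): when both locations end with a slash, list everything under the source prefix and move each key individually.

package sketch

import (
	"context"
	"strings"
)

type mover interface {
	List(ctx context.Context, prefix string) ([]string, error)
	Move(ctx context.Context, src, dst string) error
}

// movePrefix moves every object under src (a prefix ending in "/") to the
// corresponding key under dst (also ending in "/").
func movePrefix(ctx context.Context, fs mover, src, dst string) error {
	keys, err := fs.List(ctx, src)
	if err != nil {
		return err
	}
	for _, key := range keys {
		rel := strings.TrimPrefix(key, src)
		if err := fs.Move(ctx, key, dst+rel); err != nil {
			return err
		}
	}
	return nil
}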
Change-Id: Ic24c2af83153ea60ec74393e65736af094877151
Removes database tables and functionality related to our custom
coupon implementation because it has been superseded by the Stripe
coupon and promo code system. Requires implementations of the
payments Invoices interface to return coupon usages along with
invoices.
Change-Id: Iac52d2ff64afca8cc4dbb2d1f20e6ad4b39ddfde
New command for the CLI to move an object to a different
location.
uplink mv sj://bucket/your-object sj://bucket/moved-object
Change-Id: I85a4961aa59f250819954e78f20363ac3c570938
The make-bucket command was using the full location
specified on the command line instead of only the bucket name.
As an addition, the change contains basic integration tests
with storj-sim.
Change-Id: Ie3b5283468b7fbde0b1333f01dc4fc2a2952e1a1
Currently TextMaxVerifyCount flakes in some tests; try increasing the
sleep time to ensure that things are slow enough to trigger the error
condition.
Also pass ctx to all the funcs so we can handle sleep better.
Change-Id: I605b6ea8b14a0a66d81a605ce3251f57a1669c00
Use multipart upload to upload a single object in parts in
parallel. It uses the parallelism flag added earlier.
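A stripped-down sketch of the fan-out (uploadPart is a placeholder, not the real uplink call): the object is split into parts, at most `parallelism` parts are uploaded concurrently via an error group, and the multipart upload is committed only if every part succeeded.

package sketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// uploadPart is a placeholder for uploading one part of a multipart upload.
func uploadPart(ctx context.Context, partNumber int, offset, length int64) error {
	return nil
}

func uploadInParts(ctx context.Context, size, partSize int64, parallelism int) error {
	group, ctx := errgroup.WithContext(ctx)
	group.SetLimit(parallelism) // at most `parallelism` parts in flight

	for offset, part := int64(0), 1; offset < size; offset, part = offset+partSize, part+1 {
		offset, part := offset, part
		length := partSize
		if offset+length > size {
			length = size - offset
		}
		group.Go(func() error { return uploadPart(ctx, part, offset, length) })
	}
	// Commit the multipart upload only after every part finished successfully.
	return group.Wait()
}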
Change-Id: I45b531a5db43c86f0112a5e3bb4a83bc1d65650f
Now that the command has been run on all production satellites
(US1, AP1, EU1), we should not need it again.
Change-Id: I25a4ffb03a7172445d90a04ec539be36c4eb2c8e
Adds support for the new uplink method DownloadObjectAt, which
gives the ability to download a single object in parallel.
Change-Id: I8388653429992b0d24c383d17d7e90904203fe77
This command is intended to be run as part of invoice generation - it
iterates over Stripe customers, and applies the free tier coupon to any
customer who doesn't already have a coupon.
This way, we can ensure that all customers have at least the free tier
coupon before and after invoice generation, in case a different coupon
has expired.
Change-Id: I33a4aff9174049f9e051de53ef65298ca65ed688
this makes the distinction between an "access name" and
an "access value" and talks about which is expected for
commands. most are "access name or value".
Change-Id: I43c0043a17d37e89ab5f87388ae9e890a8b59958
this was just supposed to add parallel uploads/downloads
and it does do that, but i then found a bunch of bugs
with respect to path handling that i thought i had under
control. oops.
so this adds a ton of tests and tries to make the logic
in ulloc to be more consistent. almost all of the actual
file handling bits and knowledge happens in cmd_cp now
where it should belong.
additionally, the s3 command has the behavior that if your
bucket has the file s3://bucket/file, then executing
s3 ls s3://bucket/fi returns nothing. this change makes
uplinkng match that behavior even if i don't personally
like it.
a big portion of the weirdness is the concept introduced
that i've named "directoryish", which intends to capture
the behavior that if a user copies a file to that location
then the base name of the source should be appended on
rather than a direct copy. this concept is entirely a
based on the string value and not the actual filesystem
state. hence, the cp command is responsible for checking
if local paths are actually a directory, and adding a
trailing slash if necessary to make them "directoryish".
additionally, the empty key for a bucket and the empty
string for local paths are considered "directoryish".
Change-Id: I9120d18616fd813b29ff81beed4f5993caa99fb6
This change adds a NOT NULL constraint to the created_at column in the segment table.
All occurrences of CreatedAt as a pointer are changed to the non-pointer version (metabase, segment loop, etc.).
Change-Id: I3efd476ebd1edd3327b69c9223d9edc800e1cc52
We made the decision to avoid satellite shutdown when the segment loop
returns an error. The loop can still return an error, but it will be logged
and we will add monitoring/alerting around that error.
Change-Id: I6aa8e284406edf644a09d6b1fe00c3155c5430c9
There are some users in our QA satellite which are no longer in Stripe,
and there are some users in Stripe which are not on our QA satellite.
This change allows us to test the paid tier conversion script in QA
despite these problems.
Change-Id: If94c9e882327841d1fd294d75fd302e6a7feee41
To be sure we are comparing the same set of objects
and segments, let's ignore objects and segments created
after processing was started.
Segments without objects cannot be created in the normal
way, so we will have them only if we broke something in
the past.
Change-Id: I96c07caf9e5091775d4dc8dfc0fef2b08b87957c
This tool has two commands to execute. One is to 'report'
orphaned segments. The second is to 'delete' orphaned
segments.
To find such segments, the tool first finds all unique
segment stream IDs. As a next step it removes from this
list the stream IDs of existing objects. What is left is a list of
orphaned segments.
Change-Id: I4a0ae3ad0b10a8d16572bfd22ac92cfa15ca19b3
Bucket tally calculation will be removed from metaloop and will
use the metabase objects iterator directly.
At the moment only bucket tally needs objects, so it makes no sense
to implement a separate objects loop.
Change-Id: Iee60059fc8b9a1bf64d01cafe9659b69b0e27eb1
The auth service has no way to remove access grant registrations that
lack expiration dates. We want to encourage people to set them, so as
to slow the rate at which the auth service DB fills up.
Change-Id: I1ccf629cd995dc184d2d90333166eab34d34ae07
We have implemented the paid tier, but it currently only handles new
users entering paid tier. It does not convert users who have already
added a credit card previously. We still want to convert these users'
project limits. This billing command can be run once to convert all old
customers with a credit card. Afterwards, we should be able to safely
remove it.
Change-Id: Ia496580b8e72ef436375b74f590fe57cca704fa8
The user must complete a reCAPTCHA in order to register.
ReCAPTCHA verification failure results in rejection of the
registration attempt.
Change-Id: I34ba7db414d756fd1aaebdc3d19cccbfc7fc1ea3
this changes globalFlags to be a ulext.External
interface value that is passed to each command.
rather than have ulext.External have a Setup
call in the way that the projectProvider used to,
we make all of the state arguments to the functions
and have the commands call setup themselves.
the reason it is in its own package is so that
cmd/uplinkng can import cmd/uplinkng/ultest
but cmd/uplinkng/ultest needs to refer to whatever
the interface type is to call the function that
creates the commands.
there's also quite a bit of shuffling around of
code and names. sorry if that makes it tricky
to review. there should be no logic changes, though.
a side benefit is there's no longer a need to do
a type assertion in ultest to make it set the
fake filesystem to use. that can be passed in
directly now. additionally, this makes the
access commands much easier to test.
Change-Id: I29cf6a2144248a58b7a605a7ae0a5ada5cfd57b6
We want to use StreamID/Position to identify an injured
segment. As it is hard to alter the existing injuredsegments
table, we are adding a new table that will replace the existing
one. The old table will be dropped later.
Change-Id: I0d3b06522645013178b6678c19378ebafe485c49
Because of our free/paid tier plan, we do not need a paywall anymore. We
have not used it in a while, but still have leftover code laying around.
Change-Id: Iaea8c39faf042a2f7a6b837727bb135c8bdf2907
The expires_at column was added to the segments table and
we need to migrate this value for existing segments
from the corresponding objects. This standalone tool
will read all objects and, if expires_at is set,
will send an update query for that object's segments.
Updates will be sent in batches and in parallel.
Change-Id: I1fddf0af8cde0f560582d25c6d0e07a00b58e534
This is part of the metaloop refactoring. We plan to remove
irreparable at some point but there was no time for it.
Now, instead of refactoring it for segmentloop, it's just easier
to drop it.
Later we still need to drop the table with a migration step.
Change-Id: I270e77f119273d39a1ecdcf5e1c37a5662a29ab4
this implements the rm command, which has to add
a Remove method to the filesystem interface and
implement it for local, remote and test filesystems.
Change-Id: Id41add28f01938893530aae0b4b73c8954e9b715
this adds a helper method to prompt for a line of
input using the clingy context to the global flag
state that errors if interactive mode is disabled.
Change-Id: Ie113c8920dfa4719e85cc24f11401d91b32812f9
this adds some stuff to ultest so that the set of
files created by a test can be inspected after so
that we can write some tests for the cp command
to observe that it does what it is supposed to do.
Change-Id: I98b8fb214058140dfbb117baa7acea6a2cc340e1
the directory was starting to get pretty large and
it was making it hard to pick concise names for
types and variables. this moves the location
stuff into a cmd/uplinkng/ulloc package, the
filesystem stuff into a cmd/uplinkng/ulfs package,
and the testing stuff into a cmd/uplinkng/ultest
package.
this should make the remaining stuff in cmd/uplinkng
only the business logic of how to implement the
commands, rather than also including a bunch of
helper utilities and scaffolding.
Change-Id: Id0901625ebfff9b1cf2dae52366aceb3b6c8f5b6
this adds a test framework with fake implementations of a
filesystem so that unit tests can be written asserting
the output of different command invocations as well as
the effects they have on a hypothetical filesystem and
storj network.
it also implements the mb command lol
Change-Id: I134c7ea6bf34f46192956c274a96cb5df7632ac0
This enables use of the --progress, --expires, and --metadata flags with
'uplink put'. These flags work similarly to their counterparts in the
'uplink cp' command.
Small difference from cp: "progress" is on by default in 'uplink cp',
but (for backwards compatibility) off by default in 'uplink put'.
Why: Requested by @Toyoo on the forum:
https://forum.storj.io/t/explicit-data-expiration-questions/13854/2
..and it's an easy addition to make.
Change-Id: Id20aedd3a280db43e4883338f92f6beec7a400de
This expanded format shows expiration times for objects and how much
custom metadata each object has.
This commit also organizes output formatting code together in one
section for simpler adjustments in the future.
Change-Id: Ica041c8a1de6ee73c104a0554c5c259e447536c4
Currently the interface is not useful. When we need to vary the
implementation for testing purposes we can introduce a local interface
for the service/chore that needs it, rather than using the large api.
Unfortunately, this requires adding a cleanup callback for tests, there
might be a better solution to this problem.
Change-Id: I079fe4dbe297b0ae08c10081a1cea4dfbc277682
Initially metabase was developed separately and it was useful to have a
separate environment flag for tests; however, it's more convenient to
use the same one as the rest of the testsuite.
Change-Id: Ia4d79be27ce5911cbae68d57cdf0b30f63459444
Use the 'AS OF SYSTEM TIME' Cockroach DB clause for the Graceful Exit
(a.k.a. GE) queries that count and delete the GE queue items of nodes
which have already exited the network.
Split the subquery used for deleting all the transfer queue items of
nodes which have exited when CRDB is used, and batch the queries because
CRDB struggles when executing them in a single query, unlike Postgres.
The new test added in this commit to verify the CRDB
batch logic for deleting all the transfer queue items of the exited
nodes revealed that the Enqueue method has to run in batches when CRDB
is used; otherwise CRDB returns the error "driver: bad connection"
when a big amount of items is passed to be enqueued. This error
didn't happen with the current test implementation; it happened with an
initial one that created a big amount of exited nodes and
transfer queue items for those nodes.
Change-Id: I6a099cdbc515a240596bc93141fea3182c2e50a9
multiregion satellites have complex database connection strings
largely due to using a different backend for the repair queue than
cockroach.
billing stuff didn't work right with this.
Change-Id: Ie8759a8c47e71347c3a190abfc9d53945d7b8855
errs.Class should not contain "error" in the name, since that causes a
lot of stutter in the error logs. As an example a log line could end up
looking like:
ERROR node stats service error: satellitedbs error: node stats database error: no rows
Whereas something like:
ERROR nodestats service: satellitedbs: nodestatsdb: no rows
Would contain all the necessary information without the stutter.
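For illustration, the convention with github.com/zeebo/errs looks roughly like this (the package name is hypothetical):

package nodestats

import "github.com/zeebo/errs"

// Error is the error class for this package. The class text carries no
// "error" suffix, so wrapped messages read "nodestats service: no rows"
// instead of "node stats service error: no rows".
var Error = errs.Class("nodestats service")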
Change-Id: I7b7cb7e592ebab4bcfadc1eef11122584d2b20e0
Initially there were pkg and private packages, however for all practical
purposes there's no significant difference between them. It's clearer to
have a single private package - and when we do get a specific
abstraction that needs to be reused, we can move it to storj.io/common
or storj.io/private.
Change-Id: Ibc2036e67f312f5d63cb4a97f5a92e38ae413aa5
cache is a really common variable and type name, and we have already used
the package name alias in multiple places.
Change-Id: I6435785b7549b541d533de59ec94557b9bd11e04
Initially we duplicated the code to avoid large scale changes to
the packages. Now that we are past the metainfo refactor, we can remove
the duplication.
Change-Id: I9d0b2756cc6e2a2f4d576afa408a15273a7e1cef
We need some chores to join without triggering the loop.
For example it's fine to run metrics only when something else is
running.
Change-Id: I9d8bd16f59c28c540c8d72971bc4e233a8660c02
Currently the tool verifies:
* validity of plain_offset
* whether plain_size is smaller than encrypted_size
Change-Id: I9ec4fb5ead3356a196392c26ca377fcdb367138e
This tool attempts to make TCP+TLS+DRPC and QUIC+DRPC connections to a
specified IP:port. If successful with either, it shows the node ID of
the opposite end of the connection.
This should be useful for SNOs who want to check that their packet
forwarding is working or that their node is listening for both
connection types.
Change-Id: I935dff037dd7f1106941a35567f6445230259d1a
Currently the loop handling is heavily related to the metabase rather
than metainfo.
metainfo over time has become related to the "public API" for accessing
the metabase data.
Currently updates monkit.lock, because monkit monitoring does not handle
ScopeNamed correctly. Needs a followup change to monitoring check.
Change-Id: Ie50519991d718dfb872ec9a0176a82e732c97584
Get the storagenode and storagenode-updater binaries during the
run of the container so as not to release a new docker image
on each new version of the storagenode binary.
Change-Id: Ic0eb4a9c18a98598dfd9b96c1d352c7399496fd2
metabase has become a central concept and it's more suitable for it to
be directly nested under satellite rather than being part of metainfo.
metainfo is going to be the "endpoint" logic for handling requests.
Change-Id: I53770d6761ac1e9a1283b5aa68f471b21e784198
Currently os.Create was leaving a file open, causing the atomic file write to
fail with access denied.
Also add a specific test for importing an access.
Change-Id: Id188bc480e795849ec7fdc72b1fc86433d76c47a
Check that the bloom filter creation date is earlier than the
metainfo loop system time used for db scanning.
Change-Id: Ib0f47c124f5651deae0fd7e7996abcdcaac98fb4
We already merged the multipart-upload branch to main. These two tools
make only sense if we are migrating a satellite from Pointer DB to
Metabase. There is one remaining satellite to migrate, but these tools
should be used from the respective release branch instead of from main.
Removing these tools from main will:
1) Avoid the mistake to use them from the main branch instead of from
the respective release branch.
2) Allow to finally remove any code related to the old Pointer DB.
Change-Id: Ied66098c5d0b8fefeb5d6e92b5e0ef5c6603df5d
We recently added the create_at column to the segments table.
Old segments need to get this value from the objects table.
This tool will iterate over all objects and update the corresponding
segments if the create_at column is not set.
Change-Id: Ib5aedc384637e739ee9af84454af0639e2559416
* Add a nullable billing_periods column in the coupons table
* Add nullable billing_periods column to the currently unused
coupon_codes table
* Drop the duration column from the coupon_codes table
* Replace duration config type so that the default promotional coupon
can be configured to never expire
Zero downtime migration plan:
* Add billing_periods column to coupons and coupon_codes tables (this change)
* After one release, remove all references to the old duration column,
replacing with references to billing_periods. At this point, we can also
change the default promotional coupon to never expire and migrate over
values from the old duration column.
* After another release, drop the duration column.
Change-Id: I374e8dc9fab9f81b4a5bc681771955662d4c007a
Test had two issues:
* sub test was using parent test to fail (ctx.Check)
* pointerDB and metabaseDB were using the same unique DB and pointerDB was closing/deleting it first
Change-Id: I5741b76d518663d80c4e6448c76e4ee9dd86c8e1
instead of only generating invoices for nodes that had some
activity, we generate it for every node so that we can find
and pay terminal nodes that did not meet thresholds before
we recognized them as terminal.
Change-Id: Ibb3433e1b35f1ddcfbe292c034238c9fa1b66c44
Rename the functions that are prefixed with 'New' which connect with
Redis by 'Open' to make clear that they perform network operations.
Change-Id: I1351e89a642e8e2c2586626646315ad0fb2c6242
Uplink tests are running storj/storj tests only against postgres. Without this change the integration test on uplink will fail, as 'omit' is not supported in the migrator test code.
Update the Redis dependency to use the last major production version.
The last version accepts a context parameter in all the network methods
so it allows us to pass it through them.
Change-Id: I34121b2ec3c2728602115c724933ad24c9e6e4fd
Old pointerdbs can have keys with just the project ID, segment index and bucket, without an object key. We need to ignore such keys.
We can have objects with missing segments. Such objects will be removed by the segment reaper, but between executions we can still have them. We should not interrupt the migration but collect such objects to clean up the migrated DB later.
The new 'consistency ge-cleanup-orphaned-data' cli command deleted
orphaned transfer queue items, but not entries in the
graceful_exit_progress table. This will delete orphaned entries
from the exit progress table too.
Change-Id: I5f927aac1f258490678deaf179be92ccfe10fcd8
The URL should be
https://link.tardigradeshare.io/s/<access>/<bucket>/<path>
Legacy URLs have the /s/ missing, but a redirect is issued.
Change-Id: Ic2a3dc092ff68d7706fd888a9fbfc27716877c6e
This PR removes all back-end related referral program code, including the
marketing portal.
We will have a separate PR for the front-end code and a database migration to
drop the `offers` and `usercredits` tables.
Change-Id: If59f952cddfe0558a7dc03a0eac7cc1081517f88