The audit chore will be pushing a large number of segments to be
audited, and the db might choke on that large insert when under load.
This change divides the insert up into batches, which can be sized
however is optimal for the backing database. It also arranges for
segments to be inserted in the order of the primary key, which helps
performance on some systems.
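A minimal sketch of the batching idea (type, function, and batch-size names here are illustrative, not the actual satellite code):

    package audit

    import "sort"

    type segment struct {
        StreamID string
        Position uint64
    }

    // pushInBatches orders segments by the primary key (stream id, position)
    // and inserts them in fixed-size batches instead of one huge INSERT.
    func pushInBatches(segments []segment, batchSize int, insert func([]segment) error) error {
        if batchSize <= 0 {
            batchSize = len(segments)
        }
        sort.Slice(segments, func(i, j int) bool {
            if segments[i].StreamID == segments[j].StreamID {
                return segments[i].Position < segments[j].Position
            }
            return segments[i].StreamID < segments[j].StreamID
        })
        for len(segments) > 0 {
            n := batchSize
            if n > len(segments) {
                n = len(segments)
            }
            if err := insert(segments[:n]); err != nil {
                return err
            }
            segments = segments[n:]
        }
        return nil
    }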
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I941f580f690d681b80c86faf4abca2995e37135d
Reputation updates during repair currently consume a lot of database
resources. Sometimes increasing the rate of repair is more important
than auditing a node based on whether or not it has the correct piece
during repair; that is the job of the audit service.
This commit is to implement an intermediate solution from this issue: https://github.com/storj/storj/issues/5089
This commit does not address the more in-depth fix discussed here: https://github.com/storj/storj/issues/4939
Change-Id: I4163b18d78a96fadf5265789fd73c8aa8def0e9f
We tested the new upload flow (with multiple versions), which fixes an
inconsistency while uploading objects, on QA/EUN1/SLC. Now we would like
to enable it for all satellites by default. Tests required small
adjustments.
Fixes https://github.com/storj/storj/issues/5283
Change-Id: I0d53c041abebc0d182ba5a88bb1dac906c29caf0
As part of the effort of splitting out the auditor workers to their own
process, we are transitioning the communication between the auditor
chore and the verification workers to a queue implemented in the
database, rather than the sequence of in-memory queues we used to use.
This logical database is safely partitionable from the rest of
satelliteDB.
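Roughly, the new queue sits between the chore and the workers like this (method and type names are illustrative, not the exact satellite interface):

    package audit

    import "context"

    type Segment struct {
        StreamID string
        Position uint64
    }

    // VerifyQueue is backed by its own logical database rather than
    // in-memory channels, so it can be partitioned from satelliteDB.
    type VerifyQueue interface {
        // Push enqueues segments chosen by the audit chore.
        Push(ctx context.Context, segments []Segment) error
        // Next hands the next segment to a verification worker.
        Next(ctx context.Context) (Segment, error)
    }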
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I6cd31ac5265423271fbafe6127a86172c5cb53dc
We added an alternative way to calculate bucket tallies for accounting;
now that it is tested, we will enable it by default.
CollectBucketTallies was extended to support overriding the current time
so that handling of expired objects can be tested.
Change-Id: I738b99a33fd2e086245f92d874c1cbb806e834c0
Add a new chore to periodically insert into node events those nodes that
are offline and have not gotten an offline email in a certain amount of
time.
Change-Id: I658b385bb777b0240c98092946a93d65bee94abc
Create NodeEvents Chore on satellite core to read nodeevents DB and
notify node operators on node events. The chore sends notifications
grouped by email and event type: it selects the oldest entry in
nodeevents.DB and also any other event with the same email and event
type no matter how old it is. The oldest entry of a group must exist for
a minimum amount of time before that group can be selected, however.
This minimum amount of time is a configurable value:
--node-events.selection-wait-period. This wait period allows us to
combine events of the same type and the same email address into a single
email.
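For illustration, the grouping rule in a minimal sketch (type and field names are assumptions, not the actual chore code):

    package nodeevents

    import "time"

    type NodeEvent struct {
        Email     string
        Event     int
        CreatedAt time.Time
    }

    // nextBatch picks the oldest event that has waited at least waitPeriod,
    // then returns every event sharing its email and type, so they can be
    // combined into a single notification email.
    func nextBatch(events []NodeEvent, waitPeriod time.Duration, now time.Time) []NodeEvent {
        if len(events) == 0 {
            return nil
        }
        oldest := events[0]
        for _, e := range events {
            if e.CreatedAt.Before(oldest.CreatedAt) {
                oldest = e
            }
        }
        if now.Sub(oldest.CreatedAt) < waitPeriod {
            return nil
        }
        var batch []NodeEvent
        for _, e := range events {
            if e.Email == oldest.Email && e.Event == oldest.Event {
                batch = append(batch, e)
            }
        }
        return batch
    }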
Change-Id: I8b444aa324d2dae265cc27d9e9e85faef79195d8
This change causes the session inactivity timer to be enabled unless
expressly specified otherwise.
Change-Id: I85b4014394afac2feb21f383cac414cddb09ca8f
Added a new feature flag.
Reworked vuex logic to work properly with the project-level passphrase.
Implemented a new simple "set project level passphrase" modal.
Issue:
https://github.com/storj/storj/issues/5280
Change-Id: I6a15e90ee9fa7aa8a09c67022466787090120f9c
Hubspot is migrating from using API keys for authentication to OAuth.
This change migrates our Hubspot integration to use OAuth tokens.
It also modifies the EnqueueCreateUser code to not send an empty HubspotUTK to Hubspot, and to return an error for failed requests.
see: https://developers.hubspot.com/changelog/upcoming-api-key-sunset
Change-Id: I422f00e3e3caeff3ff3d08ddec059502b9addaee
This patch is required to fix the nightly deployment:
* We need to use the exact docker image tag that we built earlier
* Migration should be full instead of snapshot (a snapshot migration cannot update existing, older dbs)
Change-Id: Id2a2070638072a7b0021326326b0d53533817168
The new flag 'MultipleVersions' was not correctly passed from the metainfo
configuration to the metabase configuration. Because the configuration was
set correctly for unit tests we didn't catch it, and the issue was found
while testing on the QA satellite.
This change reduces the number of places where new metabase flags need
to be propagated from the metainfo configuration, to avoid problems with
setting new flags in the future.
Fixes https://github.com/storj/storj/issues/5274
Change-Id: I74bc122649febefd87f665be2fba628f6bfd9044
We need to make exceptions for older uplink versions, because they do not
compile with newer Go versions due to the quic dependency.
Change-Id: I3e073694f0942029c56740f0689088058ee068c3
Since the number of objects is growing and looping through all of them
starts taking a lot of time, we are switching to an SQL query that does it
in chunks of tallies per bucket. This is the second part of the issue fix.
Closes https://github.com/storj/team-metainfo/issues/125
Change-Id: Ia26bcac0a7e2c6503df9ebbf4817a636841d3284
The current deployment strategy requires that the GC bloomfilter generation process executes only once and exits.
Change-Id: I952991f126596aa165d1f2e9fce6f8548c21bdba
Earthly is a build tool; it uses buildkitd to create reproducible and highly cacheable builds.
It is used by a new experimental nightly build to easily create storj-up images (but it can be used to create the images for any ad-hoc storj-up cluster).
To make the nightly more robust, I would prefer to commit the helper files (today I do a rebase every time, but sometimes it fails).
More detailed information about Earthly can be found at https://earthly.dev or https://www.youtube.com/watch?v=nChpMEdOaCQ
Change-Id: I683601e0558aca53b45ed3819c46c909534f8b15
The threshold of piece deletions from the nodes during CommitObject
when overriding an existing object seemed to cause a race condition in
tests.
This change makes the threshold configurable so we can set it to the
maximum, so that in tests CommitObject waits until all pieces are removed
from the nodes.
Change-Id: Idf6b52e71d0082a1cd87ad99a2edded6892d02a8
We want to send emails to SNOs. Node status changes go through the
overlay service, so it's a good place to add the mail service.
Add the mailservice.Service, satellite address, and satellite name to
overlay service. Also add feature flag --overlay.send-node-emails
Change-Id: I3bd2cb3bf22f9724954ce2374f8b651b902b3a24
Change the default loop interval for querying for new payments and adding them into the billing table from 1 minute to 15 seconds.
Change-Id: I26cf4a764cbe1de4c9b839ad60352374d8231522
Change the default number of required block confirmations for a payment to be confirmed from 12 to 15.
Change-Id: I44c258134c293e7691623bc00c504130aa69a96a
We will introduce new logic for creating new objects (BeginObject).
Instead of using a single version (1) internally, we will select the first
available version during object creation. Because we need to be sure
that everything is wired up correctly, we need a feature flag to be
able to control whether the new feature is enabled.
Change-Id: If0f8496397130811f43bf9db9fdcc2b30cd2e4ca
Implement a new service to read retain filters from a bucket and
send them out to storagenodes.
This allows the retain filters to be generated by a separate command on
a backup of the database.
Parallelism (setting ConcurrentSends) and end-to-end garbage collection
tests will be restored in a subsequent commit.
Solves https://github.com/storj/team-metainfo/issues/121
Change-Id: Iaf8a33fbf6987676cc3cf74a18a8078916fe673d
Doing some cleanup in the "scripts" folder. All integration-like tests are
moved under the "test" directory (integration, bc, redis) and bash scripts
are adjusted to reflect the new location.
In addition, "scripts/install-awscli.sh" was deleted as it was not used.
Change-Id: I152905c4258f471a71f2d0e8731d91bb075e99c1
We would like to have a separate process/command to collect bloom
filters from a source different than the production DBs. Such a process
will use the segment loop to build bloom filters for all storage nodes and
will send them to a Storj bucket.
This change adds the main logic to the new service. After collecting all
bloom filters with the segment loop and piece tracker, all filters are
marshaled and packed into zip files. Each zip contains up to "ZipBatchSize"
bloom filters and is uploaded to the bucket specified in the configuration.
All uploaded objects have an expiration time set so that they don't have to
be deleted manually.
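A rough sketch of the packing step (names are illustrative; uploading the zips with an expiration time happens separately):

    package bloomfilters

    import (
        "archive/zip"
        "bytes"
    )

    // packFilters groups marshaled bloom filters into zip archives of at
    // most batchSize entries each.
    func packFilters(filters map[string][]byte, batchSize int) ([][]byte, error) {
        var zips [][]byte
        buf := new(bytes.Buffer)
        w := zip.NewWriter(buf)
        count := 0

        flush := func() error {
            if err := w.Close(); err != nil {
                return err
            }
            zips = append(zips, append([]byte(nil), buf.Bytes()...))
            buf.Reset()
            w = zip.NewWriter(buf)
            count = 0
            return nil
        }

        for nodeID, data := range filters {
            entry, err := w.Create(nodeID)
            if err != nil {
                return nil, err
            }
            if _, err := entry.Write(data); err != nil {
                return nil, err
            }
            count++
            if count == batchSize {
                if err := flush(); err != nil {
                    return nil, err
                }
            }
        }
        if count > 0 {
            if err := flush(); err != nil {
                return nil, err
            }
        }
        return zips, nil
    }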
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: I2b6bc02a7dd7c3a639e75810fd013ae4afdc80a2
We would like to have a separate process/command to collect bloom
filters from a source different than the production DBs. Such a process
will use the segment loop to build bloom filters for all storage nodes and
will send them to a Storj bucket. This is the initial change to add such a
service. The added service joins the segment loop and collects all
bloom filters.
Sending bloom filters to the bucket will be added as a subsequent
change.
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: I2551723605afa41bec84826b0c647cd1f61f3b14
Sessions now expire after a much shorter amount of time, requiring
clients to issue API requests for session extension. This is handled
behind the scenes as the user interacts with the page, but once session
expiration is imminent, a modal appears which informs the user of their
inactivity and presents the choice of logging out or preserving the
session.
Change-Id: I68008d45859c814a835d65d882ad5ad2199d618e
This is in response to community feedback that our existing reputation
calculation is too likely to disqualify storage nodes unfairly with
extreme swings up and down.
For details and analysis, please see the data_loss_vs_dq_chance_sim.py
tool, the "tuning reputation further.ipynb" Jupyter notebook in the
storj/datascience repository, and the discussion at
https://forum.storj.io/t/tuning-audit-scoring/14084
In brief: changing the lambda and initial-alpha parameters in this way
causes the swings in reputation to be smaller and less likely to put a
node past the disqualification threshold unfairly.
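For intuition, a minimal sketch of a beta-reputation style update that such parameters feed into (a simplified model; the exact satellite code and audit weights may differ):

    package reputation

    // updateAudit applies one audit outcome to the (alpha, beta) pair.
    // With a large initial alpha (e.g. 1000) and lambda close to 1, a
    // single audit moves the score alpha/(alpha+beta) only slightly,
    // which is what damps the swings described above.
    func updateAudit(alpha, beta, lambda, weight float64, success bool) (newAlpha, newBeta, score float64) {
        v := -1.0
        if success {
            v = 1.0
        }
        newAlpha = lambda*alpha + weight*(1+v)/2
        newBeta = lambda*beta + weight*(1-v)/2
        score = newAlpha / (newAlpha + newBeta)
        return newAlpha, newBeta, score
    }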
Note: this change will cause a one-time reset of all (non-disqualified)
node reputations, because the new initial alpha value of 1000 is
dramatically different, and the disqualification threshold is going to
be much higher.
Change-Id: Id6dc4ba8fde1be3db4255b72282207bab5491ca3
Created a new modal which shows the user their native STORJ token wallet address.
There are QR and copy buttons.
It will be used only in the new billing screen.
Change-Id: Icef3c8668c548b779c07fe2b85eb5761cd1221a3
Jenkins doesn't do a very good job of identifying what has been changed.
While it has a syntax to define patterns, it compares the current build with the previous build (in the case of git-verify this can be a totally different branch) instead of checking the HEAD commit.
This patch introduces shell scripts to do this better:
* It doesn't depend on Jenkins any more
* It can be executed locally
* It can detect web changes properly (see the related change as an example).
Change-Id: I9d37775e3818c08c4aa96ffb78f84d57f28a2c95
We have enabled the new project dashboard in production. Change the
default to true so that we do not need an explicit configuration in
prod.
Change-Id: I0f93773965283e7b0682f6586685224281cbf78c
Implemented Recaptcha and Hcaptcha for login screen.
Slightly refactored registration page implementation.
Made 2 different login/registration captcha configs on server side to easily swap between captchas independently.
Issue: https://github.com/storj/storj/issues/4982
Change-Id: I362bd5db2d59010e90a22301893bc3e1d860293a
Removed segment limit validation and checks in the metainfo endpoint and
accounting/projectusage, since the feature is live and segment limitation
is now always enforced.
Resolves: https://github.com/storj/storj/issues/4470
Change-Id: I8cf87cbbc40ac61262f9f05e52573d3ae6410611
Currently we have a significant number of tallies that need to be
deleted together. Add a limit (by default 10k) to how many will
be deleted at the same time.
Change-Id: If530383f19b4d3bb83ed5fe956610a2e52f130a1
Added a new email html template.
It is sent when a user tries to reset the password of an unknown or unverified account.
Made a couple of minor config changes.
Issue: https://github.com/storj/storj/issues/4913
Change-Id: I730f48b3478e302d1e38e1f8a27c75f66a8ba6fd
We don't build "multinode" nor "storagenode" docker images for armv6
architecture, instead we build for armv5.
Fix the script that publish the manifest for those images for a specific
tag to use armv5 for not failing when executing.
Change-Id: I7d859d8718240e1cd0dae6489e7e5c3b4068ff6e
This change integrates the session management database functionality
with the web application. Claim-based authentication has been removed
in favor of session token-based authentication.
Change-Id: I62a4f5354a3ed8ca80272814aad2448f901eab1b
Prevent network enumeration by rejecting private IPs in the PingMe and
Checkin endpoints.
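A minimal sketch of the kind of check involved (function and package names are illustrative, not the actual endpoint code):

    package contact

    import (
        "fmt"
        "net"
    )

    // rejectPrivateIP refuses addresses that resolve to loopback,
    // link-local, or private ranges, so PingMe/Checkin cannot be used to
    // probe internal networks.
    func rejectPrivateIP(host string) error {
        ips, err := net.LookupIP(host)
        if err != nil {
            return err
        }
        for _, ip := range ips {
            if ip.IsPrivate() || ip.IsLoopback() || ip.IsLinkLocalUnicast() {
                return fmt.Errorf("%s resolves to non-public address %s", host, ip)
            }
        }
        return nil
    }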
Closes storj/storj-private#32
Change-Id: I63f00483ff4128ebd5fa9b7b8da826a5706748c9
Add storjscan wallets implementation to the satellite. The wallets interface allows you to add and claim new wallets as called by the API. The storjscan specific implementation of this interface uses a wallets DB to associate the user to a wallet address, as well as a storjscan client to request and associate new wallets to the satellite.
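Roughly, the wallets interface has the following shape (names and types are paraphrased, not the exact satellite API):

    package storjscan

    import "context"

    // Wallets associates users with deposit wallet addresses.
    type Wallets interface {
        // Claim requests a fresh wallet address from storjscan and
        // associates it with the user.
        Claim(ctx context.Context, userID string) (address string, err error)
        // Get returns the wallet address already associated with the user.
        Get(ctx context.Context, userID string) (address string, err error)
    }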
Change-Id: I54081edb5545d4e3ee07cf1cce3d3e87cc00c4a1
An older change plumbed the full console config in as a subconfig of
the admin api configuration. This bloated the generated satellite
configuration unnecessarily while also allowing for confusion/mistakes.
Change-Id: Icf49cc1f147711e37e85f6eac1143fab8ddf1659
`os/exec.Cmd.CombinedOutput` runs the command, hence it cannot be used
after calling the `Run` method.
Because `CombinedOutput` already runs the command, we can use it directly
instead of `Run`.
Without this change, if the command returns a non-zero code we get an
error because the command has already started, and we don't get the
output.
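For illustration, the intended usage looks roughly like this (the command is arbitrary):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // CombinedOutput runs the command itself and captures
        // stdout+stderr, so it must not be preceded by Run.
        cmd := exec.Command("go", "version")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Printf("[%s] error %v: %s", cmd.Path, err, out)
            return
        }
        log.Printf("%s", out)
    }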
Example removing a copyright notice from one file and running the linter
(only showing the affected printed line)
Without this fix
2022/05/26 15:48:40 [/storj check-copyright] error exec: already started
With this fix
2022/05/26 16:22:40 [/storj check-copyright] error missing copyright certificate/doc.go: %!w(<nil>)
NOTE: the `%!w(<nil>)` is a bug in the check-copyright linter.
Change-Id: I40b64842028399b92a8982bfb143e1f87f92467b
Our linting process depends heavily on custom tools and linting
configuration. To help improve this process for running on local
developer machines, we can run the various tasks in our existing CI
container.
Change-Id: I60407686ce9233cc4f16e3724c5e8d44367aa200
logo redirects to homepage on login, signup, forgot password, reset
password, and activate account pages
Change-Id: I992aeae197004d620addd8d515cae1c1ca80a778
old bucket creation flow removed
new flow added
name and passphrase split into separate views
demo bucket will not be created automatically
bucket creation progress bar added
Change-Id: I2a1d7d77c3038caaafb3c06bdb0ac5dd1ad17599
Uplink can upload from stdin and download to stdout. We had
such tests for the old binary but they were missing for the new one.
Change-Id: I5110a9f531f5cc21277fa53611995fb5b556ff16
We want to remind unverified users to verify their emails:
once after 24 hours has passed and again after 5 days has passed.
Add mailservice.Service to satellite core because it is needed by the
chore for sending emails. To add the mailservice.Service to the core,
we create a helper function in satellite/peer.go to avoid duplicating
the code in both api.go and core.go. In addition to the chore, this
change adds methods to users.DB to get unverified users in need of
reminder.
Change-Id: I4e515bdf43f922788b4f965b2efb34fa32288bd1
We want to send email verification reminders to users from the satellite
core, but some of the functionality required to do so exists in the
satellite console service. We could simply import the console service
into the core to achieve this, but the service requires a lot of
dependencies that would go unused just to be able to send these emails.
Instead, we break out the needed functionality into a new service which
can be imported separately by the console service and the future email
chore.
The consoleauth service creates, signs, and checks the expiration of auth
tokens.
Change-Id: I2ad794b7fd256f8af24c1a8d73a203d508069078
Adds a new configuration for hcaptcha enabled, secretkey, and sitekey.
If both reCAPTCHA and hCaptcha are configured as "enabled", reCAPTCHA
will be used.
Change-Id: I73cc6e133d8da3555e0ed8b2b377cf9eb263e6dc
Added account locking on 3 or more login attempts.
Includes both password and MFA failed attempts on login.
Unlock account on successful password reset.
Change-Id: If4899b40ab4a77d531c1f18bfe22cee2cffa72e0
* Added new feature flag for the new Access Grant Flow.
* Added 3 cards to the access grant view for S3, CLI and Access Grant to replace the old header.
* Added new formatting, text and icon for the Access Grant Delete Popup modal.
"REST API" is a more accurate descriptor of the generated API in the
console package than "account management API". The generated API is very
flexible and will allow us to implement many more endpoints outside the
scope of "account management", and "account management" is not very well
defined to begin with.
Change-Id: Ie87faeaa3c743ef4371eaf0edd2826303d592da7
This also fixes the build order. Unfortunately we need
to ensure that the web frontends are built before installing
Go binaries.
Fixes https://github.com/storj/storj/issues/4654
Change-Id: I5d1c83125fd3d1a454d3400b2cbdd44bd3f2250c
We are in the process of creating an api to allow users to manage their
accounts programmatically. We would like to use api keys for
authorization. We were originally going to create an entirely new table
for these api keys, but seeing as we already have 2 other tables for
keys/tokens, api_keys and oauth_tokens, we thought it might be better to
use one of these. We're using oauth_tokens.
We create a new oidc.OAuthTokenKind for account management api keys:
KindAccountManagementTokenV0. We made the key versioned because we
likely want to improve the implementation in the future, but we want to
get something functional out the door ASAP because the account management
api feature is highly desired.
Add a new method to oidc.OAuthTokens interface for revoking v0 account
management api keys, RevokeAccountManagementTokenV0. Add update method
to dbx implementation to allow updating the expiration. We will revoke
these keys by setting the expiration to 0 so they are expired.
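Roughly, the interface addition looks like the following sketch (the parameter list is an assumption, not the exact signature):

    package oidc

    import "context"

    // OAuthTokens is the existing tokens interface; only the new method
    // is sketched here.
    type OAuthTokens interface {
        // RevokeAccountManagementTokenV0 revokes a v0 account management
        // api key by setting its expiration to zero, so it reads as expired.
        RevokeAccountManagementTokenV0(ctx context.Context, token string) error
    }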
Change-Id: Ideb8ae04b23aa55d5825b064b5e43e32eadc1fba
It seems that the github API is a little slow/laggy with regards to propagation of whether a release tag has been made or not.
This sleep should fix it and avoid having to retrigger builds.
This change adds endpoints for supporting OpenID Connect (OIDC) and
OAuth requests. This allows application developers to easily
develop apps with Storj using common mechanisms for authentication
and authorization.
Change-Id: I2a76d48bd1241367aa2d1e3309f6f65d6d6ea4dc
Update the user to the default paid tier project limit, which is currently 3 projects, when the user upgrades to a paid account.
Change-Id: I95b19d62cebc7d878b716355f2ebcaf0b51ca3f7
We decided that we want the segment limit for paying users to be high
enough that we don't have to change it too often.
Fixes https://github.com/storj/storj/issues/4590
Change-Id: Ic1c38bf3e2fcc000548ff4c7e7004647b39fbecf
Create global config to specify a list of country codes that should be
excluded from node selection during uploads.
This exclusion is not implemented when the upload selection cache is
disabled.
Change-Id: Ic41e8b4f18857a11045668eac23107da99668a72
This change allows us to send newly registered users to a configured URL
to help us track user conversions for marketing campaigns.
Brave conversions continue to be tracked using the /signup-success page
within the satellite app.
Change-Id: I9b451947ce0f39d3c99b233cb4b806d361151823
Add a RepairExcludedCountryCodes config flag for the overlay, providing a list of country codes to exclude nodes from target repair selection.
Mark segments with fewer than repairThreshold pieces in countries not in the RepairExcludedCountryCodes as not healthy.
With this change, the repair process is not affected. The segment will be removed from the repair queue by the repairer.
Another change will handle the logic at the repairer level.
Fixes https://github.com/storj/team-metainfo/issues/95
Change-Id: I9231b32de117a116488de055a3e94efcabb46e81
Added a feature flag which will be used to indicate whether the new generated console api is used.
Fixed some comments from the previous PR.
Change-Id: Ice31c998b0b347028a491c971a648fd1269bfd49
We would like to disable in production those parts of code
which are now mixed with new server-side copy logic.
Change-Id: Iff50682bc9545207330f58dd19b5eee53d404d7f
sometimes these scripts want to have an access imported
after it has been potentially modified by having the
satellite address and node id added. it used to use the
uplink command to do this, but the cli api for that
has changed. rather than try to have the script detect
which uplink version is in use and call the right thing
it can always write out a valid yaml file and depend on
the new cli migrating it.
Change-Id: Ib82819699333f5f29e00117b99bfb10640033b94
uplink command versions >= 1.48.0 always do multipart
uploads which cannot be downloaded by any of the
versions in the test < v1.27.6. so skip those tests.
Change-Id: I9644afbd14bfce9facfd87644d132f7d66367d62
In order to tag the latest release for the multinode image, it makes
the most sense to do it at the same time as we release the storagenode
image.
Change-Id: I2d63c1f93858354ad1f9a4fce0ce45a8fda2716f
the new uplink command expects to be able to migrate
and so we need to specify the --legacy-config-dir
flag sometimes as well. but unfortunately, the old
uplink will error if it gets a flag it doesn't
understand, so we have to set it as an env var.
but we can't use the env var for the --config-dir
flag all the time because the old uplink doesn't
look for that env var.
Change-Id: I019315192c0e6c348814527794342d823a5f9ec3
added InactivityTimerEnabled flag to enable/disable feature
added InactivityTimerDelay to configure delay time in seconds
default timer set to 10 minutes
reset dom events: keypress, mouseover, mousedown, touchmove
Change-Id: Idb66067c2902b2cdbe1a972225319c8abff97927
Finished implementing queries for both bandwidth and storage using pgx.Batch.
Fixed CSP styling issue.
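For reference, the general pgx.Batch pattern used here looks roughly like this (table and column names are illustrative, not the actual schema):

    package db

    import (
        "context"

        "github.com/jackc/pgx/v4"
    )

    // flushUsage queues several statements and sends them to the database
    // in a single round trip.
    func flushUsage(ctx context.Context, conn *pgx.Conn, satelliteID string, bandwidth, storage int64) error {
        batch := &pgx.Batch{}
        batch.Queue(`INSERT INTO bandwidth_usage (satellite_id, amount) VALUES ($1, $2)`, satelliteID, bandwidth)
        batch.Queue(`INSERT INTO storage_usage (satellite_id, amount) VALUES ($1, $2)`, satelliteID, storage)

        results := conn.SendBatch(ctx, batch)
        defer func() { _ = results.Close() }()

        for i := 0; i < batch.Len(); i++ {
            if _, err := results.Exec(); err != nil {
                return err
            }
        }
        return nil
    }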
Change-Id: I5f9e10abe8096be3115b4e1f6ed3b13f1e7232df
Currently the rate limit is kept per satellite api endpoint.
Since we run 9+ api endpoints in production, we do not need
a limit of 1000 per endpoint, since the intention was to allow 1000 total.
This change reduces the effective limit, given 9 instances,
down to 900, which should be close enough.
Change-Id: Ia579149ccc3a12e8febe0cfd5586b8a39de40f55
The free-tier segment usage limit was defined as 150k, not 140k. This
change corrects that.
https://github.com/storj/team-metainfo/issues/8
Change-Id: I71ec0961930b19fd09b2b996e01acd406a8dcf8f
We want to be able to limit the number of segments per project for users.
To limit this, we need to check the limit value associated with the project
and the number of segments already used in BeginMoveObject and BeginMoveSegment,
and increment the cached segment usage after each CommitSegment call.
Resolves https://github.com/storj/team-metainfo/issues/1
Change-Id: I6290e67c095a174b9d101c4521802d9bfe0453b8
this makes the flags match rclone nomenclature
fixes test-uplinkng to use the temporary config dir
instead of the machine default, and clean up some.
bumps clingy so that the command errors when an unknown
command is specified.
also fixes some printfs in share to use clingy stdout.
it still does some external actions that should be
passed through a ulext.External for mocking, but
that's ok for now.
Change-Id: Icc231e7e26393541c312396fec907b640b97718e
This change adds a script and the needed build logic to create a draft github release and then upload the created binaries to it in order to make the actual publishing a lot less error prone.
The logic is currently that it only does this for all github tags, but not for every main build/push. This is handled by the checks in the script itself.
Change-Id: Ie172a8e4a97200de901a26a055aa5a8a54b60a2a
We enabled this chore on different satellites
for testing. It works fine, so we can enable it
by default for all configurations.
Change-Id: I987639685d8de5c7e5798adca30fe26bdac9e1d1
All limits we have for projects also have parent limits stored
with the user data. A newly created project first takes its limits from the
owner's (user's) limits.
This change extends the users table with a project_segment_limit
column and adds functionality to get and set the value of this
column.
Change-Id: Iff5e36c62b517652390b649fc05992475916ecff
This change disallows creation of multiple users possessing the same email.
If a user attempts to create an account with an email address
that's already used - whether it belongs to an active account or not -
they will be notified of unsuccessful account creation. If they attempt to
log in using an email address belonging to an inactive account,
they will be presented with a link allowing them to re-send the
verification email. Attempting to register with an email address
belonging to an existing account triggers a password reset email.
Change-Id: Iefd8c3bef00ecb1dd9e8504594607aa0dca7d82e
this allows commands like
uplinkng cp -r sj://foo sj://bar
to work correctly, rather than complain that sj://foo is
not a boolean.
Change-Id: I003e47aabb85566bc2b454851cf55043b17ee7ea
To allow for changing limits for new users, while leaving existing users' limits as they are, we must store the project limits for each user. We currently store the limit for the number of projects a user can create in the user DB table. This change also stores the project bandwidth and storage limits in the same table.
Change-Id: If8d79b39de020b969f3445ef2fcc370e51d706c6
Change the satellite Admin HTTP server for:
* Embedding the UI assets into the Go binary.
* Serve the UI assets from the embedded file system or from a specific
directory path through a configuration flag, without requiring
authentication but keeping the authentication verification for the API
endpoints.
* Add tests to verify that the UI assets are served without
authentication.
Change-Id: I9003ac96f1ec585a189b67fc1cb315905403d557
This change adds the ability to download byte ranges
to uplinkng.
Extended the uplinkng Filesystem interface with a Stat
method and an OpenOptions struct as a parameter for the
Open method.
Also added a few tests for the ranged download.
Change-Id: I89a7276a75c51a4b22d7a450f15b3eb18ba838d4
Gateway-ST frequent release cycle has been resurrected, which means it's
safer to use the latest release tag in the storj repository's CI now.
Change-Id: I9df1c789a9b9418ba7cceaec9cfec3cc6c448284
Currently slower storagenodes can slow down the deletion queue.
To make piece deletion faster, reduce the maximum time spent in
either dialing or piece deletion requests.
With this change:
* dial timeout is 3s
* request timeout is 15s
* fail threshold is set to 10min
Similarly, we'll mark a storage node as failed when the timeout occurs.
The timeout usually indicates that the storagenode is overwhelmed.
Garbage collection will ensure that the pieces get deleted eventually.
Change-Id: Iec5de699f5917905f5807140e2c3252088c6399b
To make our free tier limits more clear, we will reduce the number of projects allowed from 3 to 1, and increase the storage and bandwidth limit of the free tier from 50 GB to 150 GB. The total allotments across all projects for a given user are unchanged, just reduced to a single project.
Change-Id: Ic8dddb135f2b83a3f36e2b9fdcb477e351ec137b
Multipart upload limits added. Last part has no size limit.
Max number of parts: 10000, min part size: 5 MiB
Change-Id: Ic2262ce25f989b34d92f662bde720d4c4d0dc93d
https://storjlabs.atlassian.net/browse/PG-305
Extend the move method of cmd/uplink/cmd/mv.go:
if both parameters end with a slash,
list all files and call the move method in a loop.
Example:
uplink mv sj://bucket/a/prefix/ sj://new-bucket/a/new-prefix/
Change-Id: Ic24c2af83153ea60ec74393e65736af094877151
Removes database tables and functionality related to our custom
coupon implementation because it has been superseded by the Stripe
coupon and promo code system. Requires implementations of the
payments Invoices interface to return coupon usages along with
invoices.
Change-Id: Iac52d2ff64afca8cc4dbb2d1f20e6ad4b39ddfde
New command for the cli to move an object to a different
location.
uplink mv sj://bucket/your-object sj://bucket/moved-object
Change-Id: I85a4961aa59f250819954e78f20363ac3c570938
The make bucket command was using the full location
specified on the command line instead of only the bucket name.
In addition, the change contains basic integration tests
with storj-sim.
Change-Id: Ie3b5283468b7fbde0b1333f01dc4fc2a2952e1a1