This test involves a satellite with dev defaults (DistinctIP=no) being
upgraded past commit 2522ff09b6, which
means we need to run the dev-defaults-satellite-upgrade migration SQL
to avoid getting DistinctIP=yes behavior (which breaks the tests).
Change-Id: I29fb596d1ffa568dad635d98cfe9abacd3aaa48f
Up to now, we have been implementing the DistinctIP preference with code
in two places:
1. On check-in, the last_net is determined by taking the /24 or /64
(in ResolveIPAndNetwork()) and we store it with the node record.
2. On node selection, a preference parameter defines whether to return
results that are distinct on last_net.
It can be observed that we have never yet had the need to switch from
DistinctIP to !DistinctIP, or from !DistinctIP to DistinctIP, on the
same satellite, and we will probably never need to do so in an automated
way. It can also be observed that this arrangement makes tests more
complicated, because we often have to arrange for test nodes to have IP
addresses in different /24 networks (a particular pain on macOS).
Those two considerations, plus some pending work on the repair framework
that will make repair take last_net into consideration, motivate this
change.
With this change, in the #2 place, we will _always_ return results that
are distinct on last_net. We implement the DistinctIP preference, then,
by making the #1 place (ResolveIPAndNetwork()) more flexible. When
DistinctIP is enabled, last_net will be calculated as it was before. But
when DistinctIP is _off_, last_net can be the same as address (IP and
port). That will effectively implement !DistinctIP because every
record will have a distinct last_net already.
As a side effect, this flexibility will allow us to change the rules
about last_net construction arbitrarily. We can do tests where last_net
is set to the source IP, or to a /30 prefix, or a /16 prefix, etc., and
be able to exercise the production logic without requiring a virtual
network bridge.
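For illustration only, a minimal sketch of how last_net could be derived under
this change; the real ResolveIPAndNetwork() has a different signature, and the
helper name here is invented:

    import (
        "fmt"
        "net"
    )

    // lastNetFor is a hypothetical helper showing the idea, not the real code.
    func lastNetFor(ipPort string, distinctIP bool) (string, error) {
        host, _, err := net.SplitHostPort(ipPort)
        if err != nil {
            return "", err
        }
        ip := net.ParseIP(host)
        if ip == nil {
            return "", fmt.Errorf("invalid IP %q", host)
        }
        if !distinctIP {
            // DistinctIP off: every node keeps a unique last_net, so the
            // distinct-on-last_net selection effectively changes nothing.
            return ipPort, nil
        }
        if ip4 := ip.To4(); ip4 != nil {
            return ip4.Mask(net.CIDRMask(24, 32)).String(), nil // /24 for IPv4
        }
        return ip.Mask(net.CIDRMask(64, 128)).String(), nil // /64 for IPv6
    }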
This change should be safe to make without any migration code, because
all known production satellite deployments use DistinctIP, and the
associated last_net values will not change for them. They will only
change for satellites with !DistinctIP, which are mostly test
deployments that can be recreated trivially. For those satellites which
are both permanent and !DistinctIP, node selection will suddenly start
acting as though DistinctIP is enabled, until the operator runs a single
SQL update "UPDATE nodes SET last_net = last_ip_port". That can be done
either before or after deploying software with this change.
I also assert that this will not hurt performance for production
deployments. It's true that adding the distinct requirement to node
selection makes things a little slower, but the distinct requirement is
already present for all production deployments, and they will see no
change.
Refs: https://github.com/storj/storj/issues/5391
Change-Id: I0e7e92498c3da768df5b4d5fb213dcd2d4862924
The segment loop has a built-in sanity check to verify that the number of
segments processed by the loop is roughly correct. We want to have the same
verification for the ranged loop.
https://github.com/storj/storj/issues/5544
Change-Id: Ia19edc0fb4aa8dc45993498a8e6a4eb5928485e9
This change adds a new chore that will check for failed invoices and
potentially freeze corresponding accounts.
It makes slight modifications to stripemock.go and invoices.go (adding
stripe CustomerID to the Invoice struct).
Issue: https://github.com/storj/storj-private/issues/140
Change-Id: I161f4037881222003bd231559c75f43360509894
Add a new config to the satellite admin: --admin.groups.limit-update.
This can be used as an alternate means of authentication if the request
is coming from the OAuth proxy.
Change-Id: Ic2de13862e6414244b060c66a0f2bed72097cbad
Add a new purchase-package endpoint to Server. The endpoint can be enabled
or disabled by a new config, --console.pricing-packages-enabled.
The purchase-package endpoint applies a coupon and adds and charges a
credit card if the user's user agent is a partner with a configured package
plan.
github issue: https://github.com/storj/storj-private/issues/125
Change-Id: I0d6ccccd6874ddba360c45f338fd1c44f95e135a
Older releases do not compile with the latest Go version if quic is used.
We need to add a noquic tag to be able to compile older releases with the
latest Go version.
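For reference, the standard Go build-constraint mechanism for this kind of
opt-out looks roughly like the following; the file names are illustrative only:

    // quic.go — contains the quic-based code, excluded when the tag is set:
    //go:build !noquic

    // noquic.go — a stub without the quic dependency, used when the tag is set:
    //go:build noquic

    // Older releases can then be built with a newer Go toolchain using:
    //   go build -tags noquic ./...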
Change-Id: Id5768fcaa5c1f7cf3e6fbb633e7ca60309b7a37c
Adds a feature flag for the new all projects dashboard. It defaults to false.
Issue: https://github.com/storj/storj/issues/5514
Change-Id: I160904eccae7d30e05b734e69600725702b16aca
We will be needing an infrequent chore to check which nodes are in the
reverify queue and synchronize that set with the 'contained' field in
the nodes db, since it is easily possible for them to get out of sync.
(We can't require that the reverification queue table be in the same
database as the nodes table, so maintaining consistency with SQL
transactions is out. Plus, even if they were in the same database, using
such SQL transactions to maintain consistency would be slow and
unwieldy.)
This commit adds the actual chore.
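A rough sketch of one chore run, with interface and method names invented for
illustration (the real chore and DB methods differ):

    import (
        "context"

        "storj.io/common/storj"
    )

    type ReverifyQueue interface {
        GetAllContainedNodes(ctx context.Context) ([]storj.NodeID, error)
    }

    type OverlayDB interface {
        SetAllContainedNodes(ctx context.Context, ids []storj.NodeID) error
    }

    // One run of the chore: make the nodes table's 'contained' flags match the
    // set of nodes that currently have pending reverification jobs.
    func syncContained(ctx context.Context, queue ReverifyQueue, nodes OverlayDB) error {
        queued, err := queue.GetAllContainedNodes(ctx)
        if err != nil {
            return err
        }
        return nodes.SetAllContainedNodes(ctx, queued)
    }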
Refs: https://github.com/storj/storj/issues/5431
Change-Id: Id78b40bf69fae1ac39010e3b553315db8a1472bd
To improve deletion of old entries in project_bandwidth_daily_rollup
we need an index on the `interval_day` column, which is used to find those
old entries.
Additionally, we are changing the interval at which deletion is executed
from every 7 days to every 1 day, so that each run has a smaller portion
of data to delete.
Fixes https://github.com/storj/storj/issues/5465
Change-Id: Ie18ebe859887b93d6e4e6065a61fb9214c7ad27a
Implemented observer and partial; created new structures to keep mon
metrics the same as in the segment loop.
Change-Id: I209c126096c84b94d4717332e56238266f6cd004
Create a config to specify one-time prices and corresponding coupon
ids for partners.
github issue: https://github.com/storj/storj-private/issues/118
Change-Id: I67b26e7208b12ba8f0e6dc1b164dd9545b09cac0
Add node tally ranged loop observer and partial.
Add node tally ranged observer to the ranged loop peer.
Add config flag to select which loop to use for node tally.
Update satellite core to use the segment/ranged loop based on a flag.
Duplicate existing node tally test but using the ranged loop.
Change-Id: I6786f1a16933463fab5f79601bf438203a7a5f9e
Added rangedloop in storj-sim for each satellite, to verify it works for the
metrics observer. Removed identity from the rangedloop peer as we never use
it, added logs on service run, added a loop to the service instead of an
endless for loop, and added the interval value to config.
Closes: https://github.com/storj/storj/issues/5414
Change-Id: Ibc3b06071b68feda4a35b45da2bbe36e22a02fc8
This change implements the ranged loop observer to replace the audit
chore that builds the audit queue.
The strategy employed by this change is to use a collector for each
segment range to build separate per-node segment reservoirs that are
then merged during the join step.
In previous observer migrations, there were only a handful of tests so
the strategy was to duplicate them. In this package, there are dozens
of tests that utilize the chore. To reduce code churn and maintenance
burden until the chore is removed, this change introduces a helper that
runs tests under both the chore and observer, providing a pair of
functions that can be used to pause or run the queueing function.
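For illustration, a sketch of the join step with types invented for the
example; the real reservoir merge in the audit package preserves the sampling
guarantees rather than simply concatenating:

    import "storj.io/common/storj"

    // Reservoir is a stand-in for the real audit reservoir type.
    type Reservoir struct {
        keys []string // sampled segment references for one node
    }

    func (r *Reservoir) Merge(other *Reservoir) {
        // the real merge keeps the sampling uniform; this sketch only
        // shows where merging happens
        r.keys = append(r.keys, other.keys...)
    }

    // joinReservoirs folds one range's per-node reservoirs into the
    // observer's running totals during the join step.
    func joinReservoirs(dst, fork map[storj.NodeID]*Reservoir) {
        for nodeID, res := range fork {
            if existing, ok := dst[nodeID]; ok {
                existing.Merge(res)
            } else {
                dst[nodeID] = res
            }
        }
    }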
https://github.com/storj/storj/issues/5232
Change-Id: I8bb4b4e55cf98b1aac9f26307e3a9a355cb3f506
The tests are forked from the chore tests with slight adaptations for
being run against the ranged loop. I also moved a benchmark for the
database from chore_test.go to db_test.go.
The pathcollector is reused as a rangedloop.Partial.
https://github.com/storj/storj/issues/5234
Change-Id: I56182031d133812a9f4d4a433c01b9150af39f31
This change stubs userinfo endpoint from storj/common/pb/userinfo.proto.
It also adds config for allowed peers, and a method for verifying peers.
Issue: https://github.com/storj/storj/issues/5358
Change-Id: I057a0e873a9e9b3b9ad0bba69305f0d708bd9b9e
Adds a new method Exists which can be used to verify which requested
piece IDs exist on a storage node. It will verify only pieces that belong
to the satellite that calls the endpoint.
Minimum WASM size was increased a bit.
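For illustration only, the shape of the Exists check described above, with an
invented store interface rather than the real storage node API:

    import (
        "context"

        "storj.io/common/storj"
    )

    // PieceStore stands in for the storage node's piece store API.
    type PieceStore interface {
        Exists(ctx context.Context, satellite storj.NodeID, piece storj.PieceID) (bool, error)
    }

    // existingPieces reports which of the requested piece IDs are present,
    // checking only within the namespace of the calling satellite.
    func existingPieces(ctx context.Context, store PieceStore, satellite storj.NodeID,
        requested []storj.PieceID) ([]storj.PieceID, error) {

        var found []storj.PieceID
        for _, id := range requested {
            ok, err := store.Exists(ctx, satellite, id)
            if err != nil {
                return nil, err
            }
            if ok {
                found = append(found, id)
            }
        }
        return found, nil
    }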
https://github.com/storj/storj/issues/5415
Change-Id: Ia5f9cadeb526541b2776a8973eb7d50133ad8636
This change updates the stripecoinpayments service to optionally skip
generating line items for payments records that have no egress, storage,
or segments for the billing period.
This results in a reduction from 4 Stripe API calls to 1 for customers
who have no usage. The final API call is the attempt to generate an
invoice on Stripe, which fails as expected because there are no unapplied
line items. Removing that final API call would require some additional
queries and is out of scope for this change.
This functionality is behind the
`payments.stripe-coin-payments.skip-empty-invoices` feature flag.
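A minimal sketch of the skip condition, with field and type names assumed
rather than taken from the real stripecoinpayments code:

    // usageRecord is a stand-in for the real payments record type.
    type usageRecord struct {
        Egress   int64
        Storage  float64
        Segments float64
    }

    // shouldSkipLineItems reports whether a record can be skipped entirely,
    // saving the per-record Stripe API calls.
    func shouldSkipLineItems(skipEmptyInvoices bool, r usageRecord) bool {
        return skipEmptyInvoices && r.Egress == 0 && r.Storage == 0 && r.Segments == 0
    }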
https://github.com/storj/storj/issues/5381
Change-Id: Id184969a4c79047c40502336d69c51388ab03bf8
Minimal implementation of the ranged (=threaded) segment loop
service, to improve performance over the existing loop.
Has tests with an in-memory segment database
and an example observer.
Does not yet have: database link, observer duration tracking,
suspicious processed ratio guard, rate limiting, minimum execution
interval per observer, etc.
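A very rough sketch of the ranged idea, with all interface names invented for
illustration (the real rangedloop package differs): the segment keyspace is
split into ranges, each range is processed concurrently against per-range
observer state, and the per-range results are merged afterwards.

    import (
        "context"

        "golang.org/x/sync/errgroup"
    )

    type Segment struct{} // stand-in for the metabase segment type

    type SegmentSource interface {
        // NextBatch returns the next batch of segments in this range;
        // an empty batch means the range is exhausted.
        NextBatch(ctx context.Context) ([]Segment, error)
    }

    type Partial interface {
        Process(ctx context.Context, batch []Segment) error
    }

    type Observer interface {
        Fork(ctx context.Context) (Partial, error)       // per-range state
        Join(ctx context.Context, partial Partial) error // merge a range's results back
    }

    func runOnce(ctx context.Context, ranges []SegmentSource, observers []Observer) error {
        var group errgroup.Group
        partials := make([][]Partial, len(ranges))
        for i, src := range ranges {
            src := src
            forks := make([]Partial, 0, len(observers))
            for _, obs := range observers {
                fork, err := obs.Fork(ctx)
                if err != nil {
                    return err
                }
                forks = append(forks, fork)
            }
            partials[i] = forks
            group.Go(func() error {
                for {
                    batch, err := src.NextBatch(ctx)
                    if err != nil {
                        return err
                    }
                    if len(batch) == 0 {
                        return nil // range exhausted
                    }
                    for _, fork := range forks {
                        if err := fork.Process(ctx, batch); err != nil {
                            return err
                        }
                    }
                }
            })
        }
        if err := group.Wait(); err != nil {
            return err
        }
        for _, forks := range partials {
            for j, fork := range forks {
                if err := observers[j].Join(ctx, fork); err != nil {
                    return err
                }
            }
        }
        return nil
    }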
Part of https://github.com/storj/storj/issues/5223
Change-Id: I08ffb392c3539e380f4e7b4f1afd56c4c394668d
This change reduces the token link expiry time from 24h to 30m and improves the UI to prompt users about the expiration.
see: https://github.com/storj/storj-private/issues/17
Change-Id: Iac00f5740fa84069937fdf9bd30a739b6db2a9e0
The audit chore will be pushing a large number of segments to be
audited, and the db might choke on that large insert when under load.
This change divides the insert up into batches, which can be sized to
whatever is optimal for the backing database. It also arranges for
segments to be inserted in the order of the primary key, which helps
performance on some systems.
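A sketch of the batching idea; the segment type, primary key, and insert
function below are stand-ins, not the real reverification queue code:

    import (
        "bytes"
        "context"
        "sort"
    )

    type queueSegment struct {
        StreamID [16]byte // stand-in for the real primary key
        Position uint64
    }

    // insertInBatches splits a large insert into primary-key-ordered batches;
    // insertBatch stands in for the real insert query.
    func insertInBatches(ctx context.Context, insertBatch func(context.Context, []queueSegment) error,
        segments []queueSegment, batchSize int) error {

        sort.Slice(segments, func(i, j int) bool {
            return bytes.Compare(segments[i].StreamID[:], segments[j].StreamID[:]) < 0
        })
        for start := 0; start < len(segments); start += batchSize {
            end := start + batchSize
            if end > len(segments) {
                end = len(segments)
            }
            if err := insertBatch(ctx, segments[start:end]); err != nil {
                return err
            }
        }
        return nil
    }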
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I941f580f690d681b80c86faf4abca2995e37135d
Reputation updates during repair currently consume a lot of database
resources. Sometimes increasing the rate of repair is more important
than auditing a node based on whether or not it has the correct piece
during repair. That is the job of the audit service.
This commit is to implement an intermediate solution from this issue: https://github.com/storj/storj/issues/5089
This commit does not address the more in-depth fix discussed here: https://github.com/storj/storj/issues/4939
Change-Id: I4163b18d78a96fadf5265789fd73c8aa8def0e9f
We tested the new upload flow (with multiple versions), which fixes
inconsistency while uploading objects, on QA/EUN1/SLC. Now we would like
to enable it for all satellites by default. Tests required small adjustments.
Fixes https://github.com/storj/storj/issues/5283
Change-Id: I0d53c041abebc0d182ba5a88bb1dac906c29caf0
As part of the effort of splitting out the auditor workers to their own
process, we are transitioning the communication between the auditor
chore and the verification workers to a queue implemented in the
database, rather than the sequence of in-memory queues we used to use.
This logical database is safely partitionable from the rest of
satelliteDB.
Refs: https://github.com/storj/storj/issues/5251
Change-Id: I6cd31ac5265423271fbafe6127a86172c5cb53dc
We added an alternative way to calculate bucket tallies for accounting;
now that it's tested, we will enable it by default.
CollectBucketTallies was extended to support overriding the current time
to be able to test handling of expired objects.
Change-Id: I738b99a33fd2e086245f92d874c1cbb806e834c0
Add a new chore to periodically insert nodes that are offline and
have not gotten an offline email in a certain amount of time into node
events.
Change-Id: I658b385bb777b0240c98092946a93d65bee94abc
Create NodeEvents Chore on satellite core to read nodeevents DB and
notify node operators on node events. The chore sends notifications
grouped by email and event type: it selects the oldest entry in
nodeevents.DB and also any other event with the same email and event
type no matter how old it is. The oldest entry of a group must exist for
a minimum amount of time before that group can be selected, however.
This minimum amount of time is a configurable value:
--node-events.selection-wait-period. This wait period allows us to
combine events of the same type and same email address into a single
email.
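A rough sketch of the selection rule, with struct and field names invented for
illustration:

    import "time"

    type NodeEvent struct {
        Email     string
        Event     int
        CreatedAt time.Time
    }

    // pickGroup returns the group (email, event type) of the oldest pending
    // event, but only once that oldest event has waited at least waitPeriod.
    func pickGroup(pending []NodeEvent, waitPeriod time.Duration, now time.Time) []NodeEvent {
        if len(pending) == 0 {
            return nil
        }
        oldest := pending[0]
        for _, ev := range pending[1:] {
            if ev.CreatedAt.Before(oldest.CreatedAt) {
                oldest = ev
            }
        }
        if now.Sub(oldest.CreatedAt) < waitPeriod {
            return nil // the oldest entry hasn't waited long enough yet
        }
        var group []NodeEvent
        for _, ev := range pending {
            if ev.Email == oldest.Email && ev.Event == oldest.Event {
                group = append(group, ev) // newer events of the same email+type ride along
            }
        }
        return group
    }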
Change-Id: I8b444aa324d2dae265cc27d9e9e85faef79195d8
This change causes the session inactivity timer to be enabled unless
expressly specified otherwise.
Change-Id: I85b4014394afac2feb21f383cac414cddb09ca8f
Added a new feature flag.
Reworked Vuex logic to work properly with the project-level passphrase.
Implemented a new simple set-project-level-passphrase modal.
Issue:
https://github.com/storj/storj/issues/5280
Change-Id: I6a15e90ee9fa7aa8a09c67022466787090120f9c
Hubspot is migrating from using API keys for authentication to OAuth.
This change migrates our Hubspot integration to use OAuth tokens.
It modifies the EnqueueCreateUser code to not send an empty HubspotUTK to HubSpot, and to return an error for failed requests.
see: https://developers.hubspot.com/changelog/upcoming-api-key-sunset
Change-Id: I422f00e3e3caeff3ff3d08ddec059502b9addaee
This patch is required to fix the nightly deployment:
* We need to use the exact docker image tag that we built earlier
* Migration should be full instead of snapshot (a snapshot migration can't update existing, but older, DBs)
Change-Id: Id2a2070638072a7b0021326326b0d53533817168
The new flag 'MultipleVersions' was not correctly passed from the metainfo
configuration to the metabase configuration. Because the configuration was
set correctly for unit tests we didn't catch it, and the issue was found
while testing on the QA satellite.
This change reduces the number of places where new metabase flags need
to be propagated from the metainfo configuration, to avoid problems with
setting new flags in the future.
Fixes https://github.com/storj/storj/issues/5274
Change-Id: I74bc122649febefd87f665be2fba628f6bfd9044
We need to make exceptions for older uplink versions, because they do
not compile with newer Go versions due to the quic dependency.
Change-Id: I3e073694f0942029c56740f0689088058ee068c3
Since the number of objects is growing and looping through all of them
starts taking a lot of time, we are switching to an SQL query that does it
in chunks of tallies per bucket. This is the 2nd part of the issue fix.
Closes https://github.com/storj/team-metainfo/issues/125
Change-Id: Ia26bcac0a7e2c6503df9ebbf4817a636841d3284
The current deployment strategy requires that the GC bloomfilter generation process executes only once and exits.
Change-Id: I952991f126596aa165d1f2e9fce6f8548c21bdba
Earthly is a build tool; it uses buildkitd to create reproducible and highly cacheable builds.
It is used by a new experimental nightly build to easily create storj-up images (but it can be used for any ad-hoc storj-up cluster to create the images).
To make the nightly more robust, I would prefer to commit the helper files (today I do a rebase every time, but sometimes it fails).
More detailed information about Earthly can be found at https://earthly.dev or https://www.youtube.com/watch?v=nChpMEdOaCQ
Change-Id: I683601e0558aca53b45ed3819c46c909534f8b15
The threshold of piece deletions from the nodes during CommitObject
when overriding an existing object seemed to cause a race condition in
tests.
This change makes the threshold configurable so we can set it to the
maximum, so that CommitObject waits until all pieces are removed from the
nodes in the test.
Change-Id: Idf6b52e71d0082a1cd87ad99a2edded6892d02a8
We want to send emails to SNOs. Node status changes go through the
overlay service, so it's a good place to add the mail service.
Add the mailservice.Service, satellite address, and satellite name to
overlay service. Also add feature flag --overlay.send-node-emails
Change-Id: I3bd2cb3bf22f9724954ce2374f8b651b902b3a24
Change the default loop interval for querying for new payments and adding them into the billing table from 1 minute to 15 seconds.
Change-Id: I26cf4a764cbe1de4c9b839ad60352374d8231522
Change the default number of required block confirmations for a payment to be confirmed from 12 to 15.
Change-Id: I44c258134c293e7691623bc00c504130aa69a96a
We will introduce new logic for creating new objects (BeginObject).
Instead of using a single version internally (1), we will select the first
available version during object creation. Because we need to be sure
that everything is wired up correctly, we need a feature flag to be
able to control whether the new feature is enabled.
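For illustration only, one way the version selection could look; the real
metabase implementation differs:

    type Version int64

    // nextVersion: with the feature off, new objects always start at version 1;
    // with it on, BeginObject picks the first version not yet taken at that
    // object location.
    func nextVersion(existing []Version, useFirstAvailable bool) Version {
        if !useFirstAvailable || len(existing) == 0 {
            return 1
        }
        highest := existing[0]
        for _, v := range existing[1:] {
            if v > highest {
                highest = v
            }
        }
        return highest + 1
    }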
Change-Id: If0f8496397130811f43bf9db9fdcc2b30cd2e4ca
Implement a new service to read retain filters from a bucket and
send them out to storage nodes.
This allows the retain filters to be generated by a separate command on
a backup of the database.
Parallelism (setting ConcurrentSends) and end-to-end garbage collection
tests will be restored in a subsequent commit.
Solves https://github.com/storj/team-metainfo/issues/121
Change-Id: Iaf8a33fbf6987676cc3cf74a18a8078916fe673d
Doing some cleanup in the "scripts" folder. All integration-like tests are
moved under the "test" directory (integration, bc, redis) and bash scripts
are adjusted to reflect the new location.
Additionally, "scripts/install-awscli.sh" was deleted as it was not
used.
Change-Id: I152905c4258f471a71f2d0e8731d91bb075e99c1
We would like to have a separate process/command to collect bloom
filters from a source different than the production DBs. Such a process
will use the segment loop to build bloom filters for all storage nodes and
will send them to a Storj bucket.
This change adds the main logic to the new service. After collecting all
bloom filters with the segment loop and piece tracker, all filters are
marshaled and packed into zip files. Each zip contains up to "ZipBatchSize"
bloom filters and is uploaded to the bucket specified in the configuration.
All uploaded objects have a specified expiration time, so they don't have
to be deleted manually.
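A sketch of the packing step, with the naming scheme and upload function
assumed rather than taken from the real service:

    import (
        "archive/zip"
        "bytes"
        "context"
        "fmt"
        "sort"
        "time"
    )

    // packAndUpload is an illustration only; filters is keyed by storage node
    // ID (as a string here), and upload stands in for the real bucket client.
    func packAndUpload(ctx context.Context, filters map[string][]byte, zipBatchSize int,
        expires time.Time, upload func(ctx context.Context, key string, data []byte, expires time.Time) error) error {

        nodes := make([]string, 0, len(filters))
        for node := range filters {
            nodes = append(nodes, node)
        }
        sort.Strings(nodes)

        for start := 0; start < len(nodes); start += zipBatchSize {
            end := start + zipBatchSize
            if end > len(nodes) {
                end = len(nodes)
            }
            var buf bytes.Buffer
            zw := zip.NewWriter(&buf)
            for _, node := range nodes[start:end] {
                w, err := zw.Create(node) // one zip entry per storage node
                if err != nil {
                    return err
                }
                if _, err := w.Write(filters[node]); err != nil {
                    return err
                }
            }
            if err := zw.Close(); err != nil {
                return err
            }
            key := fmt.Sprintf("bloomfilters-%d.zip", start/zipBatchSize)
            // expiration on the object means old filters don't need manual cleanup
            if err := upload(ctx, key, buf.Bytes(), expires); err != nil {
                return err
            }
        }
        return nil
    }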
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: I2b6bc02a7dd7c3a639e75810fd013ae4afdc80a2
We would like to have a separate process/command to collect bloom
filters from a source different than the production DBs. Such a process
will use the segment loop to build bloom filters for all storage nodes and
will send them to a Storj bucket. This is the initial change to add such a
service. The added service joins the segment loop and collects all
bloom filters.
Sending bloom filters to the bucket will be added as a subsequent
change.
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: I2551723605afa41bec84826b0c647cd1f61f3b14
Sessions now expire after a much shorter amount of time, requiring
clients to issue API requests for session extension. This is handled
behind the scenes as the user interacts with the page, but once session
expiration is imminent, a modal appears which informs the user of his
inactivity and presents him with the choice of logging out or preserving
his session.
Change-Id: I68008d45859c814a835d65d882ad5ad2199d618e
This is in response to community feedback that our existing reputation
calculation is too likely to disqualify storage nodes unfairly with
extreme swings up and down.
For details and analysis, please see the data_loss_vs_dq_chance_sim.py
tool, the "tuning reputation further.ipynb" Jupyter notebook in the
storj/datascience repository, and the discussion at
https://forum.storj.io/t/tuning-audit-scoring/14084
In brief: changing the lambda and initial-alpha parameters in this way
causes the swings in reputation to be smaller and less likely to put a
node past the disqualification threshold unfairly.
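For context, the audit score update has roughly this shape (simplified, not
the exact production code); with a much larger initial alpha, a single audit
outcome moves the score far less:

    // w is the audit weight, v is +1 for a successful audit and -1 for a failure.
    func updateReputation(alpha, beta, lambda, w, v float64) (newAlpha, newBeta, score float64) {
        newAlpha = lambda*alpha + w*(1+v)/2
        newBeta = lambda*beta + w*(1-v)/2
        score = newAlpha / (newAlpha + newBeta)
        return newAlpha, newBeta, score
    }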
Note: this change will cause a one-time reset of all (non-disqualified)
node reputations, because the new initial alpha value of 1000 is
dramatically different, and the disqualification threshold is going to
be much higher.
Change-Id: Id6dc4ba8fde1be3db4255b72282207bab5491ca3
Created a new modal which shows the user their native STORJ token wallet
address. There are QR and copy buttons.
It will be used only in the new billing screen.
Change-Id: Icef3c8668c548b779c07fe2b85eb5761cd1221a3
Jenkins doesn't do a very good job of identifying what has been changed.
While it has a syntax to define patterns, it compares the current build with the previous build (in the case of git-verify it can be a totally different branch) instead of checking the HEAD commit.
This patch introduces shell scripts to do this better:
* It doesn't depend on Jenkins any more
* It can be executed locally
* It can detect web changes properly (see the relation change as an example).
Change-Id: I9d37775e3818c08c4aa96ffb78f84d57f28a2c95
We have enabled the new project dashboard in production. Change the
default to true so that we do not need an explicit configuration in
prod.
Change-Id: I0f93773965283e7b0682f6586685224281cbf78c
Implemented Recaptcha and Hcaptcha for the login screen.
Slightly refactored the registration page implementation.
Made 2 different login/registration captcha configs on the server side to easily swap between captchas independently.
Issue: https://github.com/storj/storj/issues/4982
Change-Id: I362bd5db2d59010e90a22301893bc3e1d860293a
Removed segment limit validation and checks in the metainfo endpoint and accounting/projectusage,
since the feature is live and segment limitation is always applied now.
Resolves: https://github.com/storj/storj/issues/4470
Change-Id: I8cf87cbbc40ac61262f9f05e52573d3ae6410611
Currently we have a significant number of tallies that need to be
deleted together. Add a limit (by default 10k) to how many will
be deleted at the same time.
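One possible shape of such a limit, sketched with a placeholder delete
function rather than the real query:

    import "context"

    // deleteOldTallies removes old tallies in batches of at most `limit` rows;
    // deleteBatch stands in for the real delete query.
    func deleteOldTallies(ctx context.Context,
        deleteBatch func(ctx context.Context, limit int) (int64, error), limit int) error {

        for {
            deleted, err := deleteBatch(ctx, limit)
            if err != nil {
                return err
            }
            if deleted < int64(limit) {
                return nil // nothing (or not enough) left to delete
            }
        }
    }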
Change-Id: If530383f19b4d3bb83ed5fe956610a2e52f130a1