Bucket tally calculation will be removed from metaloop and will
use metabase objects iterator directly.
At the moment only bucket tally needs objects, so it makes no sense
to implement a separate objects loop.
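For illustration only, the tally shape without metaloop is roughly the following sketch; ObjectEntry and the in-memory slice here stand in for the real metabase iterator types:

    package main

    import "fmt"

    // ObjectEntry is a simplified stand-in for a metabase object record.
    type ObjectEntry struct {
        ProjectID  string
        BucketName string
        TotalBytes int64
    }

    // BucketTally accumulates per-bucket totals.
    type BucketTally struct {
        ObjectCount int64
        TotalBytes  int64
    }

    // tallyObjects consumes object entries in the order an objects iterator
    // would yield them and produces per-bucket tallies, without metaloop.
    func tallyObjects(entries []ObjectEntry) map[string]*BucketTally {
        tallies := make(map[string]*BucketTally)
        for _, entry := range entries {
            key := entry.ProjectID + "/" + entry.BucketName
            tally, ok := tallies[key]
            if !ok {
                tally = &BucketTally{}
                tallies[key] = tally
            }
            tally.ObjectCount++
            tally.TotalBytes += entry.TotalBytes
        }
        return tallies
    }

    func main() {
        tallies := tallyObjects([]ObjectEntry{
            {ProjectID: "p1", BucketName: "photos", TotalBytes: 1024},
            {ProjectID: "p1", BucketName: "photos", TotalBytes: 2048},
        })
        fmt.Printf("%+v\n", tallies["p1/photos"]) // &{ObjectCount:2 TotalBytes:3072}
    }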
Change-Id: Iee60059fc8b9a1bf64d01cafe9659b69b0e27eb1
Added a feature flag for MFA.
Added a new client-side API call to enable MFA, returning the secret.
Updated the users Vuex module to include the new API call.
Change-Id: Ia9e10f68c4a7da39b4f7c1073e657c2de98fb0db
The user must complete a reCAPTCHA in order to register.
ReCAPTCHA verification failure results in rejection of the
registration attempt.
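For context, server-side verification against Google's siteverify endpoint boils down to something like this sketch (not the actual satellite code; error handling and config wiring are simplified):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
    )

    // verifyRecaptcha checks a client-supplied reCAPTCHA token against
    // Google's verification endpoint. A failed check means the registration
    // attempt gets rejected.
    func verifyRecaptcha(secret, token string) (bool, error) {
        resp, err := http.PostForm("https://www.google.com/recaptcha/api/siteverify",
            url.Values{"secret": {secret}, "response": {token}})
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()

        var result struct {
            Success bool `json:"success"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
            return false, err
        }
        return result.Success, nil
    }

    func main() {
        ok, err := verifyRecaptcha("recaptcha-secret", "token-from-client")
        fmt.Println(ok, err)
    }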
Change-Id: I34ba7db414d756fd1aaebdc3d19cccbfc7fc1ea3
When a user adds a credit card, switch them to the paid tier and update
their projects with new bandwidth/storage limits. New projects for the
paid tier user will also have the updated limits.
The new limits are:
* storage per project - 50 GB free/25 TB paid
* bandwidth per project - 50 GB free/100 TB paid
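The limit selection above reduces to roughly this (constant names and the paidTier flag are illustrative, not the real config identifiers):

    package main

    import "fmt"

    const (
        GB int64 = 1e9
        TB int64 = 1e12
    )

    // projectLimits returns the storage and bandwidth limits per project,
    // depending on whether the user has added a credit card (paid tier).
    func projectLimits(paidTier bool) (storage, bandwidth int64) {
        if paidTier {
            return 25 * TB, 100 * TB
        }
        return 50 * GB, 50 * GB
    }

    func main() {
        storage, bandwidth := projectLimits(true)
        fmt.Println(storage, bandwidth) // 25000000000000 100000000000000
    }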
Change-Id: I7d6467d077e8bb2bbe4bcf88ab8d75490f83165e
Because of our free/paid tier plan, we do not need a paywall anymore. We
have not used it in a while, but we still have leftover code lying around.
Change-Id: Iaea8c39faf042a2f7a6b837727bb135c8bdf2907
Adding AS OF SYSTEM TIME to the query that calculates project bandwidth.
In addition, a method for setting the interval is added, as the tests
don't work well with the default interval.
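Illustratively, the clause gets appended to the query only when an interval is set, so tests can override or disable it (table and column names below are assumptions, not the real schema):

    package main

    import (
        "fmt"
        "time"
    )

    // projectBandwidthQuery builds the bandwidth query, adding an
    // AS OF SYSTEM TIME clause when an interval is configured. Tests can
    // pass 0 so they don't read stale data.
    func projectBandwidthQuery(asOfInterval time.Duration) string {
        asOf := ""
        if asOfInterval > 0 {
            asOf = fmt.Sprintf(" AS OF SYSTEM TIME '-%s'", asOfInterval)
        }
        return "SELECT COALESCE(SUM(allocated), 0) FROM project_bandwidth_rollups" +
            asOf + " WHERE project_id = $1 AND interval_month = $2"
    }

    func main() {
        fmt.Println(projectBandwidthQuery(10 * time.Second))
        // ... FROM project_bandwidth_rollups AS OF SYSTEM TIME '-10s' WHERE ...
    }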
Change-Id: Id1e15be4f6afff13b9dc2b7f595e2edb6de28db9
We used this to reduce initial load on the core to avoid OOM. However,
this is not a problem anymore with garbage collection running
separately.
Change-Id: Ifd62c822a74974bc21a5913199334469a4bc0130
This adds verification for the processed count and before and after
segment/objects table counts.
This adds a new flag:
metainfo.segment-loop.suspicious-processed-ratio: 0.03
This defaults to 3%, which at 100M segments is 3M segments.
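The verification itself amounts to a ratio check, roughly like this sketch (the exact comparison in the loop code may differ):

    package main

    import "fmt"

    // verifySegmentCount returns an error when the number of processed
    // segments deviates from the before/after table counts by more than
    // the configured suspicious-processed-ratio.
    func verifySegmentCount(processed, countBefore, countAfter int64, ratio float64) error {
        low, high := countBefore, countAfter
        if high < low {
            low, high = high, low
        }
        min := int64(float64(low) * (1 - ratio))
        max := int64(float64(high) * (1 + ratio))
        if processed < min || processed > max {
            return fmt.Errorf("processed count %d is suspicious, expected between %d and %d",
                processed, min, max)
        }
        return nil
    }

    func main() {
        // With ratio 0.03 and ~100M segments, being more than ~3M off is suspicious.
        fmt.Println(verifySegmentCount(104_000_000, 100_000_000, 100_500_000, 0.03))
    }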
Change-Id: I5ee03e913ddc4e67e94010ced126a2a9ea51f41b
This adds verification for the processed count and before and after
segment/objects table counts.
This adds a new flag:
metainfo.loop.suspicious-processed-ratio: 0.03
This defaults to 3%, which at 100M objects is 3M objects.
Change-Id: Ife5522ecc97bcc5a55667f36868a0f1fc8e4c561
This is part of the metaloop refactoring. We plan to remove
irreparable at some point, but there was no time for it.
Instead of refactoring it for the segment loop, it's easier
to just drop it now.
Later we still need to drop the table with a migration step.
Change-Id: I270e77f119273d39a1ecdcf5e1c37a5662a29ab4
Currently we do not limit the "as of system time" when iterating over
the objects table. Using just an interval would cause problems with the
tests. That could be overcome by skipping the interval in tests
altogether; however, we should probably test this more to ensure that
GC keeps working as intended.
This is safer code, though maybe not as straightforward as it could
be.
Change-Id: I374f77783b2af42bb6da846735ceea20a7ce5e60
Satellites set their configuration values to default values using
cfgstruct, however, it turns out our tests don't test these values
at all! Instead, they have a completely separate definition system
that is easy to forget about.
As is to be expected, these values have drifted, and it appears
that in a few cases testplanet is testing unreasonable values that we
won't see in production, or, perhaps worse, features enabled in
production were missed and weren't enabled in testplanet.
This change makes it so all values are configured in the same,
systematic way, so it's easy to see when test values differ
from dev or release values, and it's harder to forget
to enable features in testplanet.
In terms of reviewing, this change should actually be fairly
easy to review: private/testplanet/satellite.go keeps
the current config system and the new one and confirms that they
result in identical configurations, so you can be certain that
nothing was missed and the config is all correct.
You can also check the config lock to see what actual config
values changed.
Change-Id: I6715d0794887f577e21742afcf56fd2b9d12170e
We want to move some of the current metainfo loop observers to
the segment loop. This change adds a new service, similar to the
metainfo loop, but which iterates only over segments.
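Conceptually the new service mirrors the metainfo loop's observer pattern, just fanning out segments instead of objects. A simplified, self-contained sketch (Observer and Segment here are stand-ins, not the real interfaces):

    package main

    import (
        "context"
        "fmt"
    )

    // Segment is a simplified stand-in for a metabase segment record.
    type Segment struct {
        StreamID string
        Position uint64
        Size     int64
    }

    // Observer is attached to the segment loop and receives every segment.
    type Observer interface {
        Segment(ctx context.Context, segment *Segment) error
    }

    // SegmentLoop iterates over all segments once per cycle and feeds each
    // segment to every attached observer.
    type SegmentLoop struct {
        observers []Observer
    }

    func (loop *SegmentLoop) Join(obs Observer) {
        loop.observers = append(loop.observers, obs)
    }

    func (loop *SegmentLoop) RunOnce(ctx context.Context, segments []Segment) error {
        for i := range segments {
            for _, obs := range loop.observers {
                if err := obs.Segment(ctx, &segments[i]); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    // counter is a trivial observer that just counts segments.
    type counter struct{ n int }

    func (c *counter) Segment(ctx context.Context, _ *Segment) error {
        c.n++
        return nil
    }

    func main() {
        loop := &SegmentLoop{}
        c := &counter{}
        loop.Join(c)
        _ = loop.RunOnce(context.Background(), []Segment{{StreamID: "a"}, {StreamID: "b"}})
        fmt.Println(c.n) // 2
    }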
Change-Id: I67f7f461781723a4476e2b83377f31736d7c4870
Rather than applying our internal satellite implementation of coupons
when new accounts are created, use a configured Stripe coupon instead.
If no configuration is set, no coupon will be applied.
This change also removes logic for adding coupons to customers who pay
with crypto - they will already have the free tier coupon applied
anyway.
We will be phasing out our internal coupon implementation.
Change-Id: Ieb87ddb3412acbc74986aa9d18a4cbd93c29861a
Use the 'AS OF SYSTEM TIME' CockroachDB clause for the Graceful Exit
(a.k.a. GE) queries that count and delete the GE queue items of nodes
which have already exited the network.
Split the subquery used for deleting all the transfer queue items of
nodes which have exited when CRDB is used, and batch the queries, because
CRDB struggles when executing them in a single query, unlike Postgres.
The new test added in this commit to verify the CRDB
batch logic for deleting all the transfer queue items of the exited
nodes revealed that the Enqueue method also has to run in batches when
CRDB is used; otherwise CRDB returns the error "driver: bad connection"
when a big amount of items is passed to be enqueued. This error
didn't happen with the current test implementation; it happened with an
initial one that created a big amount of exited nodes and
transfer queue items for those nodes.
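For illustration only (table/column names and the exact SQL are assumptions): the batching pattern reads the exited node IDs with a stale read and then deletes their queue items in fixed-size chunks, so no single CRDB statement grows too large.

    package main

    import (
        "context"
        "database/sql"
    )

    // deleteQueueItemsOfExitedNodes first reads exited node IDs using a
    // historical read, then deletes their transfer queue items in batches.
    func deleteQueueItemsOfExitedNodes(ctx context.Context, db *sql.DB, batchSize int) error {
        rows, err := db.QueryContext(ctx, `
            SELECT id FROM nodes AS OF SYSTEM TIME '-10s'
            WHERE exit_finished_at IS NOT NULL`)
        if err != nil {
            return err
        }
        defer rows.Close()

        var nodeIDs []string
        for rows.Next() {
            var id string
            if err := rows.Scan(&id); err != nil {
                return err
            }
            nodeIDs = append(nodeIDs, id)
        }
        if err := rows.Err(); err != nil {
            return err
        }

        for _, id := range nodeIDs {
            for {
                res, err := db.ExecContext(ctx,
                    `DELETE FROM graceful_exit_transfer_queue WHERE node_id = $1 LIMIT $2`,
                    id, batchSize)
                if err != nil {
                    return err
                }
                affected, err := res.RowsAffected()
                if err != nil {
                    return err
                }
                if affected < int64(batchSize) {
                    break
                }
            }
        }
        return nil
    }

    // main is a placeholder; wiring up a real *sql.DB requires a driver.
    func main() {}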
Change-Id: I6a099cdbc515a240596bc93141fea3182c2e50a9
The previously configured never-expiring coupon does not refill every
month. Eventually, even though it never expires, it will run out. This
commit makes several small changes to address this issue for the free
tier:
* Change the config for the promotional coupon to be $1.65 for 1 month
(the change from $10 to $1.65 is due to our recent pricing changes)
* Update PopulatePromotionalCoupons (PPC for brevity) to add promotional
coupons to users with expired and consumed coupons (all users with a
project and no active coupons should get a new coupon when PPC is called)
* Call PPC at the end of the `create-invoice-coupons` stage of invoice
generation - after current coupons are processed and expired/exhausted.
* Remove legacy admin functionality for PPC from satellite/console - we
do not currently use it, but if we did, it should be in satellite/admin
instead.
Change-Id: I77727b97bef972df32ebb23cdc05055827076e2a
Allows us to remove the following files from the satellite branding
repo, with an up-to-date single source of truth now in storj/storj:
* web/satellite/src/common/registrationSuccess.html
* web/satellite/src/common/registrationSuccess.scss
* web/satellite/src/views/register/registerArea.html
* web/satellite/src/views/register/registerArea.scss
The registrationSuccess files have been removed from all satellites in
the branding repository. The registerArea files have been removed only
from production satellites in the branding repository.
Importantly, this change enables the "resend email" functionality on
production satellites - previously, this functionality was available in
storj/storj, but not our branding repository.
Removes the config for VerificationPageURL, which redirected users away
from the satellite app to storj.io after creating an account. In order
for the email resend button to work, we cannot leave the app.
Adds a new config value for partner satellites, which replaces the
partner satellite names config. The new config includes name and
address. It is validated on setup/run to ensure it can be parsed.
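For illustration, validation of such a config could look roughly like this (the JSON shape and field names are assumptions, not the exact format):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Partner is one entry of the partner satellites config.
    type Partner struct {
        Name    string `json:"name"`
        Address string `json:"address"`
    }

    // parsePartnerConfig parses and validates the configured partner
    // satellites; it runs on setup/run so a bad value fails fast.
    func parsePartnerConfig(raw string) ([]Partner, error) {
        var partners []Partner
        if err := json.Unmarshal([]byte(raw), &partners); err != nil {
            return nil, fmt.Errorf("invalid partner satellites config: %w", err)
        }
        for _, p := range partners {
            if p.Name == "" || p.Address == "" {
                return nil, fmt.Errorf("partner satellite entry missing name or address: %+v", p)
            }
        }
        return partners, nil
    }

    func main() {
        partners, err := parsePartnerConfig(`[{"name":"US1","address":"us1.example.test:7777"}]`)
        fmt.Println(partners, err)
    }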
Change-Id: I67db0702d9b9641f1a37b599f2929d56f3c33aca
Co-authored-by: littleskunk <jens.heimbuerge@googlemail.com>
Co-authored-by: JT Olio <hello@jtolio.com>
Co-authored-by: Igor <38665104+ihaid@users.noreply.github.com>
We can be more precise and conservative by using the backend
satellite/analytics service. We also no longer need client-side Segment
scripts.
Change-Id: Ic5fb18bea2d388b586ad773e26027d69bde87294
We already merged the multipart-upload branch to main. These two tools
only make sense if we are migrating a satellite from Pointer DB to
Metabase. There is one remaining satellite to migrate, but these tools
should be used from the respective release branch instead of from main.
Removing these tools from main will:
1) Avoid the mistake of using them from the main branch instead of from
the respective release branch.
2) Allow us to finally remove any code related to the old Pointer DB.
Change-Id: Ied66098c5d0b8fefeb5d6e92b5e0ef5c6603df5d
The new default promotional coupon is $10/month, and doesn't expire.
This change also migrates the coupon.duration column over to the new
coupon.billing_periods, and switches to rely completely on
billing_periods.
Change-Id: Ic3341e9fa4040449bab5e66ca4ee2640b095cf3d
* Add a nullable billing_periods column in the coupons table
* Add nullable billing_periods column to the currently unused
coupon_codes table
* Drop the duration column from the coupon_codes table
* Replace duration config type so that the default promotional coupon
can be configured to never expire
Zero downtime migration plan (the first, additive step is sketched after this list):
* Add billing_periods column to coupons and coupon_codes tables (this change)
* After one release, remove all references to the old duration column,
replacing them with references to billing_periods. At this point, we can also
change the default promotional coupon to never expire and migrate over
values from the old duration column.
* After another release, drop the duration column.
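A rough sketch of that additive first step (the column type and the way it hooks into the satellite's migration framework are assumptions):

    package main

    import (
        "context"
        "database/sql"
    )

    // addBillingPeriodsColumns is the additive half of the zero downtime
    // plan: new nullable columns only, nothing reads or writes them yet.
    func addBillingPeriodsColumns(ctx context.Context, db *sql.DB) error {
        stmts := []string{
            `ALTER TABLE coupons ADD COLUMN billing_periods bigint`,
            `ALTER TABLE coupon_codes ADD COLUMN billing_periods bigint`,
        }
        for _, stmt := range stmts {
            if _, err := db.ExecContext(ctx, stmt); err != nil {
                return err
            }
        }
        return nil
    }

    // main is a placeholder; the real change runs through migration steps.
    func main() {}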
Change-Id: I374e8dc9fab9f81b4a5bc681771955662d4c007a
* integration tests now have a trap that displays where the script failed, with the line and message
* functions from the uplink tests were moved to utils.sh so they can be reused
Change-Id: Ib2311775fd70ce784aa986328969d75eefc5ac36
This change introduces a new config flag,
--overlay.audit-history.offline-suspension-enabled,
to toggle suspending nodes for offline audits.
If the flag is set to true, nodes will be suspended if they meet the
requirements.
If the flag is false, nodes will not be suspended. If they are already
suspended and/or under review, these will be cleared.
Change-Id: Ibeba759c42d6e504f6b7598120d4fd4dab85ca74
- add Credit History table to the billing account page and set up ui for a user adding promo codes
- implement promo codes ui in registration form
- add feature flag to handle if coupon code ui should be rendered
Change-Id: I9fdeef7cffc7901958d3f9be335e1115b2471a2e
A recent change made the default usage/storage limits for projects 50gb
rather than 500gb. This increases the default limit for testversions.
Change-Id: Ibea05c0d0760662e447b6455d560a2a640801c6c
* Set up basic structure of new service.
* Implement a basic analytics track event for user creation.
Change-Id: Ica8c785540b1ef9d848404af307a22f21d33c6aa
For the time of transition from pointerdb to metabase we need to add a migration step to the rollingupgrade tests and comment out a few cases.
Change-Id: Ib12ae6aa14be35f9bf4ff3efb55cfc6957d4ceba
This is one step for implementing the free tier:
* Change the default project limit from 10 to 3
* Move storage and bandwidth project usage limits from the metainfo
package to the console package (otherwise there is a cyclical
dependency, and metainfo doesn't use these values anyway)
* Change the default storage usage limit per project from 500gb to 50gb
* Change the default bandwidth usage limit per project from 500gb to 50gb
* Migrate the database so that old users and projects continue to have
the old defaults (10 projects/500gb usage)
Change-Id: Ice9ee6a738bc6410da18c336c672d3fcd0cab1b9
We are implementing the free tier, which will give all new users 3
projects, 50gb storage, and 50gb bandwidth per project. All users will
receive a recurring coupon to cover this amount of usage.
With the free tier, we no longer need a paywall. Users will not need to
enter a payment method unless they want to increase their project or
usage limits.
Change-Id: If3b026e91858e5f557a2758e366616cecc8f21c7
We would like to log Node IDs and last contact successes of nodes DQd
in this manner. We would also like to avoid returning an unbounded list
of items from the db. Therefore we change the query to select a limited
number of nodes that meet the DQ conditions and iterate until 0 rows are
returned. Each column of the query is already indexed.
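In rough pseudo-Go, the loop looks like this (query, column names, and the 30-day threshold are illustrative):

    package main

    import (
        "context"
        "database/sql"
        "log"
    )

    // disqualifyStrayNodes disqualifies a bounded batch of nodes that meet
    // the DQ conditions, logging each one, and repeats until no rows match.
    func disqualifyStrayNodes(ctx context.Context, db *sql.DB, limit int) error {
        for {
            rows, err := db.QueryContext(ctx, `
                UPDATE nodes SET disqualified = now()
                WHERE id IN (
                    SELECT id FROM nodes
                    WHERE disqualified IS NULL
                      AND last_contact_success < now() - INTERVAL '30 days'
                    LIMIT $1
                )
                RETURNING id, last_contact_success`, limit)
            if err != nil {
                return err
            }

            count := 0
            for rows.Next() {
                var id string
                var lastContact sql.NullTime
                if err := rows.Scan(&id, &lastContact); err != nil {
                    rows.Close()
                    return err
                }
                log.Printf("disqualified stray node %s, last contact success %v", id, lastContact.Time)
                count++
            }
            if err := rows.Err(); err != nil {
                rows.Close()
                return err
            }
            rows.Close()

            if count == 0 {
                return nil
            }
        }
    }

    // main is a placeholder; wiring up a real *sql.DB requires a driver.
    func main() {}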
Change-Id: Iaec2d9b56e7202b7c2028ba21750d40c8dd506ee
Do not use Docker for running a Redis server in the test-sim that
checks that the Satellite can still operate when Redis is unavailable,
because we have to run this test in Jenkins and we don't want to allow a
container to connect to the Docker socket of the host machine in order
to run sibling containers.
Change-Id: I6180e8ed804968c8ccb0783ed334acab38af9a0f
WHAT:
enter passphrase step for users who have already created a passphrase
WHY:
to let users proceed to upload step
Change-Id: I084aec5b863981978cf190f99ee95154fbed9aab
WHAT:
beta satellite top banner's copy is changed to include support/feedback URLs
WHY:
so users using our beta satellite will be able to report feedback somewhere
Change-Id: Ibc349c8b3354b577275fcf1d2b75bfdd267729d9
The previous default FlushBatchSize of 10000 was causing major
slow down in select and insert statements on bucket_bandwidth_rollups.
We saw on the saltlake satellite that a FlushBatchSize of 1000 helped
reduce contention and query latency.
Change-Id: Ib95e73482219bc5aedc11925b1849fa5999774ba
WHAT:
config flag to indicate if satellite is in beta
WHY:
to avoid using hardcoded satellite names which may cause issues
Change-Id: If92eb7417c340bf343a9a91e2f6b11f0349020c5
This PR removes all back-end related referral program code, including
the marketing portal.
We will have a separate PR for the front-end code and the database
migration to drop the `offers` and `usercredits` tables.
Change-Id: If59f952cddfe0558a7dc03a0eac7cc1081517f88
Delete satellite order methods and DB tables which aren't used anymore
after we refactored the orders to store bucket
information in the orders' encrypted metadata.
There are also configuration parameters and a satellite chore that
aren't needed anymore after the orders refactoring.
Change-Id: Ida3682b95921df70792284b42c96d2508bf8ca9c
The rollup archiver chore moves bucket bandwidth rollups and
storagenode rollups that are older than a given duration
to two new archive tables.
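For illustration, archiving one of the two rollup tables amounts to an insert-then-delete inside a transaction (table names and the missing batching are simplifications):

    package main

    import (
        "context"
        "database/sql"
        "time"
    )

    // archiveOldBucketRollups moves bucket bandwidth rollups older than ttl
    // into the archive table and removes them from the hot table.
    func archiveOldBucketRollups(ctx context.Context, db *sql.DB, ttl time.Duration) error {
        cutoff := time.Now().Add(-ttl)

        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op after a successful commit

        if _, err := tx.ExecContext(ctx, `
            INSERT INTO bucket_bandwidth_rollup_archives
            SELECT * FROM bucket_bandwidth_rollups WHERE interval_start < $1`, cutoff); err != nil {
            return err
        }
        if _, err := tx.ExecContext(ctx, `
            DELETE FROM bucket_bandwidth_rollups WHERE interval_start < $1`, cutoff); err != nil {
            return err
        }
        return tx.Commit()
    }

    // main is a placeholder; wiring up a real *sql.DB requires a driver.
    func main() {}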
Change-Id: I1626a3742ad4271bc744fbcefa6355a29d49c6a5
Create a storj-sim test that checks that uplink operations work when
the satellite runs and can connect to Redis, and also when it cannot
connect, in order to simulate Redis downtime. It also verifies that the
satellite can start despite Redis being down.
This test currently doesn't pass, and it will be the one used to verify
the work that has to be done to make sure that the satellite allows
clients to perform their operations despite Redis being unavailable.
We require these changes before we deploy any customer-facing satellite
on a multi-region architecture.
NOTE that this test will be added to Jenkins later on, to run every time
we apply changes. At that point we'll see whether it has to be adjusted
to be able to run on Jenkins, because as it is now it may not work,
since the scripts start and stop a Redis docker container.
Change-Id: I22acb22f0ca594583e36b45c88f8c03bac73b329
Full scope:
private/testplanet,satellite/{overlay,satellitedb}
Description:
In most cases, downtime tracking with audits will eventually lead
to DQ for nodes who are unresponsive. However, if a stray node has no
pieces, it will not be audited and will thus never be disqualified.
This chore will check for nodes who have not successfully been contacted
in some set time and DQ them.
There are some new flags for toggling DQ of stray nodes, the timeframe
for running the chore, and how long nodes can go without contact.
Change-Id: Ic9d41fdbf214736798925e728245180fb3c55615
Query the nodes table using AS OF SYSTEM TIME '-10s' (by default) when on CRDB to alleviate contention on the nodes table and minimize CRDB retries. Queries for standard uploads are already cached, and node lookups for graceful exit uploads have retry logic, so it isn't necessary for the nodes returned to be current.
It turns out that Alpine dropped support/updates for the aarch64 image.
Instead they have been using the arm64v8 notation for quite a while,
which resulted in breaking our recent aarch64 builds due to missing
dependencies/updates.
Both arches are exactly the same: aarch64 is the name originally used by
GNU, and arm64 the one used by Apple. The backends have been merged by
now, and arm64 became the de facto standard.
Since the Satellite now requires the order encryption functionality to properly function (because the serial_number table is deprecated), we can remove the config flag to turn the feature on/off.
Change-Id: Ie973f72a9a05a81cef9e53dc9c99d22c940c2488
This PR contains the minimum changes needed to stop inserting into the serial_numbers table. This is the first step in completely deprecating that table.
The next step is to create another PR to remove the expiredSerial chore, fix more tests, and remove any other methods on the serial_number table.
Change-Id: I5f12a56ebf3fa4d1a1976141d2911f25a98d2cc3
WHAT:
added brotli compression for wasm files and added copying of those files to static/wasm folder in Dockerfile
WHY:
those files are part of the web worker webpack bundle and I didn't find a way to compress them separately using webpack.
I'm open to any other ideas if they come up
Change-Id: I105cc1582e9816fd9b63052ba48358525c85a164