In most situations where we would not get enough shares to complete
an audit, something has probably gone wrong, like a forgotten delete,
and nodes should not be failed. We have an alert when this occurs.
Check the logs to see what happened. If we decide the nodes should
get audit failures, we can do it manually.
Change-Id: Ib6e408082048d31197c37ebfd7f9031135fc938f
Bucket tally calculation will be removed from metaloop and will
use metabase objects iterator directly.
At the moment only bucket tally needs objects, so it makes no sense
to implement a separate objects loop.
Change-Id: Iee60059fc8b9a1bf64d01cafe9659b69b0e27eb1
Columns for MFA status, secret key, and JSON-encoded array of
recovery codes are added to the users table.
Change-Id: Ifed7e50ec9767c1670d9682df1575678984daa60
Added feature flag for MFA
Added new client-side api call to enable MFA returning secret
Updated users Vuex module to include new API call
Change-Id: Ia9e10f68c4a7da39b4f7c1073e657c2de98fb0db
Currently tally calculates storage both for buckets and
storage nodes. This change moves node storage
calculation to a separate service that will use the
segment loop.
Change-Id: I9e68bfa0bc751c82ff738c71ca58d311f257bd8d
The user must complete a reCAPTCHA in order to register.
ReCAPTCHA verification failure results in rejection of the
registration attempt.
Change-Id: I34ba7db414d756fd1aaebdc3d19cccbfc7fc1ea3
When a user adds a credit card, switch them to the paid tier and update
their projects with new bandwidth/storage limits. New projects for the
paid tier user will also have the updated limits.
The new limits are:
* storage per project - 50 GB free/25 TB paid
* bandwidth per project - 50 GB free/100 TB paid
Change-Id: I7d6467d077e8bb2bbe4bcf88ab8d75490f83165e
The previous query was making a full table scan. This modifies the code to
do the queries separately on each action. It will probably be slower on
a small table; however, there should be a boost of several orders of
magnitude on large tables.
Change-Id: Ib8885024d8a5a0102bbab4ce09bd6af9047930c9
When the delta from the bounds is very small, the ratio calculation
doesn't work that well. Let's allow 100 from the bounds, since that
would be expected in any case.
We won't add a configuration for it, since it's not that useful.
Change-Id: I049066a42470b825f430b7f32ebe92d544c6cc8b
Added a new info banner to show users their used and total storage values, with a button to upgrade to Paid Tier with an automatic limit increase
Change-Id: I827818dcb5179358df246218a47feb61bc1a1bac
We want to calculate bucket tally only from iterating objects.
An object currently has info about totals for bytes and segments.
We need to adjust tallies to keep those totals. Older entries will
be untouched and the code will use totals only if available. This change
adds columns for totals to the bucket_storage_tally table and
adds general handling for them.
Next step is to start using total columns instead of inline/remote.
This will be done with next change.
Change-Id: I37fed1b327789efcf1d0570318aee3045db17fad
We want to use StreamID/Position to identify an injured
segment. As it is hard to alter the existing injuredsegments
table, we are adding a new table that will replace the existing
one. The old table will be dropped later.
Change-Id: I0d3b06522645013178b6678c19378ebafe485c49
So that we can easily see whether a user is in the paid tier without
querying for payment methods.
Change-Id: I122566ddd0953203f852741fa12c71795bc1ec5c
Period end was calculated incorrectly: it was still in the current
month, but it should be the first day of the next month.
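For illustration, a minimal sketch of the intended calculation in Go (the helper name is hypothetical; assumes the standard time package and UTC):

    // periodEnd returns the first day of the month following t.
    // time.Date normalizes month 13 to January of the next year.
    func periodEnd(t time.Time) time.Time {
        return time.Date(t.Year(), t.Month()+1, 1, 0, 0, 0, 0, time.UTC)
    }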
Change-Id: I37451d29a9b901b69e6c3c401b333c58b3376d61
this service exists to do currency conversions, which is
the best guess I can make for what it was meant to be named.
Change-Id: Ia2416f5475749e8bfe8d05bf491649576f6d77bf
This change removes all the separate implementations for
`apiservice.serveJSONError()` and defines one for every service to use
in `consoleapi/common.go`.
Change-Id: Iabf184e5cba69a98eb25936ce11ebd07f02c8ff3
Because of our free/paid tier plan, we do not need a paywall anymore. We
have not used it in a while, but still have leftover code lying around.
Change-Id: Iaea8c39faf042a2f7a6b837727bb135c8bdf2907
Adding AS OF SYSTEM TIME to the query that calculates project bandwidth.
In addition, a method for setting the interval is added, as the test doesn't
work well with the default interval.
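For illustration only, a sketch of where the clause sits in such a query (the table and column names here are assumptions, not the actual query):

    row := db.QueryRowContext(ctx, `
        SELECT COALESCE(SUM(allocated), 0)
        FROM bucket_bandwidth_rollups AS OF SYSTEM TIME '-10s'
        WHERE project_id = $1 AND interval_start >= $2
    `, projectID, since)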
Change-Id: Id1e15be4f6afff13b9dc2b7f595e2edb6de28db9
Currently, pending audit finds a segment by segment location
(path). Because we want to move audit to the segment loop, where we will
have only StreamID and Position, we need to add columns for those
fields. Altering the existing table can cause issues during
migration and deployment; a cleaner choice is to make a new table.
This change contains a migration with the new segment_pending_audit
table that will replace the pending_audits table, plus adjustments
to use the new table in the code.
The pending_audits table will be dropped with the next release.
Change-Id: Id507e29c152da594bac1fd812c78d7ecf45ec51f
The graceful_exit_segment_transfer_queue table will be used to replace graceful_exit_transfer_queue. Currently, the latter uses the path of a segment to keep track of pieces to be transferred. As we want to use the segment metainfo loop, we will need to record the stream_id and position of the segment instead of relying on the object path.
This change also adds a uses_segment_transfer_queue column to the graceful_exit_progress table to be able to know if a transfer has been initiated while using the old table.
Change-Id: Iafb1e8e65ba124e20de4a9ff76da181c3222de7e
Added a new endpoint and service method to return total usage and limits for all the projects that a user owns.
It is needed for the new paid tier UI.
Change-Id: Ic5b67ca7b275ec4930d976a007168235c0500b70
It's possible that a single object will have a lot of segments, and
at the moment we are trying to collect all pieces at once and
send information about deletion to the storage nodes. Such an
approach may lead to using an excessive amount of memory. This
change handles the problem by calling the DeletePieces
function multiple times with only a part of the pieces to delete
per call.
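A minimal sketch of the batching pattern, assuming a DeletePieces signature like the one below and an arbitrary batch size:

    const deleteBatchSize = 1000 // assumed value

    for len(pieces) > 0 {
        n := deleteBatchSize
        if len(pieces) < n {
            n = len(pieces)
        }
        // send only a slice of the pieces per call instead of all at once
        if err := service.DeletePieces(ctx, pieces[:n]); err != nil {
            return err
        }
        pieces = pieces[n:]
    }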
Change-Id: Ie1e66cd9d86d130eb89a61cf6e23f38b8cb8859e
Define service and DB interface for storing node reputation data
and updating the overlay cache.
Add overlay service and DB method UpdateReputation.
See https://github.com/storj/storj/pull/4144
Change-Id: Iedd8bd3274457d26c595919303d55327c1464b8c
The reputation table duplicates the reputation information in the
nodes table. It will be used for implementing the reputation
service.
Change-Id: I36c0318e8fa5f535e9d527df95b22a4f9eb365d4
the very common case is that the node api version is
indeed at least the requested version, so query for
that first to avoid write traffic.
Change-Id: Ib047d93078205bc07fee75d1f635503b792307f0
* added personal user signup test & added testDefault:true to OpenRegistrationEnabled in service.go
* added copyright
* fixed import ordering
* fixed comment formatting and gofmt-ed with -s
* gofmt-ed with -s and -w
* fixed fragile elements
* fixed one more fragile element
* fixed nesting
* removed unnecessary timeout
* fixed imports
We used this to reduce initial load on the core to avoid OOM. However,
this is not a problem anymore with garbage collection running
separately.
Change-Id: Ifd62c822a74974bc21a5913199334469a4bc0130
This adds verification for the processed count and before and after
segment/objects table counts.
This adds a new flag:
metainfo.segment-loop.suspicious-processed-ratio: 0.03
This defaults to 3%, which at 100M segments is 3M segments.
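Roughly, the verification amounts to a ratio check like this sketch (the names and the exact tolerance rule are illustrative, not the actual implementation):

    func verifyProcessedCount(before, after, processed int64, ratio float64) error {
        low, high := before, after
        if low > high {
            low, high = high, low
        }
        allowed := int64(float64(high) * ratio)
        if processed < low-allowed || processed > high+allowed {
            return fmt.Errorf("suspicious processed count %d (before %d, after %d)", processed, before, after)
        }
        return nil
    }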
Change-Id: I5ee03e913ddc4e67e94010ced126a2a9ea51f41b
This adds verification for the processed count and before and after
segment/objects table counts.
This adds a new flag:
metainfo.loop.suspicious-processed-ratio: 0.03
This defaults to 3%, which at 100M objects is 3M objects.
Change-Id: Ife5522ecc97bcc5a55667f36868a0f1fc8e4c561
This is part of the metaloop refactoring. We plan to remove
irreparable at some point but there was no time for it.
Now, instead of refactoring it for the segment loop, it's just easier
to drop it.
Later we will still need to drop the table with a migration step.
Change-Id: I270e77f119273d39a1ecdcf5e1c37a5662a29ab4
Loops are using custom structs to provide
segments while iterating/looping, so we need
to expose the new field there too.
Change-Id: I12c8f4a01afeac171bf638d278253999fa90a8cb
We added an expires_at column to the segments table and now
we need to populate this column while committing a segment.
We still need to migrate existing segments with a
separate tool.
Change-Id: Ibac8c63d97201dd98cc2cb9db385f4cb73bc3f7e
Previously we did not limit the "as of system time" used when iterating over
the objects table. Using just an interval would cause problems with the
tests. That could be overcome by skipping that interval for tests
altogether; however, we should probably test those more to ensure that
GC stays working as intended.
This is safer code, however, maybe not as straightforward as it could
be.
Change-Id: I374f77783b2af42bb6da846735ceea20a7ce5e60
The redis key associated with bandwidth usage depended on the current month.
This change makes it depend also on the day, so that we update bandwidth usage
daily to take into account changes associated with expired but not used allocated bandwidth.
It also adds a test to check that allocated but not settled bandwidth is not counted after 2 days.
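A sketch of the key construction after this change (the exact key format is an assumption):

    // including the day means a fresh key, and hence fresh accumulation,
    // every day rather than once per month
    func bandwidthUsageKey(projectID string, now time.Time) string {
        return fmt.Sprintf("project:%s:bandwidth:%s", projectID, now.Format("2006-01-02"))
    }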
Change-Id: Iee9dbe3517cc3b9825438360b276a07a43dfbc64
Currently it's difficult to gather how many objects and segments are
being inserted. Adding separate monitoring counters makes this easier.
Change-Id: I986cd82f03e99d2aa6fc76028255ee1090d1b294
Migration step for adding an 'egress_dead' column to the project_bandwidth_daily_rollups table.
It will be used to track bandwidth allocation that won't be consumed
because the corresponding order has already been processed and has a settled
bandwidth amount lower than the order limit (allocated bandwidth).
Change-Id: Ic07592e69292ae2076e69f6038bb0e0fae79b271
Currently the iterate is being called in only one location so there's no
benefit in passing them as arguments over using the receiver.
Change-Id: I433a5d8b795b1bcc1f1e9320d87b10820cf537f1
Full prefix: web/satellite, satellite/{console, analytics, satellitedb}
- checkbox added to register view - business tab
- user being saved with new column
- add sales contact choice to Segment calls
- ui fix added to employee count dropdown
Change-Id: Ib976872463b88874ea9714db635d58c79cdbe3a1
In a rare case it's possible to start the loop iteration without
observers. The most likely case is that the observer is cancelled and
the coalesce timer triggers asynchronously, despite being stopped.
Nevertheless, all the observers may also exit during the iteration; in
either case it should not result in an error.
If there's a problem with the observers, then they can report their own
error as they see fit.
Change-Id: Ie423fec41e6295be05536a4b7b0b6623ffebf2fb
We are not using this table, so it makes no sense to put data there.
This change removes only the code that is using this table. Before the next
release we need to drop the table with a migration step.
Change-Id: I80f400aa778c717e70324bd00da502b7032c9d9f
Satellites set their configuration values to default values using
cfgstruct, however, it turns out our tests don't test these values
at all! Instead, they have a completely separate definition system
that is easy to forget about.
As is to be expected, these values have drifted, and it appears
in a few cases test planet is testing unreasonable values that we
won't see in production, or perhaps worse, features enabled in
production were missed and weren't enabled in testplanet.
This change makes it so all values are configured in the same,
systematic way, so it's easy to see when test values are different
than dev values or release values, and it's harder to forget
to enable features in testplanet.
In terms of reviewing, this change should be actually fairly
easy to review, considering private/testplanet/satellite.go keeps
the current config system and the new one and confirms that they
result in identical configurations, so you can be certain that
nothing was missed and the config is all correct.
You can also check the config lock to see what actual config
values changed.
Change-Id: I6715d0794887f577e21742afcf56fd2b9d12170e
Until now, whenever audits were recorded we would try to delete
the node from containment just in case it exists. Since we now
want to treat segment repair downloads as audits, this would
erroneously remove nodes from containment, as repair does not go
through a Reverify step. With this changeset, (Batch)UpdateStats
will not remove nodes from containment. The Reverify method will
remove all necessary nodes from containment.
Change-Id: Iabc9496293076dccba32ddfa028e92580b26167f
We want to move some of the current metainfo loop observers to the
segment loop. This change adds a new service, similar to the metainfo
loop, but which iterates only over segments.
Change-Id: I67f7f461781723a4476e2b83377f31736d7c4870
Previously the object range was not used for calculating order limit.
This meant that even if you were downloading only a small range it would
account bandwidth based on the full segment.
This doesn't fully address the accounting since the lazy segment
downloads do not send their requested range nor requested limit.
Change-Id: Ic811e570c889be87bac4293547d6537a255078da
There was a bug where a user tried to get a project after removing themselves from it.
Also, we now make the user select the first created project only if they removed themselves from the currently selected project.
Change-Id: I4b28ebc1ab4a8c14d05ef702e034f2ab39225cc3
Method IterateLoopSegments can be used to iterate over all segments in metabase without touching objects.
Change-Id: I3cc0e783884b603b47ef3f8233e357aa8a391250
Because of recent changes to how coupons for the free tier are handled
(see commit 4c0817bcfb), we no longer want all these $10
non-expiring coupons. After coupons are applied during invoice
generation, if a customer does not have any valid (non expired, non
consumed) coupons, a new promotional coupon is applied.
We could just wait for users to consume all $10 of the non-expiring
coupons, and the new promotional coupon would be applied for the
following billing cycle, but this gets tricky, because if in the final
month, the user is billed for $1 of usage, but only $0.5 of the $10
non-expiring coupon is remaining, the user will be charged for the
remaining $0.5. With the new promotional coupon of $1.65, expiring every
month, this would not be an issue.
So long story short, this commit migrates all non-expiring coupons to
expire within 2 billing periods (all existing non-expiring coupons in
prod were created in early April or later). That way, there is still
enough value in the $10 coupon that we don't have to worry about
customers exceeding it, and the coupons will expire. Then we'll
immediately apply the $1.65 coupon for the next month! And then
hopefully this unfortunate situation will come to a pleasant end.
Change-Id: I8a593948d8876c41a71d886b9a95d4e2c802b4f3
Replace GetProjectAllocatedBandwidth with GetProjectBandwidth, which calculates
used bandwidth from the allocated and settled bandwidth recorded in the
project_bandwidth_daily_rollups table.
For each day in the month, if the allocated bandwidth is expired, it uses the
settled bandwidth for computing used bandwidth.
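Conceptually, the per-day selection looks like this sketch (the field and method names are assumptions):

    var used int64
    for _, day := range rollups {
        if day.AllocationExpired(now) {
            // expired allocation: count only what was actually settled
            used += day.Settled
        } else {
            used += day.Allocated
        }
    }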
Change-Id: Ife723c6d5275338f470619631acb25930d39ac3c
project_bandwidth_daily_rollups table
We want to calculate used bandwidth better, so we need to calculate it
from allocated and settled bandwidth. To do this we first need to populate
this new table.
https://storjlabs.atlassian.net/browse/PG-56
Change-Id: I308b737bf08ee48ce4e46a3605697ab2095f7257
Rather than applying our internal satellite implementation of coupons
when new accounts are created, use a configured Stripe coupon instead.
If no configuration is set, no coupon will be applied.
This change also removes logic for adding coupons to customers who pay
with crypto - they will already have the free tier coupon applied
anyway.
We will be phasing out our internal coupon implementation.
Change-Id: Ieb87ddb3412acbc74986aa9d18a4cbd93c29861a
The context passed into SendEmail gets canceled before links are clicked
in the test environment. This change passes an un-cancelable context
from SendRenderedAsync so that email sending/link clicking can be
completed even if the parent context is canceled.
Change-Id: I88535d315900d7886877f0e14d1d052745402ac7
Improve UpdateCheckIn on a contended row:
name old time/op new time/op delta
UpdateCheckInContended-100x-32 2.29s ±55% 0.17s ±61% -92.45% (p=0.008 n=5+5)
Change-Id: I053ab9f1cff136c306e5fb57f5e355cdc0269a8c
Piece hash verification failures during repair download are considered
audit failures, but we are not logging these occurrences. Now we log
them.
Change-Id: If456cebcfda6af7a659be3d1fc74448e681fb653
Add a test with NotBefore and NotAfter restricted permissions to verify that we don't have access to the bucket
Change-Id: I7ec98a5b02c0098ee7ec81034278398f4435f1cf
Currently the interface is not useful. When we need to vary the
implementation for testing purposes we can introduce a local interface
for the service/chore that needs it, rather than using the large api.
Unfortunately, this requires adding a cleanup callback for tests, there
might be a better solution to this problem.
Change-Id: I079fe4dbe297b0ae08c10081a1cea4dfbc277682
Initially metabase was developed separately and it was useful to have a
separate environment flag for tests; however, it's more convenient to
use the same one as the rest of the test suite.
Change-Id: Ia4d79be27ce5911cbae68d57cdf0b30f63459444
Currently we were duplicating code for AS OF SYSTEM TIME in several
places. This replaces the code with using a method on
dbutil.Implementation.
As a consequence it's more useful to use a shorter name for
implementation - 'impl' should be sufficiently clear in the context.
Similarly, using AsOfSystemInterval and AsOfSystemTime to distinguish
between the two modes is useful and slightly shorter without causing
confusion.
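For illustration, the shared helper could look roughly like this sketch (assuming an Implementation enum with a Cockroach value; not the exact method):

    // AsOfSystemInterval returns the clause only for CockroachDB, where e.g.
    // interval = -10*time.Second yields " AS OF SYSTEM TIME '-10s' ".
    func (impl Implementation) AsOfSystemInterval(interval time.Duration) string {
        if impl == Cockroach && interval < 0 {
            return fmt.Sprintf(" AS OF SYSTEM TIME '%s' ", interval)
        }
        return ""
    }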
Change-Id: Idefe55528efa758b6176591017b6572a8d443e3d
So far we were setting ZombieDeletionDeadline always as nil, and because of that the DB default was never applied. This change adds a separate query for inserting an object if the deadline is not set.
Change-Id: I3d6a16570e7c74b5304e13edad8c7adcd021340c
Use the 'AS OF SYSTEM TIME' CockroachDB clause for the Graceful Exit
(a.k.a GE) queries that count and delete the GE queue items of nodes
which have already exited the network.
Split the subquery used for deleting all the transfer queue items of
nodes which have exited when CRDB is used, and batch the queries because
CRDB struggles when executing them in a single query, unlike Postgres.
The new test added in this commit to verify the CRDB
batch logic for deleting all the transfer queue items of the exited
nodes revealed that the Enqueue method has to run in batches when CRDB
is used, otherwise CRDB returns the error "driver: bad connection"
when a big amount of items is passed to be enqueued. This error
didn't happen with the current test implementation; it happened with an
initial one that created a big amount of exited nodes and
transfer queue items for those nodes.
Change-Id: I6a099cdbc515a240596bc93141fea3182c2e50a9
The system and database time may drift. We should use database time for
absolute "as of system time" to ensure that it's not newer than the
current database time. When the "as of system time" is in the future,
then the query will fail.
Change-Id: I5423f6aaad966ca03a76b5ff805bfba932e44a51
multiregion satellites have complex database connection strings
largely due to using a different backend for the repair queue than
cockroach.
billing stuff didn't work right with this.
Change-Id: Ie8759a8c47e71347c3a190abfc9d53945d7b8855
Add an auto-generated table of contents to the README to make it easy to find
and browse the documentation of the available API endpoints.
Change-Id: Id94d904cefd30449234224072ddc50a181aaba04
Currently the nodeselection package only contains state for uploads; move
this to a subpackage, such that we can make another "downloadselection"
for downloads. Then move selection logic from overlay to nodeselection.
Change-Id: I0fc42bcae3a29db2728dae9f3863b1e95bf5165b
When audits are being recorded, we automatically add some SQL to remove
the node from the pending audits table in case it exists. They are
removed from pending audits even if the node was offline for the audit.
This is not the correct behavior.
Add statement to record audit results in reverify tests to ensure no
more false positives.
Change-Id: I186ae68bc5e7962ef6c5defbebc1d95e63596a17
We are already adding the free tier coupons at the end of
InvoiceApplyCoupons, but there is a case where we will charge customers
who do not currently have a coupon the next time invoices are generated.
By applying the free tier coupon before preparing invoice project
records, we cover this case.
Once every customer has a coupon, it will be safe to remove this
functionality and only apply new coupons at the end of invoicing.
Change-Id: I65afbe5c0b84e63eeb1a0221e8d95311d87641a0
The DBs of our production satellites have some indexes that we didn't
have in the migrations because, at the time, our migration test was not
able to deal with Cockroach indexes with the STORING clause.
We have recently modified the storj.io/private/dbutil/pgutil package to
support the CRDB STORING clause, so we are adding the missing indexes
to our migrations to be able to have them if we have to recover a DB
from scratch or we deploy a new satellite DB.
Change-Id: I686ff84e5b4c02d9615f50fa531261363affefb8
errs.Class should not contain "error" in the name, since that causes a
lot of stutter in the error logs. As an example a log line could end up
looking like:
ERROR node stats service error: satellitedbs error: node stats database error: no rows
Whereas something like:
ERROR nodestats service: satellitedbs: nodestatsdb: no rows
Would contain all the necessary information without the stutter.
Change-Id: I7b7cb7e592ebab4bcfadc1eef11122584d2b20e0
The previously configured never-expiring coupon does not refill every
month. Eventually, even though it never expires, it will run out. This
commit makes several small changes to address this issue for the free
tier:
* Change the config for the promotional coupon to be $1.65 for 1 month
(the change from $10 to $1.65 is due to our recent pricing changes)
* Update PopulatePromotionalCoupons (PPC for brevity) to add promotional
coupons to users with expired and consumed coupons (all users with a
project and no active coupons should get a new coupon when PPC is called)
* Call PPC at the end of the `create-invoice-coupons` stage of invoice
generation - after current coupons are processed and expired/exhausted.
* Remove legacy admin functionality for PPC from satellite/console - we
do not currently use it, but if we did, it should be in satellite/admin
instead.
Change-Id: I77727b97bef972df32ebb23cdc05055827076e2a
Allows us to remove the following files from satellite branding
repo, with an up-to-date single source of truth now in storj/storj:
* web/satellite/src/common/registrationSuccess.html
* web/satellite/src/common/registrationSuccess.scss
* web/satellite/src/views/register/registerArea.html
* web/satellite/src/views/register/registerArea.scss
The registrationSuccess files have been removed from all satellites in
the branding repository. The registerArea files have been removed only
from production satellites in the branding repository.
Importantly, this change enables the "resend email" functionality on
production satellites - previously, this functionality was available in
storj/storj, but not our branding repository.
Removes the config for VerificationPageURL, which redirected users away
from the satellite app to storj.io after creating an account. In order
for the email resend button to work, we cannot leave the app.
Adds a new config value for partner satellites, which replaces the
partner satellite names config. The new config includes name and
address. It is validated on setup/run to ensure it can be parsed.
Change-Id: I67db0702d9b9641f1a37b599f2929d56f3c33aca
For business accounts we need to track the sales contact.
It will be a question to business accounts during onboarding.
Change-Id: I8d101ce1b52091478dfb0ddd875e1cc717d765d3
Initially there were pkg and private packages, however for all practical
purposes there's no significant difference between them. It's clearer to
have a single private package - and when we do get a specific
abstraction that needs to be reused, we can move it to storj.io/common
or storj.io/private.
Change-Id: Ibc2036e67f312f5d63cb4a97f5a92e38ae413aa5
cache is a really common variable and type name, and we have already used
the package name alias in multiple places.
Change-Id: I6435785b7549b541d533de59ec94557b9bd11e04
There's a rare chance that `Stop` returns false even though no value has
been delivered on the timer channel. Use a non-blocking drain to remove the token.
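The standard non-blocking drain looks like this:

    if !timer.Stop() {
        // the value may already have been consumed elsewhere,
        // so don't block waiting for it
        select {
        case <-timer.C:
        default:
        }
    }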
Change-Id: I1ae18a197424017f0ca76656602709a029b56bfd
WHAT:
whitelist .storjshare.io domain for media-src CSP
WHY:
to enable video preview for linksharing
Change-Id: Ib673602d31ca116e7ce1cee0eba17099a55d7dbc
Initially we duplicated the code to avoid large-scale changes to
the packages. Now that we are past the metainfo refactor, we can remove the
duplication.
Change-Id: I9d0b2756cc6e2a2f4d576afa408a15273a7e1cef
We need some chores to join without triggering the loop.
For example it's fine to run metrics, only when something else is
running.
Change-Id: I9d8bd16f59c28c540c8d72971bc4e233a8660c02
Currently the tool verifies:
* validity of plain_offset
* whether plain_size is smaller than encrypted_size
Change-Id: I9ec4fb5ead3356a196392c26ca377fcdb367138e
When nodes check in for the very first time, if the satellite can't ping
them back, they are inserted into the nodes table with
last_contact_success of '0001-01-01 00:00:00+00'. If the stray nodes
chore runs before the node can fix their problem, they are DQd.
Solution: when DQing stray nodes, don't DQ where last_contact_success =
'0001-01-01 00:00:00+00'::timestamptz
Change-Id: I477a02d5ef85b2c930ed6b7d99a4d1995169bca8
Currently the loop handling is heavily related to the metabase rather
than metainfo.
metainfo over time has become related to the "public API" for accessing
the metabase data.
Currently updates monkit.lock, because monkit monitoring does not handle
ScopeNamed correctly. Needs a followup change to monitoring check.
Change-Id: Ie50519991d718dfb872ec9a0176a82e732c97584
The original test caused testplanet to time out due to the contact
service being stuck in a loop. We can change it to only test the failure case for
the PingBack method instead of closing the TCP port on storagenodes.
Change-Id: Ic96aee637b39ae95050c6902c2bf9ca51fb586c3
The test function fails randomly in the CI when run with CRDB. There
isn't currently an explanation why the expectation of the number of nodes
which exited 4 minutes ago reports 4 nodes rather than 5, and the only
clue that we have now, to see if it gets remedied, is to give 2 minutes
rather than 1 to the node that exited close to the time passed to the
function, which is what was making the test fail randomly.
Change-Id: I3a731e3eb7f19caebdf29713150727f2cf3e0e0a
metabase has become a central concept and it's more suitable for it to
be directly nested under satellite rather than being part of metainfo.
metainfo is going to be the "endpoint" logic for handling requests.
Change-Id: I53770d6761ac1e9a1283b5aa68f471b21e784198
QUIC
We want to encourage storagenodes to open their udp port. This PR
changes contact service in satellite to try to connect to nodes through
QUIC. If the satellite can't reach nodes through QUIC, it will send an error
message back to the nodes. On the node side, the error message from check-in
will always be logged if it is not empty.
Whether the satellite can reach nodes through QUIC has no effect on nodes'
uptime checks.
Change-Id: I5ebf80f921c4a6504997d83c8bd45226da9d3703
The cursor was not being used in the batch deletion.
The stream ID was not being used while deleting, which could in rare
circumstances delete a newly uploaded object.
Use the stream ID in deletion, rather than passing that information from
one query to another.
Change-Id: I03271c6e72747e345dfb0bb70989f29e835efd8e
Co-authored-by: littleskunk <jens.heimbuerge@googlemail.com>
Co-authored-by: JT Olio <hello@jtolio.com>
Co-authored-by: Igor <38665104+ihaid@users.noreply.github.com>
Check that the bloom filter creation date is earlier than the
metainfo loop system time used for db scanning.
Change-Id: Ib0f47c124f5651deae0fd7e7996abcdcaac98fb4
During the metainfo refactor we disabled some validation as it was designed to validate pointers. Now part of this validation is restored. This is the first part.
Change-Id: I6132f922fe23d60118bbccfdb77fd93c3c81afed
Currently metrics could trigger the metainfo loop on its own on the GC pod;
however, running only metrics adds a lot of load, so we should only run it
together with other essential chores.
There's no easy way to join that way in the metainfo loop, so for now let's
remove it from GC. Satellite Core will still run the metrics.
Change-Id: Ie513a9d2f86add0aa319d08567c6cf3542d73d4e
Document the fields that migrated objects are missing; it's easy to
forget that they might not exist.
Avoid downloading the segment if we're not sure whether it's the
correct one. We'll later improve the code with a heuristic to get a
best guess of which segment to download.
Change-Id: I12395c17bbf0edf25e0d00c8d072fce6085e303b
If a visitor to the website (run through the reverse proxy) consented to
cookies, read the ID stored in that cookie and send it along with the
Identify/Track calls sent to Segment upon account creation. This allows
us to connect referral information gathered when visitors land on our
website with account activity, helping us improve our onboarding flow.
Change-Id: I0ece717ab5bba67901e50a9b4229c1d4ed7e46b7
We can be more precise and conservative by using the backend
satellite/analytics service. We also no longer need client-side Segment
scripts.
Change-Id: Ic5fb18bea2d388b586ad773e26027d69bde87294
* satellite/analytics: Add analytics for user signed in, project created and access grant created events
Co-authored-by: Moby von Briesen <mobyvb@gmail.com>
We recently added a created_at column to the segments table.
Old segments need to get this value from the objects table.
This tool will iterate over all objects and update the corresponding
segments if the created_at column is not set.
Change-Id: Ib5aedc384637e739ee9af84454af0639e2559416
This is a very simple endpoint which allows the satellite UI client to
notify the console server that an event has occurred. We will use this
to track when users have completed certain tasks that can't be tracked
server-side (e.g. generating gateway credentials, setting a passphrase)
As part of this change, one client side event is implemented to use the
endpoint - when the user clicks the button to create gateway credentials
after making a new access grant.
Change-Id: Ic8fa729f1c84474788e1de84c18532aef8e8fa3c
When an object doesn't contain segments, the implementation would have
returned []*pb.SegmentDownloadResponse{nil} instead of nil.
Change-Id: If38f6d3d9d119f514f63ad1a8762055f657f3004
Add endpoint for getting object information, list segments in a range
and download the first segment in the range.
Change-Id: I056d697ae87c9aa34e7deccba8713902db260457
The new default promotional coupon is $10/month, and doesn't expire.
This change also migrates the coupon.duration column over to the new
coupon.billing_periods, and switches to rely completely on
billing_periods.
Change-Id: Ic3341e9fa4040449bab5e66ca4ee2640b095cf3d
The repair checker expects to have information about the CreatedAt and RepairedAt fields to calculate the segment age metric.
Change-Id: I6b41df880d77133be541e14d10d91cc75759b339
Currently we can get a duplicated entry error while inserting into the value_attributions table. This change replaces the simple insert with an insert that does nothing on conflict.
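Roughly, the insert becomes something like the following (the column names and conflict target are illustrative):

    _, err := db.ExecContext(ctx, `
        INSERT INTO value_attributions (project_id, bucket_name, partner_id, last_updated)
        VALUES ($1, $2, $3, now())
        ON CONFLICT (project_id, bucket_name) DO NOTHING
    `, projectID, bucketName, partnerID)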
Change-Id: I3efd8dc0b63115e8e2ed8f4196ccf969ee942295
* Add a nullable billing_periods column in the coupons table
* Add nullable billing_periods column to the currently unused
coupon_codes table
* Drop the duration column from the coupon_codes table
* Replace duration config type so that the default promotional coupon
can be configured to never expire
Zero downtime migration plan:
* Add billing_periods column to coupons and coupon_codes tables (this change)
* After one release, remove all references to the old duration column,
replacing with references to billing_periods. At this point, we can also
change the default promotional coupon to never expire and migrate over
values from the old duration column.
* After another release, drop the duration column.
Change-Id: I374e8dc9fab9f81b4a5bc681771955662d4c007a
we want to know a lot more about what's going on during
the operation of the metainfo loop. this patchset adds
more instrumentation to previously unmonitored but
interesting functions, and adds metrics that keep track
of how far through a specific loop we are. it also
adds mon:lock annotations, especially to the metainfo
loop run task, which recently changed, silently broke
some queries, and thus failed to alert us to spiking
run time issues.
Change-Id: I4358e2f2293d8ebe30eef497ba4e423ece929041
instead of only generating invoices for nodes that had some
activity, we generate it for every node so that we can find
and pay terminal nodes that did not meet thresholds before
we recognized them as terminal.
Change-Id: Ibb3433e1b35f1ddcfbe292c034238c9fa1b66c44
This change introduces a new config flag,
--overlay.audit-history.offline-suspension-enabled,
to toggle suspending nodes for offline audits.
If the flag is set to true, nodes will be suspended if they meet the
requirements.
If the flag is false, nodes will not be suspended. If they are already
suspended and/or under review, these will be cleared.
Change-Id: Ibeba759c42d6e504f6b7598120d4fd4dab85ca74
Previously we would select a limited number of nodes for DQ in a
CTE and run the update on that set in a single transaction. This
could lead to locking on the table, so instead we select and update
in separate transactions.
Change-Id: I1e802c0845e829eeadcee4fa382f58462515fdb1
- add Credit History table to billing account page and set up ui for a user adding promo codes
- implement promo codes ui in registration form
- add feature flag to handle if coupon code ui should be rendered
Change-Id: I9fdeef7cffc7901958d3f9be335e1115b2471a2e
* Set up basic structure of new service.
* Implement a basic analytics track event for user creation.
Change-Id: Ica8c785540b1ef9d848404af307a22f21d33c6aa
We were using two queries to delete one or more objects and their
segments in DeleteObjectExactVersion, DeletePendingObject, DeleteObjectLatestVersion,
DeleteObjectAnyStatusAllVersions, and DeleteObjectsAllVersions.
This change deletes objects and their segments in one query.
Change-Id: Ib2c0eb501f00b091ee32519e02155350c4dcb8b0
We have a use case for this in ListSegments. ListSegments is going to
return the EncryptedETag along with EncryptedKey and EncryptedKeyNonce.
It also must return the EncryptionParameters.
Since the EncryptionParameters are in the objects DB table, it would be
more efficient for ListSegments to avoid querying that DB table and take
them from the SatStreamID instead.
Change-Id: I16c98641c0fe0c98e3303329d0da6ef137ca55cf
Rename the functions that are prefixed with 'New' which connect with
Redis by 'Open' to make clear that they perform network operations.
Change-Id: I1351e89a642e8e2c2586626646315ad0fb2c6242
This is one step for implementing the free tier:
* Change the default project limit from 10 to 3
* Move storage and bandwidth project usage limits from the metainfo
package to the console package (otherwise there is a cyclical
dependency, and metainfo doesn't use these values anyway)
* Change the default storage usage limit per project from 500gb to 50gb
* Change the default bandwidth usage limit per project from 500gb to 50gb
* Migrate the database so that old users and projects continue to have
the old defaults (10 projects/500gb usage)
Change-Id: Ice9ee6a738bc6410da18c336c672d3fcd0cab1b9
WHAT:
new endpoint to be able to delete apiKey/accessGrant by name and project id
WHY:
it will be called to delete special pregenerated access grant which will be used to generate gateway credentials for file browser component or bucket management
Change-Id: I7467ebaab27a7da33efd062536c6da41e6ed4c30
We are implementing the free tier, which will give all new users 3
projects, 50gb storage, and 50gb bandwidth per project. All users will
receive a recurring coupon to cover this amount of usage.
With the free tier, we no longer need a paywall. Users will not need to
enter a payment method unless they want to increase their project or
usage limits.
Change-Id: If3b026e91858e5f557a2758e366616cecc8f21c7
We would like to log Node IDs and last contact successes of nodes DQd
in this manner. We would also like to avoid returning an unbounded list
of items from the db. Therefore we change the query to select a limited
number of nodes that meet the DQ conditions and iterate until 0 rows are
returned. Each column of the query is already indexed.
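The iteration pattern described above, sketched with a hypothetical batch method:

    for {
        // disqualify up to `limit` stray nodes, logging node IDs and
        // last contact successes inside the call
        count, err := db.DisqualifyStrayNodesBatch(ctx, limit)
        if err != nil {
            return err
        }
        if count == 0 {
            break // no more rows matched the DQ conditions
        }
    }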
Change-Id: Iaec2d9b56e7202b7c2028ba21750d40c8dd506ee
We were deleting expired objects by directly executing a delete query.
With this change, we first select the objects to be deleted and then
delete them (as recommended by CockroachDB for deletes that filter on a
non-indexed column).
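A sketch of the two-step pattern (query details and types are assumptions):

    // 1) select the expired objects first...
    rows, err := db.QueryContext(ctx, `
        SELECT project_id, bucket_name, object_key, version, stream_id
        FROM objects WHERE expires_at < $1 LIMIT $2
    `, now, batchSize)
    if err != nil {
        return err
    }
    defer func() { _ = rows.Close() }()
    // ...scan the primary keys, then
    // 2) delete the selected objects by primary key in a second statement,
    //    so the delete itself never filters on the non-indexed expires_at.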
Change-Id: Ied150fbdc7031a343a74e0b9dab316598188ef66
At some point we might try to change the original segment RS values and set Pieces according to the new values. This change adds a NewRedundancy parameter to the UpdateSegmentPieces method to give the ability to do that. As a part of this change, NewPieces are validated against NewRedundancy.
Change-Id: I8ea531c9060b5cd283d3bf4f6e4c320099dd5576
The coupon_codes table will allow for administrators to create new promo
codes associated with coupon information (amount, duration, etc...).
A user will be able to enter a promo code (aka coupon code) in order to
apply a new coupon to their account. The coupon in the coupons table is
linked to the template defined in the coupon_codes table.
Change-Id: I50e49fa92afbc6aa9d01d8a895c069efb59e472b
WHAT:
enter passphrase step for users who have already created a passphrase
WHY:
to let users proceed to the upload step
Change-Id: I084aec5b863981978cf190f99ee95154fbed9aab
Among other conditions, nodes fail audits by returning incorrect
data and by reaching the max reverify count, but we weren't logging
these events. This commit adds the missing logs.
Change-Id: I80749a7e95e8cb97bc8dd7dac1e523e223114b7f
Update the Redis dependency to use the latest major production version.
The last version accepts a context parameter in all the network methods
so it allows us to pass it through them.
Change-Id: I34121b2ec3c2728602115c724933ad24c9e6e4fd
WHAT:
beta satellite top banner's copy is changed to include support/feedback URLs
WHY:
so users using our beta satellite will be able to report feedback somewhere
Change-Id: Ibc349c8b3354b577275fcf1d2b75bfdd267729d9
At the moment we are trying to optimize deletion queries, but it's hard to verify deletion performance. Until we are sure that the queries are good we will just log errors instead of shutting down the whole satellite core.
Change-Id: I5625251d4518c35f0d46d6bf37b2f3ea7950675e
If a non-nil value is read from the created_at column of the segments table,
it will be set to the CreatedAt field of SegmentListItem.
Change-Id: I02691d8e11fad12c1b0e4c443bdebb568016ffe3
The created_at column is first added without a default value to avoid
setting the current time for existing segments.
Change-Id: Ic2fe3da238422e2949e6f3016fbac04eb89ba037
Migration step 148 will cause errors because we missed some
references to the columns being dropped. Removing the step
altogether causes problems with backwards compatibility tests
because the change already exists in the latest release tag.
To circumvent, we change v148 to an empty migration.
Add methods FindTable and RemoveColumn in private/dbutil/dbschema
Change-Id: Ia527e95b88a88c5dc82800928ce6f8cfb879e334
Move a specific interface & types used for testing to be a private
subpackage with a name that clearly identifies it for testing purpose.
Change-Id: I646cf3b6f0a3b518a6f9a125998dc5a02df02db6
ListSegments loads all the segment data into memory, however this can
add up to a lot of data with inline segments and large objects.
Change-Id: I037738f0e70b810ecbea7d83b00ea7ca9eb90c7a
IterateDatabase method was used by zombie segment reaper which is removed for multipart implementation.
Change-Id: I93e1294236612d6d82b2ab57053bb84e653f72b4
Iterate over streams/segments rather than loading all of them into
memory. This reduces the memory overhead of metainfo loop.
Change-Id: I9e98ab98f0d5f6e80668677269b62d6549526e57
cockroach is having problems with huge transactions and
having them complete before timeouts or whatever, so
do smaller transactions.
because we can have partial recording of payments which
are not unique, we have to do a thing where we read and
check if it already exists before writing. this is not
concurrency safe.
Change-Id: Ia7d59499a43ce6d70cb2a23754edbdd1b643ef1a
Previously, if a node was not found in containment, it was
given the status 'skipped'.
We later try to delete skipped nodes from containment.
To fix this, add a new status called 'remove' to differentiate
nodes which should be skipped and nodes which should be deleted.
Change-Id: Ic09e62dc9723c89d0c9f968ce68c039114a9d74e
For metainfo loop we need only some of Segment fields. By removing some of them we will reduce memory consumption during loop.
Change-Id: I4af8baab58f7de8ddf5e142380180bb70b1b442d
The previous default FlushBatchSize of 10000 was causing major
slow down in select and insert statements on bucket_bandwidth_rollups.
We saw on the saltlake satellite that a FlushBatchSize of 1000 helped
reduce contention and query latency.
Change-Id: Ib95e73482219bc5aedc11925b1849fa5999774ba
This method will be used only with metainfo loop and we need to customize query to consume less memory.
Change-Id: Iaa97392f483c5df5609d501b3847b80eb1ea2583
We want to read from DB only those fields that are used by metainfo loop so we need to remove most of fields from LoopObjectEntry.
Change-Id: I14ecae288f631dc0ff54f4c560ce43b736eccdcf
Currently our metabase assumption is that it may contain arbitrary
bucket names and endpoint applies the naming constraints as it sees fit.
However by passing bucket_name as TEXT pg and crdb automatically try to
convert it to []byte, which may or may not work as intended... or in
some cases not work at all.
Cast all bucket name arguments to []byte to make it work.
Change-Id: I44650f5c873010997398bb0163d7f56ff6d9b5cf
We want to have a custom loop iterator to avoid reading all object fields, to reduce memory consumption. This is the first step: just rename the existing iterator to IterateLoopObjects.
Change-Id: I8878ff21a49ba224db2d497cc8f9076e75c7609e
The new 'consistency ge-cleanup-orphaned-data' cli command deleted
orphaned transfer queue items, but not entries in the
graceful_exit_progress table. This will delete orphaned entries
from the exit progress table too.
Change-Id: I5f927aac1f258490678deaf179be92ccfe10fcd8
Currently the old encrypted keys may not match the path component
encoding. Change the iterator such that the prefixes handle arbitrary
byte sequences.
Change-Id: I0a50049f4ef9887e1c4df6f9692f967a054430eb
The new metainfo loop can have memory issues when one batch contains an object with many segments. This change limits the number of batched segments to a defined limit. The solution is not perfect: if we have a single object with an extremely large segment count, it can still exceed the defined limit by a lot. We need to prepare a safer solution soon.
Change-Id: Iefcf466d5bac76513d4219b1a9d99adc361c54ae
It turns out that many methods generated with dbx were not used at all.
Let's remove them.
As a next step we can think about dropping tables like:
* user_credit
* offer
Change-Id: Id6cda81a701348db2a6b8b26daa22ae9c4f87cb4
It looks like we cannot use the root piece ID as an indicator of whether a segment is inline, as we have a case on the SLC satellite where an inline segment has a root piece ID set. Pieces should be a better thing to check.
Change-Id: I2377ff88861390342273f5e71871373eaf462615
WHAT:
config flag to indicate if satellite is in beta
WHY:
to avoid using hardcoded satellite names which may cause issues
Change-Id: If92eb7417c340bf343a9a91e2f6b11f0349020c5
Segments are not read in batches. For each batch of objects
we are reading all segments for those objects.
Change-Id: Idaf19bbe4d4b095065d59399dd326e22c57499a6
Pregenerate the database schema we should use for most tests.
Currently, Cockroach is slow with regards to migration and it's
better if it happens in as few transactions as possible.
This reduces test time from ~21min to ~15min.
Change-Id: Ife8117053e6b9ecf3c93fe63677edf15d4d7c254
The subquery for DELETE FROM objects returns a stream_id field for filtering. Unfortunately stream_id is not indexed. This change removes the subquery from the CockroachDB delete bucket query.
Change-Id: If1abe21668c593e6d4bdc3ba8cdbad26c09d234e
Using random piece num may generate the exact same key or piecenum.
Instead use fixed piecenum and key.
Change-Id: I54b7bc1a6698149bf99608dd46501ea963cec084
TestProjectUsage_ResetLimitsFirstDayOfNextMonth seems to be flaky.
It doesn't seem to reproduce easily; however, the test should flush orders from
storagenodes to satellites, so that the orders chore flushing on the
satellite makes sense.
Change-Id: I586d9d8ce10b5f6320e79324f6128b4f8c2cac5f
Testing interfaces is slightly clearer when it's in the package needing
the database rather than each individual implementation.
Change-Id: I10334c214a205f7e510b939b4359a2214c4e060a
When listing pending objects with prefix, the prefix should be prepended
to the EncryptedPath in satStreamID. Otherwise, listing multipart
uploads may display different UploadID than expected.
Change-Id: I27e9f9af9348783e053ad123121b6ddd051739e4
We need to keep empty inline segments, as we did with pointerDB, because otherwise old uplinks won't be able to download such files after uploading data. To reduce the number of empty inline segments on the uplink side, we need to implement skipping empty last inline segments for multipart uploads.
Change-Id: Ice86c805babba1ad17149754cbd6b3f4fd652722
ListAllBuckets could skip buckets when the total number of buckets
exceeds the list limit. Replace listing buckets with looping directly
over the objects table.
Change-Id: I43da2fdf51e83915a7854b782f0e9ec32c373018
We have multipart objects so we may get multiple inline segments
sequences or no segments at all for objects.
Change-Id: Ie46ee777a2db8f18f7154e3443bb9e07ecb170f7
We have multipart objects so we may get multiple inline segments
sequences or no segments at all for objects.
Change-Id: Ic19150efe2ca2b1c60ddd5d30b1317922221c221
Until now we were using a single RS per object, but it turns out that we
need to be able to support RS per segment. We need to give the uplink such information while downloading.
In addition, we are using the RedundancySchemePerSegment flag for the GetObject request to detect if
we should try to get RS from the segment for this request's response.
Change-Id: I209dad324496ff59b521b11d2343da61dcdbe7f5
Until now we were using a single RS per object, but it turns out that we
need to be able to support RS per segment. We need to give the uplink such information while downloading.
Change-Id: I6565b7c08962b3a1429f6079e7c2023a0a7c8b72
When there are concurrent refreshes to the cache and the entries are
missing, it could end up causing multiple database calls, even though
only one is needed.
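One common way to collapse such concurrent refreshes into a single database call is request coalescing, for example with golang.org/x/sync/singleflight; this is only an illustration of the idea (Cache, lookup, store, and db.Get are hypothetical names), not necessarily what this change does:

    var group singleflight.Group

    func (c *Cache) get(ctx context.Context, key string) (interface{}, error) {
        if v, ok := c.lookup(key); ok {
            return v, nil
        }
        // concurrent callers with the same key share one database call
        v, err, _ := group.Do(key, func() (interface{}, error) {
            return c.db.Get(ctx, key)
        })
        if err == nil {
            c.store(key, v)
        }
        return v, err
    }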
Change-Id: I1ae7a124bbdd1570473cf3a032d375d2f25a8426
adds tests to BeginObjectNextVersion and BeginObjectExactVersion
to check the behavior when an older or a newer committed version
exists.
The current behavior is: everything is committed.
Change-Id: Ia8facbe0dc038a5d214e4e56da3c8e4df2f18900
This enables the transfer of pieces from an on-going multipart upload.
Tests are also modified to take into account pending multipart uploads.
See https://storjlabs.atlassian.net/browse/PG-161
Change-Id: I35d433c44dd6e618667e5e8f9f998ef867b9f1ad
Old uplinks send some additional information inside marshaled protobufs and we need to extract things like encryption parameters. Newer uplinks pass them directly in the request.
Change-Id: I0b575e68c3ed98481247fe38344e7d61cbd542ba
This adds AliasPieces run length encoding. On average it should
make our pieces encoding:
repair=50,optimal=85,total=90 152.0 bytes
repair=16,optimal=37,total=50 65.4 bytes
Change-Id: I391a9183164828f05383a3cde9ab0e4549c2d440
WHAT:
people who sign up on US2 are not redirected to the verification page. From now on we have to set the verify URL to make the redirect happen
WHY:
user experience
Change-Id: I96c51a2c4f9cb6376cbfea639675b32918b58bee
This PR removes all back-end related referral program code including the
marketing portal.
We will have a separate PR for front-end code and database migration to
drop `offers` and `usercredits` table
Change-Id: If59f952cddfe0558a7dc03a0eac7cc1081517f88
We will add a cache to nodes, so using completely random nodes wouldn't
show the actual performance.
Change-Id: I94f18283712812f05f7795efd3c7cf57499fa52c
We need to keep an inmemory cache to avoid lookups into aliases table.
This adds the inmemory state of the cache.
Change-Id: Ief2b9bb19e10b46839b9208472dfc3035eb49af3
This is first step in supporting node aliases. It adds a table
that automatically assigns aliases to nodes inserted into the table.
Change-Id: Ibdf40097c3c1e5b371500203f8db203505a48adc
This ensures the caveats are unique even when they contain the same
permissions and will result in unique macaroons. This is important to
ensure revocation doesn't impact more macaroons than intended.
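One way to guarantee that uniqueness is to embed a random nonce in every caveat, sketched below (the Caveat type and Nonce field are assumptions):

    // two caveats with identical permissions still serialize differently
    // because each carries its own random nonce
    func newCaveat() (Caveat, error) {
        var nonce [8]byte
        if _, err := rand.Read(nonce[:]); err != nil { // crypto/rand
            return Caveat{}, err
        }
        return Caveat{Nonce: nonce[:]}, nil
    }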
Change-Id: I6354edd0119f2d85eaf580f2d1926a3de9151b88
Remove the orders Settlement endpoint because it isn't used and it was
already always returning an error.
Change-Id: I81486fbe7044a1444182173bc0693698ee7cfe7e
These changes are independently tracked on
https://github.com/storj/storj/tree/jt/migration-reorder
The point of this is to make the distributed column
migration, needed for SNO invoice generation, the very
next one, so we can release it as a point release.
Change-Id: I26e1c03629c4f079b9ad12485e2b71a715d82b3b
allow disabling tcp/quic
In order to have more control of a server so that we can
simulate connection failures in `testplanet`, this PR changes
quic.Listener to accept an existing UDPConn instead of relying on the
quic-go library to create the UDPConn.
This PR also adds two flags on the `server.Config` struct to allow
enabling/disabling tcp/tls listener and quic listener. By default, they
are both set to true.
- `DisableTCPTLS`: internal flag, disables tcp/tls listener.
- `DisableQUIC`: hidden flag, disables quic listener
By making the `DisableQUIC` a hidden flag, it allows storagenode operators to
have the ability to disable quic traffic in case their setup can't work
with udp traffic.
Change-Id: I853b12435d988b9c41ad9b873fd57480d792e378
this changes from a satellite error to a local encryption
error with the upcoming permissions changes where we only
include keys for the paths that are allowed.
Change-Id: I7aa37cfbaee31a1e54afe0423b283b9f41d9345f
Limit bucket name lookup to date range of the calling methods since we only need distinct bucket names for that time period.
Adds new index and removes an index specific to project ID since it is no longer needed.
Change-Id: Ic07bbfb1c32280e0c0e39f8da020b284e1e5d974
It's impossible to time correctly this check. The segment may expire
just at the time we upload the repaired pieces to new storage nodes.
They will reject this as expired and the repair will fail.
Also, we penalize storage nodes with audit failure only if they fail
piece hash verification, i.e. return incorrect data, but only if they
have already deleted the piece.
So, it would be best if the repair service does not care about object
expiration at all. This is a responsibility of another service.
Removing this check will also simplify how we migrate this code
correctly to the metabase.
Change-Id: I09f7b372ae2602daee919a8a73cd0475fb263cd2
Fix an issue, caused by a copy-paste problem, that made the Graceful Exit
test flaky.
The test uses a time created at the beginning of the test to avoid
nondeterministic time differences due to the response
time variation of the DB queries; however, some parts of the test were
using the current time rather than this base time, so they have been
addressed.
Change-Id: I4786f06209e041269875c07798a44c2850478438
We need this method to fix repairing pending objects. In another PR, it
will replace the GetObjectLatestVersion + GetSegmentByPosition calls
that are currently executed.
Change-Id: I4c5c2ab604edf898452b6fd21b86d4d3f970ce79
Delete satellite order methods and DB tables which aren't used anymore
after the refactoring of the orders to stick bucket
information into the orders' encrypted metadata.
There are also configuration parameters and a satellite chore that
aren't needed anymore after the orders refactoring.
Change-Id: Ida3682b95921df70792284b42c96d2508bf8ca9c
Add a command to the satellite for cleaning up the Graceful Exit (a.k.a
GE) transfer queue items of nodes that have exited.
The commit adds to the GE satellite DB a couple of new methods, and its
corresponding test, for performing the operations of the new command.
Change-Id: I29a572a59689d63b24990ac13c52e76d65aaa917
using redash I manually checked that the only times the sum of
the payments does not match the paid column is for 2020-12, and
if it does not match then there are no payments.
Change-Id: I71ce0571de7e38e21548d7d6757b25abc3bfa781
The rollup archiver chore moves bucket bandwidth rollups and
storagenode rollups that are older than a given duration
to two new archive tables.
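Conceptually the archiving step is a move between tables, e.g. (the table names follow the rollup tables mentioned elsewhere in these messages, but the exact SQL is an assumption):

    _, err := db.ExecContext(ctx, `
        INSERT INTO bucket_bandwidth_rollup_archives
        SELECT * FROM bucket_bandwidth_rollups WHERE interval_start < $1
    `, cutoff)
    // ...followed by deleting the moved rows from bucket_bandwidth_rollups,
    // and the same pattern for the storagenode rollups.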
Change-Id: I1626a3742ad4271bc744fbcefa6355a29d49c6a5
This index is obsolete and duplicates a similar (project_id, name)
index on the same table.
Moreover, it might confuse CockroachDB as to which of the two indexes to use,
which might affect DB performance.
Change-Id: If8d1df8347714942cea9dca82864ba5f4973bed3
Comparing the result from a subquery with the "IN" operator instead of
"=" makes a huge difference in the execution time of the SQL query on
CockroachDB.
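Purely as an illustration of the shape of the change (the table and column names are made up, not the actual query):

    // before (slow on CockroachDB):
    //   DELETE FROM queue WHERE node_id = (SELECT id FROM exited_nodes LIMIT 1)
    // after (fast):
    //   DELETE FROM queue WHERE node_id IN (SELECT id FROM exited_nodes)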
Change-Id: I76e8f75a7bc95951667345d1ed9bd60f9aef3edb
When we observed the value for total piecesizes stored in the network,
we were doing it after converting them to byte-hours, rather than using
the actual piece sizes. This fixes that issue.
Change-Id: I1564d21b519f70eb59f298d97dbd777baf127723
We want to have a single uplink branch for the standard and multipart-upload satellites, but some tests are using helper methods from multipart. This change adds the methods used by the uplink tests.
Change-Id: I82352ed56674ff7e8743b58061ba594018e78e3b
We are checking if satStreamID was created in the last 48 hours. If it is
older, we treat it as expired and fail to unmarshal it.
Since the satStreamID is also the Upload ID for multipart uploads, this
means that all calls fail for multipart uploads older than 48 hours.
Even aborting old multipart uploads is not possible.
To resolve this issue, we should stop checking satStreamID for
expiration.
Change-Id: Ieaf53ed3cd800cdd08843676c2d9490b007d962e
Parts that have segment index gaps should be treated similarly how
multipart objects are, because direct calculation of the segment does
not work.
Change-Id: I2717eac36f085b5100f3d600fcf0ce056202a9eb
CreateGetOrderLimits is not used anymore because we have CreateGetOrderLimits2. We need to remove the old method and fix the name of the second.
Change-Id: I59148b8d28fc9dbab7d452c884319125a02745d1
In some cases we need to set encryption parameters later, with CommitObject method. This change makes Encryption optional with BeginObject* methods and mandatory with CommitObject if not set earlier.
Change-Id: I812c9b0e8fc213ca32d4758e0e68227e0e9bdd32
In the past we were storing a fixed segment size with StreamInfo, encrypted in the metadata. The value was the unencrypted size of the segment, not the encrypted one.
Change-Id: Id6b18440c674223eabbb152b1636c83e1ab6462c
Add a ProjectsCursor type for pagination
Add PageCount, CurrentPage, and TotalCount to ProjectsPage
This allows us to mimic the logic of GetBucketTotals and the
implementation of BucketUsages in graphql for the new ProjectsByOwnerID
functionality.
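The page bookkeeping is the usual ceiling division, e.g. (a sketch; the field names follow the message above, the rest is assumed):

    page.TotalCount = totalCount
    page.PageCount = int((totalCount + int64(limit) - 1) / int64(limit)) // ceil(total/limit)
    page.CurrentPage = int(offset/int64(limit)) + 1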
Change-Id: I4e1613859085db65971b44fcacd9813d9ddad8eb
Full scope:
private/testplanet,satellite/{overlay,satellitedb}
Description:
In most cases, downtime tracking with audits will eventually lead
to DQ for nodes who are unresponsive. However, if a stray node has no
pieces, it will not be audited and will thus never be disqualified.
This chore will check for nodes who have not successfully been contacted
in some set time and DQ them.
There are some new flags for toggling DQ of stray nodes and the timeframes
for running the chore and how long nodes can go without contact.
Change-Id: Ic9d41fdbf214736798925e728245180fb3c55615
Tally shouldn't abort its cycle if the accounting cache returns an error,
because updating it isn't an essential requirement.
Change-Id: I78fd2bd9cf253ddedfb9ada80c0fa2ddf438f647
On upload we need to override both pending and committed objects. This change adjusts DeleteObjectAllVersions to delete both.
Change-Id: Ib66c2af207c618119f7bf0de7fa9d3e5145d8641
* Deduplicate NodeID list prior to fetching IPs.
* Use NodeSelectionCache for fetching reliable IPs.
* Return number of segments, reliable pieces and all pieces.
Change-Id: I13e679caab275488b4037624b840a4068dad9589
For being able to have resilient multi-region satellites we cannot stop
processing uploads/download client request when Redis isn't responding
properly.
These changes avoid stopping the processing of client requests when
we cannot check if the client exceeds its storage or bandwidth limits
and we cannot update its used storage/bandwidth limits because Redis is
not responding successfully or the satellite database returns an error.
Change-Id: Ia7f12c07fc9ffdfad0e7ff052ff3fd81eca0f0e3
Respond to the HTTP clients which request the project usage limits with
different status codes depending on the error class returned by the
satellite/accounting Service.
Change-Id: I6f486ea55517f616c7cec81dbbe77e997484180f
This is the first step in the removal of uptime columns on the
nodes table. These columns are no longer used:
uptime_success_count
total_uptime_count
uptime_reputation_alpha
uptime_reputation_beta
In order to avoid breaking backwards compatibility, we need to
remove all references to these columns before removing the columns
themselves from the database. However, since uptime_success_count
and total_uptime_count are NOT NULLABLE, we can't remove them from
the insert statements in the overlay. So we can't remove the columns
because of the references, and we can't remove the references because
the columns can't be null. What a pickle. To remedy this, we will set a
default on the columns. Then we should be able to remove them from the
insert statements.
Change-Id: I75f6c56fb7897835bbf29869f86f39de1d9dd345
We have to adapt the live accounting to allow the packages that use it
to differentiate errors, so they are able to ignore them and make our
satellite resilient to Redis downtime.
For differentiating errors we should make changes in the live accounting
but also in the storage/redis.Client, however, we may need to do some
dirty workarounds or break other parts of the implementation that
depends on it.
On the other hand, we want to get rid of the storage/redis.Client because
it has more functionality than the one that we are using, and a
process has already been started to remove it.
Hence, we have refactored the live accounting to directly use the Redis
client library for later on (in a future commit) adapt the satellite for
being resilient to Redis downtime.
Last but not least, a test for expired bandwidth keys has been added,
and with it a bug was spotted and fixed.
Change-Id: Ibd191522cd20f6a9a15e5ccb7beb83a678e530ff