The created_at column is first added without a default value to avoid
setting the current time for existing segments.
Change-Id: Ic2fe3da238422e2949e6f3016fbac04eb89ba037
Migration step 148 will cause errors because we missed some
references to the columns being dropped. Removing the step
altogether causes problems with backwards compatibility tests
because the change already exists in the latest release tag.
To circumvent this, we change v148 to an empty migration.
Add methods FindTable and RemoveColumn in private/dbutil/dbschema
Change-Id: Ia527e95b88a88c5dc82800928ce6f8cfb879e334
Move a specific interface & types used for testing to a private
subpackage with a name that clearly identifies it as being for testing
purposes.
Change-Id: I646cf3b6f0a3b518a6f9a125998dc5a02df02db6
ListSegments loads all the segment data into memory; however, this can
add up to a lot of data with inline segments and large objects.
Change-Id: I037738f0e70b810ecbea7d83b00ea7ca9eb90c7a
The IterateDatabase method was used by the zombie segment reaper, which was removed for the multipart implementation.
Change-Id: I93e1294236612d6d82b2ab57053bb84e653f72b4
Iterate over streams/segments rather than loading all of them into
memory. This reduces the memory overhead of metainfo loop.
Change-Id: I9e98ab98f0d5f6e80668677269b62d6549526e57
cockroach is having problems completing huge transactions before
timeouts, so do smaller transactions.
because we can have partial recording of payments which are not
unique, we have to read and check whether a record already exists
before writing. this is not concurrency safe.
Change-Id: Ia7d59499a43ce6d70cb2a23754edbdd1b643ef1a
Previously, if a node was not found in containment, it was
given the status 'skipped'.
We later try to delete skipped nodes from containment.
To fix this, add a new status called 'remove' to differentiate
nodes which should be skipped and nodes which should be deleted.
Change-Id: Ic09e62dc9723c89d0c9f968ce68c039114a9d74e
For the metainfo loop we need only some of the Segment fields. Removing the rest will reduce memory consumption during the loop.
Change-Id: I4af8baab58f7de8ddf5e142380180bb70b1b442d
The previous default FlushBatchSize of 10000 was causing major
slow down in select and insert statements on bucket_bandwidth_rollups.
We saw on the saltlake satellite that a FlushBatchSize of 1000 helped
reduce contention and query latency.
Change-Id: Ib95e73482219bc5aedc11925b1849fa5999774ba
This method will be used only with the metainfo loop, and we need to customize the query to consume less memory.
Change-Id: Iaa97392f483c5df5609d501b3847b80eb1ea2583
We want to read from the DB only those fields that are used by the metainfo loop, so we need to remove most of the fields from LoopObjectEntry.
Change-Id: I14ecae288f631dc0ff54f4c560ce43b736eccdcf
Currently our metabase assumption is that it may contain arbitrary
bucket names, and the endpoint applies the naming constraints as it
sees fit. However, by passing bucket_name as TEXT, pg and crdb
automatically try to convert it to []byte, which may or may not work
as intended, or in some cases not work at all.
Cast all bucket name arguments to []byte to make it work.
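A minimal sketch of the idea with hypothetical names (the actual call
sites in this change differ):

    // before: the driver sends bucketName as TEXT and pg/crdb guess
    // at the conversion to the bytea column
    row := db.QueryRowContext(ctx,
        `SELECT stream_id FROM objects WHERE bucket_name = $1`,
        bucketName)

    // after: pass the argument as []byte so both databases receive an
    // explicit byte sequence
    row := db.QueryRowContext(ctx,
        `SELECT stream_id FROM objects WHERE bucket_name = $1`,
        []byte(bucketName))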
Change-Id: I44650f5c873010997398bb0163d7f56ff6d9b5cf
We want to have a custom loop iterator to avoid reading all object fields, to reduce memory consumption. This first step just renames the existing iterator to IterateLoopObjects.
Change-Id: I8878ff21a49ba224db2d497cc8f9076e75c7609e
The new 'consistency ge-cleanup-orphaned-data' cli command deleted
orphaned transfer queue items, but not entries in the
graceful_exit_progress table. This will delete orphaned entries
from the exit progress table too.
Change-Id: I5f927aac1f258490678deaf179be92ccfe10fcd8
Currently the old encrypted keys may not match the path component
encoding. Change the iterator such that the prefixes handle arbitrary
byte sequences.
Change-Id: I0a50049f4ef9887e1c4df6f9692f967a054430eb
The new metainfo loop can have memory issues when a single batch contains an object with many segments. This change limits the number of batched segments to a defined limit. The solution is not perfect: if we have a single object with an extremely large segment count, it can still exceed the defined limit by a lot. We need to prepare a safer solution soon.
Change-Id: Iefcf466d5bac76513d4219b1a9d99adc361c54ae
Turns out that many methods generated with dbx were not used at all.
Let's remove them.
As a next step we can think about dropping tables like:
* user_credit
* offer
Change-Id: Id6cda81a701348db2a6b8b26daa22ae9c4f87cb4
It looks like we cannot use the root piece ID as an indicator of whether a segment is inline, as we have a case on the SLC satellite where an inline segment has a root piece ID set. Checking the pieces should be more reliable.
Change-Id: I2377ff88861390342273f5e71871373eaf462615
WHAT:
config flag to indicate if satellite is in beta
WHY:
to avoid using hardcoded satellite names which may cause issues
Change-Id: If92eb7417c340bf343a9a91e2f6b11f0349020c5
Segments are not read in batches; for each batch of objects
we read all segments for those objects.
Change-Id: Idaf19bbe4d4b095065d59399dd326e22c57499a6
Pregenerate the database schema we should use for most tests.
Currently, Cockroach is slow with regards to migration and it's
better if it happens in as few transactions as possible.
This reduces test time from ~21min to ~15min.
Change-Id: Ife8117053e6b9ecf3c93fe63677edf15d4d7c254
The subquery for DELETE FROM objects returns a stream_id field for filtering. Unfortunately, stream_id is not indexed. This change removes the subquery from the CockroachDB delete bucket query.
Change-Id: If1abe21668c593e6d4bdc3ba8cdbad26c09d234e
Using a random piece number may generate the exact same key or piece
number. Instead, use a fixed piece number and key.
Change-Id: I54b7bc1a6698149bf99608dd46501ea963cec084
TestProjectUsage_ResetLimitsFirstDayOfNextMonth seems to be flaky.
It doesn't reproduce easily; however, the test should flush orders
from storage nodes to the satellite, so that the orders chore flushing
on the satellite makes sense.
Change-Id: I586d9d8ce10b5f6320e79324f6128b4f8c2cac5f
Testing interfaces is slightly clearer when it's in the package needing
the database rather than each individual implementation.
Change-Id: I10334c214a205f7e510b939b4359a2214c4e060a
When listing pending objects with prefix, the prefix should be prepended
to the EncryptedPath in satStreamID. Otherwise, listing multipart
uploads may display different UploadID than expected.
Change-Id: I27e9f9af9348783e053ad123121b6ddd051739e4
We need to keep empty inline segments, as we did with pointerDB, because otherwise old uplinks won't be able to download a file after uploading it. To reduce the number of empty inline segments, the uplink side needs to skip empty last inline segments for multipart uploads.
Change-Id: Ice86c805babba1ad17149754cbd6b3f4fd652722
ListAllBuckets could skip buckets when the total number of buckets
exceeds list limit. Replace listing buckets with looping directly
on the objects table.
Change-Id: I43da2fdf51e83915a7854b782f0e9ec32c373018
We have multipart objects so we may get multiple inline segments
sequences or no segments at all for objects.
Change-Id: Ie46ee777a2db8f18f7154e3443bb9e07ecb170f7
We have multipart objects so we may get multiple inline segments
sequences or no segments at all for objects.
Change-Id: Ic19150efe2ca2b1c60ddd5d30b1317922221c221
Until now we were using a single RS per object, but it turns out that
we need to be able to support RS per segment, and we need to give the
uplink this information while downloading.
In addition, we use the RedundancySchemePerSegment flag on the
GetObject request to detect whether we should try to get the RS from
the segment for this request's response.
Change-Id: I209dad324496ff59b521b11d2343da61dcdbe7f5
Until now we were using a single RS per object, but it turns out that
we need to be able to support RS per segment, and we need to give the
uplink this information while downloading.
Change-Id: I6565b7c08962b3a1429f6079e7c2023a0a7c8b72
When there are concurrent refreshes to the cache and the entries are
missing, it could end up causing multiple database calls, even though
only one is needed.
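A common way to collapse concurrent refreshes into a single call is
request coalescing, for example with golang.org/x/sync/singleflight.
A minimal sketch under that assumption; Entry, DB, lookup, store and
FetchEntry are hypothetical stand-ins, not the actual types in this
change:

    import (
        "context"

        "golang.org/x/sync/singleflight"
    )

    type Cache struct {
        group singleflight.Group
        db    DB // hypothetical: anything that can fetch a missing entry
        // in-memory map and mutex elided
    }

    func (c *Cache) Get(ctx context.Context, key string) (Entry, error) {
        if entry, ok := c.lookup(key); ok {
            return entry, nil
        }
        // Concurrent callers asking for the same missing key share a
        // single FetchEntry call instead of each hitting the database.
        v, err, _ := c.group.Do(key, func() (interface{}, error) {
            return c.db.FetchEntry(ctx, key)
        })
        if err != nil {
            return Entry{}, err
        }
        entry := v.(Entry)
        c.store(key, entry)
        return entry, nil
    }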
Change-Id: I1ae7a124bbdd1570473cf3a032d375d2f25a8426
adds tests to BeginObjectNextVersion and BeginObjectExactVersion
to check the behavior when an older or a newer committed version
exists.
The current behavior is: everything is committed.
Change-Id: Ia8facbe0dc038a5d214e4e56da3c8e4df2f18900
This enables the transfer of pieces from an on-going multipart upload.
Tests are also modified to take into account pending multipart uploads.
See https://storjlabs.atlassian.net/browse/PG-161
Change-Id: I35d433c44dd6e618667e5e8f9f998ef867b9f1ad
Old uplinks send some additional information inside marshaled protobufs, and we need to extract things like encryption parameters. Newer uplinks pass them directly in the request.
Change-Id: I0b575e68c3ed98481247fe38344e7d61cbd542ba
This adds AliasPieces run length encoding. On average it should
make our pieces encoding:
repair=50,optimal=85,total=90 152.0 bytes
repair=16,optimal=37,total=50 65.4 bytes
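To give a feel for why run-length encoding helps, here is a toy sketch,
not the actual AliasPieces wire format: piece numbers within a segment
are mostly consecutive, so storing the gap from the previous piece
number plus the node alias lets most entries fit in a couple of varint
bytes.

    import "encoding/binary"

    type AliasPiece struct {
        Number int32 // piece number within the segment
        Alias  int32 // compact alias standing in for a 32-byte node ID
    }

    func encode(pieces []AliasPiece) []byte {
        var buf []byte
        prev := int32(-1)
        for _, p := range pieces {
            // gap is 1 for consecutive piece numbers, encoding to one byte
            buf = binary.AppendVarint(buf, int64(p.Number-prev))
            buf = binary.AppendVarint(buf, int64(p.Alias))
            prev = p.Number
        }
        return buf
    }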
Change-Id: I391a9183164828f05383a3cde9ab0e4549c2d440
WHAT:
people who sign up on US2 are not redirected to the verification page. From now on we have to set the verify URL to make the redirect happen
WHY:
user experience
Change-Id: I96c51a2c4f9cb6376cbfea639675b32918b58bee
This PR removes all back-end related referral program code including the
marketing portal.
We will have a separate PR for the front-end code and a database
migration to drop the `offers` and `usercredits` tables
Change-Id: If59f952cddfe0558a7dc03a0eac7cc1081517f88
We will add a cache to nodes, so using completely random nodes wouldn't
show the actual performance.
Change-Id: I94f18283712812f05f7795efd3c7cf57499fa52c
We need to keep an in-memory cache to avoid lookups into the aliases
table. This adds the in-memory state of the cache.
Change-Id: Ief2b9bb19e10b46839b9208472dfc3035eb49af3
This is the first step in supporting node aliases. It adds a table
that automatically assigns aliases to nodes inserted into the table.
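A schematic version of such a table (the real migration may differ):

    CREATE TABLE node_aliases (
        node_id    BYTEA NOT NULL UNIQUE,
        -- the alias is assigned automatically on insert
        node_alias INT4  NOT NULL GENERATED ALWAYS AS IDENTITY,
        PRIMARY KEY ( node_alias )
    );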
Change-Id: Ibdf40097c3c1e5b371500203f8db203505a48adc
This ensures the caveats are unique even when they contain the same
permissions and will result in unique macaroons. This is important to
ensure revocation doesn't impact more macaroons than intended.
Change-Id: I6354edd0119f2d85eaf580f2d1926a3de9151b88
Remove the orders Settlement endpoint because it isn't used and it was
already always returning an error.
Change-Id: I81486fbe7044a1444182173bc0693698ee7cfe7e
These changes are independently tracked on
https://github.com/storj/storj/tree/jt/migration-reorder
The point of this is to make the distributed column
migration, needed for SNO invoice generation, the very
next one, so we can release it as a point release.
Change-Id: I26e1c03629c4f079b9ad12485e2b71a715d82b3b
allow disabling tcp/quic
In order to have more control of a server so that we can
simulate connection failures in `testplanet`, this PR changes
quic.Listener to accept an existing UDPConn instead of relying on the
quic-go library to create the UDPConn.
This PR also adds two flags on the `server.Config` struct to allow
enabling/disabling tcp/tls listener and quic listener. By default, they
are both set to true.
- `DisableTCPTLS`: internal flag, disables tcp/tls listener.
- `DisableQUIC`: hidden flag, disables quic listener
Making `DisableQUIC` a hidden flag gives storagenode operators the
ability to disable quic traffic in case their setup can't work with
udp traffic.
Change-Id: I853b12435d988b9c41ad9b873fd57480d792e378
this changes from a satellite error to a local encryption
error with the upcoming permissions changes where we only
include keys for the paths that are allowed.
Change-Id: I7aa37cfbaee31a1e54afe0423b283b9f41d9345f
Limit the bucket name lookup to the date range of the calling methods, since we only need distinct bucket names for that time period.
Adds a new index and removes an index specific to project ID, since it is no longer needed.
Change-Id: Ic07bbfb1c32280e0c0e39f8da020b284e1e5d974
It's impossible to time this check correctly. The segment may expire
just at the time we upload the repaired pieces to new storage nodes.
They will reject this as expired and the repair will fail.
Also, we penalize storage nodes with audit failure only if they fail
piece hash verification, i.e. return incorrect data, but only if they
have already deleted the piece.
So, it would be best if the repair service does not care about object
expiration at all. This is a responsibility of another service.
Removing this check will also simplify how we migrate this code
correctly to the metabase.
Change-Id: I09f7b372ae2602daee919a8a73cd0475fb263cd2
Fix an issue, due to a copy-paste problem, that made the Graceful Exit
test flaky.
The test uses a time created at the beginning of the test to avoid
nondeterministic time differences caused by variation in the response
times of the DB queries; however, some parts of the test were using
the current time rather than this base time, so they have been
addressed.
Change-Id: I4786f06209e041269875c07798a44c2850478438
We need this method to fix repairing pending objects. In another PR, it
will replace the GetObjectLatestVersion + GetSegmentByPosition calls
that are currently executed.
Change-Id: I4c5c2ab604edf898452b6fd21b86d4d3f970ce79
Delete satellite order methods and DB tables which aren't used anymore
after the refactoring of the orders to stick bucket information in the
orders' encrypted metadata.
There are also configuration parameters and a satellite chore that
aren't needed anymore after the orders refactoring.
Change-Id: Ida3682b95921df70792284b42c96d2508bf8ca9c
Add a command to the satellite for cleaning up the Graceful Exit (a.k.a
GE) transfer queue items of nodes that have exited.
The commit adds a couple of new methods to the GE satellite DB, and
their corresponding tests, for performing the operations of the new
command.
Change-Id: I29a572a59689d63b24990ac13c52e76d65aaa917
using redash i manually checked that the only time the sum of
the payments does not match the paid column is for 2020-12, and
when it does not match there are no payments.
Change-Id: I71ce0571de7e38e21548d7d6757b25abc3bfa781
The rollup archiver chore moves bucket bandwidth rollups and
storagenode rollups that are older than a given duration
to two new archive tables.
Change-Id: I1626a3742ad4271bc744fbcefa6355a29d49c6a5
This index is obsolete and duplicates a similar (project_id, name)
index on the same table.
Moreover, it might confuse CockroachDB as to which of the two indexes
to use, which might affect DB performance.
Change-Id: If8d1df8347714942cea9dca82864ba5f4973bed3
Comparing the result from a subquery with the "IN" operator instead of
"=" makes a huge difference in the execution time of the SQL query on
CockroachDB.
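Schematically (not the literal query from this change):

    -- slow on CockroachDB:
    DELETE FROM segments WHERE stream_id = ( SELECT stream_id FROM ... );
    -- much faster:
    DELETE FROM segments WHERE stream_id IN ( SELECT stream_id FROM ... );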
Change-Id: I76e8f75a7bc95951667345d1ed9bd60f9aef3edb
When we observed the value for total piecesizes stored in the network,
we were doing it after converting them to byte-hours, rather than using
the actual piece sizes. This fixes that issue.
Change-Id: I1564d21b519f70eb59f298d97dbd777baf127723
We want to have a single uplink branch for the standard and multipart-upload satellites, but some tests are using helper methods from multipart. This change adds the methods used by the uplink tests.
Change-Id: I82352ed56674ff7e8743b58061ba594018e78e3b
We are checking whether satStreamID was created in the last 48 hours.
If it is older, we treat it as expired and fail to unmarshal it.
Since the satStreamID is also the Upload ID for multipart uploads, this
means that all calls fail for multipart uploads older than 48 hours.
Even aborting old multipart uploads is not possible.
To resolve this issue, we should stop checking satStreamID for
expiration.
Change-Id: Ieaf53ed3cd800cdd08843676c2d9490b007d962e
Parts that have segment index gaps should be treated similarly to
multipart objects, because direct calculation of the segment does
not work.
Change-Id: I2717eac36f085b5100f3d600fcf0ce056202a9eb
CreateGetOrderLimits is not used anymore because we have CreateGetOrderLimits2. We need to remove the old method and fix the name of the second.
Change-Id: I59148b8d28fc9dbab7d452c884319125a02745d1
In some cases we need to set encryption parameters later, with the CommitObject method. This change makes Encryption optional with the BeginObject* methods and mandatory with CommitObject if not set earlier.
Change-Id: I812c9b0e8fc213ca32d4758e0e68227e0e9bdd32
In the past we were storing a fixed segment size with StreamInfo, encrypted in the metadata. The value was the unencrypted size of the segment, not the encrypted one.
Change-Id: Id6b18440c674223eabbb152b1636c83e1ab6462c
Add ProjectsCursor type for pagination.
Add PageCount, CurrentPage, and TotalCount to ProjectsPage.
This allows us to mimic the logic of GetBucketTotals and the
implementation of BucketUsages in graphql for the new ProjectsByOwnerID
functionality.
Change-Id: I4e1613859085db65971b44fcacd9813d9ddad8eb
Full scope:
private/testplanet,satellite/{overlay,satellitedb}
Description:
In most cases, downtime tracking with audits will eventually lead
to DQ for nodes who are unresponsive. However, if a stray node has no
pieces, it will not be audited and will thus never be disqualified.
This chore will check for nodes who have not successfully been contacted
in some set time and DQ them.
There are some new flags for toggling DQ of stray nodes and the timeframes
for running the chore and how long nodes can go without contact.
Change-Id: Ic9d41fdbf214736798925e728245180fb3c55615
Tally shouldn't abort its cycle if the accounting cache returns an
error, because updating it isn't an essential requirement.
Change-Id: I78fd2bd9cf253ddedfb9ada80c0fa2ddf438f647
On upload we need to override pending and committed objects. This change adjusts DeleteObjectAllVersions to delete both.
Change-Id: Ib66c2af207c618119f7bf0de7fa9d3e5145d8641
* Deduplicate NodeID list prior to fetching IPs.
* Use NodeSelectionCache for fetching reliable IPs.
* Return the number of segments, reliable pieces and all pieces.
Change-Id: I13e679caab275488b4037624b840a4068dad9589
To be able to have resilient multi-region satellites, we cannot stop
processing client upload/download requests when Redis isn't responding
properly.
These changes avoid stopping the processing of client requests when we
cannot check whether the client exceeds its storage or bandwidth
limits, or cannot update its used storage/bandwidth limits, because
Redis is not responding successfully or the satellite database returns
an error.
Change-Id: Ia7f12c07fc9ffdfad0e7ff052ff3fd81eca0f0e3
Respond to the HTTP clients which request the project usage limits
with different status codes depending on the error class returned by
the satellite/accounting Service.
Change-Id: I6f486ea55517f616c7cec81dbbe77e997484180f
This is the first step in the removal of uptime columns on the
nodes table. These columns are no longer used:
uptime_success_count
total_uptime_count
uptime_reputation_alpha
uptime_reputation_beta
In order to avoid breaking backwards compatibility, we need to
remove all references to these columns before removing the columns
themselves from the database. However, since uptime_success_count
and total_uptime_count are NOT NULLABLE, we can't remove them from
the insert statements in the overlay. So we can't remove the columns
because of the references, and we can't remove the references because
the columns can't be null. What a pickle. To remedy this, we will set a
default on the columns. Then we should be able to remove them from the
insert statements.
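The remedy is roughly (schematic SQL):

    ALTER TABLE nodes ALTER COLUMN uptime_success_count SET DEFAULT 0;
    ALTER TABLE nodes ALTER COLUMN total_uptime_count   SET DEFAULT 0;
    -- now inserts no longer need to mention these columns, so the code
    -- references can be dropped before the columns themselves are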
Change-Id: I75f6c56fb7897835bbf29869f86f39de1d9dd345
We have to adapt the live accounting to allow the packages that use it
to differentiate between errors, to be able to ignore them and make
our satellite resilient to Redis downtime.
For differentiating errors we would have to make changes in the live
accounting but also in storage/redis.Client; however, that may require
dirty workarounds or break other parts of the implementation that
depend on it.
On the other hand, we want to get rid of storage/redis.Client because
it has more functionality than what we are using, and a process has
already been started to remove it.
Hence, we have refactored the live accounting to use the Redis client
library directly, so that later on (in a future commit) the satellite
can be adapted to be resilient to Redis downtime.
Last but not least, a test for expired bandwidth keys has been added,
and with it a bug was spotted and fixed.
Change-Id: Ibd191522cd20f6a9a15e5ccb7beb83a678e530ff
GetSuccessfulNodeNotCheckedInSince and GetOfflineNodesLimited are overlay methods
which were only used by the previous downtime tracking system which has been removed.
These methods should also be removed.
Change-Id: Idb829d742e1f987e095604423fff656fe581183e
The non-multipart uplink implementation always tries to download an
object by downloading the last segment first (PartNumber=0, Index=-1),
but this approach won't work with multipart objects. We need to reject
such old-style requests with a reasonable message.
Change-Id: I9221e019933565a8d25136bdfef3e054320bac3d
SatelliteAddress in OrderLimit is not being used anymore, and some
satellite addresses may consume too many bytes.
Change-Id: Ic7a0efe5b6211c2f3b91af67b293cde98b29d074
Avoid using the project UUID string representation, because
it uses more bandwidth.
This reduces the encrypted metadata size from 118 -> 97 bytes.
Change-Id: Ic53a81b83acc065f24f28cd404f9c0b1fe592594
The total_plain_size and total_encrypted_size columns in the objects
table were set as INT4, which limits the size of committed objects to
just 2 GiB.
This patch migrates the DB to change the type of these fields to INT8.
Change-Id: Iad7e7b44a652e6c5b8e17b80588637bb48390fe6
Do not insert the number of healthy pieces for segment health anymore.
Rather, insert the segment health calculated by our new priority
function.
Change-Id: Ieee7fb2deee89f4d79ae85bac7f577befa2a0c7f
Full prefix: satellite/{overlay,nodestats},storagenode/{reputation,nodestats}
Allow the storagenode to receive its audit history data from the
satellite via the satellite's GetStats endpoint.
The storagenode does not save this data for use in the API yet.
Change-Id: I9488f4d7a4ccb4ccf8336b8e4aeb3e5beee54979
* Separate audit history interface into its own file in the overlay
package
* Add overlay.AuditHistory struct so that internalpb.AuditHistory is
only used from within the database layer
* Add overlay.GetAuditHistory function for features that will require
access to detailed audit history information
* Do not return full audit history from UpdateAuditHistory - callers to
that function only need to know the online score and whether a full
tracking period has been completed
* Move audit history tests out of satellite/satellitedb, since they are
independent of database implementation
Change-Id: I35b0c4ac23bbaabd80624f8a9631c3cb1a1f33bd
Now that the deprecated downtime tracking service is removed
(3fc76f4ffe), we can safely remove
the nodes_offline_times table.
Change-Id: Ia7c6efe32ba104dff5a830af5f2beee3337eefe5
Nodes which are offline_suspended will no longer be considered for new
uploads. The current threshold that enters a node into offline
suspension is 0.6. Disqualification for offline suspension is still
disabled.
Change-Id: I0da9abf47167dd5bf6bb21e0bc2186e003e38d1a
Currently we do not allow anything other than the "paid" status for
invoices when trying to delete a user. However, there are a couple of
other states that are still fine to accept during deletion of a user.
This change reverses the check to look for the statuses that we do not
want to allow.
Change-Id: I78d85af6438015c55100fa201ccffc731c91de1c
this change isn't the real fix. it's just ignoring the problem.
i don't know what the real fix is. is the problem with the test, or
is there actually a problem with the rollup code?
Change-Id: I552bdd947deadc212cc56efc5f818942b9827126
Query the nodes table using AS OF SYSTEM TIME '-10s' (by default) when on CRDB to alleviate contention on the nodes table and minimize CRDB retries. Queries for standard uploads are already cached, and node lookups for graceful exit uploads have retry logic, so it isn't necessary for the nodes returned to be current.
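For example (schematic; only valid on CockroachDB):

    -- reads from a consistent snapshot ~10s in the past, avoiding
    -- contention with concurrent writes to the nodes table
    SELECT id, address, last_ip_port
    FROM nodes AS OF SYSTEM TIME '-10s'
    WHERE disqualified IS NULL;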
IterateObjectsAllVersionsWithStatus
We need a different implementation for IterateObjectsAllVersions
because we want to iterate over all objects without specifying object
status. The existing method will get a new name, but implementation
details are not changed.
Change-Id: I01b987996772fa7f8fd73da9910d52db2d1aa0d7
Since the Satellite now requires the order encryption functionality (since the serial_number table is deprecated) to function properly, we can remove the config flag to turn the feature on/off.
Change-Id: Ie973f72a9a05a81cef9e53dc9c99d22c940c2488
This PR contains the minimum changes needed to stop inserting into the serial_numbers table. This is the first step in completely deprecating that table.
The next step is to create another PR to remove the expiredSerial chore, fix more tests, and remove any other methods on the serial_number table.
Change-Id: I5f12a56ebf3fa4d1a1976141d2911f25a98d2cc3
This fixes issues with passing observers between iteration methods.
It's not the best implementation, but I think we will need to optimize
it soon one way or another.
Change-Id: I574599bfd10822d84e2d2f1800bcd88e176a76ea
The chief segment health models we've come up with are the "immediate
danger" model and the "survivability" model. The former calculates the
chance of a segment becoming lost in the next time period (using
the CDF of the binomial distribution to estimate the chance of x nodes
failing in that period), while the latter estimates the number of
iterations for which a segment can be expected to survive (using the
mean of the negative binomial distribution). The immediate danger model
was a promising one for comparing segment health across segments with
different RS parameters, as it is more precisely what we want to
prevent, but it turns out that practically all segments in production
have infinite health, as the chance of losing segments with any
reasonable estimate of node failure rate is smaller than DBL_EPSILON,
the smallest possible difference from 1.0 representable in a float64
(about 1e-16).
Leaving aside the wisdom of worrying about the repair of segments that
have less than a 1e-16 chance of being lost, we want to be extremely
conservative and proactive in our repair efforts, and the health of the
segments we have been repairing thus far also evaluates to infinity
under the immediate danger model. Thus, we find ourselves reaching for
an alternative.
Dr. Ben saves the day: the survivability model is a reasonably close
approximation of the immediate danger model, and even better, it is
far simpler to calculate and yields manageable values for real-world
segments. The downside to it is that it requires as input an estimate
of the total number of active nodes.
This change replaces the segment health calculation to use the
survivability model, and reinstates the call to SegmentHealth() where it
was reverted. It gets estimates for the total number of active nodes by
leveraging the reliability cache.
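As a rough sketch of the survivability idea (our notation here, not
necessarily the exact function used): if a segment has h healthy
pieces and needs k to remain recoverable, it is lost after
f = h - k + 1 node failures. With an estimated per-iteration node
failure probability p, the mean of the negative binomial distribution
gives an expected survival time of roughly f / p iterations, a value
that stays finite and comparable across RS schemes.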
Change-Id: Ia5d9b9031b9f6cf0fa7b9005a7011609415527dc
A few weeks ago it was discovered that the segment health function
was not working as expected with production values. As a bandaid,
we decided to insert the number of healthy pieces into the segment
health column. This should have effectively reverted our means of
prioritizing repair to the previous implementation.
However, it turns out that the bandaid was placed into the code which
removes items from the irreparable db and inserts them into the repair
queue.
This change inserts the number of healthy pieces into the repair queue
in the method RemoteSegment.
Change-Id: Iabfc7984df0a928066b69e9aecb6f615253f1ad2
there were two changes to this package: one that modified
some things and renamed access.go to main.go, and one that
reduced the binary sizes against main.go.
somehow the latter change was included on this branch but
the former was not, which made the folder contain both
main.go and access.go. building the package then fails
with duplicate definitions.
the easiest fix is to remove access.go which makes the folder
match the contents of the current master branch.
Change-Id: I7070a0d25a0399cef3166b8b2189427bc55587e0
There is a new checker field called statsCollector. This contains
a map of stats pointers where the key is a stringified redundancy
scheme. stats contains all tagged monkit metrics. These metrics exist
under the key name, "tagged_repair_stats", which is tagged with the
name of each metric and a corresponding rs scheme.
As the metainfo observer works on a segment, it checks statsCollector
for a stats corresponding to the segment's redundancy scheme. If one
doesn't exist, it is created and chained to the monkit scope. Now we can call
Observe, Inc, etc on the fields just like before, and they have tags!
durabilityStats has also been renamed to aggregateStats.
At the end of the metainfo loop, we insert the aggregateStats totals into the
corresponding stats fields for metric reporting.
Change-Id: I8aa1918351d246a8ef818b9712ed4cb39d1ea9c6
Make changes so that we only import the necessary files from the console package so that the generated wasm code is as small as possible.
This change gets the compiled wasm code down to 8.6MB uncompressed and 2MB when compressed with `gzip --best`.
https://review.dev.storj.io/c/storj/storj/+/3396
Change-Id: Ifdd4be285810757b46bbbe43327c0d0139e5f8f7
Remove a declared variable that's set but never read nor passed to any
function, so it's unused code.
Change-Id: I8daf9d1f71d29ab39d7a80011d1b4813ada1c67d
We need to be able to update just the remote_pieces column in the DB.
This is needed at least for the repair process.
Change-Id: I20dcc9b06babfefbbf102f32b1d14946379f26c2
It was designed to detect and remove zombie segments in the PointerDB.
This tool is no longer relevant with the MetabaseDB.
Change-Id: I112552203b1329a5a659f69a0043eb1f8dadb551
We migrated satelliteDB off of Postgres and over to CockroachDB (crdb), but there was way too high contention for the injuredsegments table so we had to rollback to Postgres for the repair queue. A couple things contributed to this problem:
1) crdb doesn't support `FOR UPDATE SKIP LOCKED`
2) the original crdb Select query was doing 2 full table scans and not using any indexes
3) the SLC Satellite (where we were doing the migration) was running 48 repair worker processes, each of which run up to 5 goroutines which all are trying to select out of the repair queue and this was causing a ton of contention.
The changes in this PR should help to reduce that contention and improve performance on CRDB.
The changes include:
1) Use an update/set query instead of select/update to capitalize on the new `UPDATE` implicit row locking ability in CRDB.
- Details: As of CRDB v20.2.2, there is implicit row locking with update/set queries (contention reduction and performance gains are described in this blog post: https://www.cockroachlabs.com/blog/when-and-why-to-use-select-for-update-in-cockroachdb/).
2) Remove the `ORDER BY` clause since this was causing a full table scan and also prevented the use of the row locking capability.
- While long term it is very important to `ORDER BY segment_health`, the change here is only supposed to be a temporary bandaid to get us migrated over to CRDB quickly. Since segment_health has been set to infinity for some time now (re: https://review.dev.storj.io/c/storj/storj/+/3224), it seems like it might be ok to continue not making use of this for the short term. However, long term this needs to be fixed with a redesign of the repair workers, possibly in the trusted delegated repair design (https://review.dev.storj.io/c/storj/storj/+/2602), something similar to what is recommended for implementing a queue on CRDB in https://dev.to/ajwerner/quick-and-easy-exactly-once-distributed-work-queues-using-serializable-transactions-jdp, or migrating to a RabbitMQ priority queue or something similar.
This PR's improved query uses the index to avoid full scans and also locks the row it's going to update, and CRDB retries for us if there are any lock errors.
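The shape of the improved query is roughly (schematic; table and
column names may differ from the actual change):

    -- claim one repair job directly; the UPDATE takes an implicit row
    -- lock, and dropping ORDER BY lets CRDB use the index instead of a
    -- full table scan (UPDATE ... LIMIT is a CockroachDB extension)
    UPDATE injuredsegments
    SET attempted = now()
    WHERE attempted IS NULL OR attempted < now() - INTERVAL '6 hours'
    LIMIT 1
    RETURNING path;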
Change-Id: Id29faad2186627872fbeb0f31536c4f55f860f23
We need to be able to list all buckets in DB without knowing project ID.
This method will be used to list buckets for metainfo loop
implementation based on metabase.
Change-Id: Iac75af0eee4f31e80a15577575a8249cbca787b2
- TestBucketNameValidation
- TestBatch
- TestCommitObjectMetadataSize
- TestIDs
TestOverwriteZombieSegments is removed as not relevant to metabase.
Change-Id: I13cf5abe342089960628f185061303fd4f9d09a4
This also removes the
TestEndpoint_DeleteObjectPieces_ObjectWithoutLastSegment test case as it
does not seem relevant to metabase.
Change-Id: I06a0ecaa8232c10c15e433517a7ba056933bf858
We should set the client-requested maxParts to MaxListLimit if it is
greater than that value instead of returning an error.
MinIO default value for maxParts is 10,000 while the satellite's
MaxListLimit is 1,000. If we return an error, the ListParts with default
maxParts will throw an error.
Change-Id: I06739e1d8d8f96803eba491585395da0443aec04
We are no longer planning on implementing downtime penalization using
the method described in
docs/blueprints/archive/storage-node-downtime-tracking-deprecated.md.
Now, we are implementing the design described in
docs/blueprints/storage-node-downtime-tracking-with-audits.md.
This change removes the downtime estimation chores from the satellite
core as well as the package satellite/downtime. A future change will
remove the database table.
Change-Id: I1a1d3cf9dceeba36255d25243294865b89925518
We have some issues with the SUBSTRING function on CockroachDB, so for
now we are removing it from the SQL query and replacing it with Go
code.
Change-Id: I5be921211067d42e7d1a4997076bcfdbed9617a1
We want to stop using the serial_numbers table in satelliteDB. One of the last places using the serial_numbers table is when storagenodes settle orders, we look up the bucket name and project ID from the serial number from the serial_numbers table.
Now that we have support to add encrypted metadata into the OrderLimit, this PR makes use of that and now attempts to read the project ID and bucket name from the encrypted orderLimit metadata instead of from the serial_numbers table. For backwards compatibility and to ensure no errors, we will still fallback to the old way of getting that info from the serial_numbers table, but this will be removed in the next release as long as there are no errors.
All processes that create orderLimits must have an orders.encryption-keys set. The services that create orderLimits (and thus need to encrypt the order metadata) are the satellite apiProcess, the repair process, audit service (core process), and graceful exit (core process). Only the satellite api process decrypts the order metadata when storagenodes settle orders. This means that the same encryption key needs to be provided in the config for the satellite api process, repair process, and the core process like so:
orders.include-encrypted-metadata=true
orders.encryption-keys="<encryptionKeyID>=<encryptionKey>"
Change-Id: Ie2c037971713d6fbf69d697bfad7f8b672eedd66
iterator
This method replaces `deleteByPrefix`, as at the moment the only
function of that method was to delete objects in a bucket.
Change-Id: I5266103672003fbd64f3847f53760b1ba0016fe2
Which database is accessed and how it internally does migrations are
implementation details and do not belong in the requirements interface.
Change-Id: Ia4a6994f39470063a96a8e5f3a1bd27aa79fe5cd
WHAT:
Add a POST request to get gateway credentials using an access grant.
Put the request URL in the config and use it for the request.
WHY:
to show gateway credentials on UI
Change-Id: I15ef43ecdeed69b0961d5796aacb47f36d560b1b
these are unchanged from storj.io/dbx.
we're importing them because in a later commit we
will change them, and it'd be nice to see that
diff as a separate commit.
Change-Id: I8315130ed6bab397bd65b9a1a90c29d130b8c02d
this change tries really hard to never have all of the storage node
rollups in memory at the same time, up until the rollups are actually
getting summed together.
Change-Id: If67f49e7d71106798d996a6850b3e48671bd9e18
the immediate need is to be able to move the repair queue back out
of cockroach if we can't save it.
Change-Id: If26001a4e6804f6bb8713b4aee7e4fd6254dc326
We did not test the SegmentHealth function with actual production
values, and it turns out that values such as 52 healthy, 35 minimum
result in +Inf segment health - so pretty much all segments put into the
repair queue have the same health, which means we effectively aren't
sorting by health.
This change inserts numHealthy as segment health into the database so
the segments are ordered as they were before. We need to refine the
SegmentHealth function before we can support multi RS.
Change-Id: Ief19bbfee3594c5dfe94ca606bc930f05f85ff74
Because the PieceTracker receives a piece count per node, which is an
approximation of the number of pieces that are going to be reported by
the metainfo loop, we can use it as a good guess of the map's size and
initialize the map with it.
Change-Id: I644db40926c03e4c457457fb41d2ec1da059cea6
MetadataSize can vary slightly, and checking for an exact value makes
it difficult to change what's being encoded in the metadata.
Change-Id: I5f1ade41bc26d115e6743367ee35cf1ba74795c9
Otherwise, if left at the default version 0, the iterator will include the
cursor item in the result, which fails some tests.
Change-Id: I85103a36852477f371ec46c673a82c2e129978b7
We now have the piece hashes verified for all segments on all production
satellites. We can remove the code that handles the case where piece
hashes are not verified. This would make easier the migration of
services from PointerDB to the new metabase.
For consistency, PieceHashesVerified is still set to true in PointerDB
for new segments.
Change-Id: Idf0ccce4c8d01ae812f11e8384a7221d90d4c183
Currently flag parsing seems to call Set twice, which causes problems
with encryption keys. We can clear the values on every Set for now.
Change-Id: Id5c695b4020194ac1c50a2da9c7d2a896cb9216f
Rather than having a single repair override value, we will now support
repair override values based on a particular segment's RS scheme.
The new format for RS override values is
"k/o/n-override,k/o/n-override..."
Change-Id: Ieb422638446ef3a9357d59b2d279ee941367604d
CRDB doesn't like large deletes. While testing in the POC environment we found that deletes on the serial_numbers table could take hours. This change limits deletes to 1000 at a time (configurable) to avoid blocking other queries.
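A sketch of the batched delete loop, assuming CRDB's DELETE ... LIMIT
support (table, column and function names here are schematic):

    import (
        "context"
        "database/sql"
        "time"
    )

    func deleteExpiredBatched(ctx context.Context, db *sql.DB, now time.Time, batchSize int) error {
        for {
            res, err := db.ExecContext(ctx,
                `DELETE FROM serial_numbers WHERE expires_at < $1 LIMIT $2`,
                now, batchSize)
            if err != nil {
                return err
            }
            deleted, err := res.RowsAffected()
            if err != nil {
                return err
            }
            if deleted < int64(batchSize) {
                return nil // the last batch was partial, nothing left
            }
        }
    }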
Change-Id: I08455e25db1574579dd4d7b7125a08e9c913dff1
With the new phase 3 order submission, orders can be added to the
storage and bandwidth rollup tables at timestamps before the most recent
rollup was run. This change shifts the start time of each new rollup
window to account for any unexpired orders that might have been added
since the previous rollup.
A satellitedb migration is necessary to allow upserts in the
accounting_rollups table when entries with identical node_ids and
start_times are inserted.
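The upsert that the migration enables looks roughly like (schematic
SQL; the real table has more columns):

    INSERT INTO accounting_rollups ( node_id, start_time, put_total, get_total )
    VALUES ( $1, $2, $3, $4 )
    ON CONFLICT ( node_id, start_time )
    DO UPDATE SET
        put_total = accounting_rollups.put_total + EXCLUDED.put_total,
        get_total = accounting_rollups.get_total + EXCLUDED.get_total;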
Change-Id: Ib3022081f4d6be60cfec8430b45867ad3c01da63
Before manipulating order information on storagenodes we need to wait
for the orders to propagate to the database. Some of that happens
async with uplink.
Change-Id: Iaacfd7db0909ab5d2831d06388e5fb27b6d4778f
Firstly, this changes the repair functionality to return Canceled errors
when a repair is canceled during the Get phase. Previously, because we
do not track individual errors per piece, this would just show up as a
failure to download enough pieces to repair the segment, which would
cause the segment to be added to the IrreparableDB, which is entirely
unhelpful.
Then, ignore Canceled errors in the return value of the repair worker.
Apparently, when the worker returns an error, that makes Cobra exit the
program with a nonzero exit code, which causes some piece of our
deployment automation to freak out and page people. And when we ask the
repair worker to shut down, "canceled" errors are what we _expect_, not
an error case.
Change-Id: Ia3eb1c60a8d6ec5d09e7cef55dea523be28e8435
The old iterator returns object keys without prefixes; this helps to
reduce the bandwidth from the database. The endpoint also doesn't send
the prefixes.
Change-Id: I77d85dae671ee3a16abe75db14e19674e80abaf4
to metabase
* EncryptedMetainfoEncryptedKey added to CommitSegment and
UpdateMetadata request
* EncryptedMetainfoEncryptedKey returned with GetObject response and all
delete responses
* EncryptedMetainfoEncryptedKey returned with object iterator results
Change-Id: I917541ab5f3e1863bc8f238d17a15fbf72a23025
We plan to add support for a new Reed-Solomon scheme soon, but our
repair queue orders segments by least number of healthy pieces first.
With a second RS scheme, fewer healthy pieces will not necessarily
correlate to lower health.
This change just adds the new column in a migration. A separate change
will add the new health function.
Right now, since we only support one RS scheme, behavior will not
change. Number of healthy pieces is being inserted as "segment health"
until the new health function is merged.
Segment health is calculated with a new priority function created in
commit 3e5640359. In order to use the function, a new config value is
added, called NodeFailureRate, representing the approximate probability
of any individual node going down in the duration of one checker run.
Change-Id: I51c4202203faf52528d923befbe886dbf86d02f2
It turns out we need to make 2 more changes in order for the new order submission phase 3 to get deployed.
This PR makes 2 changes:
1) when the rollup service deletes tallies, we now keep tallies around until orders expire (vs 1 day like before).
2) the reported rollup chore will now write the storagenode_bandwidth_rollups to a new table _phase2 as an intermediary step so it doesn't conflict with phase 3 order settlement.
These changes need to be deployed for 2 days before we can turn on phase 3 of the new orders settlement workflow.
Change-Id: Iafbff577ba7d55f8f17b7db857311b2ce799de60
The current monkit reporting for "remote_segments_lost" is not usable for
triggering alerts, as it has reported no data. To allow alerting, two new
metrics "checker_segments_below_min_req" and "repairer_segments_below_min_req"
will increment by zero on each segment unless it is below the minimum
required piece count. The two metrics report what is found by the checker
and the repairer respectively.
Change-Id: I98a68bb189eaf68a833d25cf5db9e68df535b9d7
This change is adjusting metainfo endpoint to use metabase for uploading
and downloading remote objects. Inline segments will be added later.
Change-Id: I109d45bf644cd48096c47361043ebd8dfeaea0f3
This PR does the following three things:
1. Defines a high-level interface for this wasm package
- All return values from this package will be wrapped with a
result object that contains a value field and an error field
2. Exposes two new functions to allow users to add permissions for a
given API key
- newPermission()
- setAPIKeyPermission()
3. Adds API documentation for the newly added API functions
Change-Id: Id995189702b369bba18fa344bef4ddfb0f3f1f44
While resolving conflicts with `master` I missed this change which is
needed e.g. to run storj-sim.
Change-Id: I56a548ed92b978510526c26c81af03051acfde2f
Make metainfo.RSConfig a valid pflag config value. This allows us to
configure the RSConfig as a string like k/m/o/n-shareSize, which makes
having multiple supported RS schemes easier in the future.
RS-related config values that are no longer needed have been removed
(MinTotalThreshold, MaxTotalThreshold, MaxBufferMem, Verify).
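For example, a k=29, m=35, o=80, n=110 scheme with 256-byte shares
would be written as (illustrative value; the exact flag spelling may
differ):

    --metainfo.rs="29/35/80/110-256 B"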
Change-Id: I0178ae467dcf4375c504e7202f31443d627c15e1
A change was made to use a metabase.SegmentKey (a byte slice alias)
as the last seen item to iterate through the irreparable DB in a
for loop. However, this SegmentKey was not initialized, thus it was
nil. This caused the DB query to return nothing, and healthy segments
could not be cleaned out of the irreparable DB.
Change-Id: Idb30d6fef6113a30a27158d548f62c7443e65a81
WHAT:
a change-user's-email endpoint and the appropriate service method were implemented
WHY:
to make it possible to change a user's email for a temporary filezilla account
Change-Id: Ieea41bf49819a42b5f433e8dfaeec24c6d5ddc9f
Storage nodes undergoing Graceful Exit have up to now been receiving
hostnames for all other storage nodes they need to contact when
transferring pieces. This adds up to a lot of DNS lookups, which
apparently overwhelm some home routers. There does not seem to be any
need for us to send hostnames for graceful exit as opposed to IP
addresses; we already use IP addresses (as given by the last_ip_port
column in the nodes table) for all the GET and PUT orders we send out.
This change causes IP addresses to be used instead.
I started trying to construct a test to ensure that the behavior
changed, but it was rabbit-holing, so I've begun to feel that maybe this
change doesn't require one; it is a very simple change, and very much of
the same nature as what we already do for IPs in CreateGetOrderLimits
and CreatePutOrderLimits (and others).
Change-Id: Ib2b5ffe7a9310e9cdbe7464450cc7c934fa229a1
With the new overlay.AuditOutcome type for offline audits, the
IsUp field is redundant. If AuditOutcome != AuditOffline, then
the node is online.
In addition to removing the field itself, other changes needed
to be made regarding the relationship between 'uptime' and 'audits'.
Previously, uptime and audit outcome were completely separated. For
example, it was possible to update a node's stats to give it a
successful/failed/unknown audit while simultaneously indicating that
the node was offline by setting IsUp to false. This is no longer possible
under this changeset. Some tests which did this have been changed slightly
in order to pass.
Also add new benchmarks for UpdateStats and BatchUpdateStats with different
audit outcomes.
Change-Id: I998892d615850b1f138dc62f9b050f720ea0926b
After moving SatStreamID and SatSegmentID from common I missed changing
some methods in metainfo endpoint. This change is a fix for that.
Change-Id: I34e121fce47371ee4cfd92cce03809520b68859f
After moving SatStreamID and SatSegmentID from common I missed changing
some methods in metainfo endpoint. This change is a fix for that.
Change-Id: I3344623dc7acfa73db6c20cd3212301e74335857
We have some types that are only valid for satellite usage. Such types
are SatStreamID and SatSegmentID. This change moves those types to
storj/storj and adds basic infrastructure for generating code.
Change-Id: I1e643844f947ce06b13e51ff16b7e671267cea64
Some of the metainfo endpoint methods are not used, but we still have
implementations there. This change removes the unused code and returns
an unimplemented error for those methods.
Change-Id: I74e75e0caff76a4f5d119ee989b687b4e9d6e6f9
This change removes the unused 'createRequests' struct. As far as I
remember, it was used to help validate the old metainfo
beginObject/commitObject flow.
Change-Id: I0f139b9934196d73f26eafa347ba5605722f3a55
As part of the Metainfo Refactoring, we need to make the Metainfo Loop
working with both the current PointerDB and the new Metabase. Thus, the
Metainfo Loop should pass to the Observer interface more specific Object
and Segment types instead of pb.Pointer.
After this change, there are still a couple of use cases that require
access to the pb.Pointer (hence we have it as a field in the
metainfo.Segment type):
1. Expired Deletion Service
2. Repair Service
It would require additional refactoring in these two services before we
are able to clean this.
Change-Id: Ib3eb6b7507ed89d5ba745ffbb6b37524ef10ed9f
- Remove flag for switching off offline audit reporting.
- Change the overlay method used from UpdateUptime to BatchUpdateStats, as this
is where the new online scoring is done.
- Add a new overlay.AuditOutcome type: AuditOffline. Since we now use the same
method to record offline audits as success, failure, and unknown, we need to
distinguish offline audits from the rest.
Change-Id: Iadcfe10cf13466fa1a1c2dc542db8994a6423355
This fixes a slow query that was taking up to 4 seconds in production:
SELECT node_id, path, piece_num, root_piece_id, durability_ratio, queued_at, requested_at, last_failed_at, last_failed_code, failed_count, finished_at, order_limit_send_count
FROM graceful_exit_transfer_queue
WHERE node_id = '[redacted]'
AND finished_at is NULL
AND last_failed_at is NULL
ORDER BY durability_ratio asc, queued_at asc LIMIT 300 OFFSET 0;
Change-Id: Ib89743ca35f1d8d0a1456b20fa08c683ebdc1549
Fix the DeleteAccount handler to return the 501 HTTP status code,
because it's the one that corresponds to a "Not Implemented" status.
Add a black box test for DeleteAccount to ensure that it always
returns an error response because, at this time, we don't allow
deleting accounts through the API.
This test was not added to the corresponding commit
https://review.dev.storj.io/c/storj/storj/+/2712 due to the rush to
fix it.
Change-Id: Ibcf09e2ec52f182a8a580d606c457328d94c8b60
A year ago we made the audit service delete expired segments.
Meanwhile, we introduced an expired deletion sub-service in the
metainfo service whose sole purpose is deleting expired segments.
Therefore, now we are removing this responsibility from the audit
service. It will continue to avoid reporting failures on expired
segments, but it will not delete them anymore.
We do this to cleanup responsibilities in advance of the metainfo
refactoring.
Change-Id: Id7aab2126f9289dbb5b0bdf7331ba7a3328730e4
In production we are seeing that ~115 storage nodes (out of ~6,500) are not using the new SettlementWithWindow endpoint (but they are upgraded to > v1.12).
We analyzed data being reported by monkit for the nodes who were above version 1.11 but were not successfully submitting orders to the new endpoint.
The nodes fell into a few categories:
1. Always fail to list orders from the db; never get to try sending orders from the filestore
2. Successfully list/send orders from the db; never get to calling satellite endpoint for submitting filestore orders
3. Successfully list/send orders from the db; successfully list filestore orders, but satellite endpoint fails (with "unauthenticated" drpc error)
The code change here adds the following to address these issues:
- modify the query for ordersDB.listUnsentBySatellite so that we no longer select expired orders from the unsent_orders table
- always process any orders that are in the ordersDB and also any orders stored in the filestore
- add monkit monitoring to filestore.ListUnsentBySatellite so that we can see the failures/successes
Change-Id: I0b473e5d75252e7ab5fa6b5c204ed260ab5094ec
This preserves the last_ip_and_port field from node lookups through
CreateAuditOrderLimits() and CreateAuditOrderLimit(), so that later
calls to (*Verifier).GetShare() can try to use that IP and port. If a
connection to the given IP and port cannot be made, or the connection
cannot be verified and secured with the target node identity, an
attempt is made to connect to the original node address instead.
A similar change is not necessary to the other Create*OrderLimits
functions, because they already replace node addresses with the cached
IP and port as appropriate. We might want to consider making a similar
change to CreateGetRepairOrderLimits(), though.
The audit situation is unique because the ramifications are especially
powerful when we get the address wrong. Failing a single audit can have
a heavy cost to a storage node. We need to make extra effort in order
to avoid imposing that cost unfairly.
Situation 1: If an audit fails because the repair worker failed to make
a DNS query (which might well be the fault on the satellite side), and
we have last_ip_and_port information available for the target node, it
would be unfair not to try connecting to that last_ip_and_port address.
Situation 2: If a node has changed addresses recently and the operator
correctly changed its DNS entry, but we don't bother querying DNS, it
would be unfair to penalize the node for our failure to connect to it.
So the audit worker must try both last_ip_and_port _and_ the node
address as supplied by the SNO.
We elect here to try last_ip_and_port first, on the grounds that (a) it
is expected to work in the large majority of cases, and (b) there
should not be any security concerns with connecting to an out-of-date
address, and (c) avoiding DNS queries on the satellite side helps
alleviate satellite operational load.
Change-Id: I9bf6c6c79866d879adecac6144a6c346f4f61200
This change allows the creation and deletion of api keys via the admin API.
It adds two methods for deletion, one via the name and projectID and the
second one via the serialized apikey directly.
Change-Id: Ida8aa729e716db58c671a901e5f7e39253e89a0d
We are moving an error into rejectErr since it's preventing storage nodes from being able to settle other orders.
Change-Id: I3ac97c340e491b127f5e0024c5e8bd9f4df8d5c3
Use tagsql.DB pointer as step database, to propagate changes
back and forth between actual database and migration.
Adds CreateDB operation to the migration step to be able to
create new dbs before executing migration action.
Adjusts storagenode database migration to use inner tagsql.DB
pointer of each database as step.DB.
Adjusts satellite database migration, adds proxy migrationDB field
to satellite db that wraps itself as tagsql.DB, pointer of which
is used as step.DB.
Change-Id: Ifed4de5b01a356cf7b37db64d2eaeb7b61982c5c
This change completes the column migration of
5f6fccc6e8 and
2f648fd981.
It resets every users project limits who are below or equal to our
current production defaults.
Change-Id: Ie041d08bb67b62844f6023190fc00bc2dad5b1cb
WHAT:
Now the project amount limit is taken from the users DB instead of the config. But if the DB value is 0, then the default config value will be used instead.
WHY:
this will allow us to change a user's project limit by changing the DB value.
Change-Id: I9edcd0bf9eaae5fe40e90a44cac82d9ce8519274
This makes it possible to remove this obsolete flag from the
multi-tenant gateway.
As a consequence, displaying the GATEWAY_0_ACCESS env var will always
require a running storj-sim. Until now, it was required only the first
time. Then the value was stored in the 'access' config. But this is now
not possible anymore.
The changes in StripeMock are required to fix failures in integration
tests. StripeMock is in-memory and its data does not survive restarts of
storj-sim. The second and following starts of storj-sim had invalid
state of StripeMock, which failed requests that were required to
populate the GATEWAY_0_ACCESS env var. The changes in StripeMock make
it repopulate the Stripe customers from the database.
Change-Id: I981a208172b76577f12ecdaae485f5ae4ea269bc