Adds -tags=service when building storagenode-updater, which compiles interaction
with the Windows service manager (SVC) and Linux systemd into the binary.
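Below is a minimal sketch of how a "service" build tag can gate code into the binary; the file, package, and function names are hypothetical, not the actual storagenode-updater sources.

    // service.go is compiled only when built with `go build -tags=service`.

    // +build service

    package updater

    // serviceEnabled reports whether service-manager integration
    // (Windows SVC / Linux systemd) was compiled into the binary; a
    // sibling file without the build tag would return false.
    func serviceEnabled() bool { return true }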
Change-Id: I21b4b3e8718348a30ec0453b07135ff3de033a5a
Repair workers prioritize the most unhealthy segments. This has the consequence that when we
finally begin to reach the end of the queue, a good portion of the remaining segments are
healthy again as their nodes have come back online. This makes it appear that there are more
injured segments than there actually are.
Solution:
Any time the checker observes an injured segment, it inserts it into the repair queue or
updates it if it already exists. Therefore, any segment that was not inserted or updated
by the last checker iteration is no longer injured. To do this we
add a new column to the injured segments table, updated_at, which is set to the current time
when a segment is inserted or updated. At the end of the checker iteration, we can delete any
items where updated_at < checker start.
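A minimal sketch of that cleanup step, assuming a repair queue table named injured_segments with the new updated_at column; the table, column, and function names are illustrative rather than the exact satellitedb schema:

    package repairqueue

    import (
        "context"
        "database/sql"
        "time"
    )

    // CleanStale removes entries that were not inserted or updated by the
    // checker iteration that started at checkerStart, i.e. segments that
    // are no longer injured.
    func CleanStale(ctx context.Context, db *sql.DB, checkerStart time.Time) (int64, error) {
        res, err := db.ExecContext(ctx,
            `DELETE FROM injured_segments WHERE updated_at < $1`, checkerStart)
        if err != nil {
            return 0, err
        }
        return res.RowsAffected()
    }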
Change-Id: I76a98487a4a845fab2fbc677638a732a95057a94
WHAT:
a new project summary (details) section was added to the project dashboard page.
v1 shows team size, number of API keys, number of buckets, and estimated charges for the selected project for the current month.
WHY:
part of the dashboard redesign, for a better user experience.
Change-Id: I31204a3e68db49486bad1e1a0eedd238eba6b84e
Jira: https://storjlabs.atlassian.net/browse/PG-69
There are a number of segments with piece_hashes_verified = false in
their metadata on the US-Central-1, Europe-West-1, and Asia-East-1
satellites. Most probably, this happened due to a bug we had in the
past. We want to verify them before executing the main migration to
metabase, which leaves the main migration with one less issue to think
about.
Change-Id: I8831af1a254c560d45bb87d7104e49abd8242236
We have configlock_test for checking changes, so the separate script and
Makefile target are not needed.
Change-Id: I2bbc1c21ad849c9b7ec8bba43c0e11e94e04f6a6
Currently the Cockroach migration test is the heaviest with regard to
schema changes. This causes other tests to time out. This adds an
alternate CockroachDB instance that is used for migration tests.
Change-Id: I01fe9313527ff002f0bb0914dd52c3645b8eaf6d
Currently a user is only able to create a project if either
a STORJ deposit or a credit card was added to their account. With this change, an existing
coupon is also sufficient to let the user proceed.
Change-Id: I7be8d2d9ec58a15c50755b3fe33af04d2fd64ea2
This PR adds the following items:
1) an in-memory read-only cache that stores project limit info for project IDs.
This cache is stored in memory since this is expected to be a small amount of data, and in this implementation we only cache projects that have been accessed. Currently the largest satellite (eu-west) has about 4500 total projects. Storing the storage limit (int64), the bandwidth limit (int64), and the 32-byte project ID for all 4500 projects would come to roughly 200 KB, so this all fits in memory for the time being. At some point it may not as usage grows, but that seems years out.
The cache is read-only: when requests come in to upload/download a file, we read the current limits for that project from the cache. If the cache does not contain the project ID, we get the info from the database (satellitedb projects table) and then add it to the cache.
The values in the cache are only modified when either a) the project ID is not in the cache, or b) the cached item has expired (default 10 minutes); in both cases the data is refreshed from the database. This means that if we update the usage limits in the database, the change might not show up in the cache for up to 10 minutes, so it will not be reflected in limiting end users uploading/downloading files during that period.
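A minimal sketch of such a read-through cache with expiration; the type, interface, and method names are hypothetical and simplified (string project IDs, no size bound), not the actual cache added by this change:

    package accounting

    import (
        "context"
        "sync"
        "time"
    )

    // ProjectLimits holds the cached limits for a single project.
    type ProjectLimits struct {
        StorageLimit   int64
        BandwidthLimit int64
    }

    // limitsDB is the slice of the projects table the cache needs.
    type limitsDB interface {
        GetProjectLimits(ctx context.Context, projectID string) (ProjectLimits, error)
    }

    type cacheEntry struct {
        limits    ProjectLimits
        expiresAt time.Time
    }

    // LimitsCache is read-only from the caller's perspective: entries are
    // only written when a project is missing or its entry has expired.
    type LimitsCache struct {
        db  limitsDB
        ttl time.Duration

        mu      sync.Mutex
        entries map[string]cacheEntry
    }

    func NewLimitsCache(db limitsDB, ttl time.Duration) *LimitsCache {
        return &LimitsCache{db: db, ttl: ttl, entries: make(map[string]cacheEntry)}
    }

    // Get returns the cached limits, falling back to the database when the
    // project is not cached yet or its entry is older than the TTL
    // (10 minutes by default in the description above).
    func (c *LimitsCache) Get(ctx context.Context, projectID string) (ProjectLimits, error) {
        c.mu.Lock()
        entry, ok := c.entries[projectID]
        c.mu.Unlock()
        if ok && time.Now().Before(entry.expiresAt) {
            return entry.limits, nil
        }

        limits, err := c.db.GetProjectLimits(ctx, projectID)
        if err != nil {
            return ProjectLimits{}, err
        }

        c.mu.Lock()
        c.entries[projectID] = cacheEntry{limits: limits, expiresAt: time.Now().Add(c.ttl)}
        c.mu.Unlock()
        return limits, nil
    }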
Change-Id: I3fd7056cf963676009834fcbcf9c4a0922ca4a8f
WHAT:
dropdown open/close logic moved to the local store. Removed the ability to open more than one dropdown at a time. Dropdown colors changed to satisfy the required contrast ratio.
WHY:
better user experience
Change-Id: Ia7a5683a3544fcb6bdd8a05d1fd6a12755b76caf
Our current endpoints bail on us if the column data is null. Thus we need
to take the intermediate step of setting the default to a fixed value and
resetting those with the following release.
This sets the default column values to our current config values of 50 GB
for storage and bandwidth and 100 buckets, while still keeping the fields nullable.
All 0 values are migrated to the default as well to ensure those users can
keep using their projects, since with the original change 0 actually means 0.
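A minimal sketch of this intermediate migration; the table and column names are illustrative rather than the exact satellitedb schema, and the 50 GB default is written here as 50 GiB in bytes:

    package migrations

    import (
        "context"
        "database/sql"
    )

    // applyLimitDefaults keeps the columns nullable but gives them fixed
    // defaults, and moves existing 0 values to the default since 0 now
    // really means 0.
    func applyLimitDefaults(ctx context.Context, db *sql.DB) error {
        stmts := []string{
            `ALTER TABLE projects ALTER COLUMN usage_limit SET DEFAULT 53687091200`,     // 50 GiB
            `ALTER TABLE projects ALTER COLUMN bandwidth_limit SET DEFAULT 53687091200`, // 50 GiB
            `ALTER TABLE projects ALTER COLUMN max_buckets SET DEFAULT 100`,
            `UPDATE projects SET usage_limit = 53687091200 WHERE usage_limit = 0`,
            `UPDATE projects SET bandwidth_limit = 53687091200 WHERE bandwidth_limit = 0`,
            `UPDATE projects SET max_buckets = 100 WHERE max_buckets = 0`,
        }
        for _, stmt := range stmts {
            if _, err := db.ExecContext(ctx, stmt); err != nil {
                return err
            }
        }
        return nil
    }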
Change-Id: I797be80ce2d2105091599dc1b3fc76f74336b66b
Combine store.writeLimit and store.writeOrder into
store.writeLimitAndOrder, which only requires a single call to
file.Write(). This simplifies the code and also reduces the risk of file
corruption that comes with multiple separate calls to Write().
Also combine the corresponding readLimit/readOrder functions for
consistency.
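A minimal sketch of the combined write, assuming the limit and order have already been serialized to bytes; the function name matches the description above, but the framing is illustrative rather than the exact ordersfile encoding:

    package ordersfile

    import "os"

    // writeLimitAndOrder writes both records with a single Write call, so
    // a crash or partial write cannot leave only one of the two on disk.
    func writeLimitAndOrder(f *os.File, limitBytes, orderBytes []byte) error {
        buf := make([]byte, 0, len(limitBytes)+len(orderBytes))
        buf = append(buf, limitBytes...)
        buf = append(buf, orderBytes...)
        _, err := f.Write(buf)
        return err
    }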
Change-Id: I62ed406fa2c02708465a678d18293f510f666440
WHAT:
when a user was in the no-paywall cohort, an unnecessary error message appeared in the console after login.
The error has been removed.
WHY:
better user experience.
Change-Id: Ieafd756e3dc0d6a10bf0058b52ebfaf38d02e8fd
Currently we have no way to actually set one
of the following limits to 0 (meaning not usable):
- maxBuckets
- usageLimit
- bandwidthLimit
With the fields nullable,
NULL corresponds to the global default,
0 now actually means 0, and
a set value determines a custom limit.
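A minimal sketch of how the nullable value can be interpreted; the function and parameter names are illustrative:

    package console

    // effectiveLimit maps the stored value to the enforced limit:
    // NULL (nil) falls back to the global default, 0 really means 0
    // (not usable), and any other value is a custom limit.
    func effectiveLimit(stored *int64, globalDefault int64) int64 {
        if stored == nil {
            return globalDefault
        }
        return *stored
    }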
Change-Id: I92bb77529dcbd0881ae8368921be9d246eb0919e
Jira: https://storjlabs.atlassian.net/browse/PG-67
There are a number of old-style objects (without the number of segments
in their metadata) on US-Central-1, Europe-West-1, and Asia-East-1
satellites. We want to migrate their metadata to contain the number of
segments before executing the main migration to metabase, which leaves
the main migration with one less issue to think about.
Change-Id: I42497ae0375b5eb972aab08c700048b9a93bb18f
WHAT:
the last selected project is now stored in the browser's local storage and is selected automatically on data load (login/refresh)
WHY:
better user experience for multiproject state
Change-Id: Idadbb75c8391017ee1845415bec282d9e33309eb
We only need to acquire mutexes inside ListUnsentBySatellite when we
want to determine whether a file has an active enqueue in progress.
On some nodes, ListUnsentBySatellite can take a particularly long time and have
undesired side effects, so if we can minimize locking time, those nodes
will be better off.
Also, lock the archive mutex during ListUnsentBySatellite so files cannot be
archived and listed at the same time.
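A minimal sketch of the narrowed critical sections; the struct, field, and method names are illustrative rather than the actual orders file store:

    package orders

    import "sync"

    type fileStore struct {
        activeMu sync.Mutex
        active   map[string]int // files with an enqueue in progress

        archiveMu sync.Mutex
    }

    // hasActiveEnqueue holds activeMu only for the lookup, so the rest of
    // the listing can run without blocking writers.
    func (s *fileStore) hasActiveEnqueue(name string) bool {
        s.activeMu.Lock()
        defer s.activeMu.Unlock()
        return s.active[name] > 0
    }

    // listUnsentBySatellite holds archiveMu for the whole listing so files
    // cannot be archived while they are being listed; activeMu is taken
    // only briefly, per file, inside hasActiveEnqueue.
    func (s *fileStore) listUnsentBySatellite(files []string, walk func(name string)) {
        s.archiveMu.Lock()
        defer s.archiveMu.Unlock()
        for _, name := range files {
            if s.hasActiveEnqueue(name) {
                continue // skip files with an enqueue still in progress
            }
            walk(name)
        }
    }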
Change-Id: Ieb7e2a759c20c724a74dd8315728c873ccab14a3
WHAT:
added functionality for the user to update a project name. Logic only, without actual GUI updates.
WHY:
better user experience
Change-Id: I1e38e33ba827b0bdf2c89e29de24e4e87edb474a
Fix a case where a line is larger than maxline and the writer ended up in an
infinite loop trying to buffer more data.
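A minimal sketch of the failure mode and fix in a generic line-buffering writer; this is an illustration, not the actual writer touched by this change:

    package textio

    import (
        "bytes"
        "io"
    )

    // lineWriter forwards complete lines to out, buffering partial lines
    // until a newline arrives.
    type lineWriter struct {
        out     io.Writer
        buf     []byte
        maxLine int
    }

    func (w *lineWriter) Write(p []byte) (int, error) {
        w.buf = append(w.buf, p...)
        for {
            i := bytes.IndexByte(w.buf, '\n')
            if i < 0 {
                // A line longer than maxLine can never be completed within
                // the buffer, so flush what is already buffered instead of
                // waiting for more data that will never finish the line.
                if len(w.buf) >= w.maxLine {
                    if err := w.flush(len(w.buf)); err != nil {
                        return len(p), err
                    }
                }
                return len(p), nil
            }
            if err := w.flush(i + 1); err != nil {
                return len(p), err
            }
        }
    }

    func (w *lineWriter) flush(n int) error {
        _, err := w.out.Write(w.buf[:n])
        w.buf = w.buf[n:]
        return err
    }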
Change-Id: I243da738b331279d6bf27255778b5798e7f37f95
If we see an UnexpectedEOF error when attempting to read orders, return
the orders we have been able to read successfully and do not return an
error. This behavior ensures that the storagenode orders service
attempts to archive corrupted files instead of getting stuck retrying them
repeatedly.
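A minimal sketch of the tolerant read loop; the Order type and the fixed-size record format are stand-ins, not the actual ordersfile encoding:

    package ordersfile

    import (
        "errors"
        "io"
    )

    // Order stands in for a decoded order/limit pair; fields omitted.
    type Order struct{}

    // readOne reads one fixed-size record. io.ReadFull returns io.EOF at a
    // clean end of file and io.ErrUnexpectedEOF on a truncated record.
    func readOne(r io.Reader) (Order, error) {
        record := make([]byte, 8)
        if _, err := io.ReadFull(r, record); err != nil {
            return Order{}, err
        }
        return Order{}, nil
    }

    // ReadAll returns every order it can decode. A truncated or corrupted
    // tail (io.ErrUnexpectedEOF) is not reported as an error, so the
    // caller can archive the file instead of retrying it forever.
    func ReadAll(r io.Reader) ([]Order, error) {
        var orders []Order
        for {
            order, err := readOne(r)
            if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) {
                return orders, nil
            }
            if err != nil {
                return orders, err
            }
            orders = append(orders, order)
        }
    }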
Change-Id: I0d00d1e174f968af6e99ca861eddad190f1339e2