We only sync free disk and available space, if necessary, on startup. If the
SN's disk fills up with non-storj data, we will not know about it when reporting
available space to the satellite.
Solution: whenever we check the node's capacity, double-check free disk.
If free disk is less than the expected available space, return free disk.
Change-Id: I66265c16e03be45b6e1f5817c70df7eac0a76455
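A minimal sketch of that check, with hypothetical names (the real logic lives in the storagenode monitor):

    // availableSpace caps the space reported to the satellite by the
    // actual free disk, so non-storj data cannot cause over-reporting.
    func availableSpace(allocated, used, freeDisk int64) int64 {
        available := allocated - used
        if freeDisk < available {
            return freeDisk
        }
        return available
    }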
* Instead of archiving a list of orders and deleting an "unsent" file in
separate steps, archival simply moves the old unsent file to a new
archived file
* Add maxInFlightTime to be used along with grace period for sending
buffer
* Create unsent/archival directories in constructor
* Code cleanup
Change-Id: Ia3bc2aaf60cced6c6d413465423d78c7d5151188
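A minimal sketch of the rename step, with hypothetical names and paths:

    import "os"

    // archiveUnsentFile moves a finished unsent orders file into the
    // archive in a single filesystem operation, replacing the old
    // copy-then-delete steps.
    func archiveUnsentFile(unsentPath, archivedPath string) error {
        return os.Rename(unsentPath, archivedPath)
    }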
Nodes are receiving an error that heldamount rpc doesn't exist on a
satellite. This simply adds which satellite to the error.
Change-Id: I7708e0511b55fdd2425969db2a545645339bad81
On node start, check whether any of the trusted satellites has graceful exit (GE)
status "Exited successfully". If so, try to delete the blobs/satellite folder, so no trash is left on the SNO's disk.
Change-Id: I566266c84f2a872df54cd01bc2f15a9934f138ed
Per https://documentation.tardigrade.io/pricing/billing-and-payment:
"The calculation of per object fees is based on a standard 720-hour month."
In most years, the average value is 730 (365*24/12); leap years are the exception.
However, we want ours to be 720 (30*24) so it lines up with days.
Change-Id: Ifb9691878f1a7ea81ed36c92b37985493295fe31
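For illustration, the 720-hour normalization could be sketched like this (hypothetical names; the actual billing code lives satellite-side):

    const hoursPerMonth = 720 // 30 days * 24 hours, per the billing docs

    // objectMonths converts accumulated object-hours into billable
    // object-months using the standard 720-hour month.
    func objectMonths(objectHours float64) float64 {
        return objectHours / hoursPerMonth
    }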
When an uplink or repair work finishes uploading a piece to a
storagenode, it has no reason to wait another round trip after the piece
is committed to gracefully close the connection - in many (most?) cases,
the connection is simply canceled once the upload is complete. This has
the unintended side effect of producing a lot of "piece canceled" logs
and metrics on the storagenode side, when the reality is that the piece
uploads were successful, and not really canceled. This commit fixes
that.
Change-Id: Icbc1f7857d380134560219c1c19c186df2783cd0
With a minimum of 128B/sec, a satellite with the 10min default timeout could
have already closed its connection to a node even though the node was still
able to complete the transfer (at 128B/sec, even a 100KB piece takes over
13 minutes).
Change-Id: I6173d6473a62c6d0b0e0a8765c1ae0a5e57b0a08
* Allow orders to be archived after being settled successfully with the
satellite.
* Allow for cleanup of orders that were archived before a certain time.
* Rewrite other parts of the orders file store to work better with new
design.
Change-Id: I39bea96d80e66a324ec522745169bd6d8b351751
This will give storagenode operators a better idea of whether the memory
allocated to the usedserials store is sufficient.
Change-Id: I5c30f2e39473a573f43409511ad9e2e32680479c
* Will replace order limits database.
* This change adds functionality for storing and listing unsent orders.
* The next change will add functionality for order archival after
submission.
Change-Id: Ic5e2abc63991513245b6851a968ff2f2e18ce48d
Part 2 of moving usedserials in memory
* Drop usedserials table in storagenodedb
* Use in-memory usedserials store in place of db for order limit
verification
* Update order limit grace period to be only one hour - this means
uplinks must send their order limits to storagenodes within an hour of
receiving them
Change-Id: I37a0e1d2ca6cb80854a3ef495af2d1d1f92e9f03
Implement an in-memory store for keeping track of order limit serial
numbers. It automatically deletes items if its size exceeds a configured
limit.
This change is part 1 - it creates the store
In part 2, the in-memory store will replace the usedserials database
Change-Id: I36f540ed809f034a27c1d7cede8a0a8b080af818
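A sketch of the idea with hypothetical types; it is bounded by entry count here for simplicity, while the real limit is a configured memory size:

    // serialsStore remembers recently seen serial numbers and evicts the
    // oldest ones once the configured limit is exceeded.
    type serialsStore struct {
        limit int
        order []string            // insertion order, oldest first
        seen  map[string]struct{}
    }

    func newSerialsStore(limit int) *serialsStore {
        return &serialsStore{limit: limit, seen: make(map[string]struct{})}
    }

    // Add records a serial number, evicting the oldest entries as needed.
    // It reports false if the serial was already present (a reused serial).
    func (s *serialsStore) Add(serial string) bool {
        if _, ok := s.seen[serial]; ok {
            return false
        }
        for len(s.order) > 0 && len(s.order) >= s.limit {
            oldest := s.order[0]
            s.order = s.order[1:]
            delete(s.seen, oldest)
        }
        s.seen[serial] = struct{}{}
        s.order = append(s.order, serial)
        return true
    }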
..before they are transferred to another node and submitted to the
satellite as successful piece transfers, because if we submit an invalid
signature, the node will be marked as a cheater and disqualified
immediately.
These signatures should have been validated when the piece was
originally stored, but bitrot does happen and needn't be cause for an
immediate DQ.
Change-Id: I8b0ebd5812ea8a2e60766005b7251fbb74ef7857
In walkNamespaceWithPrefix, log in case of an "lstat" error, because this may indicate underlying disk corruption.
SG-50
Change-Id: I867c3ffc47cfac325ae90658ec4780d213ff3e63
Most places now need the NodeURL rather than the ID and Address
separately. This simplifies code in multiple places.
Change-Id: I52621d8ca52296a8b5bf7afbc1001cf8bfb44239
See https://storjlabs.atlassian.net/browse/SM-752
These changes allow us to change the log level at runtime through a handler off of the debug endpoint.
Examples of changing the log level on storj-sim
To get the current level for the satellite api process:
curl -XGET 'http://127.0.0.1:10009/logging' --header 'Content-Type: text/plain'
To change the log level:
curl -XPUT 'http://127.0.0.1:10009/logging' --header 'Content-Type: text/plain' --data-raw '{"level":"error"}'
Change-Id: I05d164b290929fa06b6d78c01075ee41f8238044
This test was the last place using it. Replace it with a direct call so
we can remove the method from uplink piecestore.
Change-Id: I62e13028663a7e67aa2495f90ecc02d0d8657fbd
Currently uploads can cause a lot of IOPS; reduce this by introducing an
in-memory buffer on top of the file.
Change-Id: I5f4e3e01c0a36258271d180b922107de447bcb59
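The shape of the change, sketched with the standard library (the real buffer wraps the piece writer; the caller must Flush before closing):

    import (
        "bufio"
        "os"
    )

    // openBuffered wraps a piece file in an in-memory write buffer so
    // many small writes coalesce into fewer large ones, reducing IOPS.
    func openBuffered(path string, bufSize int) (*bufio.Writer, *os.File, error) {
        f, err := os.Create(path)
        if err != nil {
            return nil, nil, err
        }
        return bufio.NewWriterSize(f, bufSize), f, nil
    }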
it was being used in ways that implied it should be NOT NULL
even though it was possibly null. we used to get this data
from the satellite db's added_at column as seen in 30369b02,
so backfill it using that data where joined_at is NULL, and
then alter the table to constrain the column to be NOT NULL.
Fixes #3866.
Change-Id: If2d856189209740d985f71dada7b93525e625ef3
According to the docs at https://www.sqlite.org/lang_altertable.html
doing the steps
1. Rename old table
2. Create new table
3. Copy data
4. Drop old table
is incorrect and should be
1. Create new table
2. Copy data
3. Drop old table
4. Rename new into old
Additionally, each step was being run in a different transaction,
which could cause permanent failures if a problem happened during
the migration.
Avoid both of those problems by changing up some previous migrations
that ran in this way. Since they are semantically identical, it's
fine to change up these old migrations. It will help make newer
nodes coming up for the first time more robust.
Change-Id: I43fb004fa1b6cb2fe2554f9920925420da28fb4a
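A sketch of the corrected sequence run inside a single transaction (table and columns are placeholders):

    import "database/sql"

    // migrateTable rewrites a table in the order the SQLite docs
    // recommend, inside one transaction so a failure mid-way cannot
    // leave the database in a permanently broken state.
    func migrateTable(tx *sql.Tx) error {
        steps := []string{
            `CREATE TABLE example_new ( id INTEGER PRIMARY KEY, value TEXT )`,
            `INSERT INTO example_new (id, value) SELECT id, value FROM example`,
            `DROP TABLE example`,
            `ALTER TABLE example_new RENAME TO example`,
        }
        for _, step := range steps {
            if _, err := tx.Exec(step); err != nil {
                return err
            }
        }
        return nil
    }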
CreateTables hasn't been quite accurate for a while now; rename it to
MigrateToLatest to be clearer about its behavior.
Change-Id: Ida48e95122a5d9b7a814e922d3698e00024a2ba7
Before the deleter would close its done channel once, so if additional
tests shared a storagenode, even if not in parallel, the later waits
would not work properly. This fixes that problem.
Change-Id: I7dcacf6699cef7c2c2948ba0f4369ef520601bf5
This test was the last place using piecestore.Client.DeletePieces.
Removing that usage lets us drop the method from uplink to reduce the code.
Change-Id: I72fda8888d05181f95eeb544d067c031ec3e36a0
Currently Cockroach isn't performant for concurrent database setup and
tear-down. Instead of a single instance allow setting multiple potential
connection strings and let the tests pick one connection string
randomly.
This improves test duration by ~10 minutes.
While significantly changing how pgtest works, introduce the helpers
PickPostgres and PickCockroach for selecting the database, to reduce
code duplication in multiple places.
Change-Id: I8ad171d5c4c8a4fc081ec2ae9bdd0cc948a80619
There was a race in the test code for piece deleter, which made it
possible to broadcast on the condition variable before anyone was
waiting. This change fixes that and has Wait take a context so it times
out with the context.
Change-Id: Ia4f77a7b7d2287d5ab1d7ba541caeb1ba036dba3
When we receive a piece deletion request, include the number of piece
IDs we couldn't add to the queue in the response.
Change-Id: Ibebbe92ac50105bb5c74b18211ed38d468eb33f3
Each time we process a piece deletion on the storagenode, monitor how
long the item was in the queue and the size of the queue.
Change-Id: I23f1a44f8b9cecb901bdf4739d55c005ffed4bef
To improve delete performance, we want to process deletes asynchronously
once the message has been received from the satellite. This change makes
it so that storagenodes will send the delete request to a piece Deleter,
which will process a "best-effort" delete asynchronously and return a
success message to the satellite.
There is a configurable number of max delete workers and a max delete
queue size.
Change-Id: I016b68031f9065a9b09224f161b6783e18cf21e5
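A sketch of the shape of such a deleter, with hypothetical names (a sketch only; the real one also records metrics and supports waiting in tests):

    // deleter processes piece deletions asynchronously with a bounded
    // queue and a fixed number of workers.
    type deleter struct {
        queue chan string
    }

    func newDeleter(workers, queueSize int, del func(id string)) *deleter {
        d := &deleter{queue: make(chan string, queueSize)}
        for i := 0; i < workers; i++ {
            go func() {
                for id := range d.queue {
                    del(id) // best-effort; failures are logged, not returned
                }
            }()
        }
        return d
    }

    // enqueue returns how many piece IDs were dropped because the queue
    // was full, which is reported back to the satellite.
    func (d *deleter) enqueue(ids []string) (dropped int) {
        for _, id := range ids {
            select {
            case d.queue <- id:
            default:
                dropped++
            }
        }
        return dropped
    }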
We want to avoid net/http dependency in errs2 package, hence we removed
http.ErrServerClosed from IgnoreCanceled and IsCanceled check. Now we
need to add that check explicitly to every http endpoint.
Change-Id: I62b1cc0a0a2d3b43301d713a7951e5022145f88f
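The explicit per-endpoint check now looks roughly like this:

    import (
        "errors"
        "net/http"
    )

    // serve treats a clean shutdown as success instead of relying on
    // errs2.IgnoreCanceled to filter http.ErrServerClosed.
    func serve(server *http.Server) error {
        err := server.ListenAndServe()
        if errors.Is(err, http.ErrServerClosed) {
            return nil
        }
        return err
    }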
* satellite: update log levels
Change-Id: I86bc32e042d742af6dbc469a294291a2e667e81f
* log version on start up for every service
Change-Id: Ic128bb9c5ac52d4dc6d6c4cb3059fbad73f5d3de
* Use monkit for tracking failed ip resolutions
Change-Id: Ia5aa71d315515e0c5f62c98d9d115ef984cd50c2
* fix compile errors
Change-Id: Ia33c8b6e34e780bd1115120dc347a439d99e83bf
* add request limit value to storage node rpc err
Change-Id: I1ad6706a60237928e29da300d96a1bafa94156e5
* we can't track storage node ids in monkit metrics so let's use logging to track that for expired orders
Change-Id: I1cc1d240b29019ae2f8c774792765df3cbeac887
* fix build errors
Change-Id: I6d0ffe058e9a38b7ed031c85a29440f3d68e8d47
Currently storj-sim relies on the log lines being exactly the same;
when they change, it cannot find the necessary information in the log.
Change-Id: Ia039915ef3375a7cf60f107b2c05c958de15b6d5
* Add migration to storagenode reputation table to add suspended
timestamp
* Send suspended info to storagenode from satellite nodestats endpoint
* Add suspended status to storagenode api
* Add an indicator on the storagenode dashboard informing operator of
the satellites the node is suspended on
Change-Id: Ie3669f6069cc0258ba76ec99d17006e1b5fd9c8a
uuid.UUID implements driver.Value so it can be directly used as a
scannable result.
Replace uses of dbutil.BytesToUUID with uuid.FromBytes.
Change-Id: I51a670185ceb3cc2199d5aa2b76bc3fc191ca8fe
Instead of providing the database from outside to testplanet create it
inside and then allow wrapping and modifying it. This is more convenient
to use.
Change-Id: I9b8f69e6e0a19ff984b4e2bfe927c9100c77bc6c
storagenodes have like 10 or more databases. without this
tag they all get sent as the same value, stomping on each
other.
Change-Id: Ib12019684d6ea8f2a5b83df584056dfa79e3c4b3
* debug
* traces
* cfgstruct
* process
Package `storj/private/version` will be removed as a separate change.
Change-Id: Iadc40faa782e6225513b28218952f02d9c240a9f
In many cases when a storagenode fails the preflight check, it is due to
test_table existing, which is used to determine read/write capabilities
after the initial schema verification. If preflight ends early due to a
failure or stopped storagenode, it may not get the chance to drop this
table.
This change excludes test_table from the schema comparison to ensure
that it never prevents a storagenode from starting up.
It also adds a preflight DB test for the storagenode.
Change-Id: Ib8e71df2e42fda3b2a364fbf7a801891c5831d39
After calling uplink.Upload it is not guaranteed that the
storage node has yet saved all the orders since it happens
asynchronously. Hence we need a separate func to wait
for them to complete.
Change-Id: I0c34b3ea6c98dbcf37f80493c0e10a8bdbbb2aaf
On satellite, remove all references to free_bandwidth column in nodes table.
On storage node, remove references to AllocatedBandwidth and MinimumBandwidth and mark as deprecated.
Protobuf message, NodeCapacity, is left intact for backwards compatibility.
Once this is released to all satellites, we can drop the column from the DB.
Change-Id: I2ff6c6537fc9008a0c5588e951afea58ede85838
When a storagenode begins to run low on capacity, we want to notify
the satellite before completely running out of space. To achieve this,
at the end of an upload request, the SN checks if its available space has
fallen below a certain threshold. If so, trigger a notification to the
satellites.
The new NotifyLowDisk method on the monitor chore is implemented using the
common/sync2.Cooldown type, which allows us to execute contact only once
within a given timeframe, avoiding hammering the satellites with requests.
This PR contains changes to the storagenode/contact package, namely moving
methods involving the actual satellite communication out of Chore and into
Service. This allows us to ping satellites from the monitor chore.
Change-Id: I668455748cdc6741291b61130d8ef9feece86458
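A sketch of how a cooldown gate works (hypothetical type; the real one is common/sync2.Cooldown):

    import (
        "sync"
        "time"
    )

    // cooldown lets an action run at most once per interval.
    type cooldown struct {
        mu       sync.Mutex
        last     time.Time
        interval time.Duration
    }

    // Trigger runs fn only if the interval has elapsed since the last
    // run; otherwise it is a no-op, so satellites are not hammered.
    func (c *cooldown) Trigger(fn func()) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if time.Since(c.last) < c.interval {
            return
        }
        c.last = time.Now()
        fn()
    }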
common/pb moved grpc to a separate package common/pb/pbgrpc.
This updates this repository to use it.
Change-Id: I2de2a190688871cf9cb61f7ea511f8a01e264e4e
With commit 3331b443e7, the satellite will start calling `DeletePieces`.
Therefore, we can remove the old endpoint once that commit is deployed
to all satellites.
Change-Id: I0124bc00a7cb808d119eb59f8fcd7fadf68158bb
Currently, storage nodes only report their capacity to satellites
once per hour. If a node fills up, it will fail all uploads until
the next contact cycle begins. With these changes, at the end of an
upload we check whether the MinimumDiskSpace threshold has been
passed. If so, trigger the monitor chore to update the node's
capacity, then trigger the contact chore to report the new
capacity to the satellites
Change-Id: Ie6aadaade1e2c12c87e03f8ff9059a50121380a0
this commit updates our monkit dependency to the v3 version where
it outputs in an influx style. this makes discovery much easier
as many tools are built to look at it this way.
graphite and rothko will suffer some due to no longer being a tree
based on dots. hopefully time will exist to update rothko to
index based on the new metric format.
it adds an influx output for the statreceiver so that we can
write to influxdb v1 or v2 directly.
Change-Id: Iae9f9494a6d29cfbd1f932a5e71a891b490415ff
This test checks that we are actually walking over the pieces when
starting the cache, and that it is returning expected values.
A recent outage was partially caused by the fact that this cache was
accidentally reading itself (via the pieces store, which has the cache
embedded). This test ensures that does not happen, and checks that when
the cache's `Run` method is called, the space used values are read from
disk and accurately update the cache.
Change-Id: I9ec61c4299ed06c90f79b17de3ffdbbb06bc502e
As a workaround it was set to 0 in the previous release. Now, according to the TOC, it must be set to 500GB.
Change-Id: Ia2743d49e86683396958aff51b95df743af4f872
http.FileServer relies on mime types defined in the operating system.
These values may be misconfigured, so a JavaScript file might
end up being served as "text/plain".
Change-Id: I3c13c8a9ac484bd765a4de0f8253bfe40dde7513
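One way around OS-level mime configuration is to register the needed types explicitly before serving (a sketch using the standard library):

    import "mime"

    // registerMimeTypes pins the content types the web UI depends on so
    // a misconfigured OS mime table cannot break script loading.
    func registerMimeTypes() error {
        return mime.AddExtensionType(".js", "application/javascript")
    }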
it was noticed that if you had a long lived transaction A that
was blocking some other transaction B and A was being aborted
due to retriable errors, then transaction B was never given
priority. this was due to using savepoints to do lightweight
retries.
this behavior was problematic because we had some queries blocked
for over 16 hours, so this commit addresses the issue with two
prongs:
1. bound the amount of time we will retry a transaction
2. create new transactions when a retry is needed
the first ensures that we never wait for 16 hours, and the value
chosen is 10 minutes. that should be long enough for an ample
amount of retries for small queries, and huge queries probably
shouldn't be retried, even if possible: it's preferable to
find a way to make them smaller.
the second ensures that even in the case of retries, queries that
are blocked on the aborted transaction gain priority to run.
between those two changes, the maximum stall time due to retries
should be bounded to around 10 minutes.
Change-Id: Icf898501ef505a89738820a3fae2580988f9f5f4
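A sketch of both prongs combined, with hypothetical names (isRetriable stands in for the driver-specific serialization-failure check):

    import (
        "context"
        "database/sql"
        "time"
    )

    // isRetriable is a stand-in for the driver-specific check for
    // serialization failures.
    func isRetriable(err error) bool { return false }

    // withRetry opens a fresh transaction per attempt, so queries blocked
    // on an aborted transaction gain priority, and gives up once the
    // retry budget is exhausted.
    func withRetry(ctx context.Context, db *sql.DB, fn func(*sql.Tx) error) error {
        deadline := time.Now().Add(10 * time.Minute)
        for {
            tx, err := db.BeginTx(ctx, nil)
            if err != nil {
                return err
            }
            if err = fn(tx); err == nil {
                return tx.Commit()
            }
            _ = tx.Rollback()
            if !isRetriable(err) || time.Now().After(deadline) {
                return err
            }
        }
    }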
instead of aborting on the first error, so that we can hit all
satellites and get the best numbers we can
Change-Id: I21d5163884940612d7d39eaf73a6fac07235cd9e
We added a bug in v0.31.7, and deploying it would kick out all the
storage nodes that are full. The easy fix is setting the requirement to 0.
That will allow them to still start up even if they are full.
Change-Id: Ie66f369952d929fcfd47f44f6e5e57eea8f51ff6
By default our server address listens on all IP addresses on the machine.
This caused our preflight check to fail, as it did not have a hostname to look up.
With this change, we accept that configuration and go ahead.
Change-Id: I9eb5c891c099eb35f679d6d7e79ec38bb43b619f
1 transfer with a minimum speed of 128 bytes/sec was a nice try, but it is
way too low. Even a pi3 was able to handle 7 grpc transfers. We have 4
satellites, and with 5 concurrent transfers each that should be a total of 20
concurrent transfers. Each transfer will have a minimum speed of 5KB/s.
That should give us better throughput and still be OK on a pi3.
Change-Id: I650a7baf890080901ef70ea3b5636d93009b4e60
With the v0.30.5 release we asked the storage node operators to manually
enable the preflight check while they are in front of their machine. We
didn't want to risk taking too many storage nodes offline at the same
time because of some unknown bug. The preflight check worked. We have no
negative feedback. We can now enable it by default.
Change-Id: Ic670ee52becd0b35eca84af7a0841ea983d7b19d
clock sync
Change 24h and 1h to 30m and 10m respectively for clock sync. If a
storagenode's clock is off by more than 30m for every trusted satellite,
it will not start. If it is off by more than 10m for any trusted
satellite, a warning is displayed.
Change-Id: I05ef611a30a49c1783e3b68b513745922c2f7e28
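A sketch of the per-satellite classification (hypothetical names; aggregation across satellites decides whether to refuse startup):

    import "time"

    // classifyDrift reports whether the measured clock drift against one
    // satellite is fatal (over 30m) or merely warning-worthy (over 10m).
    func classifyDrift(drift time.Duration) (fatal, warn bool) {
        if drift < 0 {
            drift = -drift
        }
        return drift > 30*time.Minute, drift > 10*time.Minute
    }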
this is to help protect against intentional or unintentional
slowloris style problems where a client keeps a tcp connection
alive but never sends any data. because grpc is great, we have
to spawn a separate goroutine for every read/write to the stream
so that we can return from the server handler to cancel it if
necessary. yep. really.
additionally, we update the rpcstatus package to do some stack
trace capture and add a Wrap method for the times where we want
to just use the existing error.
also fixes a number of TODOs where we attach status codes to the
returned errors in the endpoints.
Change-Id: Id8bb8ff84aa34e0f711b0cf9bce3908b36a1d3c1
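The goroutine-per-read pattern the commit describes, sketched with hypothetical types:

    import "context"

    // recvWithContext runs a blocking Recv in its own goroutine so the
    // handler can still return when the context is canceled, e.g. by a
    // slowloris client that keeps the connection open but never sends.
    func recvWithContext(ctx context.Context, recv func() ([]byte, error)) ([]byte, error) {
        type result struct {
            data []byte
            err  error
        }
        ch := make(chan result, 1)
        go func() {
            data, err := recv()
            ch <- result{data, err}
        }()
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        case r := <-ch:
            return r.data, r.err
        }
    }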
A few variables were not renamed to the new standard piecesTotal and
piecesContentSize, so it was unclear which value was being used. These
have been updated, and some comments made more thorough.
Change-Id: I363bad4dec2a8e5c54d22c3c4cd85fc3d2b3096c
This change updates the storagenode piecestore apis to expose access to
the full piece size stored on disk. Previously we only had access to
(and only kept a cache of) the content size used for all pieces. This
was inaccurate when reporting the amount of disk space used by nodes.
We now have access to the total content size, as well as the total disk
usage, of all pieces. The pieces cache also keeps a cache of the total
piece size along with the content size.
Change-Id: I4fffe7e1257e04c46021a2e37c5adc6fe69bee55
With this change the RS configuration will be set on the satellite. Uplink
will get the RS values with the BeginObject request and will use them. For
backward compatibility, and to avoid a super large change, the redundancy
scheme stored with the bucket is not touched. This can be done in the future.
Change-Id: Ia5f76fc10c37e2c44e4f7b8754f28eafe1f97eff
Replace all the remaining uses of sql.DB with tagsql.DB to
fix issues with context cancellation.
Introduce tagsql.Open which helps to get rid of all tagsql.Wrap-s.
Use tagsql in cockroachkv and postgreskv.
Change-Id: I8946d203341cb85a25976896fc7881e1f704e779
A migration step was closing a database that was used by
the migration itself, while there was an active transaction
over the database.
Instead of closing within the same transaction, we can wait
until restart for the database cleanup.
Change-Id: Ic971d8cea81a3ab783f4a1bdc6357009c8b31386
Also added temporary types withRebind and withTagTx,
which will be later removed. Currently they help to avoid
changing the whole codebase at the same time.
Change-Id: I7f07ba8f4709a23a463bfa67464628665a05808f
storagenode database preflight check.
Disable preflight database check by default, and have the option to
enable it. This will allow us to enable it once it is definitely
working.
Also change the name of the config flag for preflight time sync.
Change-Id: Ie2e20f9e25dcb38794eafa7e1505e7c6ff287c99
On pieces usage cache init we now load the trash info from the db. Also
fixes a test that was masking the failure here.
Change-Id: I9ff7da5bc6c0f74cf0942e20931b40e0c88d70fa
for storagenode
Ensure that database schema matches latest test migration schema before
allowing the node to start up.
Ensure minimal read/write functionality for each storagenode database
before allowing the node to start up.
This will eliminate many unhandled audit errors we are seeing.
Change-Id: Ic0e628b04a9c35b7a8243f6a81d4683918170ba9