This runs each benchmark for one iteration to ensure that it is
valid. Unfortunately, it does not produce any useful metrics as output.
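For reference, one standard way to run every benchmark exactly once
with the Go tool (Go 1.12+, where -benchtime accepts an iteration
count) is:

    go test -run=^$ -bench=. -benchtime=1x ./...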
Change-Id: I68940398c8dd849aed656bd12656f48d5df10128
Previously, the deleter would close its done channel only once, so if
additional tests shared a storagenode, even when not running in
parallel, the later waits would not work properly. This change fixes
that problem.
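One way to fix this, sketched below with illustrative names (not the
actual implementation), is to recreate the done channel for every busy
period instead of closing a single channel once:

    package pieces

    import "sync"

    // deleter tracks in-flight deletes; done is recreated whenever work
    // starts after an idle period, so each test's wait observes the
    // completion of its own deletes.
    type deleter struct {
        mu       sync.Mutex
        inFlight int
        done     chan struct{}
    }

    func (d *deleter) start() {
        d.mu.Lock()
        defer d.mu.Unlock()
        if d.inFlight == 0 {
            d.done = make(chan struct{}) // fresh channel per busy period
        }
        d.inFlight++
    }

    func (d *deleter) finish() {
        d.mu.Lock()
        defer d.mu.Unlock()
        d.inFlight--
        if d.inFlight == 0 {
            close(d.done) // closed at most once per busy period
        }
    }

    // wait blocks until the current busy period, if any, completes.
    func (d *deleter) wait() {
        d.mu.Lock()
        done := d.done
        d.mu.Unlock()
        if done != nil {
            <-done
        }
    }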
Change-Id: I7dcacf6699cef7c2c2948ba0f4369ef520601bf5
When running testplanet tests, mark each storagenode peer's
PieceDeleter as being in testing mode, so that individual tests don't
have to do it themselves.
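Conceptually, testplanet's setup now does the equivalent of the
following once per planet (field and method names are assumptions, not
necessarily the real API):

    // After all storagenode peers are created:
    for _, node := range planet.StorageNodes {
        node.Storage2.PieceDeleter.SetupTest() // assumed test-mode hook
    }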
Change-Id: I2592e02c63f8bcc9152ecf436bac4e798b08bccf
There was a race in the test code for the piece deleter which made it
possible to broadcast on the condition variable before anyone was
waiting. This change fixes that and makes Wait take a context, so that
it times out when the context does.
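A minimal sketch of a race-free, context-aware wait (illustrative
names, with a channel-based signal standing in for the actual condition
variable): the pending-work counter is checked under the mutex before
blocking, so a completion signal cannot be missed.

    package pieces

    import (
        "context"
        "sync"
    )

    type Deleter struct {
        mu       sync.Mutex
        inFlight int           // maintained by the (elided) enqueue/finish paths
        idle     chan struct{} // closed when inFlight drops to zero
    }

    // Wait blocks until all queued deletes finish or ctx expires.
    func (d *Deleter) Wait(ctx context.Context) error {
        d.mu.Lock()
        if d.inFlight == 0 {
            d.mu.Unlock()
            return nil // nothing pending, so there is no signal to miss
        }
        idle := d.idle
        d.mu.Unlock()

        select {
        case <-idle:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }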
Change-Id: Ia4f77a7b7d2287d5ab1d7ba541caeb1ba036dba3
To improve delete performance, we want to process deletes
asynchronously once the message has been received from the satellite.
This change makes storagenodes send delete requests to a piece Deleter,
which processes a "best-effort" delete asynchronously and returns a
success message to the satellite.
The maximum number of delete workers and the maximum delete queue size
are both configurable.
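A condensed sketch of the shape of this component (made-up names and
string IDs; the real Deleter also handles shutdown, logging, and
metrics):

    package pieces

    import "context"

    // DeleteRequest identifies a piece to delete; string stand-ins are
    // used here instead of the real ID types.
    type DeleteRequest struct {
        SatelliteID string
        PieceID     string
    }

    type Deleter struct {
        ch chan DeleteRequest
    }

    // NewDeleter creates a deleter with a bounded queue and starts the
    // configured number of workers.
    func NewDeleter(ctx context.Context, workers, queueSize int) *Deleter {
        d := &Deleter{ch: make(chan DeleteRequest, queueSize)}
        for i := 0; i < workers; i++ {
            go d.work(ctx)
        }
        return d
    }

    // Enqueue never blocks the satellite's RPC: if the queue is full,
    // the delete is dropped (best-effort semantics).
    func (d *Deleter) Enqueue(req DeleteRequest) bool {
        select {
        case d.ch <- req:
            return true
        default:
            return false
        }
    }

    func (d *Deleter) work(ctx context.Context) {
        for {
            select {
            case req := <-d.ch:
                _ = deletePiece(ctx, req) // errors are logged, not returned
            case <-ctx.Done():
                return
            }
        }
    }

    // deletePiece stands in for the actual piece store deletion.
    func deletePiece(ctx context.Context, req DeleteRequest) error { return nil }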
Change-Id: I016b68031f9065a9b09224f161b6783e18cf21e5
Previously it was possible that PopAll would return 1010 items, then
make one RPC call with 1000 items, then another RPC call with the
remaining 10 items, while 500 new items had meanwhile been added to
the queue.
This change ensures that we pull items from the queue early and try
to make RPC batches as large as possible.
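Schematically (Queue, PopUpTo, and sendDeleteRequest are illustrative
stand-ins, and locking is elided), each batch is now popped immediately
before its RPC, so items enqueued while earlier batches were in flight
can still fill later ones:

    package piecedeletion

    import "context"

    type Item struct{}

    type Queue struct{ items []Item }

    // PopUpTo removes and returns at most n items from the queue.
    func (q *Queue) PopUpTo(n int) []Item {
        if n > len(q.items) {
            n = len(q.items)
        }
        out := q.items[:n]
        q.items = q.items[n:]
        return out
    }

    func sendDeleteRequest(ctx context.Context, batch []Item) error { return nil }

    // processQueue pops one batch per RPC instead of draining everything
    // up front and splitting it, keeping every batch as full as possible.
    func processQueue(ctx context.Context, q *Queue, batchSize int) error {
        for {
            batch := q.PopUpTo(batchSize)
            if len(batch) == 0 {
                return nil
            }
            if err := sendDeleteRequest(ctx, batch); err != nil {
                return err
            }
        }
    }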
Change-Id: I1a30dde9164c2ff7b90c906a9544593c4f1cf0e9
During testing it's possible to get into a scenario where all nodes
are offline and the list of requests is empty.
Change-Id: I271c0ca2c72009244df13e8bc1441fcd5f3da9e0
Instead of providing the database to testplanet from the outside,
create it inside and then allow wrapping and modifying it. This is
more convenient to use.
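The general shape of the change, as a generic sketch (names are
illustrative rather than the testplanet API): the constructor owns
database creation and applies an optional caller-supplied wrapper.

    package testplanet

    type DB interface{ Close() error }

    // Config lets a test wrap or modify the database that the planet
    // creates, instead of the test constructing the database itself.
    type Config struct {
        WrapDB func(DB) (DB, error)
    }

    func newPlanet(cfg Config) (DB, error) {
        db, err := openDefaultDB() // the database is created inside now
        if err != nil {
            return nil, err
        }
        if cfg.WrapDB != nil {
            db, err = cfg.WrapDB(db) // test-specific wrapping hook
            if err != nil {
                return nil, err
            }
        }
        return db, nil
    }

    // openDefaultDB stands in for the real database construction.
    func openDefaultDB() (DB, error) { return nil, nil }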
Change-Id: I9b8f69e6e0a19ff984b4e2bfe927c9100c77bc6c
This adds a piece deletion handler that debounces failed dialing and
batches multiple jobs into a single request.
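One plausible reading of the dialing debounce, as a sketch with
illustrative names: remember the last failed dial per node and fail
jobs fast instead of redialing within a cooldown window.

    package piecedeletion

    import (
        "sync"
        "time"
    )

    // dialDebounce fails jobs fast for nodes whose most recent dial
    // attempt failed, instead of redialing for every incoming job.
    type dialDebounce struct {
        mu       sync.Mutex
        cooldown time.Duration
        failedAt map[string]time.Time // node ID -> time of last failed dial
    }

    // ShouldDial reports whether the cooldown since the last failed dial
    // to this node has expired.
    func (d *dialDebounce) ShouldDial(nodeID string) bool {
        d.mu.Lock()
        defer d.mu.Unlock()
        last, ok := d.failedAt[nodeID]
        return !ok || time.Since(last) >= d.cooldown
    }

    // MarkFailed records a failed dial so that jobs arriving within the
    // cooldown are rejected immediately.
    func (d *dialDebounce) MarkFailed(nodeID string) {
        d.mu.Lock()
        defer d.mu.Unlock()
        if d.failedAt == nil {
            d.failedAt = make(map[string]time.Time)
        }
        d.failedAt[nodeID] = time.Now()
    }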
Change-Id: If64021bebb2faae7f3e6bdcceef705aed41e7d7b
To handle concurrent deletion requests we need to combine them into a
single request.
To implement this we introduce a few concurrency primitives (a sketch
follows the list):
* Combiner, which takes a node ID and a Job and handles combining
  multiple requests into a single batch.
* Job, which represents the deletion of multiple piece IDs, with a
  notification mechanism for the caller.
* Queue, which provides communication from the Combiner to the Handler.
  It can limit the number of requests per work queue.
* Handler, which takes an active Queue and processes it until it has
  consumed all the jobs.
  It can limit the concurrency of handling.
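Here is the promised sketch of how these pieces can fit together
(heavily simplified: string IDs stand in for the real types, and the
queue-size and concurrency limits are elided):

    package piecedeletion

    import "sync"

    // Job represents deleting multiple piece IDs, with a notification
    // callback for the caller.
    type Job struct {
        PieceIDs []string
        Notify   func(handled int)
    }

    // Queue carries jobs from the Combiner to a Handler.
    type Queue struct {
        jobs []Job
    }

    // Combiner groups jobs by node so that concurrent requests for the
    // same node are combined into a single batch.
    type Combiner struct {
        mu     sync.Mutex
        queues map[string]*Queue // node ID -> currently active queue
    }

    func (c *Combiner) Enqueue(nodeID string, job Job) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.queues == nil {
            c.queues = make(map[string]*Queue)
        }
        q, active := c.queues[nodeID]
        if !active {
            q = &Queue{}
            c.queues[nodeID] = q
            go c.handle(nodeID, q) // one Handler per active queue
        }
        q.jobs = append(q.jobs, job)
    }

    // popAll atomically takes everything queued so far; when nothing is
    // left, it deregisters the queue so the handler can exit safely.
    func (c *Combiner) popAll(nodeID string, q *Queue) []Job {
        c.mu.Lock()
        defer c.mu.Unlock()
        jobs := q.jobs
        q.jobs = nil
        if len(jobs) == 0 {
            delete(c.queues, nodeID)
        }
        return jobs
    }

    // handle processes an active Queue until it has consumed all the
    // jobs, combining each drained set into one batched request.
    func (c *Combiner) handle(nodeID string, q *Queue) {
        for {
            jobs := c.popAll(nodeID, q)
            if len(jobs) == 0 {
                return
            }
            var batch []string
            for _, job := range jobs {
                batch = append(batch, job.PieceIDs...)
            }
            sendBatchedDelete(nodeID, batch) // one RPC for the combined jobs
            for _, job := range jobs {
                job.Notify(len(job.PieceIDs)) // notify each caller
            }
        }
    }

    // sendBatchedDelete stands in for the actual delete RPC.
    func sendBatchedDelete(nodeID string, pieceIDs []string) {}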
Change-Id: I3299325534abad4bae66969ffa16c6ed95d5574f