This change allows a node to look for a piece in the trash when
serving a download request.
If the piece is found in the trash, the node restores it to the blobs
directory and continues to serve the request as expected.
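A minimal sketch of that lookup order, using plain file operations and an illustrative directory layout rather than the real blobstore APIs:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// openPiece tries the blobs directory first and, if the piece is not
// found there, attempts to restore it from the trash before retrying.
func openPiece(ctx context.Context, blobsDir, trashDir, pieceID string) (io.ReadCloser, error) {
	blobPath := filepath.Join(blobsDir, pieceID)
	f, err := os.Open(blobPath)
	if err == nil {
		return f, nil
	}
	if !errors.Is(err, os.ErrNotExist) {
		return nil, err
	}
	// The piece is missing from blobs: try to restore it from the trash.
	if renameErr := os.Rename(filepath.Join(trashDir, pieceID), blobPath); renameErr != nil {
		// Nothing to restore: report the original "not found" error.
		return nil, fmt.Errorf("piece %s not found: %w", pieceID, err)
	}
	// Restored: serve the download from the blobs directory as usual.
	return os.Open(blobPath)
}

func main() {
	rc, err := openPiece(context.Background(), "storage/blobs", "storage/trash", "example-piece")
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	defer rc.Close()
}
```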
Resolves https://github.com/storj/storj/issues/6145
Change-Id: Ibfa3c0b4954875fa977bc995fc4dd2705ca3ce42
All the files in uploadselection are (in fact) related to generic node selection, and are used not only for upload
but also for download, repair, etc.
Change-Id: Ie4098318a6f8f0bbf672d432761e87047d3762ab
This adds tests to the zapwrapper package and also adds a test
to verify the issue in https://github.com/storj/storj/issues/6006
Change-Id: Iec3f568e72683af71e1718017109a1ed52794b0b
There are many cases where the keywords `free` and `available`
are confused in their usage.
In most cases, `free` space is the amount of free space left
on the whole disk, not just in the allocation, while
`available` space is the amount of free space left in the
allocated disk space.
What the user/SNO wants to see is not the free space but the
available space. To the SNO, "free space" means the free space
left in the allocated disk space.
Because of this confusion, the multinode dashboard displays
the `free` disk space instead of the free space in the
allocated disk space (https://github.com/storj/storj/issues/5248),
while the storagenode dashboard shows the correct free space
in the allocation.
This change fixes the wrong free disk space. I also added a
few comments to distinguish between the `free`
and `available` fields in the `DiskSpace*` structs.
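For illustration, a hedged sketch of the distinction (field names are illustrative, not the actual `DiskSpace*` structs):

```go
package main

import "fmt"

type DiskSpace struct {
	Allocated int64 // space the SNO allocated to the node
	Used      int64 // space used inside the allocation
	Free      int64 // free space left on the whole disk
}

// Available is the free space left inside the allocation, which is
// what the dashboards should display to the SNO.
func (d DiskSpace) Available() int64 {
	available := d.Allocated - d.Used
	// The allocation cannot offer more than the disk actually has left.
	if available > d.Free {
		available = d.Free
	}
	if available < 0 {
		available = 0
	}
	return available
}

func main() {
	d := DiskSpace{Allocated: 1000, Used: 600, Free: 5000}
	fmt.Println("free on disk:", d.Free)                    // 5000
	fmt.Println("available in allocation:", d.Available()) // 400
}
```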
Change-Id: I11b372ca53a5ac05dc3f79834c18f85ebec11855
We use two different Node types in `overlay` and `uploadnodeselection` and convert back and forth between them.
Using the same object would allow us to use a unified node selection interface everywhere.
Change-Id: Ie71e29d60184ee0e5b4547eb54325f09c418f73c
The test needs to wait for the upload information to be saved to the
database.
Fixes https://github.com/storj/storj/issues/6008
Change-Id: I1f258c923a4b33cbc571f97bad046cec70642a0b
Storagenodes are currently getting larger signed orders due to
a performance optimization in uplink. This messes with the
ingress graph because the storagenode plots the graph using
the order amount instead of the actually uploaded bytes, which
this change fixes.
The egress graph might have a similar issue if the order amount
is larger than the actually downloaded bytes, but since we pay
for orders, whether fulfilled or unfulfilled, we continue using
the order amount for the egress graph.
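A toy sketch of the plotting choice (names are illustrative, not the actual storagenode code):

```go
package main

import "fmt"

type transfer struct {
	orderAmount      int64 // signed order amount, may exceed what was transferred
	transferredBytes int64 // bytes actually uploaded or downloaded
}

// ingressGraphValue plots the real ingress, not the over-sized order.
func ingressGraphValue(t transfer) int64 { return t.transferredBytes }

// egressGraphValue keeps the order amount, since orders are paid
// whether fulfilled or unfulfilled.
func egressGraphValue(t transfer) int64 { return t.orderAmount }

func main() {
	t := transfer{orderAmount: 4 << 20, transferredBytes: 1 << 20}
	fmt.Println("ingress:", ingressGraphValue(t), "egress:", egressGraphValue(t))
}
```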
Resolves https://github.com/storj/storj/issues/5853
Change-Id: I2af7ee3ff249801ce07714bba055370ebd597c6e
* storagenode/orders/ordersfiles: unit test coverage
This change implements unit testing on common.go from the ordersfile package.
* storagenode/orders/ordersfiles: unit test coverage
This change uses the zeebo assert library instead of gotools so as not to introduce a new dependency.
Lazyfilewalker was failing with SIGPIPE, which was quite
misleading. The command was failing because the
value of the --lower-io-priority flag was assumed
to be an argument since it was passed as
"--lower-io-priority true" instead of "--lower-io-priority=true".
Resolves https://github.com/storj/storj/issues/5900
Change-Id: Icf79fcce76dafee21659d76ee0ce19d8520c8f1d
Instead of the hardcoded payout rates that are assumed for all satellites,
this change adds a new endpoint for fetching the pricing model of
each satellite.
The pricing model is then displayed in the Info & Estimation table
on the dashboard.
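A hypothetical sketch of keeping a pricing model per satellite instead of one hardcoded rate (names and numbers are illustrative, not the actual endpoint or prices):

```go
package main

import "fmt"

// PricingModel holds the payout rates a satellite reports; the fields
// and units here are illustrative.
type PricingModel struct {
	EgressBandwidth float64 // USD per TB
	DiskSpace       float64 // USD per TB-month
	AuditBandwidth  float64 // USD per TB
	RepairBandwidth float64 // USD per TB
}

// pricing maps satellite IDs to the model each satellite reported,
// replacing a single hardcoded model assumed for all of them.
var pricing = map[string]PricingModel{}

func pricingFor(satelliteID string) PricingModel {
	if model, ok := pricing[satelliteID]; ok {
		return model
	}
	// Fall back to a default when a satellite has not reported a model.
	return PricingModel{EgressBandwidth: 1, DiskSpace: 1, AuditBandwidth: 1, RepairBandwidth: 1}
}

func main() {
	pricing["sat-a"] = PricingModel{EgressBandwidth: 2, DiskSpace: 1.5, AuditBandwidth: 2, RepairBandwidth: 2}
	fmt.Printf("%+v\n", pricingFor("sat-a"))
	fmt.Printf("%+v\n", pricingFor("sat-b"))
}
```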
Updates https://github.com/storj/storj-private/issues/245
Change-Id: Iac7669e3e6eb690bbaad6e64bbbe42dfd775f078
This is particularly useful for monitoring the lazyfilewalker to
make sure it is not checking the wrong directory.
Updates https://github.com/storj/storj/issues/5349
Change-Id: I7e5fcfd4545ec4157d33a9225cd1bce607ccd154
The execwrapper package wraps exec.Cmd and provides a Command
interface that mimics the behaviour of exec.Cmd.
This is useful for testing the lazyfilewalker subprocesses
by stubbing them instead of spawning real subprocesses.
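An approximate shape of that abstraction (the real execwrapper package may differ in names and methods):

```go
package main

import (
	"fmt"
	"os/exec"
)

// Command mimics the parts of *exec.Cmd that the lazyfilewalker needs,
// so tests can substitute a stub instead of spawning a real process.
type Command interface {
	Start() error
	Wait() error
}

// realCmd is the production implementation backed by *exec.Cmd.
type realCmd struct{ *exec.Cmd }

// stubCmd is a test double that records calls and never forks.
type stubCmd struct{ started bool }

func (s *stubCmd) Start() error { s.started = true; return nil }
func (s *stubCmd) Wait() error  { return nil }

func run(cmd Command) error {
	if err := cmd.Start(); err != nil {
		return err
	}
	return cmd.Wait()
}

func main() {
	stub := &stubCmd{}
	fmt.Println(run(stub), stub.started) // <nil> true

	fmt.Println(run(&realCmd{exec.Command("true")}))
}
```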
Updates https://github.com/storj/storj/issues/5349
Change-Id: I14084139c76a531f2b6d7163f9aa35c3f5e192d7
We've had issues with forgetting to close readers and writers.
Add leak tracking to find those pesky issues.
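A minimal sketch of the idea (not the actual tracker used in the code base): wrap closers, remember where they were opened, and report any that were never closed.

```go
package main

import (
	"fmt"
	"io"
	"runtime"
	"strings"
	"sync"
)

type tracker struct {
	mu   sync.Mutex
	open map[*trackedCloser]string // closer -> stack where it was opened
}

type trackedCloser struct {
	io.Closer
	t *tracker
}

func (c *trackedCloser) Close() error {
	c.t.mu.Lock()
	delete(c.t.open, c)
	c.t.mu.Unlock()
	return c.Closer.Close()
}

// Track wraps a closer and records the stack of the caller that opened it.
func (t *tracker) Track(c io.Closer) io.Closer {
	buf := make([]byte, 4096)
	n := runtime.Stack(buf, false)
	tc := &trackedCloser{Closer: c, t: t}
	t.mu.Lock()
	t.open[tc] = string(buf[:n])
	t.mu.Unlock()
	return tc
}

// Leaks returns the opening stacks of every tracked closer that is
// still open; a test can fail if this is non-empty.
func (t *tracker) Leaks() []string {
	t.mu.Lock()
	defer t.mu.Unlock()
	var stacks []string
	for _, stack := range t.open {
		stacks = append(stacks, stack)
	}
	return stacks
}

func main() {
	t := &tracker{open: map[*trackedCloser]string{}}
	_ = t.Track(io.NopCloser(strings.NewReader("leaked")))
	fmt.Println("leaked closers:", len(t.Leaks()))
}
```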
Change-Id: If6b0ad6e9958318a7e0affee9c6d0a1ece412b6d
As part of fixing the IO priority of filewalker-related
processes, such as garbage collection and used-space
calculation, this change allows the initial used-space
calculation to run as a separate subprocess with lower
IO priority.
This can be enabled with the `--storage2.enable-lazy-filewalker`
config item. It falls back to the old behaviour when the
subprocess fails.
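A sketch (with an illustrative subcommand name) of the fallback behaviour: try the low-IO-priority subprocess first, and fall back to the in-process walk when it fails.

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
)

// lazyUsedSpace runs the used-space calculation in a subprocess with
// lower IO priority. The subcommand name is illustrative and parsing of
// the subprocess output is omitted.
func lazyUsedSpace(ctx context.Context) (int64, error) {
	cmd := exec.CommandContext(ctx, "storagenode", "used-space-filewalker", "--lower-io-priority=true")
	out, err := cmd.Output()
	if err != nil {
		return 0, err
	}
	_ = out // the real code decodes the totals reported by the subprocess
	return 0, nil
}

// inProcessUsedSpace is the old behaviour: walk the pieces in-process.
func inProcessUsedSpace(ctx context.Context) (int64, error) {
	return 42, nil
}

func usedSpace(ctx context.Context, lazyEnabled bool) (int64, error) {
	if lazyEnabled {
		if total, err := lazyUsedSpace(ctx); err == nil {
			return total, nil
		}
		fmt.Println("lazy filewalker failed, falling back to in-process walk")
	}
	return inProcessUsedSpace(ctx)
}

func main() {
	total, err := usedSpace(context.Background(), true)
	fmt.Println(total, err)
}
```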
Updates https://github.com/storj/storj/issues/5349
Change-Id: Ia6ee98ce912de3e89fc5ca670cf4a30be73b36a6
We automatically start a chore to check whether the blobstore is
writable and readable; however, we don't want to fail the tests for
that reason. Usually we want to test some other failure.
There probably should be a nicer way to achieve this, but this is an
easier fix.
Change-Id: I77ada75329f88d3ea52edd2022e811e337c5255a
this change makes it so that the storage node no longer
cares if the cert of peers it talks to has been signed
by the sno registration server. this is fine because
the only reason a storage node would talk to a peer
besides the explicitly configured satellites is because
a satellite told it to.
we have already disabled this on uplinks (uplinks don't
care about the peer ca whitelist), and we are starting
to consider disabling this on satellites entirely.
however, before we really can disable it on satellites,
we need to disable it on storage nodes so that graceful
exit and node to node transfers can work correctly.
Change-Id: I2e0a0781bd247e574b82f0065aafb88804e59c71
The blobstore implementation is entirely related to storagenode, so the
rightful place is together with the storagenode implementation.
Fixes https://github.com/storj/storj/issues/5754
Change-Id: Ie6637b0262cf37af6c3e558556c7604d9dc3613d
storj/storj uses storj/uplink, and storj/uplink uses storj/storj (for integration tests).
Without using the real defaults (instead of hard-coded ones) in storj/storj, we couldn't modify them: a modification in uplink would fail when storj/storj is used for the integration test with the unchanged, hard-coded defaults.
Change-Id: Ifa68567dc2d5c8d08af8041ac338870c4fc26d45
This is not recommended for most nodes; leaving your node running when
it can't handle requests fast enough is a good way to fail audits and
get disqualified, which may happen before you even know about the
problem.
But some Windows users are finding that this is being triggered
regularly on their nodes, and that it apparently causes the whole system
to lock up occasionally. We are adding this option as a way to mitigate
that problem until we can collect more information.
Change-Id: I7a652b0f9f970bbb9ed9f2cb3ad1cb89d90db8d7
FileWalker implements methods to walk over pieces in
a storage directory.
This is just a refactor to separate filewalker functions
from pieces.Store. This is needed to simplify the work
to create a separate filewalker subprocess and reduce the
number of config flags passed to the subprocess.
You might want to check https://review.dev.storj.io/c/storj/storj/+/9773
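A rough sketch of the separation this enables; the real FileWalker methods and types differ:

```go
package main

import (
	"context"
	"fmt"
)

type pieceInfo struct {
	ID   string
	Size int64
}

// FileWalker only knows how to enumerate pieces on disk; it carries no
// database or order handling, so a subprocess needs far fewer flags.
type FileWalker struct {
	dir string
}

// WalkSatellitePieces calls fn for every piece stored for a satellite.
func (w *FileWalker) WalkSatellitePieces(ctx context.Context, satellite string, fn func(pieceInfo) error) error {
	// Directory scanning is elided; a fixed list stands in for it here.
	for _, p := range []pieceInfo{{ID: "a", Size: 1024}, {ID: "b", Size: 2048}} {
		if err := fn(p); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	w := &FileWalker{dir: "storage/blobs"}
	var total int64
	_ = w.WalkSatellitePieces(context.Background(), "satellite-id", func(p pieceInfo) error {
		total += p.Size
		return nil
	})
	fmt.Println("used space:", total)
}
```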
Change-Id: I4e9567024e54fc7c0bb21a7c27182ef745839fff
Download is served from two goroutines:
* one waits for the orders (and updates the actual limit)
* the other sends the valuable bytes back to the client (in case the actual order is big enough)
These two tasks are synchronized with the help of a `sync2.NewThrottle()`.
But all of this happens in the same method, so we have no idea how much time is spent waiting for the next orders
(the throttle can wait until we receive a new order limit) and how much time is spent on actual work.
This patch moves the actual work (after the sending routine is woken up) to a separate method to have better visibility and measure the actual work (read data + send it).
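A simplified model of the two cooperating goroutines, using a plain channel instead of `sync2.Throttle`; the point is that the "actual work" now lives in its own function so its duration can be measured separately from the waiting:

```go
package main

import (
	"fmt"
	"time"
)

// sendAllowed stands in for the separated method: read the allowed bytes
// from the piece and send them to the client, so its duration can be
// measured on its own.
func sendAllowed(allowed int64) {
	time.Sleep(10 * time.Millisecond) // stands in for read + send
	fmt.Println("sent", allowed, "bytes")
}

func main() {
	allowance := make(chan int64) // plays the role of the throttle

	// Goroutine 1: waits for orders from the client and extends the limit.
	go func() {
		for _, order := range []int64{1024, 4096} {
			time.Sleep(20 * time.Millisecond) // waiting for the next order
			allowance <- order
		}
		close(allowance)
	}()

	// Goroutine 2: wakes up whenever the allowance grows and does the work,
	// now with waiting time and working time measured separately.
	for {
		waitStart := time.Now()
		allowed, ok := <-allowance // blocked while waiting for new order limits
		waited := time.Since(waitStart)
		if !ok {
			break
		}
		workStart := time.Now()
		sendAllowed(allowed)
		fmt.Println("waited", waited, "worked", time.Since(workStart))
	}
}
```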
Change-Id: Ia5068c544560a53bc2fcea6cb6fce85cfacbd95b
if a drpc.ClosedError was returned, it would always take the
first (failure) branch, despite the second branch's existence.
Change-Id: Ife3b27869c4e9d37ca2914e2d1d1a2c60d326309
to support TCP_FAST_OPEN, we're considering just using
two TCP connections in parallel per request, one with
and one without. this allows us to safely fire both
concurrently without stressing out the node too much.
see https://review.dev.storj.io/c/storj/storj/+/9933
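a generic sketch of racing the two dials (the fast-open socket option itself is omitted); this is the concurrency pattern, not the actual implementation:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// raceDial fires two dials concurrently and keeps whichever connects
// first; enabling TCP_FAST_OPEN on one of the sockets is omitted here.
func raceDial(ctx context.Context, addr string) (net.Conn, error) {
	type result struct {
		conn net.Conn
		err  error
	}
	results := make(chan result, 2)

	dial := func() {
		var d net.Dialer
		conn, err := d.DialContext(ctx, "tcp", addr)
		results <- result{conn, err}
	}
	go dial()
	go dial()

	first := <-results
	if first.err == nil {
		// Close the losing connection in the background so it doesn't leak.
		go func() {
			if other := <-results; other.conn != nil {
				_ = other.conn.Close()
			}
		}()
		return first.conn, nil
	}
	// The first attempt failed; fall back to whatever the second one got.
	second := <-results
	return second.conn, second.err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := raceDial(ctx, "example.com:80")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected via", conn.RemoteAddr())
}
```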
Change-Id: I9aa8a0252350db5ace04ee125bfe469203e980ec
Storagenode download metrics are not accurate:
* the current code bumps the cancel metrics only for specific error messages, but there are cases where the error is already handled (err == nil)
* instead of the full size of the piece, we need to use the size of the downloaded bytes
Change-Id: I6ca75770e2d40bf514f5e273785c78e02968c919
we may in the future want to accept writes and commits as part of the
initial request message, just like
https://review.dev.storj.io/c/storj/storj/+/9245
this change is forward compatible but continues to work with
existing clients.
Change-Id: Ifd3ac8606d498a43bb35d0a3751859656e1e8995
this change uses the new storj/common noise helpers, which:
* add a security fix (require an expected node id for validating
noise key attestations)
* stops doing an unnecessary order signature validation (it's
already been done inside of PutPiece)
* removes some duplicate code
Change-Id: I5e67a08ff216cd9c5b0b82e40b4d9de664b6b0fc
The used space graph values are correct when a single satellite is
selected but wrong for 'All satellites'. This is related to the
queries for getting the individual disk usages for all satellites
per day and the summary and average for all satellites per day:
1. Dividing the sum of at_rest_total by the total_hours is wrong.
Simply put, we were assuming that, for example, (4/2)+(6/3) equals
(4+6)/(2+3), assuming we had 4 and 6 at_rest_total values with
2 and 3 respective hours.
2. To get the average, we need to first find the sum of the
at_rest_total_bytes for each timestamp across all satellites
and then take the average of those sums, instead of just taking the
average of the individual satellite values (see the sketch below).
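A worked sketch of point 2, with illustrative data rather than the real database schema:

```go
package main

import "fmt"

type usage struct {
	satellite string
	timestamp string
	bytes     float64
}

func main() {
	rows := []usage{
		{"sat-a", "2023-01-01", 100},
		{"sat-b", "2023-01-01", 300},
		{"sat-a", "2023-01-02", 200},
		{"sat-b", "2023-01-02", 400},
	}

	// Wrong: average of the individual satellite values.
	var wrong float64
	for _, r := range rows {
		wrong += r.bytes
	}
	wrong /= float64(len(rows)) // 1000 / 4 = 250

	// Right: sum per timestamp across all satellites, then average the sums.
	perDay := map[string]float64{}
	for _, r := range rows {
		perDay[r.timestamp] += r.bytes
	}
	var right float64
	for _, total := range perDay {
		right += total
	}
	right /= float64(len(perDay)) // (400 + 600) / 2 = 500

	fmt.Println("wrong:", wrong, "right:", right)
}
```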
Closes https://github.com/storj/storj/issues/5519
Change-Id: Ib1314e238b695a6c1ecd9f9171ee86dd56bb3b24
uplinks currently get the node's certificate chain over TLS. once Noise
is in use, uplinks will no longer be able to do this. we should start
having the upload request return the certificate chain in the same
release that starts supporting noise.
Change-Id: I619b23cb8e25691bcc62d760f884403a4ccd64a0
A user on the forum was seeing the error "bad message", which was not
very helpful. This comes from the ext4 filesystem using the code EBADMSG
to indicate that it detected an invalid CRC, suggesting disk corruption.
This change adds some explanatory information about probable disk
corruption to all errors coming from the (*blobInfo).Stat() call, which
is where storagenode fs corruption problems will usually manifest.
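A hypothetical sketch of attaching a corruption hint when the filesystem reports EBADMSG; the real change wraps errors from (*blobInfo).Stat() itself:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

// statWithHint stats a blob file and, when the filesystem reports
// EBADMSG (e.g. ext4 detecting a bad CRC), adds an explanatory note.
func statWithHint(path string) (os.FileInfo, error) {
	info, err := os.Stat(path)
	if err != nil && errors.Is(err, syscall.EBADMSG) {
		return nil, fmt.Errorf("%w; this is likely due to disk corruption in the underlying filesystem", err)
	}
	return info, err
}

func main() {
	if _, err := statWithHint("storage/blobs/example"); err != nil {
		fmt.Println(err)
	}
}
```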
Refs: https://github.com/storj/storj/issues/5375
Change-Id: I87f4a800236050415c4191ef1a0fc952f9def315