As part of fixing the IO priority of filewalker-related
processes, such as garbage collection and the used-space
calculation, this change allows the initial used-space
calculation to run as a separate subprocess with lower
IO priority.
This can be enabled with the `--storage2.enable-lazy-filewalker`
config item. It falls back to the old behaviour when the
subprocess fails.
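For illustration only, enabling it would presumably look like this in the node's config.yaml (assuming the usual flag-to-config mapping; the flag name itself comes from this change):
```
# run the initial used-space calculation in a low-IO-priority subprocess
storage2.enable-lazy-filewalker: true
```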
Updates https://github.com/storj/storj/issues/5349
Change-Id: Ia6ee98ce912de3e89fc5ca670cf4a30be73b36a6
We automatically start a chore to check whether the blobstore is
writeable and readable, but we don't want tests to fail for that
reason; usually the test is exercising some other failure.
There probably is a nicer way to achieve this, but this is the
easier fix.
Change-Id: I77ada75329f88d3ea52edd2022e811e337c5255a
this change makes it so that the storage node no longer
cares whether the certs of peers it talks to have been signed
by the sno registration server. this is fine because
the only reason a storage node would talk to any peer
besides the explicitly configured satellites is that
a satellite told it to.
we have already disabled this on uplinks (uplinks don't
care about the peer ca whitelist), and we are starting
to consider disabling this on satellites entirely.
however, before we really can disable it on satellites,
we need to disable it on storage nodes so that graceful
exit and node to node transfers can work correctly.
Change-Id: I2e0a0781bd247e574b82f0065aafb88804e59c71
The blobstore implementation is entirely related to storagenode, so the
rightful place is together with the storagenode implementation.
Fixes https://github.com/storj/storj/issues/5754
Change-Id: Ie6637b0262cf37af6c3e558556c7604d9dc3613d
storj/storj uses storj/uplink, and storj/uplink uses storj/storj (for integration tests).
Unless storj/storj reads the real defaults (instead of hard-coded ones), we can't change them: changing a default in uplink would break when storj/storj is used for the integration tests, because storj/storj would still be carrying the old hard-coded values.
Change-Id: Ifa68567dc2d5c8d08af8041ac338870c4fc26d45
This is not recommended for most nodes; leaving your node running when
it can't handle requests fast enough is a good way to fail audits and
get disqualified, which may happen before you even know about the
problem.
But some Windows users are finding that this is being triggered
regularly on their nodes, and that it apparently causes the whole system
to lock up occasionally. We are adding this option as a way to mitigate
that problem until we can collect more information.
Change-Id: I7a652b0f9f970bbb9ed9f2cb3ad1cb89d90db8d7
FileWalker implements methods to walk over pieces in
a storage directory.
This is just a refactor to separate filewalker functions
from pieces.Store. This is needed to simplify the work
to create a separate filewalker subprocess and reduce the
number of config flags passed to the subprocess.
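A minimal sketch of the shape of the refactor; the interface and names below are illustrative stand-ins, not the actual storagenode types:
```
package pieces

import "context"

// Blobs is a stand-in for the blob store access the walker needs.
type Blobs interface {
	WalkNamespace(ctx context.Context, namespace []byte, walkFunc func(blobRef) error) error
}

// blobRef is a stand-in for the per-piece handle the walk yields.
type blobRef struct {
	namespace []byte
	key       []byte
}

// FileWalker carries only what walking needs, instead of everything
// pieces.Store carries, so a subprocess can construct it with far
// fewer config flags.
type FileWalker struct {
	blobs Blobs
}

func NewFileWalker(blobs Blobs) *FileWalker {
	return &FileWalker{blobs: blobs}
}

// WalkSatellitePieces visits every piece stored under a satellite's
// namespace.
func (fw *FileWalker) WalkSatellitePieces(ctx context.Context, namespace []byte, fn func(blobRef) error) error {
	return fw.blobs.WalkNamespace(ctx, namespace, fn)
}
```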
You might want to check https://review.dev.storj.io/c/storj/storj/+/9773
Change-Id: I4e9567024e54fc7c0bb21a7c27182ef745839fff
Download is served from two goroutines:
* one waits for the orders (and updates the actual limit)
* the other sends the valuable bytes back to the client (when the current order is big enough)
These two tasks are synchronized with the help of a `sync2.NewThrottle()`.
But all of this happens in the same method, therefore we have no idea how much time is spent waiting for the next order
(the throttle can block until we receive a new order limit), and how much time is spent on actual work.
This patch moves the actual work (done after the sending routine is woken up) to a separate method, to get better visibility and to measure the actual work (read data + send it).
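A rough, self-contained sketch of the two-goroutine pattern described above, with a plain channel standing in for the sync2.Throttle; all names are illustrative, not the actual storagenode code:
```
package piecestore

func serveDownload(orders <-chan int64, readChunk func(int64) []byte, send func([]byte) error) error {
	allowed := make(chan int64)

	// goroutine 1: waits for new orders and raises the allowed amount
	// (roughly what the throttle's produce side does in the real code)
	go func() {
		defer close(allowed)
		for amount := range orders {
			allowed <- amount
		}
	}()

	// goroutine 2 (this one): blocks until more bytes are allowed, then
	// calls the separated method that does the actual work, so waiting
	// time and working time can be measured independently.
	var sendErr error
	for amount := range allowed {
		if sendErr != nil {
			continue // drain remaining signals after a failure
		}
		sendErr = sendChunk(readChunk, send, amount)
	}
	return sendErr
}

// sendChunk is the separated "actual work" (read data + send it) that
// this patch moves into its own method for better visibility.
func sendChunk(readChunk func(int64) []byte, send func([]byte) error, amount int64) error {
	return send(readChunk(amount))
}
```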
Change-Id: Ia5068c544560a53bc2fcea6cb6fce85cfacbd95b
if a drpc.ClosedError was returned, it would always take the
first (failure) branch, despite the second branch's existence.
Change-Id: Ife3b27869c4e9d37ca2914e2d1d1a2c60d326309
to support TCP_FAST_OPEN, we're considering just using
two TCP connections in parallel per request, one with
and one without. this allows us to safely fire both
concurrently without stressing out the node too much.
see https://review.dev.storj.io/c/storj/storj/+/9933
Change-Id: I9aa8a0252350db5ace04ee125bfe469203e980ec
Storagenode download metrics are not accurate:
* the current code bumps the cancel metrics only for specific error messages, but there are cases where the error has already been handled (err == nil)
* instead of the full size of the piece, we need to use the size of the downloaded bytes
Change-Id: I6ca75770e2d40bf514f5e273785c78e02968c919
we may in the future want to accept writes and commits as part of the
initial request message, just like
https://review.dev.storj.io/c/storj/storj/+/9245
this change is forward compatible but continues to work with
existing clients.
Change-Id: Ifd3ac8606d498a43bb35d0a3751859656e1e8995
this change uses the new storj/common noise helpers, which:
* add a security fix (require an expected node id for validating
noise key attestations)
* stops doing an unnecessary order signature validation (it's
already been done inside of PutPiece)
* removes some duplicate code
Change-Id: I5e67a08ff216cd9c5b0b82e40b4d9de664b6b0fc
The used space graph values are correct when a single satellite is
selected but wrong for 'All satellites'. This is related to the
queries for getting the individual disk usages for all satellites
per day and the summary and average for all satellites per day:
1. Dividing the sum of at_rest_total by the total_hours is wrong.
Simply put, with at_rest_total values of 4 and 6 over 2 and 3
respective hours, we were assuming that (4/2)+(6/3) equals
(4+6)/(2+3), but the former is 4 while the latter is 2.
2. To get the average, we first need to sum the
at_rest_total_bytes for each timestamp across all satellites,
and then average those sums, instead of averaging the
individual satellite values (see the sketch below).
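A toy sketch of the corrected aggregation; the types and names here are illustrative, not the actual query or dashboard code:
```
package console

// usageRow is a toy record: byte-hours accumulated over some hours,
// for one satellite at one timestamp.
type usageRow struct {
	timestamp   string
	atRestTotal float64 // byte-hours
	hours       float64
}

// dailyUsage converts each satellite's row to bytes first
// (atRestTotal/hours), then sums per timestamp across satellites,
// and only then averages the per-day sums.
func dailyUsage(rows []usageRow) (perDay map[string]float64, average float64) {
	perDay = make(map[string]float64)
	for _, r := range rows {
		perDay[r.timestamp] += r.atRestTotal / r.hours // divide first, then sum
	}
	for _, v := range perDay {
		average += v
	}
	if len(perDay) > 0 {
		average /= float64(len(perDay))
	}
	return perDay, average
}
```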
Closes https://github.com/storj/storj/issues/5519
Change-Id: Ib1314e238b695a6c1ecd9f9171ee86dd56bb3b24
uplinks currently get the node's certificate chain over TLS. once Noise
is in use, uplinks will no longer be able to do this. we should start
having the upload request return the certificate chain in the same
release that starts supporting noise.
Change-Id: I619b23cb8e25691bcc62d760f884403a4ccd64a0
A user on the forum was seeing the error "bad message", which was not
very helpful. This comes from the ext4 filesystem using the code EBADMSG
to indicate it detected an invalid CRC, suggesting disk corruption.
This change adds some explanatory information about probable disk
corruption to all errors coming from the (*blobInfo).Stat() call, which
is where storagenode fs corruption problems will usually manifest.
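A sketch of the kind of annotation added; the helper and the exact wording are illustrative, not the actual error classes used:
```
package filestore

import "fmt"

// statWithHint wraps a Stat-style lookup so that any failure carries a
// hint about probable disk corruption.
func statWithHint(stat func() (int64, error)) (int64, error) {
	size, err := stat()
	if err != nil {
		return 0, fmt.Errorf("file info lookup failed, possibly due to disk corruption "+
			"(check the disk and the filesystem): %w", err)
	}
	return size, nil
}
```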
Refs: https://github.com/storj/storj/issues/5375
Change-Id: I87f4a800236050415c4191ef1a0fc952f9def315
Calling stat() (really, lstat()) on every file during a directory walk
is the step that takes up the most time. Furthermore, not all uses of
the directory walk _need_ a stat done on every file. Therefore, in this
commit we avoid doing the stat at the lowest level of
walkNamespaceInPath. The stat will still be done when it is requested,
with the Stat() method on the blobInfo object.
The major upside of this is that we can avoid the stat call on most
files during a Retain operation. This should speed up garbage collection
considerably.
The major downside is that walkNamespaceInPath will no longer
automatically skip over directories that are named like blob files, or
blob files which are deleted between readdir() and stat(). Callers to
walkNamespaceInPath and its variants (WalkNamespace,
WalkSatellitePieces, etc) are now expected to handle these cases
individually.
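A sketch of the new calling pattern; the types and helper names are illustrative stand-ins:
```
package pieces

import (
	"context"
	"os"
	"time"
)

// WalkedBlob is a stand-in for what the walk now yields: no lstat has
// happened yet; Stat is available on demand.
type WalkedBlob interface {
	Stat(ctx context.Context) (os.FileInfo, error)
}

// retainVisitor sketches the caller side after this change: retained
// pieces (the vast majority during garbage collection) need no stat at
// all, and callers now handle files that vanished between readdir()
// and stat(), or directory names that merely look like blob files.
func retainVisitor(ctx context.Context, keep func(WalkedBlob) bool, trash func(WalkedBlob, time.Time) error) func(WalkedBlob) error {
	return func(blob WalkedBlob) error {
		if keep(blob) {
			return nil // no stat for retained pieces: this is the speedup
		}
		info, err := blob.Stat(ctx)
		if err != nil {
			return nil // vanished or not a real blob file: skip it here
		}
		return trash(blob, info.ModTime())
	}
}
```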
Thanks to forum member Toyoo for the insight that this would speed up
garbage collection.
Refs: https://github.com/storj/storj/issues/5454
Change-Id: I72930573d58928fa25057ed89cd4ec474b884199
- Adds "Remote Address" field to all INFO logs related to GET,
PUT, and DELETE requests
- Adds Offset and Size fields to all info logs related to GET
requests
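A sketch of the resulting log call; the field names come from this change, while the message and call site are illustrative:
```
package piecestore

import "go.uber.org/zap"

func logDownload(log *zap.Logger, remoteAddr string, offset, size int64) {
	log.Info("download started",
		zap.String("Remote Address", remoteAddr), // now on GET/PUT/DELETE logs
		zap.Int64("Offset", offset),              // now on GET logs
		zap.Int64("Size", size),                  // now on GET logs
	)
}
```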
Resolves https://github.com/storj/storj/issues/5404
Change-Id: I5dab1867619385362e5f1e0455dfab17d295a37a
we may in the future want to accept orders as part of the initial
request message
(e.g. https://review.dev.storj.io/c/storj/uplink/+/9246).
this change is forward compatible but continues to work with
existing clients.
Change-Id: I475ad50d6cbfee8a1f843383230698e4ef9b9e54
This ensures that empty trash and restore trash cannot run at the same
time.
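A minimal sketch of the serialization, assuming a shared mutex on the store; the names are illustrative:
```
package pieces

import (
	"context"
	"sync"
)

// TrashStore sketches the mutual exclusion: one mutex guards both
// operations, so EmptyTrash and RestoreTrash can never overlap.
type TrashStore struct {
	mu sync.Mutex
}

func (s *TrashStore) EmptyTrash(ctx context.Context) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	// ... delete trashed pieces older than the cutoff ...
	return nil
}

func (s *TrashStore) RestoreTrash(ctx context.Context) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	// ... move trashed pieces back into the blob store ...
	return nil
}
```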
Fixes https://github.com/storj/storj/issues/5416
Change-Id: I9d2e3aa3d66e61e5c8a7427a95208bb96089792d
* normalize ExistsCheckWorkers flag to avoid setting incorrect values
* wait for limiter to finish if context was canceled
Change-Id: I688a395bd958cd09233605fc264d43f91ec45ed1
Adds a new method Exists which can be used to verify which of the
requested piece IDs exist on the storage node. It will verify only
pieces which belong to the satellite that calls the endpoint.
The minimum WASM size was increased a bit.
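A self-contained sketch of the core check, with illustrative types; whether the response reports missing indices is an assumption, not taken from this message:
```
package piecestore

// missingPieces consults only the calling satellite's namespace and
// reports which of the requested piece IDs were not found.
func missingPieces(requested [][]byte, satelliteNamespace []byte, hasPiece func(namespace, pieceID []byte) bool) []uint32 {
	var missing []uint32
	for i, id := range requested {
		if !hasPiece(satelliteNamespace, id) {
			missing = append(missing, uint32(i))
		}
	}
	return missing
}
```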
https://github.com/storj/storj/issues/5415
Change-Id: Ia5f9cadeb526541b2776a8973eb7d50133ad8636
Starting restore trash in the background allows the satellite to
continue to the next storagenode without needing to wait until
completion.
Of course, this means the satellite doesn't get feedback on whether it
succeeded or not. This means that the restore-trash needs to be
executed several times.
Change-Id: I62d43f6f2e4a07854f6d083a65badf897338083b
Previously, because of the use of a LAG to calculate the hour_interval,
the first record (usually from the first day of the month) had no
previous record, and we always assumed its at_rest_total was for
24 hours.
Resolves https://github.com/storj/storj/issues/5390
Change-Id: Id532f8b38fe9df61432e62655318ff119a733d13
The query changes we did while fixing the usage graph led to wrong
payout calculations directly linked to disk space.
This change:
- avoids converting from Bh to B directly in the query
- returns at_rest_total in the original byte-hour (Bh) value
- returns at_rest_total_bytes as the calculated disk space used in bytes
- uses at_rest_total_bytes only for the disk space graph
- returns summary_bytes as the average disk space used within the specified dates
- updates the disk space graph header to "average disk space used this month"
The total disk space used in the month is also displayed in B, not B*day.
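For reference, a toy sketch of the Bh-to-B relationship the list above relies on; the function name is illustrative:
```
package console

// atRestBytes shows the unit conversion: at_rest_total is in
// byte-hours (Bh), so e.g. 1000 bytes held for 12 hours accumulate
// 12000 Bh, and 12000 Bh / 12 h = 1000 B.
func atRestBytes(atRestTotalBh, hours float64) float64 {
	return atRestTotalBh / hours
}
```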
Resolves https://github.com/storj/storj/issues/5355
Change-Id: I2cfefb0fe711f9c59de2adb547c4ab50b05c7cbb
Running all of the migrations necessary to initialize a storage node
database takes a significant amount of time during test runs.
The package currently supports initializing a database from manually
coalesced migration data (i.e. a snapshot), which improves the
situation somewhat.
This change takes things a bit further by changing the snapshot code to
instead hydrate the database directory from a pre-generated snapshot zip
file.
name                                old time/op  new time/op  delta
Run_StorageNodeCount_4/Postgres-16   2.50s ± 0%   0.16s ± 0%  ~ (p=1.000 n=1+1)
Change-Id: I213bbba5f9199497fbe8ce889b627e853f8b29a0
The current implementation blocks startup until one or none
of the trusted satellites is able to reach the node via QUIC.
This can delay startup. Also, the QUIC check is done only once,
during startup; if a misconfiguration happens later,
SNOs would have to restart the node.
In this change, we reuse the contact service, which pings the satellites
periodically for node check-in. During check-in, the satellite tries
pinging the node back via both TCP and QUIC and reports both statuses.
With this, we are able to get a periodic update of the QUIC status
without restarting the node.
Also adds the time the node was last pinged via QUIC to the tooltip
on the QUIC status tab.
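A sketch of the node-side bookkeeping, with assumed names; the actual check-in messages and fields are not taken from this message:
```
package contact

import "time"

// quicStats records what the satellite's check-in response reports:
// whether its QUIC ping-back succeeded, plus the timestamp shown in
// the dashboard tooltip.
type quicStats struct {
	reachable  bool
	lastPinged time.Time
}

func (q *quicStats) applyCheckIn(pingBackQUICSucceeded bool, now time.Time) {
	q.reachable = pingBackQUICSucceeded
	q.lastPinged = now
}
```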
Resolves https://github.com/storj/storj/issues/4398
Change-Id: I18aa2a8e8d44e8187f8f2eb51f398fa6073882a4
Implement a new service to read retain filters from a bucket and
send them out to storagenodes.
This allows the retain filters to be generated by a separate command on
a backup of the database.
Parallelism (setting ConcurrentSends) and end-to-end garbage collection
tests will be restored in a subsequent commit.
Solves https://github.com/storj/team-metainfo/issues/121
Change-Id: Iaf8a33fbf6987676cc3cf74a18a8078916fe673d
The collector tries deleting a piece over and over again, though
the piece does not exist on the storagenode's filesystem.
We need to delete the piece info from the expired db if the
targeted file does not exist.
This does not resolve the base problem of why the file
is deleted before the collector tries deleting it.
This change deletes the piece info from the expired db
if the file does not exist, since we're already trying
to delete that piece anyway.
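A sketch of the resulting collector behavior; the names are illustrative, not the actual collector code:
```
package collector

import (
	"context"
	"os"

	"github.com/zeebo/errs"
)

// deleteExpired drops the expiration record too when the blob is
// already gone, so the collector stops retrying the same piece forever.
func deleteExpired(ctx context.Context, deletePiece, dropExpiration func(context.Context) error) error {
	err := deletePiece(ctx)
	if err != nil && errs.Is(err, os.ErrNotExist) {
		return dropExpiration(ctx)
	}
	return err
}
```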
Closes https://github.com/storj/storj/issues/4192
Change-Id: If659185ca14f1cb29fd3c4237374df6fcd535df8
The collector calls the Delete() method on the pieces, which returns
an error wrapped by many error classes. The Delete() method uses
Stat() from
1aec831d98/storage/filestore/dir.go (L328)
under the hood.
os.IsNotExist(errors.Unwrap(err)) will always be false unless
errors.Unwrap(err) is called multiple times until it gets down to
the core os.ErrNotExist. Here is a test case to explain better:
```
func TestABC(t *testing.T) {
	classA := errs.Class("A")
	classB := errs.Class("B")
	wrappedError := classB.Wrap(classA.Wrap(os.ErrNotExist))
	// errs.Unwrap unwraps all the way down to os.ErrNotExist:
	require.True(t, os.IsNotExist(errs.Unwrap(wrappedError)))
	// the stdlib errors.Unwrap removes only one layer, so this is false:
	require.False(t, os.IsNotExist(errors.Unwrap(wrappedError)))
}
```
Using errs.Is() resolves this even without unwrapping the error:
```
func TestABC(t *testing.T) {
	classA := errs.Class("A")
	classB := errs.Class("B")
	wrappedError := classB.Wrap(classA.Wrap(os.ErrNotExist))
	require.True(t, errs.Is(wrappedError, os.ErrNotExist))
	require.False(t, errs.Is(wrappedError, os.ErrExist))
	require.False(t, os.IsNotExist(wrappedError))
}
```
This does not resolve the collector issue here, but improves it:
https://github.com/storj/storj/issues/4192
Change-Id: Ifb75dd15b54c1e1a5e23f6eba2d621d64874a5cc
Today each storagenode has a port which is open to the internet and handles DRPC protocol calls.
When we make an HTTP call on the DRPC endpoint, it hangs until a timeout.
This patch changes the behavior: the main DRPC port of the storagenode can also accept HTTP requests, and can be used to monitor the status of the node:
* it returns HTTP 200 only if the storagenode is healthy (not suspended / disqualified + online score > 0.9)
* it CAN include information about the current status (per satellite). This is opt-in; you have to configure it.
In this way it becomes extremely easy to monitor storagenodes with external uptime services.
Note: this patch exposes some information which was not easily available before (especially the node status and the used satellites). I think this should be acceptable:
* Until there are more community satellites, all storagenodes are connected to the main Storj satellites anyway.
* With community satellites, it's a good thing to have more transparency (an easy way to check who is connected to which satellites).
The implementation is based on this line:
```
http.Serve(NewPrefixedListener([]byte("GET / HT"), publicMux.Route("GET / HT")), p.public.http)
```
This line answers TCP requests which start with `GET / HT` (an HTTP GET request to the root), putting the removed prefix back before handing the connection to the HTTP server.
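With that, any plain HTTP client can probe the node; a hypothetical check (address and port illustrative):
```
# prints 200 when the node is healthy
curl -s -o /dev/null -w "%{http_code}\n" http://mynode.example:28967/
```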
Change-Id: I3700c7e24524850825ecdf75a4bcc3b4afcb3a74