# segment-verify

segment-verify is a tool for verifying segments.
## High Level Overview
segment-verify verifies segment status on storage nodes in a few stages:
- First it loads the metabase for a batch of `--service.batch-size=10000` segments.
- The segments are then distributed into queues, one for every storage node. Nodes listed in the `--service.priority-nodes-path` file, one storage node ID per line, are chosen preferentially.
- Then it queries each storage node for a single byte of every segment, making `--service.concurrency=1000` concurrent connections at a time.
- Every segment is checked `--service.check=3` times; however, any failed attempt (e.g. the node is offline) is only retried once (see the sketch after this list).
- When there are failures in the verification process itself, those segments are written to the `--service.retry-path=segments-retry.csv` file.
- When a segment isn't found on at least one of the nodes, it is written to the `--service.not-found-path=segments-not-found.csv` file.
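The routing in the last three steps can be summarized with a minimal Go sketch. This is not the tool's actual implementation (that lives in service.go and verify.go); the names `checkSegment`, `probe`, and `probeResult` are hypothetical and exist only to illustrate how a segment can end up in the retry or not-found output.

```go
package main

import "fmt"

type probeResult int

const (
	probeOK       probeResult = iota // the node returned the requested byte
	probeNotFound                    // the node reports the piece as missing
	probeFailed                      // e.g. the node is offline or the dial failed
)

// checkSegment probes one segment on up to `check` nodes. A failed attempt is
// retried once; if the segment still cannot be verified `check` times it is
// routed to the retry output, and a missing piece to the not-found output.
// Hypothetical helper for illustration only.
func checkSegment(nodes []string, check int, probe func(node string) probeResult) (needsRetry, notFound bool) {
	verified := 0
	for _, node := range nodes {
		if verified >= check {
			break
		}
		res := probe(node)
		if res == probeFailed {
			// any failed attempt (e.g. node offline) is only retried once
			res = probe(node)
		}
		switch res {
		case probeOK:
			verified++
		case probeNotFound:
			notFound = true // ends up in --service.not-found-path
			verified++
		case probeFailed:
			needsRetry = true // ends up in --service.retry-path
		}
	}
	if verified < check {
		needsRetry = true // could not verify the segment the requested number of times
	}
	return needsRetry, notFound
}

func main() {
	probe := func(node string) probeResult {
		if node == "offline-node" {
			return probeFailed
		}
		return probeOK
	}
	retry, missing := checkSegment([]string{"node-a", "offline-node", "node-b", "node-c"}, 3, probe)
	fmt.Println("needs retry:", retry, "not found:", missing)
}
```

The real service additionally batches segments per node and respects the priority-nodes ordering; the sketch only captures the per-segment decision.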
There are a few parameters for controlling the verification itself:

```
# This allows throttling requests, to avoid overloading the storage nodes.
--verify.request-throttle duration       minimum interval for sending out each request (default 150ms)

# When there's a failure to make a request, the process will retry after this duration.
--verify.order-retry-throttle duration   how much to wait before retrying order creation (default 50ms)

# This is the time each storage node has to respond to the request.
--verify.per-piece-timeout duration      duration to wait per piece download (default 800ms)

# Just the regular dialing timeout.
--verify.dial-timeout duration           how long to wait for a successful dial (default 2s)

# This allows specifying the minimum node version that has the Exists endpoint.
--verify.version-with-exists string      minimum storage node version with implemented Exists method (default "v1.69.2")
```
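For example, probing can be slowed down by combining these flags with one of the run commands described below (the values here are arbitrary and purely illustrative):

```
segment-verify run range --low 00 --high ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff \
    --config-dir ./satellite-config-dir \
    --verify.request-throttle 300ms \
    --verify.per-piece-timeout 1s
```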
## Running the tool
- by specifying range boundaries:

```
segment-verify run range --low 00 --high ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff --config-dir ./satellite-config-dir
```

- by specifying buckets to be checked:

```
segment-verify run buckets --buckets-csv bucket.csv
```
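The exact layout of `bucket.csv` is defined by the tool's CSV handling (see csv.go); as an assumption for illustration only, a file listing one project-id,bucket-name pair per line might look like this (both values are made up):

```
2b9cd99d-7fd8-4767-9bfe-4d40aaca9b41,my-bucket
2b9cd99d-7fd8-4767-9bfe-4d40aaca9b41,backups
```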