The collector calls the Delete() method on the pieces, which returns
an error wrapped by several error classes.
The Delete() method uses Stat() from
1aec831d98/storage/filestore/dir.go (L328)
under the hood.
os.IsNotExist(errors.Unwrap(err)) will always be false unless
errors.Unwrap(err) is called repeatedly until it reaches the
underlying os.ErrNotExist, since errors.Unwrap removes only one
layer of wrapping at a time. Here is a test case to explain better:
func TestABC(t *testing.T) {
	classA := errs.Class("A")
	classB := errs.Class("B")
	wrappedError := classB.Wrap(classA.Wrap(os.ErrNotExist))
	// errs.Unwrap walks the whole chain down to os.ErrNotExist.
	require.True(t, os.IsNotExist(errs.Unwrap(wrappedError)))
	// errors.Unwrap removes only one layer, so this check fails.
	require.False(t, os.IsNotExist(errors.Unwrap(wrappedError)))
}
Using errs.Is() seems to resolve this even without unwrapping the error:
func TestABC(t *testing.T) {
	classA := errs.Class("A")
	classB := errs.Class("B")
	wrappedError := classB.Wrap(classA.Wrap(os.ErrNotExist))
	require.True(t, errs.Is(wrappedError, os.ErrNotExist))
	require.False(t, errs.Is(wrappedError, os.ErrExist))
	require.False(t, os.IsNotExist(wrappedError))
}
This does not resolve the collector issue below, but it improves the
error handling around it:
https://github.com/storj/storj/issues/4192
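As a hypothetical sketch of the kind of check this enables (the
helper name and callback are illustrative, not the actual collector
code):

package collector

import (
	"context"
	"os"

	"github.com/zeebo/errs"
)

// deleteIgnoringMissing treats a piece that is already gone as
// successfully deleted. errs.Is walks the entire wrap chain, so no
// manual errors.Unwrap loop is needed regardless of how many error
// classes wrapped the underlying os.ErrNotExist.
func deleteIgnoringMissing(ctx context.Context, del func(context.Context) error) error {
	if err := del(ctx); err != nil && !errs.Is(err, os.ErrNotExist) {
		return err
	}
	return nil
}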
Change-Id: Ifb75dd15b54c1e1a5e23f6eba2d621d64874a5cc
Part 2 of moving usedserials in memory
* Drop usedserials table in storagenodedb
* Use an in-memory usedserials store in place of the db for order
limit verification (a rough sketch of such a store follows below)
* Update order limit grace period to be only one hour - this means
uplinks must send their order limits to storagenodes within an hour of
receiving them
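A rough sketch of the shape of such an in-memory store; every name
here is illustrative, not the actual storagenode implementation:

package usedserials

import (
	"sync"
	"time"
)

// SerialNumber stands in for the real serial number type.
type SerialNumber [16]byte

// Table remembers serial numbers until they expire, so an order
// limit cannot be submitted twice within the grace period.
type Table struct {
	mu      sync.Mutex
	serials map[SerialNumber]time.Time // serial -> expiration
}

func NewTable() *Table {
	return &Table{serials: map[SerialNumber]time.Time{}}
}

// Add records a serial; it reports false if the serial was already
// used, which rejects the duplicate order limit.
func (t *Table) Add(serial SerialNumber, expiration time.Time) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, used := t.serials[serial]; used {
		return false
	}
	t.serials[serial] = expiration
	return true
}

// DeleteExpired drops serials whose expiration has passed; with a
// one-hour grace period this keeps memory use bounded.
func (t *Table) DeleteExpired(now time.Time) {
	t.mu.Lock()
	defer t.mu.Unlock()
	for serial, exp := range t.serials {
		if exp.Before(now) {
			delete(t.serials, serial)
		}
	}
}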
Change-Id: I37a0e1d2ca6cb80854a3ef495af2d1d1f92e9f03
this commit updates our monkit dependency to the v3 version, where
it outputs in an influx style. this makes discovery much easier,
as many tools are built to consume metrics in that format.
graphite and rothko will suffer somewhat, since the metrics are no
longer a dot-delimited tree. hopefully time will exist to update
rothko to index based on the new metric format.
it adds an influx output for the statreceiver so that we can
write to influxdb v1 or v2 directly.
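for reference, an illustrative comparison of the two shapes (made-up
values, not actual monkit output):

package main

import "fmt"

func main() {
	// old graphite/rothko style: a dot-delimited tree, discovered
	// by position in the path.
	fmt.Println("storagenode.pieces.upload_success 42 1578000000")
	// influx line protocol: measurement,tags fields timestamp,
	// discovered by tag rather than by path position.
	fmt.Println("upload_success,scope=storagenode.pieces value=42 1578000000000000000")
}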
Change-Id: Iae9f9494a6d29cfbd1f932a5e71a891b490415ff
With this change, the RS configuration will be set on the satellite.
The uplink will get the RS values with the BeginObject request and
will use them. For backward compatibility, and to avoid an overly
large change, the redundancy scheme stored with the bucket is not
touched. This can be done in the future.
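An illustrative sketch of the flow; field and message names here are
hypothetical, not the actual protobuf definitions:

package metainfo

// RedundancyScheme carries the satellite-configured RS values that
// the uplink should use instead of a local RS config.
type RedundancyScheme struct {
	RequiredShares int32 // minimum shares needed to reconstruct
	RepairShares   int32 // below this count, repair is triggered
	OptimalShares  int32 // target number of healthy shares
	TotalShares    int32 // shares uploaded initially
	ShareSize      int32
}

// BeginObjectResponse is returned by the satellite; the uplink reads
// the redundancy scheme from it when starting an upload.
type BeginObjectResponse struct {
	StreamID         []byte
	RedundancyScheme RedundancyScheme
}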
Change-Id: Ia5f76fc10c37e2c44e4f7b8754f28eafe1f97eff
Remove the direct dependency on uplink.RSConfig; this simplifies
moving the config file without introducing weird dependencies.
Change-Id: I7fd2a145401e0205d7047631df9d2810241efeec
This commit adds functionality to include the space used in the trash
directory when calculating available space on the node.
It also includes this trash value in the space used cache, with methods
to keep the cache up-to-date as files are trashed, restored, and
emptied.
As part of the commit, the RestoreTrash and EmptyTrash methods have
slightly changed signatures. RestoreTrash now also returns the keys that
were restored, while EmptyTrash also returns the total disk space
recovered. Each of these changes makes it possible to keep the cache
up-to-date and know how much space is being used/recovered.
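A simplified sketch of the changed surface; parameter names and
exact shapes are illustrative, not the real piece store interface:

package pieces

import (
	"context"
	"time"
)

// Blobs sketches the new signatures: RestoreTrash reports which keys
// it restored, and EmptyTrash reports how much disk space it
// recovered, so the space-used cache can be updated without
// re-scanning the directory.
type Blobs interface {
	RestoreTrash(ctx context.Context, namespace []byte) (keysRestored [][]byte, err error)
	EmptyTrash(ctx context.Context, namespace []byte, trashedBefore time.Time) (bytesEmptied int64, err error)
}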
Also changed is the signature of the PieceStoreAccess.ContentSize
method. Previously, this method returned only the content size of
the blob, excluding the size of any header data. It has been renamed
`Size` and returns both the full disk size and the content size of
the blob. This allows us to stat the file only once, and in some
instances (i.e. the cache) knowing the full file size is useful.
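A minimal sketch of the single-stat idea, assuming a fixed header
size purely for illustration:

package pieces

import "os"

// headerSize is a made-up constant for this sketch.
const headerSize = 512

// blobSizes stats the file exactly once and derives both values,
// so callers like the space-used cache avoid a second stat.
func blobSizes(path string) (fullSize, contentSize int64, err error) {
	info, err := os.Stat(path)
	if err != nil {
		return 0, 0, err
	}
	fullSize = info.Size()
	return fullSize, fullSize - headerSize, nil
}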
Note: This commit simply adds the trash size data to the piece size data
we were already collecting. The piece size data is not accurate for all
use-cases (e.g. because it does not contain piece header data); however,
this commit does not fix that problem. Now that the ContentSize (Size)
method returns the full size of the file, it should be easier to fix
this problem in a future commit.
Change-Id: I4a6cae09e262c8452a618116d1dc66b687f59f85
This PR introduces functionality for routine deletion of archived
orders. The user may specify an interval at which to run archive
cleanup and a TTL for archived items. During each cleanup, all items
that have reached the TTL are deleted.
This archive cleanup job is combined with the order sender into a
new combined orders service.
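A minimal sketch of such a cleanup loop, assuming a deleteBefore
callback in place of the real database call:

package orders

import (
	"context"
	"time"
)

// ArchiveCleanup deletes archived orders older than the TTL, once
// per interval, until the context is canceled.
func ArchiveCleanup(ctx context.Context, interval, ttl time.Duration, deleteBefore func(time.Time) error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case now := <-ticker.C:
			// items archived before now-ttl have reached their TTL
			if err := deleteBefore(now.Add(-ttl)); err != nil {
				return err
			}
		}
	}
}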
Deprecate the pieceinfo database, and start storing piece info as a
header to piece files. Institute a "storage format version" concept
allowing us to handle pieces stored under multiple different types
of storage. Add a piece_expirations table which will still be used
to track expiration times, so we can query it, but which should be
much smaller than the pieceinfo database would be for the same
number of pieces. (Only pieces with expiration times need to be
stored in piece_expirations, and we don't need to store large byte
blobs like the serialized order limit, etc.) Use specialized names
for accessing any functionality related only to dealing with V0
pieces (e.g., `store.V0PieceInfo()`). Move SpaceUsed-type
functionality under the purview of the piece store. Add some
generic interfaces for traversing all blobs or all pieces. Add lots
of tests.
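An illustrative sketch of the storage format version concept; the
types here are simplified, not the actual serialized header layout:

package pieces

import "time"

// FormatVersion identifies how a piece is stored, so V0 pieces
// (metadata in the pieceinfo db) and V1 pieces (metadata in a
// header inside the piece file) can be handled side by side.
type FormatVersion int

const (
	FormatV0 FormatVersion = iota // metadata lives in the pieceinfo db
	FormatV1                      // metadata lives in the piece file header
)

// PieceHeader is the per-piece metadata written at the start of a
// V1 piece file.
type PieceHeader struct {
	FormatVersion FormatVersion
	CreationTime  time.Time
	Signature     []byte // e.g. the order limit signature
}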