This ensures that rows are closed to avoid leaks.
It also verifies that Err() is called, so that no error
is left unchecked.
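A minimal sketch of the pattern being enforced, assuming a plain
*sql.DB handle; the query and table name are placeholders and errs
is github.com/zeebo/errs:

    import (
        "context"
        "database/sql"

        "github.com/zeebo/errs"
    )

    func listIDs(ctx context.Context, db *sql.DB) (ids []string, err error) {
        rows, err := db.QueryContext(ctx, `SELECT id FROM example`)
        if err != nil {
            return nil, err
        }
        // always close the rows so the underlying connection is released
        defer func() { err = errs.Combine(err, rows.Close()) }()

        for rows.Next() {
            var id string
            if err := rows.Scan(&id); err != nil {
                return nil, err
            }
            ids = append(ids, id)
        }
        // Err reports any error that terminated iteration early
        return ids, rows.Err()
    }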
Change-Id: Idd1bec9bf479f40021da67b2c80ce83033149469
It looks like GetAll and DeleteMultiple are only used in tests for now,
but they didn't have handling for retry errors returned from cockroach.
If they're used in production in the future, they will now retry.
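Roughly the kind of handling meant here; the helper names are
illustrative, and it assumes lib/pq reports cockroach retry errors as
SQLSTATE 40001:

    import (
        "context"
        "errors"

        "github.com/lib/pq"
    )

    // needsRetry reports whether err is a cockroach retry error
    // (SQLSTATE 40001, serialization failure).
    func needsRetry(err error) bool {
        var pqErr *pq.Error
        return errors.As(err, &pqErr) && pqErr.Code == "40001"
    }

    // withRetry reruns fn until it stops failing with a retry error.
    func withRetry(ctx context.Context, fn func(context.Context) error) error {
        for {
            err := fn(ctx)
            if err == nil || !needsRetry(err) {
                return err
            }
            if ctx.Err() != nil {
                return ctx.Err()
            }
        }
    }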
Change-Id: I0f281454ddebf282789142ff1d66a69bda5727c9
This changeset replaces https://review.dev.storj.io/c/storj/storj/+/1839
which did the same thing, but Nat couldn't figure out how to fix the
conflicting files the correct Gerrit way.
Change-Id: If05a8902aca986ea9f6c9168a90b31beebab839a
Currently Cockroach isn't performant for concurrent database setup and
tear-down. Instead of a single instance, allow setting multiple
potential connection strings and let each test pick one of them at
random.
This improves test duration by ~10 minutes.
Since we are significantly changing how pgtest works anyway, introduce
the helpers PickPostgres and PickCockroach for selecting the database,
to reduce code duplication in multiple places.
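A hypothetical sketch of the random pick, assuming the potential
connection strings arrive as one semicolon-separated value (the
separator and names are assumptions):

    import (
        "math/rand"
        "strings"
    )

    // pickRandom spreads concurrent tests across the configured
    // instances by choosing one of the connection strings at random.
    func pickRandom(connStrings string) string {
        candidates := strings.Split(connStrings, ";")
        return candidates[rand.Intn(len(candidates))]
    }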
Change-Id: I8ad171d5c4c8a4fc081ec2ae9bdd0cc948a80619
In cases like the segment reaper script connecting to the metainfodb,
we don't want a db migration to happen automatically when we call
metainfo.NewStore. This adds a MigrateToLatest method to postgreskv
and cockroachkv, and calls MigrateToLatest in the places where NewStore
used to create tables.
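Roughly what a caller looks like now; imports are omitted and the
signatures, types, and migrate flag are assumptions, not the real code:

    func openPointerDB(
        ctx context.Context, log *zap.Logger, databaseURL string, migrate bool,
    ) (_ metainfo.PointerDB, err error) {
        // opening the store no longer touches the schema
        store, err := metainfo.NewStore(log, databaseURL)
        if err != nil {
            return nil, err
        }
        // only callers that own the schema migrate; the segment reaper,
        // for example, skips this step
        if migrate {
            if err := store.MigrateToLatest(ctx); err != nil {
                return nil, errs.Combine(err, store.Close())
            }
        }
        return store, nil
    }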
Change-Id: I682d0f26d609af0601dfdb32a24866cdf5d32a7e
storagenodes have 10 or more databases. without this tag their
metrics all get sent as the same value, stomping on each
other.
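roughly what the tag looks like at a call site; the metric name and
value are illustrative and monkit/v3's NewSeriesTag is assumed:

    import "github.com/spacemonkeygo/monkit/v3"

    var mon = monkit.Package()

    // reportDBSize tags the series with the database name so values
    // from different databases stay separate.
    func reportDBSize(dbName string, sizeBytes int64) {
        mon.IntVal("db_size_bytes",
            monkit.NewSeriesTag("db_name", dbName)).Observe(sizeBytes)
    }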
Change-Id: Ib12019684d6ea8f2a5b83df584056dfa79e3c4b3
this commit updates our monkit dependency to the v3 version where
it outputs in an influx style. this makes discovery much easier
as many tools are built to look at it this way.
graphite and rothko will suffer some since the metrics are no longer
a tree based on dots. hopefully time will exist to update rothko to
index based on the new metric format.
it adds an influx output for the statreceiver so that we can
write to influxdb v1 or v2 directly.
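for reference, influx line protocol looks roughly like this (a
generic example of the format, not monkit's exact output):

    measurement,tag_key=tag_value,other_tag=value field=1.23,count=42i 1580000000000000000

tools can then filter and group on the tags instead of parsing a
dot-separated name.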
Change-Id: Iae9f9494a6d29cfbd1f932a5e71a891b490415ff
DeleteMultiple will allow metainfo to delete multiple segments
and get the old pointers in a single request.
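A hedged sketch of the call shape; the storage package types and the
exact signature are assumptions:

    import (
        "context"

        "storj.io/storj/storage"
    )

    // deleteSegments removes the given keys in a single request and
    // returns the old values to the caller.
    func deleteSegments(
        ctx context.Context, db storage.KeyValueStore, keys []storage.Key,
    ) (storage.Items, error) {
        return db.DeleteMultiple(ctx, keys)
    }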
Change-Id: Ic144f30c5453274fa2b80df2895f123f5a9cc48b
Currently, storage tests are tied to the default lookup limit.
Increasing that limit makes the tests take longer and sometimes
causes a large number of goroutines to be started.
This change adds a configurable lookup limit to all storage backends.
Also remove boltdb.NewShared, since it's not used any more.
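A sketch of the shape of the change; the names here are assumptions,
not the actual API:

    // each client carries its own limit instead of relying on a
    // package-level default
    type Client struct {
        lookupLimit int
    }

    // SetLookupLimit lets tests lower the limit rather than raising
    // the default for everyone.
    func (c *Client) SetLookupLimit(v int) { c.lookupLimit = v }

    // LookupLimit reports the maximum number of items a single lookup
    // will return.
    func (c *Client) LookupLimit() int { return c.lookupLimit }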
Change-Id: I1a052f149da471246fac5745da133c3cfc27582e
Replace all the remaining uses of sql.DB with tagsql.DB to
fix issues with context cancellation.
Introduce tagsql.Open, which helps get rid of all the tagsql.Wrap calls.
Use tagsql in cockroachkv and postgreskv.
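Roughly the before/after at a call site; imports are omitted and the
exact tagsql.Open signature used here is an assumption:

    func ping(ctx context.Context, connstr string) (err error) {
        // before: sdb, err := sql.Open("postgres", connstr)
        //         db := tagsql.Wrap(sdb)
        // after: open a tagsql.DB directly
        db, err := tagsql.Open("postgres", connstr)
        if err != nil {
            return err
        }
        defer func() { err = errs.Combine(err, db.Close()) }()

        // every query method takes a context, which is what makes
        // cancellation actually propagate
        _, err = db.ExecContext(ctx, `SELECT 1`)
        return err
    }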
Change-Id: I8946d203341cb85a25976896fc7881e1f704e779
* Plumbs the limit through all backends ensuring they don't do
unnecessary work.
* Don't arbitrarily limit at the backend with hardcoded defaults. The
limit will be set by the caller.
Prior to this change, the recursive listing code in some backends
would fetch 10k results from the database and then return only the
first 1k (throwing out 9k of them).
Prior to this change some backends had no limit at all (e.g. redis).
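A sketch of what the plumbing looks like in a SQL-backed store;
imports are omitted and the function is illustrative, though the
table and column names mirror postgreskv:

    func (c *Client) listKeys(
        ctx context.Context, first storage.Key, limit int,
    ) (keys storage.Keys, err error) {
        // the caller's limit goes straight into the query; the backend
        // no longer fetches 10k rows just to hand back 1k
        rows, err := c.db.QueryContext(ctx,
            `SELECT fullpath FROM pathdata
             WHERE fullpath >= $1 ORDER BY fullpath LIMIT $2`,
            []byte(first), limit)
        if err != nil {
            return nil, err
        }
        defer func() { err = errs.Combine(err, rows.Err(), rows.Close()) }()

        for rows.Next() {
            var key []byte
            if err := rows.Scan(&key); err != nil {
                return nil, err
            }
            keys = append(keys, storage.Key(key))
        }
        return keys, nil
    }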
Change-Id: I1f327eefe095776d123dd11362cd00994c22efdf
This reverts commit 8e242cd012.
Revert because lib/pq has known issues with context cancellation.
These issues need to be resolved before these changes can be merged.
Change-Id: I160af51dbc2d67c5449aafa406a403e5367bb555
this will allow for some nice runtime analysis down the road.
also, this allows for wrapping database handles in a way that
can interact with these contexts.
requires https://review.dev.storj.io/c/storj/dbx/+/514
Change-Id: Ib087b7cd73296dd2c1e0331314da34d861f61d2b
CockroachDB collects query metrics and separates them by application
name, and we were not setting the correct application name for the
cockroachkv client. This PR calls our existing function that appends
it to the connection string.
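The idea, in an illustrative helper (the existing function in the
codebase may differ):

    import "net/url"

    // withApplicationName appends application_name to a connection
    // string if it is not already set, so cockroach attributes query
    // metrics to the right client.
    func withApplicationName(connstr, app string) (string, error) {
        u, err := url.Parse(connstr)
        if err != nil {
            return "", err
        }
        q := u.Query()
        if q.Get("application_name") == "" {
            q.Set("application_name", app)
            u.RawQuery = q.Encode()
        }
        return u.String(), nil
    }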
Change-Id: I4a97ed248c31f8b187c680d84b45472f0d50fd7e
the relatively small batch size of 128 was chosen so that if we have
a set of keys like
a/1
a/2
...
a/100000
b
c
list operations would not have to walk 100k keys inside of a/ before
skipping to b. unfortunately, iteration is also used by the metainfo
loop. in that case, it's doing a recursive listing, and so there's
no need to skip large prefixes. thus, we can use a bigger batch
size when recursive listing is requested.
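a sketch of the choice; the recursive batch size value is
illustrative, only 128 comes from this message:

    const (
        defaultBatchSize   = 128   // keeps prefix-skipping listings cheap
        recursiveBatchSize = 10000 // illustrative value for recursive walks
    )

    // batchSizeFor picks a bigger batch when the caller will walk every
    // key anyway, and the small one when large prefixes may be skipped.
    func batchSizeFor(recursive bool) int {
        if recursive {
            return recursiveBatchSize
        }
        return defaultBatchSize
    }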
Change-Id: I87cf1ba385b6eb2928c5b7cc5e0f7a8c7bd126d9
this will allow us to inspect the type of `db.Driver()` on *sql.DB
connections to correctly differentiate between pg and crdb conns.
as a bonus, this moves all concerns about when to replace "cockroach://"
with "postgres://" out of view, letting the thin shim "driver" take care
of that.
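a sketch of the inspection this enables; the concrete driver type and
import path are assumptions:

    import (
        "database/sql"

        "storj.io/storj/private/dbutil/cockroachutil" // assumed path
    )

    // isCockroach reports whether the handle was opened through the
    // cockroach shim driver rather than plain postgres.
    func isCockroach(db *sql.DB) bool {
        _, ok := db.Driver().(*cockroachutil.Driver)
        return ok
    }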
Change-Id: Ib24103ab7c508231e681f89a7321b623e4e125e9
for storj-sim to work, we need to avoid schemas in cockroach urls,
so we have storj-sim create namespaced databases instead of schemas,
and we have the migrate command create the database in the same way
that it would create a schema for postgres. then it works!
a follow up commit will move the creation of the database/schemas
into storj-sim's setup step so that we can avoid doing these icky
creations during normal migration calls. it will also make the
pointerdb have an explicit call to migrate instead of just doing
it every time it's opened.
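a sketch of the migrate-side creation described above; the statement
details and helper names are assumptions:

    import (
        "context"
        "database/sql"

        "github.com/lib/pq"
    )

    // createNamespace creates the place migrations will run in: a
    // database for cockroach (no schemas in the url), a schema for
    // postgres.
    func createNamespace(
        ctx context.Context, db *sql.DB, implementation, name string,
    ) error {
        query := `CREATE SCHEMA IF NOT EXISTS ` + pq.QuoteIdentifier(name)
        if implementation == "cockroach" {
            query = `CREATE DATABASE IF NOT EXISTS ` + pq.QuoteIdentifier(name)
        }
        _, err := db.ExecContext(ctx, query)
        return err
    }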
Change-Id: If69ef5cb96b6866b0438c761bd445afb3597ae5f