Commit Graph

17 Commits

Egon Elbre
1279eeae39 private/tagsql,storage: fixes to context cancellation
Replace all the remaining uses of sql.DB with tagsql.DB to
fix issues with context cancellation.

Introduce tagsql.Open, which helps to get rid of all the tagsql.Wrap calls.
Use tagsql in cockroachkv and postgreskv.

Change-Id: I8946d203341cb85a25976896fc7881e1f704e779
2020-01-20 15:44:39 +02:00
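For context, a minimal sketch of the pattern this commit describes: exposing only context-aware query methods so a cancelled context actually reaches the database driver. The wrapper below is illustrative only and assumes nothing about the real tagsql API beyond what the message above states; names such as ctxdb and its Open helper are made up.

```go
// Illustrative sketch only; the real tagsql package may differ.
package ctxdb

import (
	"context"
	"database/sql"
)

// DB exposes only context-aware methods, so callers cannot issue a query
// that ignores cancellation.
type DB struct {
	db *sql.DB
}

// Open opens the underlying database and wraps it in one step, in the same
// spirit as the tagsql.Open mentioned above (hypothetical signature).
func Open(driver, source string) (*DB, error) {
	db, err := sql.Open(driver, source)
	if err != nil {
		return nil, err
	}
	return &DB{db: db}, nil
}

// ExecContext forwards the caller's context to the driver.
func (d *DB) ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error) {
	return d.db.ExecContext(ctx, query, args...)
}

// QueryContext forwards the caller's context to the driver.
func (d *DB) QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error) {
	return d.db.QueryContext(ctx, query, args...)
}
```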
ccase
14b43b7e9b storage/postgreskv/schema/data.go: Regenerate migrations that failed to update.
Change-Id: I9fd5a9a5414214faea5f8c476778fccbe022cb6c
2020-01-19 11:22:00 +00:00
Stefan Benten
409d4123bb Add proper Pathdata Index (#3750) 2020-01-17 00:48:59 +01:00
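A rough idea of what such a migration step could look like, assuming the index covers (bucket, fullpath), the pair that the non-recursive listing fix further down this log relies on; the index name and DDL below are illustrative, not the actual contents of #3750.

```go
// Hypothetical migration step; not the actual schema change from #3750.
package schema

import (
	"context"
	"database/sql"
)

// addPathdataIndex creates a composite index so lookups and ordered scans
// keyed on (bucket, fullpath) can avoid sequential scans of pathdata.
func addPathdataIndex(ctx context.Context, db *sql.DB) error {
	_, err := db.ExecContext(ctx, `
		CREATE INDEX IF NOT EXISTS pathdata_bucket_fullpath_idx
			ON pathdata ( bucket, fullpath )
	`)
	return err
}
```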
JT Olio
86093d0940 postgreskv: drop not null on buckets
Change-Id: I2a2bd7709de211a9d1808248af573f1bb630cfd5
2020-01-14 12:07:53 -07:00
Egon Elbre
0835b9024c private/dbutil/pgutil: add ctx argument
Change-Id: Icfd56ca8c1f831ad56c0195a0b883e8f0618daaf
2020-01-13 15:27:06 +02:00
Matt Robinson
9af97d366a Make sed a little more cross platformable (#3629) 2019-11-22 11:17:02 -07:00
Egon Elbre
ee6c1cac8a private: rename internal to private (#3573) 2019-11-14 21:46:15 +02:00
Egon Elbre
1e64006e32 lint: add staticcheck as a separate step (#3569) 2019-11-14 10:31:30 +02:00
paul cannon
d5963d81d0 storage/postgreskv: fix ultra-slow nonrecursive list (#3564)
This is based on Jeff's most excellent work to identify why
non-recursive listing under postgreskv was phenomenally slow. It turns
out PostgreSQL's query planner was actually using two sequential scans
of the pathdata table to do its job. It's unclear for how long that has
been happening, but obviously it won't scale any further.

The main change is propagating bucket association with pathnames through
the CTE so that the query planner lets itself use the pathdata index on
(bucket, fullpath) for the skipping-forward part.

Jeff also had some changes to the range ends to keep NULL from being
used, I believe with the intent of making sure the query planner was
able to use the pathdata index. My tests on postgres 9.6 and 11
indicate that those changes don't make any appreciable difference in
performance or query plan, so I'm going to leave them off for now to
avoid a careful audit of the semantic differences.

There is a test included here, which only serves to check that the new
version of the function is indeed active. To actually ensure that no
sequential scans are being used in the query plan anymore, our tests
would need to be run against a test db with lots of data already loaded
in it, and that isn't feasible for now.

Change-Id: Iffe9a1f411c54a2f742a4abb8f2df0c64fd662cb
2019-11-13 17:52:14 -06:00
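To make the description above concrete, here is a rough sketch of the query shape involved, not the actual postgreskv CTE: the recursive "skip forward" step keeps the bucket in both its WHERE clause and its ORDER BY, so each step can be answered by a single probe of the (bucket, fullpath) index instead of a sequential scan. The table and column names follow the message above; everything else is illustrative.

```go
// Sketch only: the real listing query in storage/postgreskv is more involved
// (prefixes, delimiters, metadata), but the index-friendly skeleton is this.
package postgreskv

import (
	"context"
	"database/sql"
)

const listKeysSketch = `
WITH RECURSIVE list (bucket, fullpath) AS (
    -- anchor: the first path at or after the start key, within one bucket
    (SELECT bucket, fullpath
       FROM pathdata
      WHERE bucket = $1 AND fullpath >= $2
      ORDER BY bucket, fullpath
      LIMIT 1)
  UNION ALL
    -- step: carry bucket forward so the (bucket, fullpath) index applies
    SELECT list.bucket,
           (SELECT fullpath
              FROM pathdata
             WHERE bucket = list.bucket AND fullpath > list.fullpath
             ORDER BY bucket, fullpath
             LIMIT 1)
      FROM list
     WHERE list.fullpath IS NOT NULL
)
SELECT fullpath FROM list WHERE fullpath IS NOT NULL
`

// listKeys returns the keys of one bucket in order, starting at start.
func listKeys(ctx context.Context, db *sql.DB, bucket, start []byte) ([]string, error) {
	rows, err := db.QueryContext(ctx, listKeysSketch, bucket, start)
	if err != nil {
		return nil, err
	}
	defer func() { _ = rows.Close() }()

	var keys []string
	for rows.Next() {
		var k string
		if err := rows.Scan(&k); err != nil {
			return nil, err
		}
		keys = append(keys, k)
	}
	return keys, rows.Err()
}
```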
Jess G
8d92c288e2 satellitedb: separate migration into subcommand (#3436)
* separate sadb migration, add version check

* update checkversion to do same validation as migration

* changes per CR

* add sa migration to storj-sim

* add different debug port in storj-sim for migration

* add wait for exit for storj-sim migration

* update sa docker entrypoint to support migration

* storj-sim satellite parts all wait for migration

* upgrade golang-migrate/migrate to v4 because of a bug

* fix go mod tidy
2019-11-02 13:09:07 -07:00
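The bullets above split schema migration out of the satellite's normal startup. A loose sketch of that shape, with made-up names (runMigration, checkVersion) and no relation to the real satellitedb code: the run path only verifies the schema version and refuses to start if it is behind, while a dedicated subcommand applies migrations.

```go
// Hypothetical command wiring; the real satellite uses its own CLI framework
// and migration package.
package main

import (
	"context"
	"errors"
	"fmt"
	"os"
)

var errSchemaBehind = errors.New("database schema is behind; run the migration subcommand first")

// runMigration applies pending schema migrations (stubbed here).
func runMigration(ctx context.Context) error {
	fmt.Println("applying migrations...")
	return nil
}

// checkVersion performs the same validation as the migration step, but only
// verifies the current schema version instead of changing it.
func checkVersion(ctx context.Context) error {
	current, latest := schemaVersions(ctx)
	if current < latest {
		return errSchemaBehind
	}
	return nil
}

// schemaVersions is a stub; it would normally query the migrations table.
func schemaVersions(ctx context.Context) (current, latest int) {
	return 1, 1
}

func main() {
	ctx := context.Background()
	cmd := "run"
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}

	var err error
	switch cmd {
	case "migration":
		err = runMigration(ctx)
	case "run":
		// the service itself never migrates; it only checks and bails out
		if err = checkVersion(ctx); err == nil {
			fmt.Println("starting satellite...")
		}
	default:
		err = fmt.Errorf("unknown subcommand %q", cmd)
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```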
ethanadams
268dc6b7e4 Enable gocritic linter (#2051)
* first round cleanup based on go-critic

* more issues resolved for ifelsechain and unlambda checks

* updated from master and gocritic found a new ifElseChain issue

* disable appendAssign; it reports false positives

* re-enabled go-critic appendAssign and disabled lint check at code line level

* fixed go-critic lint error

* fixed // nolint comments to name gocritic specifically
2019-05-29 09:14:25 -04:00
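The last two bullets describe re-enabling gocritic's appendAssign check and silencing individual lines instead of the whole check. A tiny illustration of that kind of line-level suppression, with made-up code, assuming golangci-lint's //nolint directive syntax.

```go
// Example only: intentionally deriving a new slice from another one is the
// sort of pattern appendAssign can flag as suspicious.
package example

// buildKey appends a suffix to a shared prefix and stores the result in a
// different slice on purpose, so the check is silenced for this line only.
func buildKey(prefix, suffix []byte) []byte {
	var key []byte
	key = append(prefix, suffix...) //nolint:gocritic
	return key
}
```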
Jess G
8518618b7a add postgres support to storj-sim (#1908)
* add flags to storj-sim for SA dbs

* add schema to postgres

* add createschema with parse to sa

* add metainfo db postgres support

* add kv default as bolt

* add debug log to see db source

* add env var for postgres to test-sim.sh

* fix lint errs

* dynamically add postgres to args

* add postgres to integration tests

* add sqlite and postgres integration jenkins

* fix db name

* merge integration tests into one step

* test integration tests w/psql

* try using different schema

* debug failure

* use correct host for running storj-sim

* rm sqlite integration

* add back integration
2019-05-14 08:13:18 -07:00
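Several of the bullets above are about pointing each storj-sim peer at Postgres while keeping them isolated. One common way to do that, sketched loosely here with illustrative names and no claim about the actual storj-sim flags, is to give each peer its own schema inside a single database:

```go
// Loose sketch: per-peer schema setup so many simulated peers can share one
// Postgres instance without colliding. Not the storj-sim implementation.
package simdb

import (
	"context"
	"database/sql"
	"fmt"
)

// createSchema ensures the named schema exists before migrations run.
// Schema names are expected to come from trusted local configuration.
func createSchema(ctx context.Context, db *sql.DB, schema string) error {
	_, err := db.ExecContext(ctx, fmt.Sprintf(`CREATE SCHEMA IF NOT EXISTS %q`, schema))
	return err
}
```

Each peer would then connect with its search_path pointed at its own schema, so the shared database behaves like a separate database per peer.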
Jennifer Li Johnson
856b98997c updates copyright 2018 to 2019 (#1133) 2019-01-24 15:15:10 -05:00
Egon Elbre
99d3b7a3c8 Fix import grouping (#1111) 2019-01-22 17:48:23 +02:00
Egon Elbre
6d401a4351 Check for go.mod validity (#605) 2018-11-09 15:32:35 +02:00
Bryan White
ee62e2a9d8 Use transport client and cleanup all the clients (#574)
* wip

* linter fixes

* linter fixes

* test fixes

* linter fixes

* fix merge + restructure piecestore packages

* review feedback

* linter fixes

* linter fixes

* remove unnecessary aliases to piecestore

* more merge fixing
2018-11-06 18:49:17 +01:00
paul cannon
e2c0dd437a offer PostgreSQL storage for pointerdb (#440)
...although it ought to work for other storage.KeyValueStore needs as
well. it's just optimized to work pretty well for a largish hierarchy of
paths.

This includes the addition of "long benchmarks" for KeyValueStore
testing. These will only be run when -test-bench-long is added to the
test flags. In these benchmarks, a large corpus of paths matching a
natural ("real-life") hierarchy is read from paths.data.gz (which you
can get from https://github.com/storj/path-test-corpus) and imported
into a particular KeyValueStore. Recursive and non-recursive queries are
run on it to detect performance problems that arise only at scale.

This also includes alternate implementation of the postgreskv client,
which works in a less-bizarre way for non-recursive queries, but suffers
from poor performance in tests such as the long benchmarks. Once this
alternate impl is committed to the tree, we can remove it again; I just
want it to be available for future reference.
2018-10-25 12:11:28 -05:00
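The long-benchmark gating described above can be sketched roughly like this; the -test-bench-long flag name comes from the commit message, while the corpus loading and the store under test are stubbed placeholders rather than the real KeyValueStore test helpers.

```go
// Sketch of an opt-in "long" benchmark; run with: go test -bench=. -test-bench-long
package kvtest

import (
	"flag"
	"testing"
)

var benchLong = flag.Bool("test-bench-long", false, "run long benchmarks against a large path corpus")

func BenchmarkNonRecursiveList(b *testing.B) {
	if !*benchLong {
		b.Skip("skipping long benchmark; pass -test-bench-long to enable")
	}
	corpus := loadCorpus(b) // the real tests read paths.data.gz from the path-test-corpus repo
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		listNonRecursive(corpus)
	}
}

// loadCorpus stands in for decompressing and reading the path corpus.
func loadCorpus(b *testing.B) []string {
	b.Helper()
	return []string{"bucket/a", "bucket/a/b", "bucket/a/b/c"}
}

// listNonRecursive stands in for the KeyValueStore operation being measured.
func listNonRecursive(paths []string) (n int) {
	for range paths {
		n++
	}
	return n
}
```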