Commit Graph

47 Commits

Jeff Wendling
5d6cb68cd7 storage/{cockroachkv,postgreskv}: detailed monitoring for list
Change-Id: Iedba10776367233e59f3a6523efdb303b836b241
2020-02-12 10:55:07 +00:00
Jeff Wendling
7999d24f81 all: use monkit v3
this commit updates our monkit dependency to the v3 version where
it outputs in an influx style. this makes discovery much easier
as many tools are built to look at it this way.

graphite and rothko will suffer somewhat, since the metrics are no longer
a dot-based tree. hopefully there will be time to update rothko to
index based on the new metric format.

it adds an influx output for the statreceiver so that we can
write to influxdb v1 or v2 directly.

Change-Id: Iae9f9494a6d29cfbd1f932a5e71a891b490415ff
2020-02-05 23:53:17 +00:00
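As a hedged illustration of the influx-style output this commit describes, here is a minimal sketch of v3-style metrics with series tags; the metric and tag names are assumptions, not code from this repository.

```go
package example

import "github.com/spacemonkeygo/monkit/v3"

var mon = monkit.Package()

// recordListing sketches what v3-style metrics look like: instead of a
// dot-separated name, a metric is a measurement plus influx-style tags,
// which is what makes discovery easier for influx-oriented tooling.
// Metric and tag names here are illustrative, not taken from the repo.
func recordListing(db string, items int) {
	mon.Counter("kv_list", monkit.NewSeriesTag("db", db)).Inc(1)
	mon.IntVal("kv_list_items", monkit.NewSeriesTag("db", db)).Observe(int64(items))
}
```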
Egon Elbre
da5e408afe storage: add DeleteMultiple method
DeleteMultiple will allow metainfo to delete multiple segments
and get the old pointers in a single request.

Change-Id: Ic144f30c5453274fa2b80df2895f123f5a9cc48b
2020-01-29 13:13:54 -05:00
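A rough sketch of the shape such a method could take; the trimmed-down types and the exact signature are assumptions, not the actual storage interface.

```go
package storage

import "context"

// Key and Value are trimmed-down stand-ins for the real storage types.
type (
	Key   []byte
	Value []byte
)

// ListItem pairs a key with the value that was stored under it.
type ListItem struct {
	Key   Key
	Value Value
}

// KeyValueStore is a reduced stand-in for the real interface. DeleteMultiple
// deletes all the given keys in one request and returns the items that were
// present, so the caller (metainfo) can inspect the old pointers.
type KeyValueStore interface {
	Get(ctx context.Context, key Key) (Value, error)
	Delete(ctx context.Context, key Key) error
	DeleteMultiple(ctx context.Context, keys []Key) ([]ListItem, error)
}
```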
Egon Elbre
76fdb5d863 storage: add configurable lookup limits
Currently storage tests are tied to the default lookup limit.
Increasing the default limits makes the tests take longer and can
cause a large number of goroutines to be started.

This change adds configurable lookup limit to all storage backends.

Also remove boltdb.NewShared, since it's not used any more.

Change-Id: I1a052f149da471246fac5745da133c3cfc27582e
2020-01-22 21:35:56 +02:00
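One hypothetical way a backend can expose a configurable lookup limit instead of a hardcoded default; the names here are illustrative, not the ones used by the storage backends.

```go
package storage

// defaultLookupLimit is a hypothetical default a backend might fall back to.
const defaultLookupLimit = 500

// lookupLimiter sketches how a backend could make its lookup limit
// configurable so tests can lower it and finish quickly without
// spawning a large number of goroutines.
type lookupLimiter struct {
	lookupLimit int
}

// SetLookupLimit overrides the default limit.
func (l *lookupLimiter) SetLookupLimit(v int) { l.lookupLimit = v }

// LookupLimit reports the currently configured limit.
func (l *lookupLimiter) LookupLimit() int {
	if l.lookupLimit <= 0 {
		return defaultLookupLimit
	}
	return l.lookupLimit
}
```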
ccase
818242f452
storage/postgreskv: Reverting back to the venerable PG CAS
This was inadvertently converted to the Cockroach version. This reverts
most of that and keeps the changes since then.

Change-Id: Ia440eeebb01bc89fbfa8ce266668030173061469
2020-01-22 11:36:25 -05:00
Egon Elbre
1279eeae39 private/tagsql,storage: fixes to context cancellation
Replace all the remaining uses of sql.DB with tagsql.DB to
fix issues with context cancellation.

Introduce tagsql.Open which helps to get rid of all tagsql.Wrap-s.
Use tagsql in cockroachkv and postgreskv.

Change-Id: I8946d203341cb85a25976896fc7881e1f704e779
2020-01-20 15:44:39 +02:00
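A minimal sketch of the pattern this commit describes, assuming tagsql.Open mirrors sql.Open's (driver, source) form; it is not the actual cockroachkv/postgreskv code.

```go
package example

import "storj.io/storj/private/tagsql"

// openKV opens the handle through tagsql so every statement goes through a
// context-aware wrapper, rather than wrapping a *sql.DB with tagsql.Wrap
// after the fact. The exact signature of tagsql.Open is assumed here.
func openKV(connstr string) (tagsql.DB, error) {
	return tagsql.Open("postgres", connstr)
}
```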
ccase
034f9845b1 storage: Plumb limit through storage backends.
* Plumbs the limit through all backends ensuring they don't do
  unnecessary work.
* Don't arbitrarily limit at the backend with hardcoded defaults. The
  limit will be set by the caller.

Prior to this change the recursive code path in some backends would fetch 10k
results from the database and then return only the first 1k (throwing
out 9k of them).

Prior to this change some backends had no limit at all (e.g. redis).

Change-Id: I1f327eefe095776d123dd11362cd00994c22efdf
2020-01-19 21:23:20 +00:00
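The difference described above, sketched with plain database/sql: the caller-supplied limit is pushed into the query itself, so the database never produces rows that would only be thrown away afterwards. The table and column names mirror postgreskv's pathdata layout, but the query is illustrative, not the actual backend code.

```go
package example

import (
	"context"
	"database/sql"
)

// listKeys pushes the caller's limit into the query instead of fetching a
// hardcoded number of rows and trimming the result afterwards.
func listKeys(ctx context.Context, db *sql.DB, prefix string, limit int) ([]string, error) {
	rows, err := db.QueryContext(ctx,
		`SELECT fullpath FROM pathdata WHERE fullpath LIKE $1 || '%' ORDER BY fullpath LIMIT $2`,
		prefix, limit)
	if err != nil {
		return nil, err
	}
	defer func() { _ = rows.Close() }()

	keys := make([]string, 0, limit)
	for rows.Next() {
		var k string
		if err := rows.Scan(&k); err != nil {
			return nil, err
		}
		keys = append(keys, k)
	}
	return keys, rows.Err()
}
```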
Egon Elbre
1abfe42142 satellite: use tagsql
Change-Id: I2170dee409fb0c2fe85913ddd36e7811a3b853ed
2020-01-19 14:39:16 +02:00
ccase
14b43b7e9b storage/postgreskv/schema/data.go: Regenerate migrations that failed to update.
Change-Id: I9fd5a9a5414214faea5f8c476778fccbe022cb6c
2020-01-19 11:22:00 +00:00
Stefan Benten
409d4123bb
Add proper Pathdata Index (#3750) 2020-01-17 00:48:59 +01:00
Egon Elbre
64fb2d3d2f Revert "dbutil: statically require all database accesses to use contexts"
This reverts commit 8e242cd012.

Revert because lib/pq has known issues with context cancellation.
These issues need to be resolved before these changes can be merged.

Change-Id: I160af51dbc2d67c5449aafa406a403e5367bb555
2020-01-15 07:28:00 +00:00
JT Olio
8e242cd012 dbutil: statically require all database accesses to use contexts
this will allow for some nice runtime analysis down the road.
also, this allows for wrapping database handles in a way that
can interact with these contexts

requires https://review.dev.storj.io/c/storj/dbx/+/514

Change-Id: Ib087b7cd73296dd2c1e0331314da34d861f61d2b
2020-01-14 18:20:47 -05:00
JT Olio
86093d0940 postgreskv: drop not null on buckets
Change-Id: I2a2bd7709de211a9d1808248af573f1bb630cfd5
2020-01-14 12:07:53 -07:00
Egon Elbre
0835b9024c private/dbutil/pgutil: add ctx argument
Change-Id: Icfd56ca8c1f831ad56c0195a0b883e8f0618daaf
2020-01-13 15:27:06 +02:00
Egon Elbre
d3d75a597f satellite,storage: clean global ctx usage in tests
Change-Id: I89ea5c95fc6895518b464f8eb6a4c74c6ae37651
2020-01-09 10:37:21 +00:00
paul cannon
0135852a0e storage/postgreskv: use transactional helper
We may never need this code to work with CockroachDB, but I'm on a
mission to avoid problematic uses of Begin() and BeginTx(), and anywhere
they appear is a possible place for someone to copy-and-paste and do
something wrong. dbutil.WithTx makes this code a little bit simpler too,
so it seems worthwhile.

Change-Id: I9b4ab484db4590cad5ab07de515bbf5d9708daed
2020-01-06 23:24:44 +00:00
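A simplified stand-in for a WithTx-style helper, written against database/sql; the real dbutil helper's signature may differ, but the point is the same: callers can't forget to commit or roll back.

```go
package example

import (
	"context"
	"database/sql"
	"fmt"
)

// withTx begins a transaction, runs fn, and commits on success or rolls
// back on error, so the Begin/Commit/Rollback dance lives in one place.
func withTx(ctx context.Context, db *sql.DB, fn func(tx *sql.Tx) error) (err error) {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer func() {
		if err != nil {
			if rollbackErr := tx.Rollback(); rollbackErr != nil {
				err = fmt.Errorf("%w (rollback failed: %v)", err, rollbackErr)
			}
			return
		}
		err = tx.Commit()
	}()
	return fn(tx)
}
```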
paul cannon
94651921c3 storage/testsuite: pass ctx in to bulk setup methods
to make them cancelable. Also,

* rename BulkDelete->BulkDeleteAll

this leaves room for a new method `BulkDelete(items storage.Items)` that
does a bulk deletion of a specified list of items, as opposed to
deleting _everything_. such a method would be used in the
`cleanupItems()` function found in utils.go, because when individual
deletes are fairly slow, that step takes way too long during tests.

* use BulkDelete method if available

nothing currently provides `BulkDelete(items storage.Items) error`,
but we made use of it with the Bigtable testing and code, and may make
use of it again when adding new kv backends.

* and eliminate the global context in test_iterate.go

Change-Id: I171c7a3818beffbad969b131e98b9bbe3f324bf2
2019-12-10 20:22:08 +00:00
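An illustrative sketch of the interface split described above; the types are trimmed-down stand-ins, and the exact signatures (including whether ctx is a parameter) are assumptions.

```go
package storage

import "context"

// Key identifies a stored item; Items is a list of them. These are
// trimmed-down stand-ins for the real storage types.
type (
	Key   []byte
	Items []Key
)

// BulkDeleter sketches the split: BulkDeleteAll wipes everything (the old
// BulkDelete), while BulkDelete removes only the listed items, which is what
// cleanupItems() in the test utilities would prefer when individual deletes
// are slow.
type BulkDeleter interface {
	BulkDeleteAll(ctx context.Context) error
	BulkDelete(ctx context.Context, items Items) error
}
```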
paul cannon
378b863b2b private,satellite: unite all the "temp db schema" things
first, so that they all work the same way, because it's getting
complicated, and second, so that we can do the appropriate thing
instead of CREATE SCHEMA for cockroachdb.

Change-Id: I27fbaeeb6223a3e06d97bcf692a2d014b31465f7
2019-12-05 15:36:59 +00:00
Matt Robinson
9af97d366a Make sed a little more cross platformable (#3629) 2019-11-22 11:17:02 -07:00
Egon Elbre
ee6c1cac8a
private: rename internal to private (#3573) 2019-11-14 21:46:15 +02:00
Egon Elbre
1e64006e32 lint: add staticcheck as a separate step (#3569) 2019-11-14 10:31:30 +02:00
paul cannon
d5963d81d0
storage/postgreskv: fix ultra-slow nonrecursive list (#3564)
This is based on Jeff's most excellent work to identify why
non-recursive listing under postgreskv was phenomenally slow. It turns
out PostgreSQL's query planner was actually using two sequential scans
of the pathdata table to do its job. It's unclear for how long that has
been happening, but obviously it won't scale any further.

The main change is propagating bucket association with pathnames through
the CTE so that the query planner lets itself use the pathdata index on
(bucket, fullpath) for the skipping-forward part.

Jeff also had some changes to the range ends to keep NULL from being
used, I believe with the intent of making sure the query planner was
able to use the pathdata index. My tests on postgres 9.6 and 11
indicate that those changes don't make any appreciable difference in
performance or query plan, so I'm going to leave them off for now to
avoid a careful audit of the semantic differences.

There is a test included here, which only serves to check that the new
version of the function is indeed active. To actually ensure that no
sequential scans are being used in the query plan anymore, our tests
would need to be run against a test db with lots of data already loaded
in it, and that isn't feasible for now.

Change-Id: Iffe9a1f411c54a2f742a4abb8f2df0c64fd662cb
2019-11-13 17:52:14 -06:00
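For orientation, an illustrative rendering of the index the commit message refers to; the index name and statement are not copied from the actual schema migration.

```go
package example

// pathdataIndexSketch renders "the pathdata index on (bucket, fullpath)"
// mentioned above; the real schema/migration may name and define it differently.
const pathdataIndexSketch = `CREATE INDEX pathdata_bucket_fullpath_idx ON pathdata (bucket, fullpath)`
```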
paul cannon
0c025fa937 storage/: remove reverse-key-listing feature
We don't use reverse listing in any of our code, outside of tests, and
it is only exposed through libuplink in the
lib/uplink.(*Project).ListBuckets() API. We also don't know of any users
who might have a need for reverse listing through ListBuckets().

Since one of our prospective pointerdb backends can not support
backwards iteration, and because of the above considerations, we are
going to remove the reverse listing feature.

Change-Id: I8d2a1f33d01ee70b79918d584b8c671f57eef2a0
2019-11-12 18:47:51 +00:00
Jess G
8d92c288e2
satellitedb: separate migration into subcommand (#3436)
* separate sadb migration, add version check

* update checkversion to do same validation as migration

* changes per CR

* add sa migration to storj-sim

* add different debug port in storj-sim for migration

* add wait for exit for storj-sim migration

* update sa docker entrypoint to support migration

* storj-sim satellite parts all wait for migration

* upgrade golang-migrate/migrate to v4 because of a bug

* fix go mod tidy
2019-11-02 13:09:07 -07:00
Egon Elbre
e9c36d560f
satellite: make PointerDB an argument to satellite.New (#3233) 2019-10-10 21:06:26 +03:00
Egon Elbre
2d69d47655
all: fix Error.New formatting (#2840) 2019-08-21 19:30:29 +03:00
Kaloyan Raev
0e1cb7bfb8
CompareAndSwap in KeyValueStore (#2602) 2019-07-23 22:46:33 +03:00
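A sketch of the compare-and-swap semantics this commit adds; the types and signature are illustrative stand-ins rather than the real KeyValueStore definition.

```go
package storage

import "context"

// Key and Value are trimmed-down stand-ins for the real storage types.
type (
	Key   []byte
	Value []byte
)

// CompareAndSwapper sketches the new method: the write succeeds only if the
// value currently stored under key equals oldValue (with a nil oldValue
// meaning "key must not exist"). Illustrative only.
type CompareAndSwapper interface {
	CompareAndSwap(ctx context.Context, key Key, oldValue, newValue Value) error
}
```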
Ivan Fraixedes
3c8f1370d2
[v3 2137] - Add more info to find out repair failures (#2623)
* pkg/datarepair/repairer: Always track time for repair
  Make a minor change to the worker function of the repairer so that, when
  successful, it always tracks the time-for-repair metric, independently of
  whether the time-since-checker-queue metric can be tracked.

* storage/postgreskv: Wrap error in Get func
  Wrap the returned error of the Get function as it is done when the
  query doesn't return any row.

* satellite/metainfo: Move debug msg to the right place
  The NewStore function was writing a debug log message when the DB was
  connected; however, it always wrote the message out even if an error
  happened when getting the connection.

* pkg/datarepair/repairer: Wrap error before logging it
  Wrap the error returned by process which is executed by the Run method
  of the repairer service to add context to the error log message.

* pkg/datarepair/repairer: Make errors more specific in worker
  Make the error messages of the "worker" method of the Service, and the
  logged messages for such errors, more specific.

* pkg/storage/repair: Improve error reporting in Repair
  In order to improve the error reporting of the
  pkg/storage/repair.Repair method, several errors of this method and of
  the functions/methods it relies on have been updated to be wrapped in
  their corresponding classes.

* pkg/storage/segments: Track path param of Repair method
  Track in monkit the path parameter passed to the Repair method.

* satellite/satellitedb: Wrap Error returned by Delete
  Wrap the error returned by the repairQueue.Delete method to enhance the
  error with a class and stack, so that the
  pkg/storage/segments.Repairer.Repair method gets a more contextualized
  error from it.
2019-07-23 16:28:06 +02:00
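The error-wrapping pattern described in the bullets above, sketched with the zeebo/errs package; the class name and helper function are illustrative.

```go
package postgreskv

import "github.com/zeebo/errs"

// Error is the package's error class; wrapping with it adds a class and a
// stack trace to errors coming back from lower layers. The class name here
// is illustrative.
var Error = errs.Class("postgreskv error")

// get shows the wrapping pattern applied to a lookup helper.
func get(lookup func(key string) (string, error), key string) (string, error) {
	value, err := lookup(key)
	if err != nil {
		return "", Error.Wrap(err)
	}
	return value, nil
}
```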
JT Olio
cc47bf2df7
postgreskv: actually monitor the iterate call (#2129)
Change-Id: I3a5221d6641b6c95b7b746ec3f7ac4522f464b85
2019-06-05 11:22:46 -06:00
JT Olio
f1641af802 storage: add monkit task to missing places (#2122)
* storage: add monkit task to missing places

Change-Id: I9e17a6b14f7c25bbf698eeecf32785e9add3f26e

* fix tests

Change-Id: Id078276fa3de61a28eb3d01d4e751732ecbb173f

* import order

Change-Id: I814e33755b9f10b5219af37cd828cd75eb3da1a4

* remove part of other commit

Change-Id: Idaa4c95cd65e97567fb466de49718db8203cfbe1
2019-06-05 16:23:10 +02:00
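The instrumentation idiom this commit adds to the places that were missing it, sketched below; the function name is illustrative, and the import path shown is the later v3 one rather than the monkit version in use at the time.

```go
package example

import (
	"context"

	"github.com/spacemonkeygo/monkit/v3"
)

var mon = monkit.Package()

// Iterate shows the deferred mon.Task() idiom: it records call counts,
// timing, and whether the function returned an error.
func Iterate(ctx context.Context) (err error) {
	defer mon.Task()(&ctx)(&err)

	// ... iteration work ...
	return nil
}
```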
JT Olio
d02427e41a db: set max open conns, conn max lifetime, add db stat monitoring (#2117) 2019-06-04 23:30:21 +02:00
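The knobs the commit title refers to are the standard database/sql ones; the values below are illustrative, not the ones chosen in the commit.

```go
package example

import (
	"database/sql"
	"time"
)

// tuneDB applies the standard connection-pool settings named in the title.
func tuneDB(db *sql.DB) {
	db.SetMaxOpenConns(25)                  // cap concurrent connections to the server
	db.SetMaxIdleConns(25)                  // keep idle conns around to avoid constant redialing
	db.SetConnMaxLifetime(30 * time.Minute) // recycle connections periodically
	_ = db.Stats()                          // sql.DBStats is the kind of data the added monitoring reports
}
```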
ethanadams
268dc6b7e4
Enable gocritic linter (#2051)
* first round cleanup based on go-critic

* more issues resolved for ifelsechain and unlambda checks

* updated from master and gocritic found a new ifElseChain issue

* disable appendAssign. it reports false positives

* re-enabled go-critic appendAssign and disabled lint check at code line level

* fixed go-critic lint error

* fixed // nolint to add gocritic specifically
2019-05-29 09:14:25 -04:00
Egon Elbre
9c23c2d427 db: set max idle connections higher to avoid redialing all the time (#1991) 2019-05-21 17:30:06 +03:00
paul cannon
5d24af9b41 Revert "storage/postgreskv: use batch size 1000 (#1988)" (#1992)
This reverts commit f988543764.
2019-05-17 14:41:15 -04:00
Egon Elbre
f988543764 storage/postgreskv: use batch size 1000 (#1988) 2019-05-17 19:09:04 +02:00
Jess G
8518618b7a
add postgres support to storj-sim (#1908)
* add flags to storj-sim for SA dbs

* add schema to postgres

* add createschema with parse to sa

* add metainfo db postgres support

* add kv default as bolt

* add debug log to see db source

* add env var for postgres to test-sim.sh

* fix lint errs

* dynamically add postgres to args

* add postgres to integration tests

* add sqlite and postgres integration jenkins

* fix db name

* merge integration tests into one step

* test integration tests w/psql

* try using different schema

* debug failure

* use correct host for running storj-sim

* rm sqlite integration

* add back integration
2019-05-14 08:13:18 -07:00
Egon Elbre
db939d37ec
cover all the things (#1818) 2019-04-26 16:39:11 +03:00
Egon Elbre
ac18432dc5 Public Jenkins (#1779)
* initial test

* add parenthesis

* remove pipeline

* add few todos

* use docker image for environment

* use pipeline

* fix

* add missing steps

* invoke with bash

* disable protoc

* try using golang image

* try as root

* Disable install-awscli.sh temporarily

* Debugging

* debugging part 2

* Set absolute path for debugging

* Remove absolute path

* Dont run as root

* Install unzip

* Dont forget to apt-get update

* Put into folder that is in PATH

* disable IPv6 Test

* add verbose info and check protobuf

* make integration non-parallel

* remove -v and make checkout part of build

* make a single block for linting

* fix echo

* update

* try using things directly

* try add xunit output

* fix name

* don't print empty lines

* skip testsuites without any tests

* remove coverage, because it's not showing the right thing

* try using dockerfile

* fix deb source

* fix typos

* setup postgres

* use the right flag

* try using postgresdb

* expose different port

* remove port mapping

* start postgres

* export

* use env block

* try using different host for integration tests

* eat standard ports

* try building images and binaries

* remove if statement

* add steps

* do before verification

* add go get goversioninfo

* make separate jenkinsfile

* add check

* don't add empty packages

* disable logging to reduce output size

* add timeout

* add comment about mfridman

* Revert Absolute Path

* Add aws to PATH

* PATH Changes

* Docker Env Fixes

* PATH Simplification

* Debugging the PATH

* Debug Logs

* Debugging

* Update PATH Handling

* Rename

* revert changes to Jenkinsfile
2019-04-22 15:45:53 +02:00
Bill Thorp
b53f9896d3
Removed ReverseList from KeyValueStore interfaces (#1306)
Removed ReverseList from KeyValueStore interfaces
2019-02-13 12:27:03 -05:00
Jennifer Li Johnson
856b98997c
updates copyright 2018 to 2019 (#1133) 2019-01-24 15:15:10 -05:00
Egon Elbre
99d3b7a3c8
Fix import grouping (#1111) 2019-01-22 17:48:23 +02:00
Egon Elbre
c4c9e75109
Use errs.Combine in storage (#923) 2018-12-21 12:54:20 +02:00
Egon Elbre
c4b90f84dd
Use better defaults and naming for postgres database (#659) 2018-11-15 20:36:57 +02:00
Kaloyan Raev
0357e61bbd
metainfo objects tests (#662) 2018-11-15 17:31:33 +02:00
Egon Elbre
6d401a4351
Check for go.mod validity (#605) 2018-11-09 15:32:35 +02:00
Bryan White
ee62e2a9d8
Use transport client and cleanup all the clients (#574)
* wip

* linter fixes

* linter fixes

* test fixes

* linter fixes

* fix merge + restructure piecestore packages

* review feedback

* linter fixes

* linter fixes

* remove unnecessary aliases to piecestore

* more merge fixing
2018-11-06 18:49:17 +01:00
paul cannon
e2c0dd437a
offer PostgreSQL storage for pointerdb (#440)
...although it ought to work for other storage.KeyValueStore needs as
well. it's just optimized to work pretty well for a largish hierarchy of
paths.

This includes the addition of "long benchmarks" for KeyValueStore
testing. These will only be run when -test-bench-long is added to the
test flags. In these benchmarks, a large corpus of paths matching a
natural ("real-life") hierarchy is read from paths.data.gz (which you
can get from https://github.com/storj/path-test-corpus) and imported
into a particular KeyValueStore. Recursive and non-recursive queries are
run on it to detect performance problems that arise only at scale.

This also includes alternate implementation of the postgreskv client,
which works in a less-bizarre way for non-recursive queries, but suffers
from poor performance in tests such as the long benchmarks. Once this
alternate impl is committed to the tree, we can remove it again; I just
want it to be available for future reference.
2018-10-25 12:11:28 -05:00
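A sketch of how a flag-gated long benchmark like this can be wired up; the flag name matches the description above, but the wiring is assumed rather than taken from storage/testsuite.

```go
package testsuite

import (
	"flag"
	"testing"
)

// longBench gates the expensive benchmarks that read the large path corpus.
var longBench = flag.Bool("test-bench-long", false, "run long benchmarks against a large path corpus")

func BenchmarkNonRecursiveList(b *testing.B) {
	if !*longBench {
		b.Skip("skipping; pass -test-bench-long to run")
	}
	for i := 0; i < b.N; i++ {
		// ... run a non-recursive listing over the imported corpus ...
	}
}
```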