This change adds a special case to batch processing: if a batch request
contains CommitSegment immediately followed by CommitObject, we can
execute those two requests as one. This avoids the current logic where
we save the pointer for CommitSegment and later delete it and save it
again under the last segment path for CommitObject.
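Roughly, the batch handler can merge the adjacent pair before execution. A minimal sketch of the idea, using hypothetical request type names and a hypothetical combineRequests helper rather than the actual metainfo batch code:

```go
package batch

// Request is a placeholder for a batch request item.
type Request interface{}

// CommitSegment and CommitObject stand in for the real request types;
// their fields are elided.
type CommitSegment struct{}
type CommitObject struct{}

// CommitSegmentObject is a hypothetical merged request that commits the last
// segment and the object in a single step, so the pointer is saved only once.
type CommitSegmentObject struct {
	Segment CommitSegment
	Object  CommitObject
}

// combineRequests merges a CommitSegment that is immediately followed by a
// CommitObject into a single CommitSegmentObject and leaves everything else
// unchanged.
func combineRequests(requests []Request) []Request {
	combined := make([]Request, 0, len(requests))
	for i := 0; i < len(requests); i++ {
		if seg, ok := requests[i].(CommitSegment); ok && i+1 < len(requests) {
			if obj, ok := requests[i+1].(CommitObject); ok {
				combined = append(combined, CommitSegmentObject{Segment: seg, Object: obj})
				i++ // skip the CommitObject that was just merged
				continue
			}
		}
		combined = append(combined, requests[i])
	}
	return combined
}
```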
Change-Id: If170e78c8410f5ba5916cbff6a29b9221db9ce2e
Fix the uplink setup step for uplink versions that require an access field.
Update how the script selects uplink versions to test.
Use significantly smaller remote files for the test (performance).
Change-Id: If590b8798767e2a0621fb84cd3b8852d02f6d1da
Replace all the remaining uses of sql.DB with tagsql.DB to
fix issues with context cancellation.
Introduce tagsql.Open, which removes the need for the tagsql.Wrap calls.
Use tagsql in cockroachkv and postgreskv.
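A rough before/after sketch of the open path; the tagsql import path and exact signatures here are assumptions based on the description above, not verified API:

```go
package example

import (
	"database/sql"

	"storj.io/storj/private/tagsql" // assumed import path
)

// Before: open with database/sql and wrap the handle afterwards.
func openWrapped(dsn string) (tagsql.DB, error) {
	raw, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	return tagsql.Wrap(raw), nil
}

// After: tagsql.Open returns a tagsql.DB directly, so no Wrap call is needed
// and context cancellation is handled consistently from the start.
func openDirect(dsn string) (tagsql.DB, error) {
	return tagsql.Open("postgres", dsn) // assumed signature
}
```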
Change-Id: I8946d203341cb85a25976896fc7881e1f704e779
Not having a skew caused an issue where:
1. Uplink calls "begin segment"; the segment is not yet committed to the
database.
2. Uplink stores piece X to the storage node A with timestamp 1.
3. Satellite runs garbage collection with timestamp 2.
4. Satellite sends retain request to storage node A with timestamp 2.
5. Storage node A deletes piece X, because 1 < 2.
6. Uplink calls "commit segment" with storage node A in it.
7. Download of segment fails, because A doesn't have piece X.
In production this is not an issue since the MaxTimeSkew is 72h by
default.
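A minimal sketch of the retention decision with a skew allowance; the names are illustrative, not the storagenode's actual implementation:

```go
package retain

import "time"

// shouldTrash reports whether a piece missing from the retain filter may be
// trashed. Subtracting maxTimeSkew keeps pieces whose segments were begun,
// but not yet committed, when garbage collection created the filter.
func shouldTrash(pieceCreated, filterCreated time.Time, maxTimeSkew time.Duration) bool {
	return pieceCreated.Before(filterCreated.Add(-maxTimeSkew))
}
```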
Change-Id: Id87ca3ddc44103dcd85d031b1367168c014b8e7b
* Plumb the limit through all backends, ensuring they don't do
unnecessary work.
* Don't arbitrarily limit at the backend with hardcoded defaults; the
limit will be set by the caller.
Prior to this change, the recursive listing code in some backends would
fetch 10k results from the database and then return only the first 1k
(throwing out 9k of them).
Prior to this change some backends had no limit at all (e.g. redis).
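An illustrative sketch of plumbing the caller's limit into the backend query instead of truncating afterwards; the ListOptions and backend names are hypothetical, not the real storage interfaces:

```go
package kvstore

import "context"

// ListOptions carries the caller-provided limit; the backend must not apply
// its own hardcoded default on top of it.
type ListOptions struct {
	Prefix    string
	Recursive bool
	Limit     int
}

// Item is a key/value pair returned by a listing.
type Item struct {
	Key   string
	Value []byte
}

// backend is expected to apply the limit in the database query itself, so it
// never fetches more rows than the caller asked for.
type backend interface {
	queryKeys(ctx context.Context, prefix string, recursive bool, limit int) ([]Item, error)
}

// List passes the limit down to the backend rather than fetching a large,
// hardcoded number of rows and discarding most of them.
func List(ctx context.Context, b backend, opts ListOptions) ([]Item, error) {
	return b.queryKeys(ctx, opts.Prefix, opts.Recursive, opts.Limit)
}
```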
Change-Id: I1f327eefe095776d123dd11362cd00994c22efdf
Multiple time.Now calls in a short amount of time
may return the exact same time.
This adds a time.Sleep() to ensure we get distinct timestamps.
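A small sketch of the failure mode and the fix: two consecutive time.Now calls can return identical values, so a short sleep guarantees distinct timestamps (the helper below is illustrative):

```go
package example

import "time"

// distinctTimes returns two timestamps that are guaranteed to differ, which
// a test relies on when ordering records by creation time.
func distinctTimes() (time.Time, time.Time) {
	first := time.Now()
	time.Sleep(time.Millisecond) // without this, second may equal first
	second := time.Now()
	return first, second
}
```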
Change-Id: I533b5eedfdcab13a0197976ccf4b9fa96d4c3bba
A migration step was closing a database that was used by
the migration itself, while there was an active transaction
over that database.
Instead of closing it inside that transaction, we can wait
until restart for the database cleanup.
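A hypothetical before/after of such a step, using plain database/sql for illustration; the real migration step differs:

```go
package migrations

import (
	"context"
	"database/sql"
)

// Broken: the step closes db even though the transaction it runs in belongs
// to that same database.
func brokenStep(ctx context.Context, tx *sql.Tx, db *sql.DB) error {
	if _, err := tx.ExecContext(ctx, `SELECT 1 /* migration statements elided */`); err != nil {
		return err
	}
	return db.Close() // closes the database with an active transaction
}

// Fixed: the step only migrates data; the obsolete database is cleaned up on
// the next restart, when no migration transaction is active.
func fixedStep(ctx context.Context, tx *sql.Tx) error {
	_, err := tx.ExecContext(ctx, `SELECT 1 /* migration statements elided */`)
	return err
}
```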
Change-Id: Ic971d8cea81a3ab783f4a1bdc6357009c8b31386
Also add temporary types withRebind and withTagTx,
which will be removed later. For now they help avoid
changing the whole codebase at the same time.
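A hypothetical sketch of the adapter idea, assuming tagsql.DB is an interface; the real withRebind and withTagTx types in the codebase may look different:

```go
package dbwrap

import "storj.io/storj/private/tagsql" // assumed import path

// rebinder is the legacy method some call sites still expect.
type rebinder interface {
	Rebind(query string) string
}

// withRebind layers the old Rebind behavior on top of a tagsql.DB so those
// call sites can be migrated incrementally instead of all at once.
type withRebind struct {
	tagsql.DB
	rebinder
}
```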
Change-Id: I7f07ba8f4709a23a463bfa67464628665a05808f
dbschema.Query is used only for testing and sqlite,
so this won't cause us problems in production.
Change-Id: Ib296a7daf161a9d3de23a7dfdc4f505d47ac4a37
Disable the storagenode database preflight check by default, and add an
option to enable it. This will allow us to turn it on once it is
definitely working.
Also rename the config flag for the preflight time sync check.
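An illustrative sketch of how such settings might be declared with the project's config-struct tags; the field names, flag names, and defaults here are assumptions, not the actual storagenode configuration:

```go
package preflight

// Config holds the preflight check settings.
type Config struct {
	// DatabaseCheck is off by default until the check is known to work reliably.
	DatabaseCheck bool `help:"run database preflight check on startup" default:"false"`
	// LocalTimeCheck is the renamed time-sync check flag.
	LocalTimeCheck bool `help:"check local system clock against trusted sources on startup" default:"true"`
}
```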
Change-Id: Ie2e20f9e25dcb38794eafa7e1505e7c6ff287c99
On pieces usage cache init we now load the trash info from the db. This
also fixes a test that was masking the failure here.
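A minimal sketch of the idea: when the usage cache is initialized, the persisted trash total is loaded alongside the pieces total, so it is not reported as zero after a restart. The store and cache types here are illustrative, not the real piecestore code:

```go
package pieces

import "context"

// store is a stand-in for the database that persists space-used totals.
type store interface {
	SpaceUsedForPieces(ctx context.Context) (int64, error)
	SpaceUsedForTrash(ctx context.Context) (int64, error)
}

type usageCache struct {
	piecesTotal int64
	trashTotal  int64
}

// initCache loads both totals; previously the trash total was never loaded,
// so the cache started at zero after every restart.
func initCache(ctx context.Context, db store) (*usageCache, error) {
	pieces, err := db.SpaceUsedForPieces(ctx)
	if err != nil {
		return nil, err
	}
	trash, err := db.SpaceUsedForTrash(ctx)
	if err != nil {
		return nil, err
	}
	return &usageCache{piecesTotal: pieces, trashTotal: trash}, nil
}
```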
Change-Id: I9ff7da5bc6c0f74cf0942e20931b40e0c88d70fa
Warning: databases migrated to version 77 before this commit
is merged must be manually re-migrated. This should not be a
problem for anything but staging databases.
Change-Id: Ie1631c48379472352014183ee43f1465e22200f7
For the storagenode:
Ensure that the database schema matches the latest test migration schema
before allowing the node to start up.
Ensure minimal read/write functionality for each storagenode database
before allowing the node to start up.
This will eliminate many unhandled audit errors we are seeing.
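An illustrative sketch of such a check using the standard library's *sql.DB for simplicity; the schema type and comparison are placeholders, not the storagenode's actual preflight code:

```go
package preflight

import (
	"context"
	"database/sql"
	"fmt"
	"reflect"
)

// Schema is a placeholder for whatever schema description the node compares.
type Schema struct {
	Tables []string
}

// Check refuses startup when the live schema differs from the expected one or
// when a trivial write/read round trip fails.
func Check(ctx context.Context, db *sql.DB, live, expected Schema) error {
	if !reflect.DeepEqual(live, expected) {
		return fmt.Errorf("preflight: schema does not match latest test migration schema")
	}
	if _, err := db.ExecContext(ctx, `CREATE TABLE IF NOT EXISTS preflight_check (id INTEGER)`); err != nil {
		return fmt.Errorf("preflight: write check failed: %w", err)
	}
	var count int
	if err := db.QueryRowContext(ctx, `SELECT count(*) FROM preflight_check`).Scan(&count); err != nil {
		return fmt.Errorf("preflight: read check failed: %w", err)
	}
	return nil
}
```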
Change-Id: Ic0e628b04a9c35b7a8243f6a81d4683918170ba9