This changes when we write the drpc header. Rather than making it its
own write to the connection, we now prepend the drpc header to the
first write on the connection (typically the TLS handshake). This
results in one less packet being sent at the beginning of each drpc
connection. For an operation like uploading a file from uplink, this
saves many packets: one when communicating with the satellite, and
one for each communication with the storage nodes.
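A minimal sketch of the idea, assuming a net.Conn wrapper around the
raw connection; the type and its details are illustrative, not the
actual drpc code:

    type headerConn struct {
        net.Conn
        header []byte // drpc header, pending until the first write
    }

    func (c *headerConn) Write(p []byte) (int, error) {
        if c.header != nil {
            // coalesce the header and the first payload into one write
            buf := append(c.header, p...)
            c.header = nil
            if _, err := c.Conn.Write(buf); err != nil {
                return 0, err // error handling simplified for the sketch
            }
            return len(p), nil
        }
        return c.Conn.Write(p)
    }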
Change-Id: I7644b46e90ffa7acea73ac56831396307352ed7a
this will allow us to inspect the type of `db.Driver()` on *sql.DB
connections to correctly differentiate between pg and crdb conns.
as a bonus, this moves all concerns about when to replace "cockroach://"
with "postgres://" out of view, letting the thin shim "driver" take care
of that.
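for illustration, a hedged sketch of the kind of check this enables
(the shim driver type name is an assumption):

    switch db.Driver().(type) {
    case *cockroachutil.Driver: // assumed name of the thin shim driver
        // crdb connection
    default:
        // plain pg connection
    }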
Change-Id: Ib24103ab7c508231e681f89a7321b623e4e125e9
Backstory: I needed a better way to pass around information about the
underlying driver and implementation to all the various db-using things
in satellitedb (at least until some new "cockroach driver" support makes
it to DBX). After hitting a few dead ends, I decided I wanted to have a
type that could act like a *dbx.DB but which would also carry
information about the implementation, etc. Then I could pass around that
type to all the things in satellitedb that previously wanted *dbx.DB.
But then I realized that *satellitedb.DB was, essentially, exactly that
already.
One thing that might have kept *satellitedb.DB from being directly
usable was that embedding a *dbx.DB inside it would make a lot of dbx
methods publicly available on a *satellitedb.DB instance that previously
were nicely encapsulated and hidden. But after a quick look, I realized
that _nothing_ outside of satellite/satellitedb even needs to use
satellitedb.DB at all. It didn't even need to be exported, except for
some trivially-replaceable code in migrate_postgres_test.go. And once
I made it unexported, any concerns about exposing new methods on it were
entirely moot.
So I have here changed the exported *satellitedb.DB type into the
unexported *satellitedb.satelliteDB type, and I have changed all the
places here that wanted raw dbx.DB handles to use this new type instead.
Now they can just take a gander at the implementation member on it and
know all they need to know about the underlying database.
This will make it possible for some other pending code here to
differentiate between postgres and cockroach backends.
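A minimal sketch of the resulting shape (the field name and the
dbutil.Implementation type are assumptions for illustration):

    type satelliteDB struct {
        *dbx.DB
        implementation dbutil.Implementation // e.g. Postgres or Cockroach
    }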
Change-Id: I27af99f8ae23b50782333da5277b553b34634edc
- also updated ping chore to pick up trust changes
- fixed small typo in blueprint
- fixed flags for storj-sim
- wired up changes to testplanet
Change-Id: I02982f3a63a1b4150b82a009ee126b25ed51917d
Add randomized test cases for testing the observer processSegment method
when the observer has a time range (from and to) set and there are
objects with segments whose creation date is outside of this range.
Change-Id: Ieac82a21f278f0850b95275bfdd2e8a812cc57a5
Conversions such as

    (*[1<<30]byte)(unsafe.Pointer(data))

will be invalid with Go 1.14. The recommended way is to use:

    *(*[]byte)(unsafe.Pointer(
        &reflect.SliceHeader{
            Data: uintptr(unsafe.Pointer(data)),
            Len:  int(length),
            Cap:  int(length),
        },
    ))
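For example, wrapped in a helper function (a hedged sketch; the name
and parameters are illustrative):

    // bytesAt returns a []byte view of length bytes at data, without
    // copying; the caller must keep the memory alive while using it.
    func bytesAt(data unsafe.Pointer, length int) []byte {
        return *(*[]byte)(unsafe.Pointer(
            &reflect.SliceHeader{
                Data: uintptr(data),
                Len:  length,
                Cap:  length,
            },
        ))
    }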
Also fixed a memory leak.
Change-Id: I1768b7b85505e6b57b49deb62a510474f1bf84c1
* Use an unexported existing method in logic that was duplicated in
  some exported methods.
* Log a forgotten internal error.
* Improve the documentation, adding more of it and fixing some of it
  to fit our code style conventions.
Change-Id: Ie6f8bc59f9089f92b8b0d1b4c09c2142c3f273f5
- non-blocking, memory-only operations
- have a negative impact on performance
- make monkit SVGs for downloads all but unreadable
Change-Id: I2a1397249a7aaaa2de26b70cd65128663278a424
The Endpoint.getPointer method lacked tracing.

Also add a dot at the end of a documentation comment to follow our
code style conventions.
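A hedged sketch of the monkit tracing pattern (the method body shown
is illustrative, not the actual code):

    var mon = monkit.Package()

    func (endpoint *Endpoint) getPointer(
        ctx context.Context, path string,
    ) (_ *pb.Pointer, err error) {
        defer mon.Task()(&ctx)(&err) // records timing and errors
        return endpoint.metainfo.Get(ctx, path) // assumed underlying call
    }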
Change-Id: I9b63ad297f04e31825648aae43aa8f9ebba2b4e2
the old code to eventually create an api key on a satellite had
some issues. namely, there were some ignored errors, swallowed
errors, incorrect returns of nil errors when there should have
been an error, and it did not handle working against an already
existing database.
this commit fixes the above issues and organizes the code into
a set of methods performing individual steps rather than one big
function. it adds retries and attempts to get existing values
instead of creating them when possible, which means that it will
work if the values already exist. additionally, it removes the
3-second sleep in favor of a bounded retry loop with a small sleep,
which improves startup times.
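a minimal sketch of such a bounded retry loop; the attempt count and
sleep duration are assumptions:

    func retry(ctx context.Context, attempts int, fn func() error) (err error) {
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            select { // small sleep between attempts, bounded by ctx
            case <-time.After(100 * time.Millisecond):
            case <-ctx.Done():
                return ctx.Err()
            }
        }
        return err
    }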
Change-Id: I4de04659e5a62dd3f675fbf3c76f3311c410a03e
Return an error when misusing the endpoint method
'listSegmentsFromNumberOfSegments', because the method
'listSegmentsManually' is the one to use when the number of segments
is less than or equal to 0.

If we didn't return an error from 'listSegmentsFromNumberOfSegments',
we would notice the bug much later: the clients wouldn't receive an
error, only an empty list, leaving them to wonder what they are doing
wrong to get 0 results before realizing that they could be facing a
bug.

This commit also renames the function to the plural form, matching
its "numberOfSegments" parameter, and renames the test function,
which was also missing the trailing 's'.
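A hedged sketch of the added guard (the exact error message is an
assumption):

    // at the top of listSegmentsFromNumberOfSegments:
    if numberOfSegments <= 0 {
        return nil, errs.New(
            "non-positive number of segments (%d): use listSegmentsManually",
            numberOfSegments,
        )
    }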
Change-Id: I02318685bf36aa3af26545731a1711621a5e2e39
After changing how we execute the storagenode-updater process, we
lost the timestamps in the log.
The fix is to start using zap logging.
The Windows Installer is changed to register the storagenode-updater
service in a way that the Windows Service Manager passes the
--log.output flag instead of the old --log.
The old --log flag is deprecated, but not removed. We will support it
for backward compatibility. This is required as the storagenode-updater
can auto-update itself, but the Windows Service Manager of this old
installation will continue passing the old --log flag when starting it.
Change-Id: I690dff27e01335e617aa314032ecbadc4ea8cbd5
Signed-off-by: Kaloyan Raev <kaloyan@storj.io>
long-lived uplinks could just hold on to connections forever
if their client to the storagenode or satellite isn't closed.
this will prevent that from happening on the client. more
changes will be necessary to add appropriate prevention on
the servers.
Change-Id: Ib36d85e70cbafb315664ad7657bb70b936b3828c
to make them cancelable. Also,
* rename BulkDelete->BulkDeleteAll
this leaves room for a new method `BulkDelete(items storage.Items)` that
does a bulk deletion of a specified list of items, as opposed to
deleting _everything_. such a method would be used in the
`cleanupItems()` function found in utils.go, because when individual
deletes are fairly slow, that step takes way too long during tests.
* use BulkDelete method if available
nothing currently provides `BulkDelete(items storage.Items) error`,
but we made use of it with the Bigtable testing and code, and may make
use of it again when adding new kv backends; a sketch of the pattern
follows this list.
* and eliminate the global context in test_iterate.go
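a minimal sketch of the "use it if available" pattern mentioned above
(the interface name and signatures are assumptions):

    type bulkDeleter interface {
        BulkDelete(ctx context.Context, items storage.Items) error
    }

    func deleteItems(ctx context.Context, store storage.KeyValueStore,
        items storage.Items) error {
        if bd, ok := store.(bulkDeleter); ok {
            return bd.BulkDelete(ctx, items) // one bulk round trip
        }
        for _, item := range items { // fallback: individual deletes
            if err := store.Delete(ctx, item.Key); err != nil {
                return err
            }
        }
        return nil
    }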
Change-Id: I171c7a3818beffbad969b131e98b9bbe3f324bf2
* add integration tests to jenkins
* have jenkins run storj-sim integration tests w/crdb
Change-Id: I696d55c5894aaf630dcd7a566e1dd705ee88486b
* rm crdb integration tests to see if postgres passes
Change-Id: I1727a027ff802acbff5fc55961a0d605faefcf2d
* comment out aws tests to see if that is the error
Change-Id: I456c3d36f6a4ce7760ea0b6c402b6ea16cfe77e3
* add aws profile to integration tests
Change-Id: Ic01185dbc7b84ac48dfb846f8f272b34b50379b6
* add tmp path for aws profile and creds
Change-Id: I7b82ee5a99937edd3f66ae01bfb5cb21028a62cf
* change linux KiB syntax to bytes to support osx
Change-Id: Ia1f1027ba8da64a6ba537062deb9b3519973621f
planet.Start starts a testplanet system, whereas planet.Run starts a testplanet
and runs a test against it with each DB backend (cockroach compat).
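A hedged usage sketch (the exact Config fields and signature are
assumptions):

    testplanet.Run(t, testplanet.Config{
        SatelliteCount: 1, StorageNodeCount: 4, UplinkCount: 1,
    }, func(t *testing.T, ctx *testcontext.Context, planet *testplanet.Planet) {
        // the test body runs once for each configured DB backend
    })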
Change-Id: I39c9da26d9619ee69a2b718d24ab00271f9e9bc2
Implement some unit test cases for the observer.processSegment method
and fix a bug found by these tests.

A production snapshot would be more certain, but it is too huge to
keep in the repository and to run the test against in CI. We want
tests for the different cases of detecting zombie segments, and
relying on production data cannot guarantee covering all of them.

NOTE the test has been implemented with random values so that it does
not always use the same combination of segment lists and values,
avoiding an implementation that goes stale because of the test. The
downside is that it is harder to understand, and the CI could fail
only sometimes if the implementation has some bug.

The random tests allow us to eventually check cases that a static
test may not cover, either because they are not expressed or because
expressing them would require implementing a large number of cases.

Because the test implementation is complex, we are worried that it
could have bugs of its own, even though a bug would have to exist in
both the implementation and the test to go unnoticed, and the two
were written by different people. Because of that, we may replace
these random tests entirely with a list of static ones.
for storj-sim to work, we need to avoid schemas in cockroach urls
so we have storj-sim create namespaced databases instead of schemas
and we have the migrate command create the database in the same way
that it would create a schema for postgres. then it works!
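a hedged sketch of that migrate step (the SQL and variable names are
assumptions):

    switch implementation {
    case dbutil.Postgres: // namespace via a schema
        _, err = db.Exec(`CREATE SCHEMA IF NOT EXISTS ` +
            pq.QuoteIdentifier(namespace))
    case dbutil.Cockroach: // namespace via a separate database
        _, err = db.Exec(`CREATE DATABASE IF NOT EXISTS ` +
            pq.QuoteIdentifier(namespace))
    }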
a follow-up commit will move the creation of the database/schemas
into storj-sim's setup step so that we can avoid doing these icky
creations during normal migration calls. it will also make the
pointerdb have an explicit call to migrate instead of just doing
it every time it's opened.
Change-Id: If69ef5cb96b6866b0438c761bd445afb3597ae5f