* create upsert query for check-in method (see the sketch after this list)
* add tests
* fix lint err
* add benchmark test for db query
* fix lint and tests
* add a unit test, fix lint
* add address to tests
* replace print w/ b.Fatal
* refactor query per CR comments
* fix disqualified: only set it if it is currently null
* fix query
* add version to updatecheckin query
* fix version
* fix tests
* change version for tests
* add version to tests
* add IP, add transport, mv unit test
* use node.address as arg
* add last ip
* fix lint
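The commits above build up a single upsert for node check-ins. Below is a minimal, hypothetical sketch of that pattern in Go with database/sql and a PostgreSQL-style INSERT ... ON CONFLICT; the nodes table, its columns, and the function signature are assumptions for illustration, not the actual satellite schema or query.

```go
package overlaysketch

import (
	"context"
	"database/sql"
	"time"
)

// upsertCheckIn sketches the check-in upsert described in the commits above:
// insert the node row if it is new, otherwise update its contact fields, and
// only set disqualified while it is still NULL so an existing
// disqualification is never overwritten. All names here are hypothetical.
func upsertCheckIn(ctx context.Context, db *sql.DB, nodeID []byte, address, lastIP, transport, version string, disqualified *time.Time) error {
	const query = `
		INSERT INTO nodes (id, address, last_ip, transport, version, disqualified)
		VALUES ($1, $2, $3, $4, $5, $6)
		ON CONFLICT (id) DO UPDATE SET
			address      = EXCLUDED.address,
			last_ip      = EXCLUDED.last_ip,
			transport    = EXCLUDED.transport,
			version      = EXCLUDED.version,
			-- only set disqualified if it is currently NULL
			disqualified = COALESCE(nodes.disqualified, EXCLUDED.disqualified)`
	_, err := db.ExecContext(ctx, query, nodeID, address, lastIP, transport, version, disqualified)
	return err
}
```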
What: we move API keys out of the gRPC connection-level metadata on the client side and into the request protobufs directly. The server side still supports both mechanisms for backwards compatibility.
Why: drpc won't support connection-level metadata. The only thing we currently use connection-level metadata for is API keys, so we need to move all information a request needs into the request protobuf itself for drpc support. Check out the .proto changes for the main details.
One fun side-fact: did you know that protobuf fields 1-15 are special and use only one byte for both the field number and the wire type? Additionally, did you know we don't use field 15 anywhere yet? So the new request header uses field 15, and field 15 should be reserved for it on all protobufs going forward.
Please describe the tests: all existing tests should pass
Please describe the performance impact: none
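A rough Go sketch of the server-side backwards compatibility described above: prefer the API key carried in the new request header, and fall back to the old connection-level gRPC metadata when it is absent. The RequestHeader struct and the "apikey" metadata key are placeholders; the real message and field names come from the .proto changes.

```go
package authsketch

import (
	"context"

	"google.golang.org/grpc/metadata"
)

// RequestHeader stands in for the new protobuf header message (using field 15)
// described above; the real definition lives in the .proto changes.
type RequestHeader struct {
	APIKey []byte
}

// apiKeyFromRequest prefers the key embedded in the request protobuf and only
// falls back to connection-level metadata for older clients.
func apiKeyFromRequest(ctx context.Context, header *RequestHeader) []byte {
	if header != nil && len(header.APIKey) > 0 {
		return header.APIKey
	}
	// Legacy path: older clients still send the key as gRPC metadata.
	if md, ok := metadata.FromIncomingContext(ctx); ok {
		if values := md.Get("apikey"); len(values) > 0 {
			return []byte(values[0])
		}
	}
	return nil
}
```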
* add test to make sure we will reverify the share in the containment db rather than in the pointer passed into reverify
* use pending audit information only when running reverify
* Split the info.db database into multiple DBs using Backup API.
* Remove location. A previous refactor assumed we would need it, but we don't.
* Added VACUUM to reclaim space after splitting storage node databases.
* Added unique names to SQLite3 connection hooks to fix testplanet.
* Moving DB closing to the migration step.
* Removing the closing of the versions DB. It's already getting closed.
* Swapping the database connection references on reconnect.
* Moved sqlite closing logic away from the boltdb closing logic.
* Remove certificate and vouchers from DB split migration.
* Removed vouchers and bumped up the migration version.
* Use same constructor in tests for storage node databases.
* Adding a method to access the underlying SQL database connections and clean them up.
* Adding logging for migration diagnostics.
* Moved migration closing database logic to minimize disk usage.
* Cleaning up error handling.
* Fix missing copyright.
* Fix linting error.
* Add test for migration 21 (#3012)
* Refactoring migration code into a nicer to use object.
* Fixing broken migration test.
* Removed unnecessary code that is no longer needed now that we close DBs.
* Fixed bug where an invalid database path was being opened.
* Fixed linting errors.
* Renamed VersionsDB to LegacyInfoDB and refactored DB lookup keys.
* Fix migration test. NOTE: this change does not address the new satellites and satellite_exit_progress tables.
* Removing the v22 migration to move it into its own PR.
* Refactored the schema, rebind, and configure functions to be reusable.
* Renamed LegacyInfoDB to DeprecatedInfoDB.
* Cleaned up closeDatabase function.
* Renamed storageNodeSQLDB to migratableDB.
* Switched from using errs.Combine() to errs.Group in the closeDatabases func (see the sketch after this list).
* Removed constructors from storage node data access objects.
* Reformatted usage of const.
* Fixed broken test snapshots.
* Fixed linting error.
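As a small illustration of the errs.Combine() to errs.Group change mentioned above, the close path roughly follows this shape (a sketch only; the map of open connections is invented for illustration):

```go
package storagenodedbsketch

import (
	"database/sql"

	"github.com/zeebo/errs"
)

// closeDatabases closes every database and reports all failures together,
// rather than stopping at the first error.
func closeDatabases(dbs map[string]*sql.DB) error {
	var group errs.Group
	for _, db := range dbs {
		// Group.Add ignores nil errors, so this collects only real failures.
		group.Add(db.Close())
	}
	return group.Err()
}
```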
* update audit status to failed for nodes that failed piece hash verification
* remove comment
* fix lint error
* add test
* fix format
* use named return value for Get
* add comments
* add a better comment
* format
The fundamental problem is that both drpc and grpc servers
want to close the listener and they both want to ignore the
error from Accept after the listener is closed. There's
no race-free way to do that. Fortunately, the mux
hands out listeners that can be independently closed. That
means they can both do their own shutdown logic where they
ignore the error, and then after they're closed, the code
orchestrating the servers can close the listeners.
The final weird bit is that the server's Close method is
required to wait until the Run method has exited (or at
least far enough that the listeners are definitely closed)
because tests depend on that behavior, so we have to add
some channels/mutexes/onces to ensure that Run has exited
and that a new call can't start after Close is called.
Change-Id: I7c4ef293f7963f83138815f51824fd5b8d09ce15
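A hedged sketch of that synchronization in Go, assuming each server owns the listener the mux handed it; the struct, field names, and error handling are illustrative, not the actual implementation:

```go
package serversketch

import (
	"errors"
	"net"
	"sync"
)

// Server sketches the shutdown pattern described above: Run serves on a
// listener handed out by the mux, Close marks the server closed, closes the
// listener to unblock Accept, and then waits until Run has exited.
type Server struct {
	listener net.Listener

	mu     sync.Mutex
	closed bool
	done   chan struct{} // closed when Run returns
}

func (s *Server) Run() error {
	s.mu.Lock()
	if s.closed {
		s.mu.Unlock()
		return errors.New("server: already closed")
	}
	s.done = make(chan struct{})
	s.mu.Unlock()
	defer close(s.done)

	for {
		conn, err := s.listener.Accept()
		if err != nil {
			// Once Close has closed the listener, Accept fails; because this
			// server owns its listener, that error is treated as a clean
			// shutdown instead of a failure.
			s.mu.Lock()
			closed := s.closed
			s.mu.Unlock()
			if closed {
				return nil
			}
			return err
		}
		go func() { _ = conn.Close() }() // stand-in for real connection handling
	}
}

// Close prevents future Run calls, closes the listener, and blocks until a
// running Run has returned, which the tests rely on.
func (s *Server) Close() error {
	s.mu.Lock()
	s.closed = true
	done := s.done
	s.mu.Unlock()

	var err error
	if s.listener != nil {
		err = s.listener.Close()
	}
	if done != nil {
		<-done
	}
	return err
}
```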