Reduce the number of non-method functions to reduce the funcs in the
namespace; also combine a func to condense the code slightly more.
Change-Id: Ifbe728eb8c8ca4c981df648decd259c2097b6b40
uuid.UUID implements driver.Valuer and sql.Scanner, so it can be used
directly as a scannable result.
Replace uses of dbutil.BytesToUUID with uuid.FromBytes.
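For illustration, a condensed before/after (the surrounding query is hypothetical):

    var id uuid.UUID
    // before: scan into raw bytes, then convert:
    //   err = row.Scan(&b); id, err = dbutil.BytesToUUID(b)
    // after: use uuid.FromBytes where raw bytes are unavoidable, or
    // scan directly, since uuid.UUID works as a scannable result:
    err := db.QueryRowContext(ctx, `SELECT id FROM projects LIMIT 1`).Scan(&id)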
Change-Id: I51a670185ceb3cc2199d5aa2b76bc3fc191ca8fe
Instead of providing the database to testplanet from outside, create it
inside and then allow wrapping and modifying it. This is more convenient
to use.
Change-Id: I9b8f69e6e0a19ff984b4e2bfe927c9100c77bc6c
storagenodes have 10 or more databases. without this
tag they all get sent as the same value, stomping on each
other.
Change-Id: Ib12019684d6ea8f2a5b83df584056dfa79e3c4b3
The goal of this change is to improve the storagenode_storage_tallies table by removing the unneeded id column, which is not being used and only takes up space, and by adding an index on a different column that needs it. Removing and adding a column seems simple, but this ended up being more complicated because of some cockroachdb limitations.
The cockroachdb limitations when trying to remove a column from a table and create a new primary key are:
1. primary key creation is only allowed at table creation time (docs: https://www.cockroachlabs.com/docs/stable/primary-key.html)
2. table drop or rename is performed async and cannot be done in a transaction (issue: https://github.com/cockroachdb/cockroach/issues/12123, https://github.com/cockroachdb/cockroach/issues/22868)
To address these differences between cockroachdb and Postgres, this PR performs different migrations for the two databases. The Postgres migration is straightforward and what you would expect, but the cockroach migration has two main changes:
1. To change a primary key, use the recommended process from the cockroachdb docs to create a new table with the new primary key you want and then migrate the data.
2. In order to do 1, we needed to do the new table renaming in a separate transaction from the data migration.
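For illustration, a hedged sketch of the cockroach path; the column list is abridged and the exact column names are assumptions:

    // step 1, inside the data-migration transaction: build the new
    // table with the desired primary key and copy the data over
    var cockroachSteps = []string{
        `CREATE TABLE storagenode_storage_tallies_new (
            interval_end_time timestamp with time zone NOT NULL,
            node_id bytea NOT NULL,
            data_total double precision NOT NULL,
            PRIMARY KEY ( interval_end_time, node_id )
        )`,
        `INSERT INTO storagenode_storage_tallies_new
            SELECT interval_end_time, node_id, data_total
            FROM storagenode_storage_tallies`,
    }

    // step 2, in a separate transaction, because cockroach performs
    // DROP TABLE and RENAME TABLE asynchronously
    var cockroachRename = []string{
        `DROP TABLE storagenode_storage_tallies`,
        `ALTER TABLE storagenode_storage_tallies_new RENAME TO storagenode_storage_tallies`,
    }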
Ref: SM-65
Change-Id: Idc9aee3ab57aa4d5570e3d2980afea853cd966bf
by doing an indexed anti-join we're able to reduce the time to
select the pending orders by over 10x on postgres. this should
help us process pending orders much more quickly.
it probably won't do as good a job on cockroach because it does
not do an indexed anti-join and instead does a hash join after
scanning the entire consumed serials table. we should either
remove orders entirely or try to make that more efficient
when necessary.
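a hedged sketch of the query shape (table and column names assumed):

    // on postgres the NOT EXISTS plans as an indexed anti-join rather
    // than a full scan of consumed_serials
    const selectPendingOrders = `
        SELECT ps.*
        FROM pending_serial_queue ps
        WHERE NOT EXISTS (
            SELECT 1 FROM consumed_serials cs
            WHERE cs.storage_node_id = ps.storage_node_id
              AND cs.serial_number = ps.serial_number
        )`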
Change-Id: I8ca0535acd21c51e74955b24c9b86d20e4f2ff9c
Make sure that suspended nodes are treated appropriately by the overlay
cache. This means we should expect the following behavior:
* suspended nodes (vetted or not) should not be selected for uploading
new segments
* suspended nodes should be treated by the checker and repairer as
"unhealthy", and should be removed upon successful repair
This commit also removes unused overlay functionality.
Fixes a bug with commit 8b72181a1f where
the audit reporter was automatically suspending nodes regardless of
audit outcome (see test added).
Tests:
* updates repair tests to ensure that a suspended node is treated as
unhealthy and will be removed from the pointer on successful repair
* updates overlay tests for KnownUnreliableOrOffline and KnownReliable
to expect suspended nodes to be considered "unreliable"
* adds satellitedb test that ensures overlay.SelectStorageNodes and
overlay.SelectNewStorageNodes do not include suspended nodes
* adds audit reporter test to ensure that different audit outcomes
result in the correct suspended/disqualified states
Change-Id: I40dba67278c8e8d2ce0bcec5e0a5cb6e4ce2f561
Initial change for checking bucket existence on the satellite side for
requests like BeginObject and ListObjects. This is a simple implementation
that just checks the bucket in the DB; it should be improved in the future
to avoid DB calls as much as possible.
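A hypothetical sketch of the check (method and package names are assumptions, not the final API):

    func (endpoint *Endpoint) validateBucket(ctx context.Context, projectID uuid.UUID, bucketName []byte) error {
        exists, err := endpoint.buckets.HasBucket(ctx, bucketName, projectID)
        if err != nil {
            return rpcstatus.Error(rpcstatus.Internal, err.Error())
        }
        if !exists {
            return rpcstatus.Error(rpcstatus.NotFound, "bucket not found: "+string(bucketName))
        }
        return nil
    }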
Part of https://storjlabs.atlassian.net/browse/USR-365
Change-Id: I9076acddc44d7dbfa7612a1c24a007de01621583
* change overlay.UpdateStats to allow a third audit outcome. Now it can
handle successful, failed, and unknown audits.
* when "unknown audit reputation"
(unknownAuditAlpha/(unknownAuditAlpha+unknownAuditBeta)) falls below the
DQ threshold, put node into suspension.
* when unknown audit reputation goes above the DQ threshold, remove node
from suspension.
* record unknown audits from audit reporter.
* add basic tests around unknown audits and suspension.
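a minimal sketch of the suspension decision described above (variable and field names hypothetical):

    unknownRep := unknownAuditAlpha / (unknownAuditAlpha + unknownAuditBeta)
    switch {
    case unknownRep <= dqThreshold && node.Suspended == nil:
        node.Suspended = &now // reputation fell below the DQ threshold: suspend
    case unknownRep > dqThreshold && node.Suspended != nil:
        node.Suspended = nil // reputation recovered: lift the suspension
    }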
Change-Id: I125f06f3af52e8a29ba48dc19361821a9ff1daa1
My understanding is that the nodes table has the following fields:
- `address` field which can be a hostname or an IP
- `last_net` field that is the /24 subnet of the IP resolved from the address
This PR does the following:
1) add back the `last_ip` field to the nodes table
2) for uplink operations, remove the calls that the satellite makes to `lookupNodeAddress` (which makes the DNS calls to resolve the IP from the hostname) and instead use the data stored in the nodes table `last_ip` field. This means that the IP that the satellite sends to the uplink for the storage nodes could be approx 1 hr stale. In the short term this is fine; next we will add changes so that the storage node pushes any IP changes to the satellite in real time.
3) use the address field for repair and audit since we want them to still make DNS calls to confirm the IP is up to date
4) try to reduce confusion about hostname, ip, subnet, and address in the code base
Change-Id: I96ce0d8bb78303f82483d0701bc79544b74057ac
We missed this in the migration that added the num_healthy_pieces
column. It exists in dbx, but not in the actual satellite table.
Change-Id: If16b5ec2325d56406250298531b3285215188bf3
Previously, we were simply discarding rows from the repair queue when
they couldn't be repaired (either because the overlay said too many
nodes were down, or because we failed to download enough pieces).
Now, such segments will be put into the irreparableDB for further
and (hopefully) more focused attention.
This change also better differentiates some error cases from Repair()
for monitoring purposes.
Change-Id: I82a52a6da50c948ddd651048e2a39cb4b1e6df5c
The migration was broken into one migration per table to reduce table locking and reduce the
chances of failure due to SQL timeouts.
Of the 14 fields that lacked time zones, only the 3 named `interval_start` seemed to have non-UTC data in them.
These fields are fixed in the migration by removing the +00 and adding AT TIME ZONE current_setting('TIMEZONE').
Fields with good data are migrated by adding AT TIME ZONE 'UTC'.
Note that postgres's timezone() is different from cockroach's timezone(), so AT TIME ZONE is used.
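For illustration, hedged sketches of the two cases (table names are placeholders):

    var timestampFixes = []string{
        // interval_start columns held local times; reinterpret them
        // in the server's zone:
        `ALTER TABLE some_rollups ALTER COLUMN interval_start
            TYPE timestamp with time zone
            USING interval_start AT TIME ZONE current_setting('TIMEZONE')`,
        // columns that already held UTC data are simply marked as UTC:
        `ALTER TABLE other_table ALTER COLUMN created_at
            TYPE timestamp with time zone
            USING created_at AT TIME ZONE 'UTC'`,
    }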
https://storjlabs.atlassian.net/browse/SM-104
Change-Id: I410f2f1d7c11b143f17844347f37e6f4b1e70fce
On satellite, remove all references to free_bandwidth column in nodes table.
On storage node, remove references to AllocatedBandwidth and MinimumBandwidth and mark as deprecated.
Protobuf message, NodeCapacity, is left intact for backwards compatibility.
Once this is released to all satellites, we can drop the column from the DB.
Change-Id: I2ff6c6537fc9008a0c5588e951afea58ede85838
these tables are used in future commits with respect to the new
storagenode payments code. if we create them now, it will make
backfilling them with historical data easier.
Change-Id: I3c08c9770ec5b2baa38b4f2fd18c2f07746a61c2
Add a column to the repair queue table in the satellite db for healthy
piece count. When an item is selected from the repair queue, the least
durable segment that has not been attempted in the past hour should be
selected first. This prevents our repairer from getting stuck doing work
on segments that are close to the repair threshold while allowing
segments that are more unhealthy to degrade further.
The migration also clears the repair queue so that the migration runs
quickly and we can properly account for segment health in future repair
work.
We do not select items off the repair queue that have been attempted in
the past six hours. This was changed from one hour to allow us time to
try a wider variety of segments when the repair queue is very large.
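a hedged sketch of the selection (table and column names follow the commits above; the attempted column is an assumption):

    // least durable first, skipping anything attempted recently
    const selectSegment = `
        SELECT path FROM injuredsegments
        WHERE attempted IS NULL
           OR attempted < now() - interval '6 hours'
        ORDER BY num_healthy_pieces ASC
        LIMIT 1`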
Change-Id: Iaf183f1e5fd45cd792a52e3563a3e43a2b9f410b
This change adds two new tables to process orders as fast as we used
to but in an asynchronous manner and with hopefully less storage
usage. This should help scale on cockroach, but limits us to one
worker. It lays the groundwork for the order processing pipeline to
be queue-driven rather than database-driven.
For more details, see the added fast billing changes blueprint.
It also fixes the orders db so that all the timestamps that are
passed to columns that do not contain a time zone are converted to
UTC at the last possible opportunity, making it less likely to use
the APIs incorrectly. We really should migrate to include timezones
on all of our timestamp columns.
Change-Id: Ibfda8e7a3d5972b7798fb61b31ff56419c64ea35
Enhance the documentation of the UseSerialNumber method (interface and
implementation) and add several missing dots in doc comments of the
methods of the same interface and implementation.
Change-Id: I792cd344f0d2542e060fa2ec288b71231cae69de
In the methods we use to retrieve a user's chargeable BW, we were summing GET, GET_AUDIT,
and GET_REPAIR. We only want to charge for GET.
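For illustration, a hedged sketch of the filter; the action values follow the PieceAction enum, but treat the specific constants and table names here as assumptions:

    // before: summed all three GET variants
    //   WHERE action IN (2, 3, 4)  -- GET, GET_AUDIT, GET_REPAIR
    // after: charge only for plain GET
    const chargeableBW = `
        SELECT SUM(settled) FROM bucket_bandwidth_rollups
        WHERE project_id = $1 AND action = 2 -- pb.PieceAction_GET`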
Change-Id: Icead7695494b22c7c835482cf8b1512a980d59f1
this commit updates our monkit dependency to the v3 version where
it outputs in an influx style. this makes discovery much easier
as many tools are built to look at it this way.
graphite and rothko will suffer some due to no longer being a tree
based on dots. hopefully time will exist to update rothko to
index based on the new metric format.
it adds an influx output for the statreceiver so that we can
write to influxdb v1 or v2 directly.
Change-Id: Iae9f9494a6d29cfbd1f932a5e71a891b490415ff
it was noticed that if you had a long lived transaction A that
was blocking some other transaction B and A was being aborted
due to retriable errors, then transaction B was never given
priority. this was due to using savepoints to do lightweight
retries.
this behavior was problematic because we had some queries blocked
for over 16 hours, so this commit addresses the issue with two
prongs:
1. bound the amount of time we will retry a transaction
2. create new transactions when a retry is needed
the first ensures that we never wait for 16 hours, and the value
chosen is 10 minutes. that should be long enough for an ample
number of retries for small queries, and huge queries probably
shouldn't be retried, even if possible: it's preferable to
find a way to make them smaller.
the second ensures that even in the case of retries, queries that
are blocked on the aborted transaction gain priority to run.
between those two changes, the maximum stall time due to retries
should be bounded to around 10 minutes.
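a minimal sketch of both prongs (uses context, database/sql, strings, and time); isRetryable is a stand-in for real serialization-failure detection (SQLSTATE 40001 on cockroach):

    // isRetryable is a crude stand-in for the real check.
    func isRetryable(err error) bool {
        return strings.Contains(err.Error(), "40001")
    }

    func withRetry(ctx context.Context, db *sql.DB, fn func(tx *sql.Tx) error) error {
        start := time.Now()
        for {
            tx, err := db.BeginTx(ctx, nil) // a brand new transaction per attempt
            if err != nil {
                return err
            }
            if err = fn(tx); err == nil {
                err = tx.Commit()
            } else {
                _ = tx.Rollback() // aborting lets blocked transactions gain priority
            }
            if err != nil && isRetryable(err) && time.Since(start) < 10*time.Minute {
                continue // bounded: never keep retrying past ~10 minutes
            }
            return err
        }
    }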
Change-Id: Icf898501ef505a89738820a3fae2580988f9f5f4
before dbx would generate a complicated blob of conditions that
encoded a row comparison, which only optimized to an index seek
on cockroachdb. this means that sqlite and postgres both had
quadratic behavior on paged queries of this form. instead, use
the implicit row construction feature supported in all of the
databases to do paged support so that they all optimize well.
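for illustration (column names hypothetical):

    // before: the expanded comparison dbx used to generate
    const pageBefore = `WHERE (a > $1) OR (a = $1 AND b > $2) ORDER BY a, b LIMIT $3`
    // after: implicit row construction, which sqlite, postgres, and
    // cockroachdb all optimize to an index seek
    const pageAfter = `WHERE (a, b) > ($1, $2) ORDER BY a, b LIMIT $3`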
Change-Id: Iac8703929ba2a59ee3ffa619b916d12663422887
The old migration was not working. It was updating pending (status 0)
and failed (status -1) to completed (status 100).
Change-Id: I808ff3cc692fe6c698ce26a8b411b134e67b752b
A uuid.UUID is an array of bytes, and slicing it refers to the
underlying value, much like taking the address. Because range
in Go reuses the same value for every loop iteration, this means
that later iterations would overwrite earlier stored project
ids. We fix that by making a copy of the value before slicing it
for every loop iteration.
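A condensed sketch of the bug and the fix, assuming a []uuid.UUID input:

    var ids [][]byte
    for _, projectID := range projectIDs {
        // BUG: projectID is the same array variable on every iteration,
        // so projectID[:] aliases storage the next iteration overwrites:
        //   ids = append(ids, projectID[:])
        projectID := projectID // copy the value before slicing it
        ids = append(ids, projectID[:])
    }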
Change-Id: Iae3f11138d11a176ce360bd5af2244307c74fdad
Currently storage tests are tied to the default lookup limit.
By increasing the limits, the tests take longer and sometimes
cause a large number of goroutines to be started.
This change adds configurable lookup limit to all storage backends.
Also remove boltdb.NewShared, since it's not used any more.
Change-Id: I1a052f149da471246fac5745da133c3cfc27582e
Currently Cockroach DB setup takes a significant amount of time.
This flattens the database setup into a single query,
which improves the test time significantly.
The migration tests still test each migration separately.
Change-Id: Iaca16f34a6af3926fa2b5ebf618f939fd59460b3
Since incoming times may be in any time zone, and we want the output
to be in UTC with 00:00:00 hours, minutes, and seconds, we first
convert the incoming timestamp to UTC before truncating to the day
and adding a day.
Because the old code always returned a timestamp that was in the
future, this is just for efficiency.
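A minimal sketch (function name hypothetical):

    func startOfNextDay(t time.Time) time.Time {
        // convert first so the truncation lands on the UTC day boundary
        return t.UTC().Truncate(24 * time.Hour).Add(24 * time.Hour)
    }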
Change-Id: Ie692d47bca8691e73852c822d5c56cf8773d99b4
Limits how many times the metainfo APIs can be called per second by project ID. If the limit is exceeded, the API will return Unauthorized/Too Many Requests.
The limit per second and the size of the limiter cache per project are configurable, as is whether the limiter is enabled.
Tests added/updated for the new rate_limit field in the projects table.
Tests added for exceeding limits and disabling the limiter.
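A minimal sketch of the limiter, assuming golang.org/x/time/rate for the limiting and github.com/hashicorp/golang-lru for the bounded per-project cache:

    type RateLimiter struct {
        limiters *lru.Cache // projectID -> *rate.Limiter
        limit    rate.Limit // requests per second, from config
        burst    int
    }

    func (rl *RateLimiter) Allow(projectID uuid.UUID) bool {
        v, ok := rl.limiters.Get(projectID)
        if !ok {
            v = rate.NewLimiter(rl.limit, rl.burst)
            rl.limiters.Add(projectID, v) // evicts the oldest entry when full
        }
        return v.(*rate.Limiter).Allow()
    }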
Change-Id: Ic8ad102de3b690a475809d4f684156d5715f20fa
Replace all the remaining uses of sql.DB with tagsql.DB to
fix issues with context cancellation.
Introduce tagsql.Open which helps to get rid of all tagsql.Wrap-s.
Use tagsql in cockroachkv and postgreskv.
Change-Id: I8946d203341cb85a25976896fc7881e1f704e779
Also added temporary types withRebind and withTagTx,
which will be later removed. Currently they help to avoid
changing the whole codebase at the same time.
Change-Id: I7f07ba8f4709a23a463bfa67464628665a05808f
warning: databases migrated to version 77 before this commit
is merged must be manually re-migrated. this should not be a
problem for anything but staging databases.
Change-Id: Ie1631c48379472352014183ee43f1465e22200f7
this commit introduces the reported_serials table. its purpose is
to allow for blind writes into it as nodes report in so that we have
minimal contention. in order to continue to accurately account for
used bandwidth, though, we cannot immediately add the settled amount.
if we did, we would have to give up on blind writes.
the table's primary key is structured precisely so that we can quickly
find expired orders and so that we maximally benefit from rocksdb
path prefix compression. we do this by rounding the expires at time
forward to the next day, effectively giving us storagenode petnames
for free. and since there's no secondary index or foreign key
constraints, this design should use significantly less space than
the current used_serials table while also reducing contention.
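a hedged sketch of the table shape (column list abridged; exact names are assumptions):

    var createReportedSerials = `
        CREATE TABLE reported_serials (
            expires_at timestamp with time zone NOT NULL, -- rounded up to the next day
            storage_node_id bytea NOT NULL,
            bucket_id bytea NOT NULL,
            action integer NOT NULL,
            serial_number bytea NOT NULL,
            settled bigint NOT NULL,
            PRIMARY KEY ( expires_at, storage_node_id, bucket_id, action, serial_number )
        )` // no secondary indexes or foreign keys, so blind writes stay cheap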
after inserting the orders into the table, we have a chore that
periodically consumes all of the expired orders in it and inserts
them into the existing rollups tables. this is as if we changed
the nodes to report as the order expired rather than as soon as
possible, so the belief in correctness of the refactor is higher.
since we are able to process large batches of orders (typically
a day's worth), we can use the code to maximally batch inserts into
the rollup tables to make inserts as friendly as possible to
cockroach.
Change-Id: I25d609ca2679b8331979184f16c6d46d4f74c1a6
everyone was importing it as dbx anyway. why should it be
named satellitedb? so yeah just pass the "-p dbx" flag.
Change-Id: I5efa669f4f00f196b38a9acd0d402009475a936f
This reverts commit 8e242cd012.
Revert because lib/pq has known issues with context cancellation.
These issues need to be resolved before these changes can be merged.
Change-Id: I160af51dbc2d67c5449aafa406a403e5367bb555
this will allow for some nice runtime analysis down the road.
also, this allows for wrapping database handles in a way that
can interact with these contexts.
requires https://review.dev.storj.io/c/storj/dbx/+/514
Change-Id: Ib087b7cd73296dd2c1e0331314da34d861f61d2b
When an uplink requests an upload or download from the satellite, we are tracking the
allocated bandwidth twice. The value in bucket_bandwidth_rollups is used
for project limits but the value in storagenode_bandwidth_rollups is not
used at all. We can increase the performance by removing it. Uplinks
will get a faster response from the satellite.
Change-Id: Icccd41f94107ef34668f30f99bf5f728c384b07e
any database error doesn't mean the order wasn't found. for example
in cockroach it may say that the transaction is aborted. then what?
maybe we get big old row level deadlocks like we've observed? so
instead explicitly check for ErrNoRows to reject the order and bail
out otherwise. the surrounding logic will give it a retry.
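a sketch of the explicit check (surrounding names hypothetical):

    err := row.Scan(&bucketID, &amount)
    switch {
    case errors.Is(err, sql.ErrNoRows):
        return rejectOrder(serialNumber) // genuinely not found: reject
    case err != nil:
        return err // aborted txn, deadlock, etc.: bail out and let it retry
    }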
Change-Id: I6e1f8f6e6a6def3e45b44f5088cbdc158e1098e4
With the new storage node downtime tracking feature, we need to remove the current uptime reputation configs: UptimeReputationAlpha, UptimeReputationBeta, and
UptimeReputationDQ. This is the first step of removing the uptime
reputation columns from satellitedb
Change-Id: Ie8fab13295dbf545e33aeda0c4306cda4ba54e36
turns out portable sed is hard: it has to work with both
linux and bsd sed, etc. instead, use a really really basic
bash script and a temporary file. this should be much less
likely to cause issues on a wide range of machines.
Change-Id: Ia759789fb52aa1ee3361426bb6c02ed4eac3d23a
Transactions in our code that might need to work against CockroachDB
need to be retried in the event of a retryable error. The transaction
helper functions in dbutil do that automatically. I am changing this
code to use those helpers instead.
I also fleshed out consoledb_test.go to do actual inserts and gets to
make sure things were working correctly.
Change-Id: I089bf4c776d15dc8578080e26760bd6dff4beec9
Transactions in our code that might need to work against CockroachDB
need to be retried in the event of a retryable error. The transaction
helper functions in dbutil do that automatically. I am changing this
code to use those helpers instead.
Change-Id: I22b850ce5859fa07d13bf475be5140e6bde95b8a
Transactions in our code that might need to work against CockroachDB
need to be retried in the event of a retryable error. The WithTx
helper functions in dbutil and dbx do that automatically. I am changing
this code to use those helpers instead.
Change-Id: Iaf492af35471931125f2b7365aa4338f44154881
Disqualifies a node when the node fails to complete a graceful
exit.
Adds a new DisqualifyNode method to the overlay cache, since there
wasn't an existing method to disqualify a node but do nothing else
to its stats.
Adds checks to existing tests to make sure that a storage node that
fails a graceful exit is marked as disqualified in the overlay
cache.
https://storjlabs.atlassian.net/browse/V3-3342
Change-Id: I4d554a519ab59db31ad3b8e28764c8683a6e3888
crdb.ExecuteTx is great, but I don't think it will work right with
PostgreSQL. It works by way of cockroach savepoints, which allows
it to react to retryable errors, whereas tx.Commit() doesn't. But
I don't think PostgreSQL savepoints work exactly the same way. I'm not
100% sure, but it doesn't seem worth the risk.
So, I'm switching one case here to use the new dbutil.WithTx instead,
which will use crdb.ExecuteTx if appropriate. The other case doesn't
need a transaction at all.
Change-Id: I39283f3b5d8d47596db7aff5048bb74597e5918f
Transactions in our code that might need to work against CockroachDB
need to be retried in the event of a retryable error. The transaction
helper functions in dbutil do that automatically. I am changing this
code to use those helpers instead.
Change-Id: I660540885a0784fae844cf99376d1537e208fa69
overlay.GetOfflineNodesLimited
We only care about node ID, address, and last contact success/failure
from the downtime service, so the overlay should only return these
values for the downtime-specific queries.
Change-Id: I08a6ecfdd2a12b82cae62e87d6adeab53975bfce
Transactions in our code that might need to work against CockroachDB
need to be retried in the event of a retryable error. The transaction
helper functions in dbutil do that automatically. I am changing this
code to use those helpers instead.
Change-Id: Icd3da71448a84c582c6afdc6b52d1f345fe9469f
Transactions in our code that might need to work against CockroachDB
need to be retried in the event of a retryable error. The transaction
helper functions in dbutil do that automatically. I am changing this
code to use those helpers instead.
Change-Id: Ibaadd2c8540ba5c8cccd6ecbf529017ab98b78ca
Transactions in our code that might need to work against CockroachDB
need to be retried in the event of a retryable error. The transaction
helper functions in dbutil do that automatically. I am changing this
code to use those helpers instead.
Change-Id: Id24906f5f3ae83245dabb218e1f70e0bcb3b417a
Adds the KnownReliable method to Overlay Service that filters all nodes
from the given list to be only reliable nodes (online and qualified).
The method returns []*pb.Node of reliable nodes. The pb.Node values are
ready for dialing.
The first use case is when deleting an object to efficiently dial all
reliable nodes holding a piece of that object and send them a delete
request.
Change-Id: I13e0a8666f3807c5c31ef1a1087476018a5d3acb
this will allow us to inspect the type of `db.Driver()` on *sql.DB
connections to correctly differentiate between pg and crdb conns.
as a bonus, this moves all concerns about when to replace "cockroach://"
with "postgres://" out of view, letting the thin shim "driver" take care
of that.
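a sketch of the differentiation this enables (the shim driver type name is an assumption):

    switch db.Driver().(type) {
    case *cockroachutil.Driver: // the thin shim described above
        // cockroach-specific behavior
    case *pq.Driver:
        // plain postgres behavior
    }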
Change-Id: Ib24103ab7c508231e681f89a7321b623e4e125e9
Backstory: I needed a better way to pass around information about the
underlying driver and implementation to all the various db-using things
in satellitedb (at least until some new "cockroach driver" support makes
it to DBX). After hitting a few dead ends, I decided I wanted to have a
type that could act like a *dbx.DB but which would also carry
information about the implementation, etc. Then I could pass around that
type to all the things in satellitedb that previously wanted *dbx.DB.
But then I realized that *satellitedb.DB was, essentially, exactly that
already.
One thing that might have kept *satellitedb.DB from being directly
usable was that embedding a *dbx.DB inside it would make a lot of dbx
methods publicly available on a *satellitedb.DB instance that previously
were nicely encapsulated and hidden. But after a quick look, I realized
that _nothing_ outside of satellite/satellitedb even needs to use
satellitedb.DB at all. It didn't even need to be exported, except for
some trivially-replaceable code in migrate_postgres_test.go. And once
I made it unexported, any concerns about exposing new methods on it were
entirely moot.
So I have here changed the exported *satellitedb.DB type into the
unexported *satellitedb.satelliteDB type, and I have changed all the
places here that wanted raw dbx.DB handles to use this new type instead.
Now they can just take a gander at the implementation member on it and
know all they need to know about the underlying database.
This will make it possible for some other pending code here to
differentiate between postgres and cockroach backends.
Change-Id: I27af99f8ae23b50782333da5277b553b34634edc
for storj-sim to work, we need to avoid schemas in cockroach urls
so we have storj-sim create namespaced databases instead of schemas
and we have the migrate command create the database in the same way
that it would create a schema for postgres. then it works!
a follow up commit will move the creation of the database/schemas
into storj-sim's setup step so that we can avoid doing these icky
creations during normal migration calls. it will also make the
pointerdb have an explicit call to migrate instead of just doing
it every time it's opened.
Change-Id: If69ef5cb96b6866b0438c761bd445afb3597ae5f
satellitedb migration tests ran against multiple base versions; however, after merging all the steps, the base versions didn't exist anymore, which meant none of the migration tests were actually running.
first, so that they all work the same way, because it's getting
complicated, and second, so that we can do the appropriate thing
instead of CREATE SCHEMA for cockroachdb.
Change-Id: I27fbaeeb6223a3e06d97bcf692a2d014b31465f7
it doesn't necessarily _have_ to be UTC; the time is correct as returned
either way, but this will make it a little less prone to variance.
also, there is a test that depends on the time being returned in UTC.
Change-Id: Ia71e24ecd9973ba70a1cfb5621a3030a5c82d004
This will make it so we don't need to comment out those lines every time
we want to enable the cockroachdb tests during development.
Once it's ready this flag can go away.
* update migration steps, add crdb support to testplanet
* add crdb support
* have jenkins run a bare bones crdb compat test
* skip crdb tests
* skip crdb tests
* fix root_piece_id column
* write crdb store to tmp dir
* escape
* merge migration
* rm migration versions
* rm unneeded migration test data
* create index w/postgres + crdb compatible syntax
* add default to offers.invitee_credit_duration_days
* changes so that schema matches from master to branch
* change to be crdb compatible
* add check to confirm db version
* mv version check to migration
* update tests
* add minversion to sadb migration, update tests
* confirm min version for all dbs in a migration
* add validate migration to sadb
* fix lint err
* rm min version check from migrate
* change sadb check
* hard code min db version
* fix comment
* satellite/nodeselection: dont select nodes that havent checked in for a while
* change testplanet online window to one minute
* remove satellite reconfigure online window = 0 in repair tests
* pass timestamp into UpdateCheckIn
* change timestamp to timestamptz
* edit tests to set last_contact_success to 4 hours ago
* fix syntax error
* remove check for last_contact_success > last_contact_failure in IsOnline
* separate sadb migration, add version check
* update checkversion to do same validation as migration
* changes per CR
* add sa migration to storj-sim
* add different debug port in storj-sim for migration
* add wait for exit for storj-sim migration
* update sa docker entrypoint to support migration
* storj-sim satellite parts all wait for migration
* upgrade golang-migrate/migrate to v4 because bug
* fix go mod tidy
* add overall failure percentage check and inactive time frame check before sending a response to sno
* update comment
* delete node from transfer queue if it has been inactive for too long
* fix linting error
* add test config value
* fix nil pointer
* add config value into testplanet
* add unit test for overall failure threshold
* move timeframe threshold to chore
* update protolock
* add chore test
* add per piece failure count logic
* change config name from EndpointMaxFailures to MaxFailuresPerPiece
* address comments
* fix linting error
* add error handling for no row returned from progress table
* fix test for graceful exit chore on storagenode
* fix typo InActive -> Inactive
* improve readability for failure threshold calculation
* update config lock
* change error handling for GetProgress in graceful exit endpoint on the satellite side
* return proper rpc error in endpoint
* add check in chore test for checking finish timestamp and queue
* update lock file and add comment
* add created at and bytes transferred
* cleanup
* rename db func to GetGracefulExitNodesByTimeFrame
* fix flag
* split into two overlay functions
* := to =
* fix test
* add node not found error class
* fix overlay test
* suggested test changes
* review suggestions
* get exit status from overlay.Get()
* check rows.Err
* fix panic when ExitFinishedAt is nil
* fix comments in cmdGracefulExit
* create upsert query for check-in method
* add tests
* fix lint err
* add benchmark test for db query
* fix lint and tests
* add a unit test, fix lint
* add address to tests
* replace print w/ b.Fatal
* refactor query per CR comments
* fix disqualified, only set if null
* fix query
* add version to updatecheckin query
* fix version
* fix tests
* change version for tests
* add version to tests
* add IP, add transport, mv unit test
* use node.address as arg
* add last ip
* fix lint
* implement contact.checkin method
* add batching to update uptime checks
* rm batching
* rm other unneeded things
* fix lint
* fix unit test
* changes per CR comments
* couple more CR changes
* add identity check into grpcOpt
* fix lint
* why do you fix the test
* revert test change
* stop contact chore for repair test
* put node in cache
* comment out contact chore. See what happens
* Revert "comment out contact chore. See what happens"
This reverts commit 2e45008e36a50e0a842ae455ac83de77093d4daa.
* try stopping contact earlier
* stop contact chore in uplink_test
* replace self on chore with *RoutingTable for access to latest node info
* Revert "stop contact chore in uplink_test"
This reverts commit 302db70f4071112d1b9f7ee0279225ea12757723.
* Revert "try stopping contact earlier"
This reverts commit 806cc3b82f9d598899dafd83da9315a1cb0cb43c.
* Revert "stop contact chore for repair test"
This reverts commit dd34de1cfdfc09b972186c9ab9a4f1e822446b79.
* satellite/satellitedb: Always release savepoint
Release the savepoint when processing orders in any case.
* satellite/satellitedb: Wrap errors exec savepoints
Wrap the errors returned by the execution of savepoints operations when
processing orders.
* V3-2529: Add DB savepoint to fix issue with postgres. Add test forcing a rejected order
Co-Authored-By: Ivan Fraixedes <ivan@fraixed.es>
* Update satellite/satellitedb/orders.go
Creates a new chore, dbcleanup, which can be used for routine deletion of items from the satellite database, and adds functionality for deletion of expired serial numbers.
* update offer once redemption cap has reached
* use transaction to get offer info before insert
* update offer status when redeemable capacity has reached
* fix format
* use pgutil to check constraint error
* change error message
* satellitedb/certDB: refactors of the node certificate storage DB table
The existing implementation doesn't allow storing the complete certificate chain of uplinkIDs or storagenodeIDs, so the current table is dropped and a new table will be added which addresses the storage and retrieval of certificates.
pkg/identity: fixes spelling mistakes that I missed on PR#2754
Fixes V3-1992/V3-2388
* Added batch update stats for recordAuditSuccessStatus
* Added batch update stats to recordAuditFailStatus
* added configurable batch size
* build individual update/delete statements so the statements can be batched into 1 call to the DB
* notified #config-changes channel and ran make update-satellite-config-lock
* updated tests to use batch update stats
satellite/console: add referral link logic (#2576)
* setup referral route
* referredBy
* add user id
* modify user query
* separate optional field from userInfo
* get current reward on init of satellite gui
* remove unused code
* fix format
* only apply 0 credit on registration
* only pass required information for rewards
* fix time parsing
* fix test and linter
* rename method
* add todo
* remove user referral logic
* add null check and fix format
* get current offer
* remove partnerID on CreateUser struct
* fix storj-sim user creation
* only redeem credit when there's an offer
* fix default offer configuration
* fix migration
* Add helper function for getting correct credit duration
* add comment
* only store userid into user_credit table
* add check for partner id to set correct offer type
* change free credit to use invitee credits
* remove unnecessary code
* add credit update in activateAccount
* remove unused code
* fix format
* close reader and fix front-end build
* move create credit logic into CreateUser method
* when there's no offer set, user flow shouldn't be interrupted by referral program
* add appropriate error messages
* remove unused code
* add comment
* add error class for no current offer error
* add error class for credits update
* add comment for migration
* only log secret when it's in debug level
* fix typo
* add testdata
* Update overlaycache.go
Removes one select statement; columns get filtered in the first query.
Needs to be tested against a real database to confirm this query is working and faster!
* Correct linting
reorder scans so that they fit the new sql result order
* rename pkg/linksharing to linksharing
* rename pkg/httpserver to linksharing/httpserver
* rename pkg/eestream to uplink/eestream
* rename pkg/stream to uplink/stream
* rename pkg/metainfo/kvmetainfo to uplink/metainfo/kvmetainfo
* rename pkg/auth/signing to pkg/signing
* rename pkg/storage to uplink/storage
* rename pkg/accounting to satellite/accounting
* rename pkg/audit to satellite/audit
* rename pkg/certdb to satellite/certdb
* rename pkg/discovery to satellite/discovery
* rename pkg/overlay to satellite/overlay
* rename pkg/datarepair to satellite/repair
* satellite/satellitedb: Use var block for Error
To follow the code style of the majority of the sources in the
current code base, the Error variable should be in a block.
Replacing a single var expression with a block makes the godoc more
consistent across the repository.
* satellite/satellitedb: Remove empty spaces end of line
* pkg/datarepair/repairer: Always track time for repair
Make a minor change in the worker function of the repairer so that, when
successful, it always tracks the time-for-repair metric independently of
whether the time-since-checker-queue metric can be tracked.
* storage/postgreskv: Wrap error in Get func
Wrap the returned error of the Get function as it is done when the
query doesn't return any row.
* satellite/metainfo: Move debug msg to the right place
NewStore function was writing a debug log message when the DB was
connected, however it was always writing it out even if an error
happened when getting the connection.
* pkg/datarepair/repairer: Wrap error before logging it
Wrap the error returned by process which is executed by the Run method
of the repairer service to add context to the error log message.
* pkg/datarepair/repairer: Make errors more specific in worker
Make the error messages of the "worker" method of the Service more
specific and the logged message for such errors.
* pkg/storage/repair: Improve error reporting Repair
In order to improve the error reporting of the
pkg/storage/repair.Repair method, several errors of this method and
of the functions/methods it relies on have been updated to be
wrapped into their corresponding classes.
* pkg/storage/segments: Track path param of Repair method
Track in monkit the path parameter passed to the Repair method.
* satellite/satellitedb: Wrap Error returned by Delete
Wrap the error returned by repairQueue.Delete method to enhance the
error with a class and stack and the
pkg/storage/segments.Repairer.Repair method get a more contextualized
error from it.
* Add partnerID on user creation
* added support for partner ID on create user in consoleql User
* add partner ID to api key if the user creating it has a partner ID associated with it
* updates for consoleal user and userinfo
* add RedeemRewards method
* remove redeem from reward.db
* add redeemable cap check in redeem
* rename offerCap to redeemableCap
* remove redeem test
* update error message
* fix build
* Trigger Jenkins
* use correct credit setting for redeem
* fix comment
* change create query to get redeemable_cap from offers table
* change referredBy to a pointer in user credit struct
* add default offer for offers table
* fix migration test
* Trigger Jenkins
* set the default value to be correct type
* skip soon-to-be-deleted test
* fix test data
* add orderby for ListAll
* change durations, redeemable cap to be a nullable field
* remove unnecessary code
* organize offers
* revert changes to go.mod and go.sum
* change OfferStatus enums back to original
* revert modified auto-gen files
* don't render empty row if offers is empty
* change return val of ListAll to Offers
* fix build
* add method to check for empty offer when rendering template
* fix typo
* fix lint and typos
* lean out IsEmpty
* dont use named return vals
* better clarify offer statuses
* change back order of setting offer.Status
* lint
* satellite/marketingweb: allow disabling rewards (#2392)
* implement handler for stop offer endpoint
* use proper text and fix data-target for free-credit stop modal
* add db interface and methods, add sa metainfo endpoints and svc
* add bucket metainfo svc funcs
* add sadb bucekts
* bucket list gets all buckets
* filter buckets list on macaroon restrictions
* update pb cipher suite to be enum
* add conversion funcs
* updates per comments
* bucket settings should say default
* add direction to list buckets, add tests
* fix test bucket names
* lint err
* only support forward direction
* add comments
* minor refactoring
* make sure list up to limit
* update test
* update protolock file
* fix lint
* change per PR
* add bucket metadata table in SA masterDB
* fix indentation
* update db model per CR comments
* update testdata
* add missing field on sql testdata
* fix args to testdata
* unique bucket name
* fix fkey constraint for test
* fix one too many commas
* update timestamp type
* Trigger Jenkins
* Trigger Jenkins yet again
* added satellite partner value attribution report. WIP
* WIP
* basic attribution report test completed. still a WIP
* cleanup
* fixed projectID conversion
* report display cleanup
* cleanup, added more test data
* added partnerID to query results
* fixed lint issues
* fix import order
* suggestions from PR review
* updated doc to reflect implementation
* clarification comments in the report SQL
* Changed based on PR suggestion
* More changes based on PR suggestions
* Changes based on PR suggestions
* reordered tests to make consistent with previous 2
* small comments cleanup
* More PR suggestions
* fixed lint issue and removed printf
* fixed var name
* Updates based on PR suggestions
* fixed message
* fixed test
* changes required after merge from master
* v3-2023: add project_id migration for bucket_storage_tallies and bucket_bandwidth_rollups
* added test data for migration 37
* corrected data format
* test sql update
* migrate script updates
* adding previous data
* fix ordersDB methods to take correct args
* update tally to save projectID in correct format
* update var names in splitBucket test
* changes per CR comments
* move offer out of marketing package and remove marketing package
* fix imports
* fix rename errors
* remove offer service
* change package name from offers to rewards
* fix linting
* remove unused code and use appropriate comment
Adds a migration step to pull in old reputation success / total counts into modern alpha / beta scores
If audit success count is less than 50, audit alpha will be set to 50
If uptime success count is less than 100, uptime alpha will be set to 100
This helps us deal with cases where nodes have not been audited or checked for uptime yet, in which case alpha/beta values of 0/0 would cause a node to be considered disqualified.
A node with audit alpha/beta of 50/0 will be disqualified on the 19th check
A node with uptime alpha/beta of 100/0 will be disqualified on the 44th check
This does not affect brand new nodes (nodes that were not in the database before this change). The alpha/beta values for those nodes will be set to 1/0 as before
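A back-of-envelope check of the figures above, under assumed defaults (forgetting factor 0.95, failure weight 1, DQ threshold 0.6):

    alpha, beta := 50.0, 0.0
    for n := 1; n <= 30; n++ {
        alpha = 0.95 * alpha // every audit decays both values...
        beta = 0.95*beta + 1 // ...and each failure adds weight to beta
        if alpha/(alpha+beta) < 0.6 {
            fmt.Println("disqualified on failed check", n) // ~19-20 with these assumptions
            break
        }
    }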
* add dbx queries
* add migration file
* start service
* Add TotalReferredCountByUserId and availableCreditsByUserID
* implement UserCredits interface and UserCredit struct type
* add UserCredits into consoledb
* add setupData helper function
* add test for update
* update lock file
* fix lint error
* add invalidUserCredits tests
* rename method
* adds comments
* add checks for errors in setupData
* change update method to only execute one query per request
* rename variable
* should return a signal from Update method if the charge is not fully complete
* changes for readability
* prevent sql injection
* rename
* improve readability
* add counters for nodes that have/have not been seen in the past 24 hours/week
* add additional uptime counters
* add monkit stats for containment mode
* satellite/satellitedb: Alter nodes disqualification column
Change the type of the 'disqualification' column of the nodes table from
boolean to timestamp.
* overlay/cache: Change Disqualified field type
Change the Disqualified field type of the NodeDossier struct from bool
to time.Time to match the disqualified type used by the DB layer.
* satellite/satellitedb: Update queries using disqualified
Update the queries which use the disqualified column, because the column
type has changed from boolean to nullable timestamp.
* docs/design: Update disqualification due impl changes
Update the disqualification design document to contain the architectural
change required to be able to restore unfair disqualified nodes in case
of an unexpected cause (bug, mistake, hard network disconnection, etc.).
* add user credits table
* change primary key, change type for credit_type, and change relation kind of foreign keys from cascade to restrict
* modify table and query methods
* modify schema
* add dbx queries
* add migration file
* add orderby to read available credit entries
* change Offers interface to separate Update method into two.
* Implement Finish and Redeem method to avoid concurrent updates
* Implement FinishOffer and RedeemOffer service methods
* add tests
* fix linting issue
* add tests for checking Finish and Redeem's results to work as expected
* fix linting error
* adds model to satellite dbx
* cleans up model spacing
* generated golang from dbx
* added migration steps
* Added testdata
* changed node_id -> bucket_id
* adds -- NEW DATA -- to testdata
* more testdata changes
* adds -- NEW DATA -- line
* dbx makes the table plural
* missed a singular value_attribution
* restart jenkins
* Update satellitedb.dbx
* adjust to PR comments
* autogenerated dbx models
* restart jenkins
* init marketing service
Fix linting error
Create offerdb implementation
Create offers service
Add update method
Create offer table and migration
Fix linting error
fix conflicts
Insert new data
Change duration to have clear indication to be based on days
add error wrapper
Change from using uuid to int for id field
* Create Marketing service
* make error variable name more readable
* add condition in update service method to check offer status
* generate lock file
Change get to listAllOffers
* Add method for getting current offer
wip
* add check for expires_at in update method
* Fix conflicts
* add copyright header
* Fix linting error
* only allow update to active offers
* add isDefault argument to GetCurrent
* Update lock file
* add migration file
* finish migration for adding credit_in_cents for both award and invitee
* save 100 years as expiration date for default offers
* create crud test for offers
* add GetCurrent test
* modify doc
* Fix GetCurrent to work with default offer
* fix linting issue
* add more tests and address feedbacks
* fix migration file
* add type column back to match with mockup design
* add type column back to match with mockup design
* move doc changes to new pr
* add comments
* change GetCurrent to GetCurrentByType
* fix typo
* set up voucher service skeleton, basic test
* add VetNode db method
* basic test for VetNode
* encode and sign voucher functions
* fill out and sign vouchers
* test pass/fail voucher request
* match EncodeVoucher to other Encode functions
* added scopelint and corrected issues found
* corrected scopelint issue
* made updates based on Ivan's suggestions
Most were around naming conventions
Some were false positives, but I kept them since the test.Run could eventually be changed to run in parallel, which could cause a bug
Others were false positives. Added // nolint: scopelint
* first round cleanup based on go-critic
* more issues resolved for ifelsechain and unlambda checks
* updated from master and gocritic found a new ifElseChain issue
* disable appendAssign. it reports false positives
* re-enabled go-critic appendAssign and disabled lint check at code line level
* fixed go-critic lint error
* fixed // nolint to add gocritic specifically
What: Changes to support a custom usage limit per project. With this implementation, the project usage limit is taken from a configuration flag by default. If the project DB field usage_limit is set to a value larger than 0, it becomes the custom usage limit and is used to verify whether the limit was exceeded.
What's changed:
usage_limit (bigint) field added to projects table (with migration)
things related to project usage moved from metainfo endpoint to project usage type
accounting.ProjectAccounting extended with GetProjectUsageLimits() method
Why: We need to have different usage limits per project. https://storjlabs.atlassian.net/browse/V3-1814
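A minimal sketch of the limit resolution described above (function name hypothetical):

    func effectiveUsageLimit(projectUsageLimit, defaultLimit int64) int64 {
        if projectUsageLimit > 0 {
            return projectUsageLimit // custom limit from projects.usage_limit
        }
        return defaultLimit // from the configuration flag
    }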
* add last_ip field to dbx model node, generate dbx
* add last_ip to node proto, generate pb
* migrate
* resolve address in transport.DialNode, update lastIp in cache.UpdateAddress
* use net.SplitHostPort to isolate host address from port
* define DistinctIPs flag
* add test for GetIP
* select last_ip when querying for nodes
* if distinctIPs flag == true, query for nodes with distinct IPs
* some basic tests
* change last_ip to field 14 in proto
* remove comments
* check err
* change distinctIPs to distinctIP
* exclude IPs from newNodes in query for reputable nodes
* add index on last_ip
* only add to excludedIPs if flag is true
* test half new nodes returns distinct IPs
* fix alignment
* add test
* rework ip filter query, add retry logic, add switch for database driver
* add retry to SelectNewNodes
* change discovery intervals so IPs don't get overwritten
* remove TestGetIP
* edit updating node stats in test
* split exclude into nodeIDs and IPs
* separate non-distinct IP query into other function
* trigger checks
* remove else block
* add flags to storj-sim for SA dbs
* add schema to postgres
* add createschema with parse to sa
* add metainfo db postgres support
* add kv default as bolt
* add debug log to see db source
* add env var for postgres to test-sim.sh
* fix lint errs
* dynamically add postgres to args
* add postgres to integration tests
* add sqlite and postgres integration jenkins
* fix db name
* merge integration tests into one step
* test integration tests w/psql
* try using different schema
* debug failure
* use correct host for running storj-sim
* rm sqlite integration
* add back integration
* Initial Webserver Draft for Version Controlling
* Rename type to avoid confusion
* Move Function Calls into Version Package
* Fix Linting and Language Typos
* Fix Linting and Spelling Mistakes
* Include Copyright
* Include Copyright
* Adjust Version-Control Server to return list of Versions
* Linting
* Improve Request Handling and Readability
* Add Configuration File Option
Add Systemd Service file
* Add Logging to File
* Smaller Changes
* Add Semantic Versioning and refuses outdated Software from Startup (#1612)
* implements internal Semantic Version library
* adds version logging + reporting to process
* Advance SemVer struct for easier handling
* Add Accepted Version Store
* Fix Function
* Restructure
* Type Conversion
* Handle Version String properly
* Add Note about array index
* Set temporary Default Version
* Add Copyright
* Adding Version to Dashboard
* Adding Version Info Log
* Renaming and adding CheckerProcess
* Iteration Sync
* Iteration V2
* linting
* made LogAndReportVersion a go routine
* Refactor to Go Routine
* Add Context to Go Routine and allow Operation if Lookup to Control Server fails
* Handle Unmarshal properly
* Linting
* Relocate Version Checks
* Relocating Version Check and specified default Version for now
* Linting Error Prevention
* Refuse Startup on outdated Version
* Add Startup Check Function
* Straighten Logging
* Dont force Shutdown if --dev flag is set
* Create full Service/Peer Structure for ControlServer
* Linting
* Straighten Naming
* Finish VersionControl Service Layout
* Improve Error Handling
* Change Listening Address
* Move Checker Function
* Remove VersionControl Peer
* Linting
* Linting
* Create VersionClient Service
* Renaming
* Add Version Client to Peer Definitions
* Linting and Renaming
* Linting
* Remove Transport Checks for now
* Move to Client Side Flag
* Remove check
* Linting
* Transport Client Version Intro
* Adding Version Client to Transport Client
* Add missing parameter
* Adding Version Check, to set Allowed = true
* Set Default to true, testing
* Restructuring Code
* Uplink Changes
* Add more proper Defaults
* Renaming of Version struct
* Dont pass Service use Pointer
* Set Defaults for Versioning Checks
* Put HTTP Server in go routine
* Add Versioncontrol to Storj-Sim
* Testplanet Fixes
* Linting
* Add Error Handling and new Server Struct
* Move Lock slightly
* Reduce Race Potentials
* Remove unnecessary files
* Linting
* Add Proper Transport Handling
* small fixes
* add fence for allowed check
* Add Startup Version Check and Service Naming
* make errormessage private
* Add Comments about VersionedClient
* Linting
* Remove Checks that refuse outgoing connections
* Remove release cmd
* Add Release Script
* Linting
* Update to use correct Values
* Change Timestamp handling
* Adding Protobuf changes back in
* Adding SatelliteDB Changes and adding Storj Node Version to PB
* Add Migration Table
* Add Default Stats for Creation
* Move to BigInt
* Proper SQL Migration
* Ensure minimum Version is passed to the node selection
* Linting...
* Remove VersionedClient and adjust smaller changes from prior merge
* Linting
* Fix PB Message Handling and Query for Node Selection
* some future-proofing type changes
Change-Id: I3cb5018dcccdbc9739fe004d859065992720caaf
* fix a compiler error
Change-Id: If66bb92d8b98e31cd618ecec9c6448ab9b037fa5
* Comment on Constant for Overlay
* Remove NOT NULL and add epoch call as function
* add versions to bootstrap and satellites
Change-Id: I436944589ea5f21600cdd997742a84fe0b16e47b
* Change Update Migration
* Fix DB Migration
* Increase Timeout temporarily, to see whats going on
* Remove unnecessary const and vars
Cleanup Function calls from deprecated NodeVersion struct
* Updated Protobuf, removed deprecated Code from Inspector
* Implement NodeVersion into InfoResponse
* Regenerated locked.go
* Linting
* Fix Tests
* Remove unnecessary constant
* Update Function and Flag Description
* Remove Empty Stat Creation
* return properly with error
* Remove unnecessary struct
* simplify migration step
* Update Inspector to return Version Info
* Update local Endpoint Version Handling
* Reset Travis Timeout
* Add Default for CommitHash
* single quotes
* update ProjectStorageTotals func to get all records for project
* fix var name
* update test to catch bug
* fix spelling
* modify query so we only have to make 1
* reorg uplink cmd files for consistency
* init implementation of usage limiting
* Revert "reorg uplink cmd files for consistency"
This reverts commit 91ced7639bf36fc8af1db237b01e233ca92f1890.
* add changes per CR comments
* fix custom query to use rebind
* updates per convo about what to limit on
* changes per comments
* fix syntax and comments
* add integration test, add db methods for test
* update migration, add rebind to query
* update testdata for psql
* remove unneeded drop index statement
* fix migrations, fix calculate usage limit
* fix comment
* add audit test back
* change methods to use bucketName/projectID, fix tests
* add changes per CR comments
* add test for uplink upload and err msg
* changes per CR comments
* check get/put limit separately
* Added retry logic and tests for handling if crypto/rand fails.
* Added retry logic and tests for handling if crypto/rand fails.
* Fixing linting error.
* Removing use of `crypto/rand` for use of `math/rand`.
* Changing code comment about why we ignore this error.
* add MetadataSize to stats
* add logic for accumulating bucket stats in calculateAtRestData
* rename stats to BucketTally, move to accounting package
* define method on accountingDB for inserting bucketTallies
* insert bucketTallies into bucket_storage_tally table
This changes semantics slightly! With this change,
CreateEntryIfNotExists() will do a cache Update with every node passed
in, whether it exists or not. Update() already does a race-free upsert
operation, so that change removes the problematic race in
CreateEntryIfNotExists(). As far as I can tell, this semantic change
doesn't break any expectations of callers, and shouldn't affect
performance in a significant way, as we already have an awful lot of
round-trips to the db either way. But if I've misunderstood the
intention of the method, someone ought to catch it during review.
* go gen
* undo changes to bucket usage
* update locked
* spacing
* moves changes to migration v11
* minor changes to fix lint and test err
* change sql to fix errs
This change adds satellite endpoint for receiving OrderLimits sent by storage node.
Change includes:
* wire up orders sender in storage node (also in testplanet)
* saving serial number for OrderLimit in serial_numbers table
* satellite endpoint for receiving, verifying and storing OrderLimit and Order serial number
* initial implementation for Orders DB
* basic test for sending orders to satellite
* Move from Unique to Index
* Remove Index
* Make some more Indexes Unique and adjust migration
* Fix Migration Statements
* Fix Typo
* Fix Migration of older Table
* Exchange DROP statement
* Remove "if not exists"
* Revert Change in old Migration
* define irreparable inspector protobuf
* add IrreparableDB method GetLimited
* fill out irreparable inspector API
* add IrreparableInspector server to satellite, fix small error
* refactor IrreparableDB to use pb.IrreparableSegment instead of irreparable.RemoteSegmentInfo
* Add Application Name to PostgreSQL Connection String,
if not existing already
* Relocate Check
* Handle Application Name only for Postgres
* URL Encode
* Relocate Function into pgutil
* Fix Error, when ApplicationName is set
* Rename parameter
* Rename Check Value
* Straightline Comment
* Remove fmt Dependency
* Fix Linting Recommendation
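For the application_name handling above, a minimal sketch for URL-style connection strings (function name hypothetical; uses net/url):

    func ensureApplicationName(connstr, app string) (string, error) {
        u, err := url.Parse(connstr)
        if err != nil {
            return "", err
        }
        q := u.Query()
        if q.Get("application_name") == "" {
            q.Set("application_name", app)
            u.RawQuery = q.Encode() // takes care of the URL encoding
        }
        return u.String(), nil
    }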
* Wiring up DumpNodes response for Inspector
* Finalize everything and test that it works
* Get Count and DumpNodes working for Overlay Cache
* WIP updating payment rollup to check statDB instead of overlay
* First pass at updating statDB to take wallet and email
* Passing tests
* use pb.NodeOperator instead of Meta struct
* remove TODO
* revert go.mod
* Get SQL migration working correctly
* Changes Meta to Operator in NodeStats struct
* Adds update operator logic for statDB
* Fix db migrate tests - added v5 snapshot
* User friendly msg for missing snapshot version
* Passing tests
* Change node update to happen in discovery instead of in overlay
* Fix logic and update function calls
* Update comment on UpdateOperator interface method
* Update name of parameter
* Change type of argument to UpdateOperator
* Updates statDB tests
this change removes the cryptopasta dependency.
a couple possible sources of problem with this change:
* the encoding used for ECDSA signatures on SignedMessage has changed.
the encoding employed by cryptopasta was workable, but not the same
as the encoding used for such signatures in the rest of the world
(most particularly, on ECDSA signatures in X.509 certificates). I
think we'll be best served by using one ECDSA signature encoding from
here on, but if we need to use the old encoding for backwards
compatibility with existing nodes, that can be arranged.
* since there's already a breaking change in SignedMessage, I changed
it to send and receive public keys in raw PKIX format, instead of
PEM. PEM just adds unhelpful overhead for this case.
* pass UTC times to db so that sqlite3 understands
this fixes the pkg/accounting/tally tests, at least for me.
(credit to Bill Thorp)
* also fix this one
* Divide uplink and gateway params set
* attempt to fix docker
* attempt to fix all in one
* test
* more reorganization
* fix compilation error
* fix imports order
* fix dependency
* rename structs
* keep minio params for now
* review comments
* remove manual flag check
* got tests passed
* wire up paginate function for cache node retrieval
* Add tests for paginate but they're failing
* fix the test arguments
* Updates paginate function to return more variables
* Updates
* Some test and logic tweaks
* improves config handling in discovery
* adds refresh offset to discovery struct
* add 45 day expiration to PBAs
* add expiration field to relevant areas, DeleteExpired placeholder
* reject expired BWAs
* test for expired BWAs
* add BwExpiration config value
* fix - Satellite crashing on receiving a manipulated bandwidthagreement
* provider.PeerIdentityFromContext called twice. Remove one
* add storage node ID to serial number
* remove serialNum query and transaction
* add uuid to GeneratePayerBandwidthAllocation for testing
* enable expected failure on duplicate serialnum cases
* Revert "enable expected failure on duplicate serialnum cases"
This reverts commit 5948f43ed1741c280f0bb34a86c1c490365417bc.
* enable expected failure on duplicate serialnum cases
Update statdb args/return values to minimize structs
Simplify statdb.Update() to update all stats instead of an arbitrary subset determined by flags
Remove CreateIfNotExists logic from statdb.Update()
Simplify audit code structure
* initial changes to migrate statdb to masterdb framework
* statdb refactor compiles
* added TestCreateDoesNotExist testcase
* Initial port of statdb to masterdb framework working
* refactored statdb proto def to pkg/statdb
* removed statdb/proto folder
* moved pb.Node to storj.NodeID
* CreateEntryIfNotExistsRequest moved pb.Node to storj.NodeID
* moved the fields from pb.Node to statdb.UpdateRequest
ported TestUpdateExists, TestUpdateUptimeExists, TestUpdateAuditSuccessExists, TestUpdateBatchExists
* Merge bwagreement db into satellite master db
* adjust to recent tally changes
* linter problems
* linter problems
* returning db structs in more optimal way
* added pointer for assignment
* error message changed
* better param message