When an uplink requests an upload or download from the satellite, we track the
allocated bandwidth twice. The value in bucket_bandwidth_rollups is used
for project limits, but the value in storagenode_bandwidth_rollups is not
used at all. We can improve performance by removing it; uplinks
will get a faster response from the satellite.
Change-Id: Icccd41f94107ef34668f30f99bf5f728c384b07e
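A minimal sketch of the resulting write path described above, with assumed method and type names (only the bucket rollup write survives):

```go
package orders

import (
	"context"
	"time"
)

// DB is a narrowed view of the orders database; the method mirrors the
// bucket-side rollup writer and its exact signature is an assumption.
type DB interface {
	UpdateBucketBandwidthAllocation(ctx context.Context, projectID [16]byte, bucketName []byte, amount int64, intervalStart time.Time) error
}

// Endpoint handles order-limit creation for uploads and downloads.
type Endpoint struct{ db DB }

// recordAllocation writes the allocated bandwidth once. The bucket rollup
// stays because project limit checks read it; the storagenode_bandwidth_rollups
// write that used to happen alongside it is dropped, since its allocated
// value was never read, saving one DB round trip per request.
func (e *Endpoint) recordAllocation(ctx context.Context, projectID [16]byte, bucketName []byte, amount int64) error {
	return e.db.UpdateBucketBandwidthAllocation(ctx, projectID, bucketName, amount, time.Now())
}
```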
* satellitedb/certDB: refactors of the node certificate storage DB table
The existing implementation doesn't allow storing the complete certificate chain of uplinkIDs or storagenodeIDs, so the current table is dropped and a new table is added which addresses the storage and retrieval of certificate chains
pkg/identity: fixes spelling mistakes that I missed in PR#2754
Fixes V3-1992/V3-2388
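A hedged sketch of what the replacement storage could look like, assuming a plain bytes column for the DER-encoded chain; the table and method names are illustrative, not the actual dbx model:

```go
package certdb

import (
	"context"
	"database/sql"
)

// certDB stores complete certificate chains keyed by node ID, which the
// previous single-certificate column could not represent.
type certDB struct{ db *sql.DB }

// Set upserts the full encoded chain (leaf, CA, and any intermediates,
// concatenated as DER bytes) for an uplink or storage node ID.
func (c *certDB) Set(ctx context.Context, nodeID, chain []byte) error {
	_, err := c.db.ExecContext(ctx,
		`INSERT INTO peer_identities (node_id, chain) VALUES ($1, $2)
		 ON CONFLICT (node_id) DO UPDATE SET chain = EXCLUDED.chain`,
		nodeID, chain)
	return err
}

// Get retrieves the stored chain for a node ID.
func (c *certDB) Get(ctx context.Context, nodeID []byte) (chain []byte, err error) {
	err = c.db.QueryRowContext(ctx,
		`SELECT chain FROM peer_identities WHERE node_id = $1`, nodeID).Scan(&chain)
	return chain, err
}
```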
* add default offer for offers table
* fix migration test
* Trigger Jenkins
* set the default value to be correct type
* skip test that will soon be deleted
* fix test data
* add orderby for ListAll
* change durations and redeemable cap to be nullable fields (see the sketch after this list)
* remove unnecessary code
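A sketch of the nullable-column change mentioned above, assuming a Postgres migration step; the exact column names are assumptions:

```go
package migrate

import "database/sql"

// makeOfferColumnsNullable relaxes the offers table so the default offer
// can omit durations and the redeemable cap (column names are assumptions).
func makeOfferColumnsNullable(db *sql.DB) error {
	_, err := db.Exec(`
		ALTER TABLE offers ALTER COLUMN award_credit_duration_days DROP NOT NULL;
		ALTER TABLE offers ALTER COLUMN invitee_credit_duration_days DROP NOT NULL;
		ALTER TABLE offers ALTER COLUMN redeemable_cap DROP NOT NULL;
	`)
	return err
}
```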
* add bucket metadata table in SA masterDB
* fix indentation
* update db model per CR comments
* update testdata
* add missing field on sql testdata
* fix args to testdata
* unique bucket name
* fix fkey constraint for test
* fix one too many commas
* update timestamp type
* Trigger Jenkins
* Trigger Jenkins yet again
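A sketch of the bucket metadata table the commits above build up, assuming Postgres; the columns are a plausible subset, not the final schema. Note the per-project unique bucket name and the timestamp type fix from the list:

```go
package migrate

import "database/sql"

// createBucketMetadata adds a bucket metadata table to the satellite
// masterDB, with bucket names unique per project.
func createBucketMetadata(db *sql.DB) error {
	_, err := db.Exec(`
		CREATE TABLE bucket_metainfos (
			id         bytea NOT NULL PRIMARY KEY,
			project_id bytea NOT NULL REFERENCES projects (id),
			name       bytea NOT NULL,
			created_at timestamp with time zone NOT NULL,
			UNIQUE (project_id, name)
		);
	`)
	return err
}
```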
* satellite/satellitedb: Alter nodes disqualification column
Change the type of the 'disqualification' column of the nodes table from
boolean to timestamp.
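The shape of the change, as a hedged Postgres sketch (a real migration would carry existing values over rather than simply drop the column):

```go
package migrate

import "database/sql"

// alterNodesDisqualified swaps the boolean for a nullable timestamp: NULL
// means the node is in good standing, a non-NULL value records when it was
// disqualified.
func alterNodesDisqualified(db *sql.DB) error {
	_, err := db.Exec(`
		ALTER TABLE nodes DROP COLUMN disqualified;
		ALTER TABLE nodes ADD COLUMN disqualified timestamp with time zone;
	`)
	return err
}
```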
* overlay/cache: Change Disqualified field type
Change the type of the Disqualified field in the NodeDossier struct from bool
to time.Time to match the disqualified type used by the DB layer.
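In Go, a pointer is one natural mapping for the nullable column; a minimal sketch, trimmed to the relevant field (the pointer form is an assumption on top of the time.Time the commit names):

```go
package overlay

import "time"

// NodeDossier holds cached node data; only the changed field is shown.
// A nil Disqualified matches the DB's NULL (node in good standing), while
// a set value records when disqualification happened.
type NodeDossier struct {
	Disqualified *time.Time
}
```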
* satellite/satellitedb: Update queries uses disqualified
Update the queries which use the disqualified column, since the column
type has changed from boolean to nullable timestamp.
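A sketch of the query-side change: the old boolean predicate becomes a NULL check on the nullable timestamp (table and column names per the commits; the query itself is illustrative):

```go
package overlay

import (
	"context"
	"database/sql"
)

// selectNotDisqualified replaces the old `WHERE NOT disqualified` boolean
// predicate with a NULL check against the nullable timestamp.
func selectNotDisqualified(ctx context.Context, db *sql.DB) (*sql.Rows, error) {
	return db.QueryContext(ctx, `SELECT id FROM nodes WHERE disqualified IS NULL`)
}
```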
* docs/design: Update disqualification due to impl changes
Update the disqualification design document to contain the architectural
change required to be able to restore unfairly disqualified nodes in case
of an unexpected cause (bug, mistake, hard network disconnection, etc.).
* add user credits table
* change primary key, change type for credit_type, and change relation kind of foreign keys from cascade to restrict
* modify table and query methods
* modify schema
* add dbx queries
* add migration file
* add orderby to read available credit entries
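A sketch of the table the commits above describe, assuming Postgres; per the list, the foreign keys restrict rather than cascade, and reads of available credits carry an ORDER BY. Exact columns are assumptions:

```go
package migrate

import "database/sql"

// createUserCredits adds the user credits table; deletes on referenced
// rows are restricted rather than cascaded.
func createUserCredits(db *sql.DB) error {
	_, err := db.Exec(`
		CREATE TABLE user_credits (
			id         serial NOT NULL PRIMARY KEY,
			user_id    bytea NOT NULL REFERENCES users (id) ON DELETE RESTRICT,
			offer_id   integer NOT NULL REFERENCES offers (id) ON DELETE RESTRICT,
			credits_earned_in_cents integer NOT NULL,
			expires_at timestamp with time zone NOT NULL,
			created_at timestamp with time zone NOT NULL
		);
		-- reads of available credits order by expiration, e.g.:
		-- SELECT ... FROM user_credits WHERE expires_at > now() ORDER BY expires_at;
	`)
	return err
}
```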
* adds model to satellite dbx
* cleans up model spacing
* generated golang from dbx
* added migration steps
* Added testdata
* changed node_id -> bucket_id
* adds -- NEW DATA -- to testdata
* more testdata changes
* adds -- NEW DATA -- line
* dbx makes the table plural
* missed a singular value_attribution
* restart jenkins
* Update satellitedb.dbx
* adjust to PR comments
* autogenerated dbx models
* restart jenkins
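A sketch of the resulting table, assuming Postgres: dbx pluralizes the model name (value_attribution becomes value_attributions) and the key now uses bucket_id rather than the earlier node_id, per the commits above. Columns are assumptions:

```go
package migrate

import "database/sql"

// createValueAttributions adds the value attribution table keyed by
// project and bucket.
func createValueAttributions(db *sql.DB) error {
	_, err := db.Exec(`
		CREATE TABLE value_attributions (
			project_id   bytea NOT NULL,
			bucket_id    bytea NOT NULL,
			partner_id   bytea NOT NULL,
			last_updated timestamp NOT NULL,
			PRIMARY KEY (project_id, bucket_id)
		);
	`)
	return err
}
```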
* init marketing service
Fix linting error
Create offerdb implementation
Create offers service
Add update method
Create offer table and migration
Fix linting error
fix conflicts
Insert new data
Change duration to clearly indicate it is based on days
add error wrapper
Change from using uuid to int for id field
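A hedged sketch of the offer model those commits converge on: an int ID instead of a UUID, durations carried in days to make the unit explicit, and nullable limits as pointers. Field names are assumptions:

```go
package marketing

import "time"

// OfferStatus and OfferType back the status checks and the type column
// mentioned in the commits.
type (
	OfferStatus int
	OfferType   int
)

const (
	Active OfferStatus = iota
	Done
)

// Offer models a marketing offer.
type Offer struct {
	ID                        int // int rather than UUID, per the commits
	Name                      string
	Description               string
	AwardCreditInCents        int
	InviteeCreditInCents      int
	AwardCreditDurationDays   *int // nullable: default offers may omit it
	InviteeCreditDurationDays *int
	RedeemableCap             *int
	ExpiresAt                 time.Time
	CreatedAt                 time.Time
	Status                    OfferStatus
	Type                      OfferType
}
```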
* Create Marketing service
* make error variable name more readable
* add condition in update service method to check offer status
* generate lock file
Change get to listAllOffers
* Add method for getting current offer
wip
* add check for expires_at in update method
* Fix conflicts
* add copyright header
* Fix linting error
* only allow update to active offers
* add isDefault argument to GetCurrent
* Update lock file
* add migration file
* finish migrate for adding credit_in_cents for both award and invitee
* save 100 years as expiration date for default offers
* create crud test for offers
* add GetCurrent test
* modify doc
* Fix GetCurrent to work with default offer
* fix linting issue
* add more tests and address feedback
* fix migration file
* add type column back to match with mockup design
* move doc changes to new pr
* add comments
* change GetCurrent to GetCurrentByType
* fix typo
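Building on the Offer sketch above, a hedged sketch of the service-side guards the commits describe: updates are limited to active offers, expires_at may not be in the past, and lookups go through GetCurrentByType (default offers are stored with a far-future expiration, roughly 100 years, so they act as the fallback). Method names beyond GetCurrentByType are assumptions:

```go
package marketing

import (
	"context"
	"errors"
	"time"
)

// DB is the narrowed offers store used by the service.
type DB interface {
	GetCurrentByType(ctx context.Context, t OfferType) (*Offer, error)
	Update(ctx context.Context, o *Offer) error
}

// Service exposes offer operations.
type Service struct{ offers DB }

// UpdateOffer only allows updates to active, unexpired offers.
func (s *Service) UpdateOffer(ctx context.Context, o *Offer) error {
	if o.Status != Active {
		return errors.New("marketing: only active offers can be updated")
	}
	if o.ExpiresAt.Before(time.Now()) {
		return errors.New("marketing: expires_at cannot be in the past")
	}
	return s.offers.Update(ctx, o)
}

// GetCurrentByType returns the live offer for a type; because default
// offers carry a ~100-year expiration, one is always found when no
// time-limited offer is active.
func (s *Service) GetCurrentByType(ctx context.Context, t OfferType) (*Offer, error) {
	return s.offers.GetCurrentByType(ctx, t)
}
```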
What: Changes to support a custom usage limit per project. With this implementation the project usage limit is taken from a configuration flag by default. If the project DB field usage_limit is set to a value larger than 0, it becomes the custom usage limit and is used to verify whether the limit was exceeded (see the sketch below).
What's changed:
usage_limit (bigint) field added to projects table (with migration)
logic related to project usage moved from the metainfo endpoint to a project usage type
accounting.ProjectAccounting extended with GetProjectUsageLimits() method
Why: We need to have different usage limits per project. https://storjlabs.atlassian.net/browse/V3-1814
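A minimal sketch of the limit resolution described above, with assumed names and signatures:

```go
package accounting

import "context"

// Config carries the satellite-wide default usage limit taken from a
// configuration flag.
type Config struct {
	DefaultUsageLimit int64 // bytes
}

// ProjectAccounting is extended with GetProjectUsageLimits per the commit;
// the exact signature shown here is an assumption.
type ProjectAccounting interface {
	GetProjectUsageLimits(ctx context.Context, projectID [16]byte) (int64, error)
}

// usageLimit resolves the effective limit: a projects.usage_limit value
// larger than 0 is a custom limit and overrides the configured default.
func usageLimit(ctx context.Context, db ProjectAccounting, cfg Config, projectID [16]byte) (int64, error) {
	limit, err := db.GetProjectUsageLimits(ctx, projectID)
	if err != nil {
		return 0, err
	}
	if limit > 0 {
		return limit, nil // custom per-project limit from the DB
	}
	return cfg.DefaultUsageLimit, nil // fall back to the configuration flag
}
```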