Finished implementing queries for both bandwidth and storage using pgx.Batch.
Fixed CSP styling issue.
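For reference, a minimal sketch of the pgx.Batch pattern; the table and
column names below are illustrative placeholders, not the exact
production queries:

    package chartqueries

    import (
        "context"

        "github.com/jackc/pgx/v4"
    )

    // fetchChartData shows both chart queries sharing one round trip via
    // pgx.Batch. Table and column names are illustrative placeholders.
    func fetchChartData(ctx context.Context, conn *pgx.Conn, projectID []byte) error {
        batch := &pgx.Batch{}
        batch.Queue(`SELECT interval_start, settled FROM bucket_bandwidth_rollups WHERE project_id = $1`, projectID)
        batch.Queue(`SELECT interval_start, total_bytes FROM bucket_storage_tallies WHERE project_id = $1`, projectID)

        results := conn.SendBatch(ctx, batch)
        defer func() { _ = results.Close() }()

        // Results come back in the order the queries were queued.
        for i := 0; i < 2; i++ {
            rows, err := results.Query()
            if err != nil {
                return err
            }
            // ... scan rows into chart points ...
            rows.Close()
        }
        return nil
    }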
Change-Id: I5f9e10abe8096be3115b4e1f6ed3b13f1e7232df
There is a sev-2 issue to add more browser caching.
In this PR, object map and object preview are fetched via signed requests
with non-public credentials, using the AWS SignatureV4 package.
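One way to produce such a signed request with the aws-sdk-go-v2 signer
(a sketch; the service, region, and credentials below are placeholders,
not the values used in the actual change):

    package linksharing

    import (
        "context"
        "net/http"
        "time"

        "github.com/aws/aws-sdk-go-v2/aws"
        v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
    )

    // signRequest signs a bodyless GET request with SignatureV4 using
    // non-public credentials. Service and region are placeholders.
    func signRequest(ctx context.Context, creds aws.Credentials, url string) (*http.Request, error) {
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return nil, err
        }

        // SHA-256 of an empty payload, required by SigV4 for bodyless requests.
        const emptyPayloadHash = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

        signer := v4.NewSigner()
        if err := signer.SignHTTP(ctx, creds, req, emptyPayloadHash, "s3", "us-east-1", time.Now()); err != nil {
            return nil, err
        }
        return req, nil
    }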
Change-Id: Ib5013fa6d6af3faa97eed5168c11a13f9629cd87
Before this change we were returning the full DB error message, which can
be very confusing for the end user. This change translates the error
message into a more user-friendly version and also fixes the DRPC error
status code.
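The shape of the fix, as a sketch (the concrete error classes, messages,
and status codes in the real change may differ):

    package metainfo

    import (
        "errors"

        "storj.io/common/rpc/rpcstatus"
    )

    // errNotFound stands in for the concrete DB error class checked in the
    // real change; it is only a placeholder for this sketch.
    var errNotFound = errors.New("not found")

    // convertError translates an internal DB error into a user-friendly
    // message with the proper DRPC status code instead of leaking the raw
    // database message to the client.
    func convertError(err error) error {
        switch {
        case err == nil:
            return nil
        case errors.Is(err, errNotFound):
            return rpcstatus.Error(rpcstatus.NotFound, "object not found")
        default:
            return rpcstatus.Error(rpcstatus.Internal, "internal error")
        }
    }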
Fixes https://github.com/storj/team-metainfo/issues/76
Change-Id: I29b06ab4ba50a0d14db7a822a2906d95d65ab524
We don't have any object version other than 1, so at the moment this
method is not needed. Using GetObjectExactVersion should also be slightly
more performant.
Change-Id: I78235d8ae22594cc1d6345dabcc915f41cd7797b
We already split the main code base; now we need to split the tests to
reflect the new file structure (bucket/object/segment/other).
Fixes https://github.com/storj/team-metainfo/issues/12
Change-Id: Ica1054c4fc7df764483b03f204b4beba094df8e1
We have two migrations here, in fact. The first is for existing users:
we need to check whether each is a paying user (paid_tier) and set 1M
for them and 150K for the others.
The second migration sets limits for projects depending on the owner.
If the owner is a paying user (paid_tier), the project should have a
1M limit, otherwise 150K. To make this migration faster, the projects
table segment_limit was initially set to 1M by default. The migration
then selects all paying users and sets a 150K limit on every project
whose owner is not in that set.
Initially we were concerned whether that query would be quick enough to
execute during deployment, but after investigation the CRDB team
confirmed that it should take seconds for our DBs.
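Roughly, the two steps boil down to the following updates (a sketch using
database/sql; the table and column names are assumptions based on this
description, not the literal migration):

    package migrations

    import (
        "context"
        "database/sql"
    )

    // applySegmentLimits sketches the two migration steps described above.
    // Table and column names are assumptions, not the literal migration.
    func applySegmentLimits(ctx context.Context, db *sql.DB) error {
        // Existing users: 1M segments for paying users, 150K for the rest.
        _, err := db.ExecContext(ctx, `
            UPDATE users
            SET segment_limit = CASE WHEN paid_tier THEN 1000000 ELSE 150000 END`)
        if err != nil {
            return err
        }

        // Projects default to 1M, so only projects owned by non-paying
        // users need to be lowered to 150K.
        _, err = db.ExecContext(ctx, `
            UPDATE projects
            SET segment_limit = 150000
            WHERE owner_id NOT IN (SELECT id FROM users WHERE paid_tier = true)`)
        return err
    }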
Fixes https://github.com/storj/team-metainfo/issues/70
Change-Id: I8be06e9f949b68b993e043cc15525e8483bf49ea
Existing tests check errors by comparing error messages. We plan to
expose storage/segment limit errors as a public uplink API, but before
that happens we need to update the tests to handle both cases, since the
new uplink error will have a slightly different message than the
currently tested error.
https://github.com/storj/uplink/issues/77
Change-Id: Id2706323c60d050d96752e66e859d4ec051a69b9
Implemented the endpoint and query to get bandwidth chart data for the
new project dashboard.
Connected the backend with the frontend.
Storage chart data is mocked for now.
Change-Id: Ib24d28614dc74bcc31b81ee3b8aa68b9898fa87b
This is part of a change to split the metainfo endpoint into smaller
files, grouped by bucket/object/segment/other requests.
Tests will be split in a separate set of changes.
Updates https://github.com/storj/team-metainfo/issues/12
Change-Id: I6c691e4d0e192fe3ad7974d2d0ab5ced0d272f3c
For better performance we should use AOST (AS OF SYSTEM TIME) when
fetching data where being fully up to date is not crucial. We don't
care about small differences for the segment limit calculation.
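For example, on CockroachDB a slightly stale read can be requested with
an AS OF SYSTEM TIME clause (table and column names below are
placeholders):

    package accounting

    // segmentUsageQuery sketches an AOST query: the "AS OF SYSTEM TIME"
    // clause allows a stale follower read, which is fine for the segment
    // limit calculation. Table and column names are placeholders.
    const segmentUsageQuery = `
        SELECT COALESCE(SUM(segment_count), 0)
        FROM objects AS OF SYSTEM TIME '-10s'
        WHERE project_id = $1`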
Change-Id: I9b2d4f2bd15ebc9d1c46bc84dd51a2e9d9231506
This is part of a change to split the metainfo endpoint into smaller
files, grouped by bucket/object/segment/other requests.
Tests will be split in a separate set of changes.
Change-Id: I9b097dcc8fa889f985b7f4ef5f8f435a1ff0ef95
Additional test case to cover the situation where the cache value has
expired and we need to get the value directly from the DB.
Change-Id: I07bda67e19333e09a567104ce70f112fd47a7845
The database table received invalid input, and the resulting error was
not checked. This adds updates containing invalid fields to trigger the
different errors.
Change-Id: Iacea32cbef5599aab562c88e4113073596cc9996
We had two problems here. First, how we handled the error from
GetProjectSegmentUsage: we always returned the error, even for
ErrKeyNotFound, which should instead trigger a refresh of the cached
value rather than be propagated out of the method.
Second, we were using the wrong value to update the cache: we used the
current value from the cache, which is obviously not what we intended.
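A sketch of the corrected flow (method names approximate the real cache
and DB interfaces; they are not guaranteed to match exactly):

    package accounting

    import (
        "context"

        "github.com/zeebo/errs"
    )

    // ErrKeyNotFound stands in for the cache's "key not found" error class.
    var ErrKeyNotFound = errs.Class("key not found")

    type usageCache interface {
        GetProjectSegmentUsage(ctx context.Context, projectID []byte) (int64, error)
        UpdateProjectSegmentUsage(ctx context.Context, projectID []byte, value int64) error
    }

    type usageDB interface {
        GetProjectSegmentCount(ctx context.Context, projectID []byte) (int64, error)
    }

    // getProjectSegmentUsage: ErrKeyNotFound from the cache is not
    // propagated; it triggers a DB lookup, and the DB value (not the
    // stale cached one) is written back to the cache.
    func getProjectSegmentUsage(ctx context.Context, cache usageCache, db usageDB, projectID []byte) (int64, error) {
        usage, err := cache.GetProjectSegmentUsage(ctx, projectID)
        switch {
        case err == nil:
            return usage, nil
        case !ErrKeyNotFound.Has(err):
            return 0, err
        }

        usage, err = db.GetProjectSegmentCount(ctx, projectID)
        if err != nil {
            return 0, err
        }
        if err := cache.UpdateProjectSegmentUsage(ctx, projectID, usage); err != nil {
            return 0, err
        }
        return usage, nil
    }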
Fixes https://github.com/storj/storj/issues/4389
Change-Id: I4d7ca7a1a0a119587db6a5041b44319102ef64f8
This is part of a change to split the metainfo endpoint into smaller
files, grouped by bucket/object/segment/other requests.
Tests will be split in a separate set of changes.
Change-Id: I5128c84e06c82777fe71460bf5f9a6e26e52a243
Currently the rate limit is kept per satellite API endpoint. Since we
run 9+ API endpoints in production, we do not need a per-endpoint limit
of 1000, since the intention was to allow 1000 total.
This change lowers the per-endpoint limit (to 100), so the effective
limit with 9 instances is 900, which should be close enough.
Change-Id: Ia579149ccc3a12e8febe0cfd5586b8a39de40f55
We were returning plain non-RPC errors in two cases. This change adds
logging and returns the correct RPC error instead.
Change-Id: I581ceb17dcdc00921dfa3c1057015c3b4d04308d
Users signing up through a URL containing a promo code will have that
code applied to their Stripe account instead of the free-tier coupon.
Change-Id: I071041b0934648ef3f5bdb05b6ec97c400f89ae4
A few months ago we removed all references to the contained column in
the nodes and reputations tables (bb21551a9c and 56fe636123), but we
never did the migration to drop the columns. This commit finally does
that.
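The migration itself is essentially two column drops, sketched here with
database/sql (the real change goes through the satellite's migration
framework):

    package migrations

    import (
        "context"
        "database/sql"
    )

    // dropContained removes the long-unused contained columns.
    func dropContained(ctx context.Context, db *sql.DB) error {
        for _, stmt := range []string{
            `ALTER TABLE nodes DROP COLUMN contained`,
            `ALTER TABLE reputations DROP COLUMN contained`,
        } {
            if _, err := db.ExecContext(ctx, stmt); err != nil {
                return err
            }
        }
        return nil
    }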
Change-Id: I82aa2f257b1fb14a2f1c4c4a1589f80895360ae4
The previous change (59648dc272) ended up removing a lot of characters
from valid non-English names. Instead, only replace URL characters such
as slashes, colons, and periods. Since someone may use these characters
to separate two parts of a name, e.g. Name1/Name2, replace them with a
hyphen.
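A sketch of the replacement rule; the character set follows the
description above and the helper name is hypothetical:

    package console

    import "regexp"

    // urlChars matches the URL characters described above: slashes,
    // colons, and periods. The exact set in the real change may differ.
    var urlChars = regexp.MustCompile(`[/:.]`)

    // sanitizeName (hypothetical name) replaces URL characters with a
    // hyphen, so "Name1/Name2" becomes "Name1-Name2" while phishing-style
    // URLs in the name are broken up.
    func sanitizeName(name string) string {
        return urlChars.ReplaceAllString(name, "-")
    }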
Change-Id: I4cc3d1bdb05d525a83970cf1b42479414c9678e7
When a user is created, but before the verification or forgot-password
email is sent, remove any special characters from the provided name.
This protects us against certain phishing attacks.
Change-Id: Ieddd3479da20eb80b9f1b56eb86c8f46bca2642c
We need to combine the accounting.Service methods ExceedsStorageUsage
and ExceedsSegmentUsage so the checks run concurrently.
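One way to run both checks concurrently is with errgroup (a sketch; the
combined method's name and signature in the real change may differ):

    package accounting

    import (
        "context"

        "golang.org/x/sync/errgroup"
    )

    type limitChecker interface {
        ExceedsStorageUsage(ctx context.Context, projectID []byte) (bool, error)
        ExceedsSegmentUsage(ctx context.Context, projectID []byte) (bool, error)
    }

    // exceedsLimits runs the storage and segment checks in parallel and
    // fails if either of them errors.
    func exceedsLimits(ctx context.Context, svc limitChecker, projectID []byte) (storage, segments bool, err error) {
        group, ctx := errgroup.WithContext(ctx)
        group.Go(func() error {
            var err error
            storage, err = svc.ExceedsStorageUsage(ctx, projectID)
            return err
        })
        group.Go(func() error {
            var err error
            segments, err = svc.ExceedsSegmentUsage(ctx, projectID)
            return err
        })
        err = group.Wait()
        return storage, segments, err
    }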
Resolves https://github.com/storj/team-metainfo/issues/73
Change-Id: I47831bca92457f16cfda789da89dbd460738ac97
The free-tier segment usage limit is defined as 150K, not 140K. This
change corrects that.
https://github.com/storj/team-metainfo/issues/8
Change-Id: I71ec0961930b19fd09b2b996e01acd406a8dcf8f
Refactoring to do a few things:
* move simple validation before the validations that require DB calls
* combine the validation check/update for storage and segment limits
Change-Id: I6c2431ba236d4e388791d2e2d01ca7e0dd4439fc
Updating the project segment usage entry in Redis is enabled only if
segment validation is enabled in the config, so no incompatible changes
will be introduced during deployment.
Change-Id: I1288cb9ff0a8a00f095dc94e20d2f14393e9a613
The following 2 commits added 2 new query parameters to set the `burst`
and `segments` limits for a project, and also two new fields to the
response JSON body of the "get project limits" endpoint:
* c911360eb5
* b7b010adc9
However, the API documentation and the TypeScript client API (used by
the UI) weren't updated with them.
Later, commit dc6128e9e2 updated the TypeScript client API with the
`segments` limit, but it didn't update the documentation to reflect it.
This commit adds everything that was missed in those previous commits.
Change-Id: Iff12cdd4a0d3c448cd73b57a98d171ba468d2c98
We want to know usage statistics for our main tools like uplink-cli or
rclone. Initially we will collect only usage stats, without relating
them to a specific process, e.g. download or upload.
Change-Id: I203b1a6c07ae014e710368f77163f13fdf10763c
Use the overlay DB node record as the primary source of truth for
reputation status when a node queries its status via Endpoint.Get. If a
node was disqualified in the overlay, the reputation status may be
inconsistent with the reputations table.
Change-Id: I1bf858a4b020324035847a9215ef29cf5432f6a4
Comment out tests that contain fields that need to be renamed on the
uplink side without breaking compatibility. After the rename, the tests
will be moved back from comments.
Change-Id: I3bc4aff6ae7f6711ade956ac389f0d7e1a1ab91a
Comment out tests that contain fields that need to be renamed on the
uplink side without breaking compatibility. After the rename, the tests
will be moved back from comments.
Change-Id: I439783c62678c32805a85aa52bef1d2b767543a1
We want to be able to limit the number of segments per project for
users. To do this we need to check the limit value associated with the
project and the number of segments already used in BeginMoveObject and
BeginMoveSegment, and increment the cached segment usage after each
CommitSegment call.
Resolves https://github.com/storj/team-metainfo/issues/1
Change-Id: I6290e67c095a174b9d101c4521802d9bfe0453b8
The original design had a flaw which can potentially cause a discrepancy
in a node's reputation status between the reputations table and the
nodes table. If a failure (network issue, DB failure, satellite failure,
etc.) happens between the update to the reputations table and the update
to the nodes table, the data can be out of sync.
This PR tries to fix the above issue by passing the node's reputation
from the beginning of an audit/repair (this data is from the nodes
table) through to the next update in the reputation service. If the
updated reputation status from the service differs from the existing
node status, the service will try to update the nodes table. In the case
of a failure, the service will be able to try to update the nodes table
again, since it can see the discrepancy in the data. This will allow
both tables to eventually be in sync.
Change-Id: Ic22130b4503a594b7177237b18f7e68305c2f122
Batching of the order submissions can lead to combining the allocated
traffic totals for two completely different time windows, resulting in
incorrect customer accounting. This change groups the batched order
submissions by projectID as well as by time window, leading to distinct
updates of a bucket's bandwidth rollup based on the hour window in which
the order was created.
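Conceptually, the batching key grows from just the bucket to the bucket
plus the hour window in which the order was created (field names and
types below are assumptions for illustration):

    package orders

    import "time"

    // rollupKey sketches the new grouping key: batched order submissions
    // are bucketed per project/bucket and per hour window, so allocations
    // from different windows are never summed together.
    type rollupKey struct {
        ProjectID     [16]byte
        BucketName    string
        IntervalStart time.Time // order creation time truncated to the hour
    }

    func keyFor(projectID [16]byte, bucket string, createdAt time.Time) rollupKey {
        return rollupKey{
            ProjectID:     projectID,
            BucketName:    bucket,
            IntervalStart: createdAt.Truncate(time.Hour),
        }
    }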
Change-Id: Ifb4d67923eec8a533b9758379914f17ff7abea32