Add node tally ranged loop observer and partial.
Add node tally ranged loop observer to the ranged loop peer.
Add a config flag to select which loop to use for node tally.
Update the satellite core to use the segments/ranged loop based on the flag.
Duplicate the existing node tally test, but using the ranged loop.
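A minimal sketch of what such a flag could look like, following the usual struct-tag config convention; the field name and default below are hypothetical, not the actual satellite setting:
```
// Config is an illustrative tally configuration.
type Config struct {
	// UseRangedLoop is a hypothetical flag selecting the ranged loop observer
	// instead of the segments loop observer for node tally.
	UseRangedLoop bool `help:"use ranged loop observer for node tally" default:"false"`
}
```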
Change-Id: I6786f1a16933463fab5f79601bf438203a7a5f9e
bucket_bandwidth_rollups table
We have performance problems with updating bucket_bandwidth_rollups. To
improve the situation we can stop storing allocated bandwidth in this table.
This should reduce the large number of updates coming from
metainfo endpoints, repair workers and audit.
The next step will be to drop the `allocated` column completely from
bucket_bandwidth_rollups.
Allocated GET bandwidth is all we need, and we keep it in the
bucket_bandwidth_rollups table.
Change-Id: Ifdd26a89ba8262acbca6d794a6c02883ad0c0c9b
We have a bug where, if the number of buckets in the system is an exact
multiple of the batch size (2500), the loop that goes over
all buckets can run indefinitely.
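A minimal sketch of the kind of paging loop involved, with hypothetical names; the essential property of the fix is that a final page which happens to be exactly full must still let the loop terminate:
```
// forEachBucket pages over buckets in batches of batchSize; listPage stands in
// for the real listing query.
func forEachBucket(listPage func(cursor string, limit int) ([]string, error), batchSize int, fn func(bucket string)) error {
	cursor := ""
	for {
		page, err := listPage(cursor, batchSize)
		if err != nil {
			return err
		}
		for _, bucket := range page {
			fn(bucket)
		}
		if len(page) < batchSize {
			// A short (or empty) page means nothing is left to list; a broken
			// termination condition is what lets a bucket count that is an
			// exact multiple of batchSize keep the loop spinning.
			return nil
		}
		// Advance the cursor past the last listed bucket before the next page.
		cursor = page[len(page)-1]
	}
}
```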
Fixes https://github.com/storj/storj/issues/5374
Change-Id: Idd4d97c638db83e46528acb9abf223c98ad46223
We added an alternative way to calculate bucket tallies for accounting;
now that it is tested, we are enabling it by default.
CollectBucketTallies was extended to support overriding the current time
so that handling of expired objects can be tested.
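A hedged sketch of the idea behind the override: the collector takes an explicit "now" instead of calling time.Now() itself, so tests can exercise expired-object handling deterministically. All names below are illustrative, not the real API.
```
import "time"

type object struct {
	ExpiresAt time.Time
}

// collectTally counts only objects that have not expired relative to the
// caller-supplied "now".
func collectTally(objects []object, now time.Time) (count int64) {
	for _, obj := range objects {
		if !obj.ExpiresAt.IsZero() && obj.ExpiresAt.Before(now) {
			continue // expired objects are skipped from the tally
		}
		count++
	}
	return count
}
```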
Change-Id: I738b99a33fd2e086245f92d874c1cbb806e834c0
Since the number of objects is growing and looping through all of them is
starting to take a lot of time, we are switching to an SQL query that
processes tallies in chunks per bucket. This is the second part of the issue fix.
Closes https://github.com/storj/team-metainfo/issues/125
Change-Id: Ia26bcac0a7e2c6503df9ebbf4817a636841d3284
We removed the monkit call from the "object" method because it was using
too much CPU and was visible in the CPU profile, but we should first try
an optimized version of this call.
Change-Id: Ib76d8a2968a704ce47235c6dac6edad4e40bde48
This is another change to remove monkit calls from fast methods. Those
calls are visible in CPU profiles.
Change-Id: Ib3beba0dca6a6d93c3342b0994c580f78bbdd50b
Benchmark against 'main':
name old time/op new time/op delta
RemoteSegment/Cockroach/multiple_segments-8 5.56µs ± 5% 0.69µs ±12% -87.57% (p=0.008 n=5+5)
name old alloc/op new alloc/op delta
RemoteSegment/Cockroach/multiple_segments-8 2.72kB ± 0% 0.00kB ~ (p=0.079 n=4+5)
name old allocs/op new allocs/op delta
RemoteSegment/Cockroach/multiple_segments-8 50.0 ± 0% 0.0 -100.00% (p=0.008 n=5+5)
Change-Id: I20527fb576cd81db667a81929fa95b810ee11b14
While investigating high memory consumption on a Redis instance, it was found that much of the memory consumed was from cached Redis scripts.
Redis caches scripts based on the hash of the script itself. This change uses ARGV to reduce the number of cached scripts.
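A hedged illustration of the idea, using the go-redis client for brevity (the satellite's own Redis wrapper differs): passing values through ARGV keeps the script text, and therefore its hash, identical across calls, so Redis caches a single script instead of one per distinct value baked into the script body.
```
import (
	"context"

	"github.com/go-redis/redis/v8"
)

// Bad: interpolating the value into the script produces a new script hash per value.
//   script := fmt.Sprintf(`return redis.call("set", KEYS[1], %d)`, value)

// Better: one constant script; values travel as ARGV.
const setScript = `return redis.call("set", KEYS[1], ARGV[1])`

func setValue(ctx context.Context, rdb *redis.Client, key string, value int64) error {
	return rdb.Eval(ctx, setScript, []string{key}, value).Err()
}
```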
Change-Id: Ia878fe81552a3067f09e60c44bb4ace25c6b5f9a
We made an optimization for segment loop observers to avoid
heavy monkit initialization on each call. It was applied to very
frequently executed methods. Unfortunately, we used the wrong monkit
method to track function times: instead of mon.Task we used
mon.Func().
https://github.com/spacemonkeygo/monkit#how-it-works
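For reference, a hedged sketch of the mon.Task idiom for timing a function; hoisting the Task into a package-level variable is one way to keep per-call overhead low in hot methods. Names are illustrative.
```
import (
	"context"

	"github.com/spacemonkeygo/monkit/v3"
)

var (
	mon = monkit.Package()

	// Created once so hot methods avoid re-creating the Task on every call.
	objectTask = mon.Task()
)

// object is an illustrative hot method; the real observers differ.
func object(ctx context.Context) (err error) {
	defer objectTask(&ctx)(&err) // mon.Task records timing and success/failure
	// ... per-object work ...
	return nil
}
```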
Change-Id: I9ca454dbd828c6b43ba09ca75c341991d2fd73a8
Adding an interval_end_time column to the accounting_rollups table
to keep the last interval_end_time for each day's storage tallies.
Updates https://github.com/storj/storj/issues/4178
Change-Id: If7a8210c5e9fe2fc9df84b137a8b6e3db2471c58
Removed segment limit validation and checks in the metainfo endpoint and accounting/projectusage,
since the feature is live and segment limitation is now always enabled.
Resolves: https://github.com/storj/storj/issues/4470
Change-Id: I8cf87cbbc40ac61262f9f05e52573d3ae6410611
Previously there was no realtime administration of the storage usage
during copies. Now there is.
Closes https://github.com/storj/storj/issues/4719
Change-Id: I0d536bf551d16208116c3aceac89ed590ec473bf
Currently we have a significant number of tallies that need to be
deleted together. Add a limit (by default 10k) to how many will
be deleted at the same time.
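A minimal sketch of bounded-batch deletion; DeleteExpiredBatch is a hypothetical method standing in for the real DB query, and the loop keeps going until a batch comes back smaller than the limit.
```
import (
	"context"
	"time"
)

type tallyDeleter interface {
	// DeleteExpiredBatch removes at most limit tallies older than before and
	// reports how many rows it deleted.
	DeleteExpiredBatch(ctx context.Context, before time.Time, limit int) (deleted int64, err error)
}

func deleteOldTallies(ctx context.Context, store tallyDeleter, before time.Time, limit int) error {
	for {
		deleted, err := store.DeleteExpiredBatch(ctx, before, limit)
		if err != nil {
			return err
		}
		if deleted < int64(limit) {
			return nil // the last, partial batch removed everything remaining
		}
	}
}
```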
Change-Id: If530383f19b4d3bb83ed5fe956610a2e52f130a1
It seems the tests relied on time.Now(), which might cause some
discrepancies in calculations. Use a fixed time rather than
recalculating time.Now().
As a side fix, remove the "Test" prefix from t.Run names; it is unnecessary.
Change-Id: I1de903fcf0fcf46fc8e3acf2463e17239b8e3cc6
The satellite caches the project bandwidth in Redis when it doesn't have it,
because it was never set or the key expired. However, it doesn't perform the
check-and-set-if-not-exists in a transaction. It also uses the increase
function, which increments the value if it exists and otherwise sets it.
As a result, multiple concurrent requests to the same project may each
increment the total by the bandwidth usage registered in the database rather
than setting it once: each of them may check whether the key exists before any
other has executed the increase, so the first one to execute it sets the value,
while the others increment it. Redis then holds a bandwidth usage value that is
a multiple of the real one, which can make the satellite deny downloads because
the project limit appears to be surpassed.
This commit replaces the "update" of the project bandwidth usage with an
"insert", using a Redis operation that only sets the value if the key doesn't
exist. This solves the increase issue without overriding a value that may
already contain updates from other download requests which aren't yet
registered in the DB.
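A hedged illustration with the go-redis client (the satellite's live accounting wrapper differs): SetNX writes the DB-derived usage only when the key is absent, so concurrent requests cannot stack the same baseline on top of each other the way repeated increments can.
```
import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
)

func insertProjectBandwidthUsage(ctx context.Context, rdb *redis.Client, key string, usedFromDB int64, ttl time.Duration) error {
	// Only the first concurrent request sets the baseline; the rest keep the
	// existing value, which may already include increments from downloads that
	// are not yet registered in the database.
	return rdb.SetNX(ctx, key, usedFromDB, ttl).Err()
}
```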
Change-Id: I33e2fe462930b2fdb4061fc94002bd3544476f94
TestRollupNoDeletes is very flaky (passes locally but fails in the main branch build).
The exact reason is not clear, but stopping the loops seems to be async; the following lines may not stop the loops immediately, which is a potential problem:
```
satellitePeer.Accounting.Rollup.Loop.Pause()
satellitePeer.Accounting.Tally.Loop.Pause()
```
Fortunately, these tests check only the database interfaces. Instead of testplanet.Run we can run only satellitedbtest.Run, which is faster and more predictable (no background loops).
Another potential problem: the comment claims that the default of DeleteTallies is false:
```
// In testplanet the setting config.Rollup.DeleteTallies defaults to false.
```
But it seems to be true (rollup.go):
```
DeleteTallies bool `help:"option for deleting tallies after they are rolled up" default:"true"`
```
This is also fixed in the patch (as we need to set it explicitly), but to be honest it could also be fixed with testplanet.
Change-Id: Id7ec80d5c069bed2c556f4d001c71aa23fc5af23
We don't have a metric to track how many pending objects we have in the
system. This change uses the tally objects loop to collect pending
objects per bucket and, at the end, combines all bucket values
into a single metric for all pending objects in the system.
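A minimal sketch of the aggregation described above: per-bucket pending counts collected during the loop are summed at the end and reported as one satellite-wide metric. The metric and type names are illustrative.
```
import "github.com/spacemonkeygo/monkit/v3"

var mon = monkit.Package()

type bucketTally struct {
	PendingObjectCount int64
}

func reportTotalPendingObjects(tallies map[string]*bucketTally) {
	var total int64
	for _, tally := range tallies {
		total += tally.PendingObjectCount
	}
	mon.IntVal("total_pending_objects").Observe(total)
}
```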
Change-Id: Iac7a6bfb48854f7e70127d275ea8fdd60c4eb8b7
Recently we applied this optimization to the metrics observer, and the time
used by its method dropped from 12m to 3m on us1 (220m segments).
It makes sense to apply the same code to all observers.
Change-Id: I05898aaacbd9bcdf21babc7be9955da1db57bdf2
Modify tally to calculate how to update segments in the live accounting
cache with the UpdateProjectSegmentUsage method. Adjust accounting.ExceedsUploadLimits
to use only the cache for segment validation; if the cache returns 0 or the key is not found,
we shouldn't reject the project, as it's possible we won't have this value before the first objects loop iteration.
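A hedged sketch of that cache-only check (types and the not-found sentinel are stand-ins for the live accounting ones): a missing or zero cached value does not reject the request, because the tally loop may simply not have produced the value yet.
```
import (
	"context"
	"errors"
)

var errKeyNotFound = errors.New("key not found")

type segmentUsageCache interface {
	GetProjectSegmentUsage(ctx context.Context, projectID string) (int64, error)
}

func exceedsSegmentLimit(ctx context.Context, cache segmentUsageCache, projectID string, limit int64) (bool, error) {
	used, err := cache.GetProjectSegmentUsage(ctx, projectID)
	if errors.Is(err, errKeyNotFound) {
		return false, nil // no cached value yet: don't reject the upload
	}
	if err != nil {
		return false, err
	}
	if used == 0 {
		return false, nil // zero means the value hasn't been collected yet
	}
	return used >= limit, nil
}
```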
https://github.com/storj/storj/issues/4744
Change-Id: I32c22d7fb71236e354653ba8719e029fc71f04c7
Doing a server-side copy operation should not affect the user's monthly
bandwidth. This test covers that case.
Fixes https://github.com/storj/storj/issues/4717
Change-Id: I84ffab96b84851f395ea3a34d88f7dba424ec440
Initially we wanted to put into the cache segment usage values retrieved
directly from the metabase, but that caused performance issues. Now we
will be collecting segment usage from the tally objects loop and putting
those values into the cache. Because we cannot get those values
on demand, we shouldn't expire the cache value at all, since the objects
loop requires a substantial amount of time to execute.
Part of solution for https://github.com/storj/storj/issues/4744
Change-Id: I3b37e23badeecebed0c95064156e85b38038bfe2
Fix for this customer issue
https://github.com/storj/customer-issues/issues/34
With this change we fetch bucket usage since the bucket's creation instead of using the project's createdAt timestamp.
Change-Id: Ic0ea5d169056a5bd64ed143d13954d794da6e1d2
Return storage and segment totals as a single result, instead of returning only storage
while bandwidth and segment values are filtered out. https://github.com/storj/storj/issues/4744
Change-Id: I624d67ed5205ae21ecd5a2f39775f63ed042e629
We were already able to override (or not) metadata with this method,
but to be explicit we are introducing a new option to control storing
metadata with an object. A separate option should be less error prone.
https://github.com/storj/team-metainfo/issues/105
Change-Id: I4c5bce953a633a0009b05c5ca84266ca6ceefc26
We implemented the server-side copy feature, and we would like to
confirm that it is not affecting the accounting/tally service.
Resolves https://github.com/storj/storj/issues/4697
Change-Id: I3944ea52c0acc68107ec15c1911750dc7d947501
Added new endpoint to get project's single bucket usage rollup.
Extended generation code to handle service method args.
Change-Id: Ief768632a801c047c66e0617056fbd7b30427b33
Added new projectaccounting query to get project's single bucket usage rollup.
Added new service method to call new query.
Added implementation for IsAuthenticated method which is used by new generated API.
Change-Id: I7cde5656d489953b6c7d109f236362eb465fa64a
Finished implementing queries for both bandwidth and storage using pgx.Batch.
Fixed CSP styling issue.
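For the pgx.Batch part, a hedged sketch of the pattern (table and column names are illustrative): both rollup queries are queued and sent in one round trip, then the results are read back in order.
```
import (
	"context"

	"github.com/jackc/pgx/v4"
)

func queryUsageTotals(ctx context.Context, conn *pgx.Conn, projectID []byte, bucket string) (bandwidth, storage int64, err error) {
	batch := &pgx.Batch{}
	batch.Queue(`SELECT COALESCE(SUM(settled), 0) FROM bucket_bandwidth_rollups WHERE project_id = $1 AND bucket_name = $2`, projectID, bucket)
	batch.Queue(`SELECT COALESCE(SUM(total_bytes), 0) FROM bucket_storage_tallies WHERE project_id = $1 AND bucket_name = $2`, projectID, bucket)

	results := conn.SendBatch(ctx, batch)
	defer func() { _ = results.Close() }()

	if err := results.QueryRow().Scan(&bandwidth); err != nil {
		return 0, 0, err
	}
	if err := results.QueryRow().Scan(&storage); err != nil {
		return 0, 0, err
	}
	return bandwidth, storage, nil
}
```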
Change-Id: I5f9e10abe8096be3115b4e1f6ed3b13f1e7232df
Existing tests check errors by comparing error messages.
We plan to expose storage/segment limit errors as a public uplink
API, but before that happens we need to update tests to handle
both cases, as the new uplink error will have a slightly different error message
than the currently tested error.
https://github.com/storj/uplink/issues/77
Change-Id: Id2706323c60d050d96752e66e859d4ec051a69b9
Implemented endpoint and query to get bandwidth chart data for new project dashboard.
Connected backend with frontend.
Storage chart data is mocked right now.
Change-Id: Ib24d28614dc74bcc31b81ee3b8aa68b9898fa87b
For better performance we should use AOST (AS OF SYSTEM TIME) for getting
data where being up to date is not crucial. We don't
care about small differences for segment limit calculation.
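A hedged sketch of an AOST read (CockroachDB's AS OF SYSTEM TIME); the query, table, and interval are illustrative. Reading data that is a few seconds stale is acceptable here and avoids contending with concurrent writes.
```
import (
	"context"
	"database/sql"
)

func segmentCountAOST(ctx context.Context, db *sql.DB, streamID []byte) (count int64, err error) {
	err = db.QueryRowContext(ctx,
		`SELECT count(*) FROM segments AS OF SYSTEM TIME '-10s' WHERE stream_id = $1`,
		streamID,
	).Scan(&count)
	return count, err
}
```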
Change-Id: I9b2d4f2bd15ebc9d1c46bc84dd51a2e9d9231506
Additional test case to cover the situation when the cache value
has expired and we need to get the value directly from the DB.
Change-Id: I07bda67e19333e09a567104ce70f112fd47a7845
We had two problems here. The first was how we were handling
the error from GetProjectSegmentUsage: we were always
returning an error, even for ErrKeyNotFound, which should be used
to refresh the value in the cache and not be propagated out of the method.
Second, we were using the wrong value to update the cache: we used
the current value from the cache, which is obviously not what we intended.
Fixes https://github.com/storj/storj/issues/4389
Change-Id: I4d7ca7a1a0a119587db6a5041b44319102ef64f8
We need to combine methods from accounting.Service (ExceedsStorageUsage and ExceedsSegmentUsage)
to run checks concurrently.
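A hedged sketch of running both checks concurrently with errgroup; the interface and identifier types are illustrative, only the two method names come from the description above.
```
import (
	"context"

	"golang.org/x/sync/errgroup"
)

type limitChecker interface {
	ExceedsStorageUsage(ctx context.Context, projectID string) (bool, error)
	ExceedsSegmentUsage(ctx context.Context, projectID string) (bool, error)
}

func exceedsUploadLimits(ctx context.Context, svc limitChecker, projectID string) (storageExceeded, segmentsExceeded bool, err error) {
	group, ctx := errgroup.WithContext(ctx)
	group.Go(func() error {
		exceeded, err := svc.ExceedsStorageUsage(ctx, projectID)
		storageExceeded = exceeded
		return err
	})
	group.Go(func() error {
		exceeded, err := svc.ExceedsSegmentUsage(ctx, projectID)
		segmentsExceeded = exceeded
		return err
	})
	if err := group.Wait(); err != nil {
		return false, false, err
	}
	return storageExceeded, segmentsExceeded, nil
}
```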
Resolves https://github.com/storj/team-metainfo/issues/73
Change-Id: I47831bca92457f16cfda789da89dbd460738ac97
We want to be able to limit the number of segments per project for users.
To enforce this we need to check the limit value associated with the project
and the number of segments already used in BeginMoveObject and BeginMoveSegment,
and increment the cached segment usage after each CommitSegment call.
Resolves https://github.com/storj/team-metainfo/issues/1
Change-Id: I6290e67c095a174b9d101c4521802d9bfe0453b8
Batching of the order submissions can lead to combining the allocated
traffic totals for two completely different time windows, resulting
in incorrect customer accounting. This change will group the batched
order submissions by projectID as well as time window, leading to
distinct updates of a bucket's bandwidth rollup based on the hour
window in which the order was created.
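A minimal sketch of the grouping idea: batched orders are keyed by project, bucket, and the hour window the order was created in, so one rollup update never mixes two time windows. Types and field names are illustrative.
```
import "time"

type order struct {
	ProjectID  string
	BucketName string
	CreatedAt  time.Time
	Settled    int64
}

type rollupKey struct {
	ProjectID  string
	BucketName string
	HourWindow time.Time
}

func groupOrders(orders []order) map[rollupKey]int64 {
	grouped := make(map[rollupKey]int64)
	for _, o := range orders {
		key := rollupKey{
			ProjectID:  o.ProjectID,
			BucketName: o.BucketName,
			HourWindow: o.CreatedAt.Truncate(time.Hour), // the window the order belongs to
		}
		grouped[key] += o.Settled
	}
	return grouped
}
```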
Change-Id: Ifb4d67923eec8a533b9758379914f17ff7abea32
We want to set a maximum number of segments per project. This change will
add functionality to get the number of segments currently used by a project.
To avoid frequent DB calls, the segment count will be cached and refreshed every
few minutes.
Change-Id: I2ecb6484f5afc3875c0e0dfaea360e8872f9d196
Exposes functionality to get and update the project segment
limit. It will be used to limit the number of segments per project
while uploading objects.
Change-Id: I971d48eebb4e7db8b01535c3091829e73437f48d
Added new query to get project object and segment count.
Added appropriate object and segment count view for new project dashboard.
Change-Id: I69a2e55442f318c51dc365c0c578b964f2f06c7f