This patch is required to fix the nightly deployment:
* We need to use the exact Docker image tag that we built earlier
* The migration should be full instead of snapshot (a snapshot migration cannot update existing, but older, databases)
Change-Id: Id2a2070638072a7b0021326326b0d53533817168
Reworked new Add coupon modal to use common VModal component to improve UX.
Issue:
https://github.com/storj/storj/issues/5104
Change-Id: Idd186ffd30a529761d15052bee4fac7cee4f5814
Reworked old Add coupon code modal to use common VModal component to improve UX.
Issue:
https://github.com/storj/storj/issues/5104
Change-Id: I4e1b38cfa2a88bacb4bcc58b837bc5b4319aca3d
Reworked delete bucket modal to use common VModal component to improve UX.
Issue:
https://github.com/storj/storj/issues/5104
Change-Id: I74acd07a0cb3f7e0231f88bf1255de9ac000ace5
Reworked share object modal to use common VModal component to improve UX.
Issue:
https://github.com/storj/storj/issues/5104
Change-Id: Ie07550bb3e116651b08721298994eb0c867cf6f4
Instead of sending emails at the time the node is seen to be back
online, we have decided to insert the event into the node events table,
which will initiate the email-sending process at some point.
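To illustrate the intended flow (a minimal sketch; every name here is a hypothetical stand-in, not the actual storj code): the overlay records an event when it sees the node come back online, and a separate process later drains the table and sends the emails.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Event is a hypothetical row in the node events table.
type Event struct {
	NodeID    string
	Email     string
	Name      string // e.g. "online"
	CreatedAt time.Time
}

// DB is a hypothetical in-memory stand-in for the node events table.
type DB struct{ rows []Event }

// Insert queues the event instead of emailing right away.
func (db *DB) Insert(ctx context.Context, e Event) error {
	db.rows = append(db.rows, e)
	return nil
}

// sendPending stands in for the later process that turns queued events
// into notification emails.
func (db *DB) sendPending(ctx context.Context) {
	for _, e := range db.rows {
		fmt.Printf("emailing %s about node %s: %s\n", e.Email, e.NodeID, e.Name)
	}
	db.rows = db.rows[:0]
}

func main() {
	ctx := context.Background()
	db := &DB{}
	// Overlay sees the node come back online: record the event, don't email.
	_ = db.Insert(ctx, Event{NodeID: "node-1", Email: "op@example.com", Name: "online", CreatedAt: time.Now()})
	db.sendPending(ctx)
}
```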
Change-Id: Id756209498112579de8e78ee20ad2df54571a617
Add nodeevents.DB to satellite overlay service so we can insert node
events into the nodeevents DB.
Change-Id: I642c0ccc9941ecdb08cb22d5c8cf701959a55156
The new flag 'MultipleVersions' was not correctly passed from the
metainfo configuration to the metabase configuration. Because the
configuration was set correctly for unit tests, we didn't catch it, and
the issue was found while testing on the QA satellite.
This change reduces the number of places where new metabase flags need
to be propagated from the metinfo configuration, to avoid problems with
setting new flags in the future.
Fixes https://github.com/storj/storj/issues/5274
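A sketch of the single-propagation-point idea (type and field names are assumptions, not the actual storj config types): the metabase configuration is derived from the metainfo configuration in exactly one method, so a future flag only needs to be wired up there.

```go
package main

import "fmt"

// MetainfoConfig is a hypothetical stand-in for the metainfo configuration.
type MetainfoConfig struct {
	MultipleVersions bool
	// ... other flags
}

// MetabaseConfig is a hypothetical stand-in for the metabase configuration.
type MetabaseConfig struct {
	MultipleVersions bool
}

// Metabase derives the metabase configuration in exactly one place,
// so a new flag cannot be forgotten at one of several call sites.
func (c MetainfoConfig) Metabase() MetabaseConfig {
	return MetabaseConfig{
		MultipleVersions: c.MultipleVersions,
	}
}

func main() {
	mi := MetainfoConfig{MultipleVersions: true}
	fmt.Printf("%+v\n", mi.Metabase())
}
```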
Change-Id: I74bc122649febefd87f665be2fba628f6bfd9044
We need to make exceptions for older uplink versions, because they do
not compile with newer Go versions due to the quic dependency.
Change-Id: I3e073694f0942029c56740f0689088058ee068c3
Since the number of objects is growing and looping through all of them
is starting to take a lot of time, we are switching to an SQL query
that processes tallies in chunks per bucket. This is the second part of
the issue fix.
Closes https://github.com/storj/team-metainfo/issues/125
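Roughly the shape of the chunked approach (the query and all identifiers are assumptions for illustration, not the real tally code): ask the database for per-bucket aggregates one page at a time instead of iterating every object.

```go
package main

import (
	"context"
	"database/sql"
)

// collectBucketTallies pages through per-bucket aggregates with keyset
// pagination instead of looping over every object.
func collectBucketTallies(ctx context.Context, db *sql.DB, limit int) error {
	lastProject, lastBucket := "", ""
	for {
		rows, err := db.QueryContext(ctx, `
			SELECT project_id, bucket_name,
			       COUNT(*)                  AS object_count,
			       SUM(total_encrypted_size) AS total_bytes
			FROM objects
			WHERE (project_id, bucket_name) > ($1, $2)
			GROUP BY project_id, bucket_name
			ORDER BY project_id, bucket_name
			LIMIT $3`, lastProject, lastBucket, limit)
		if err != nil {
			return err
		}
		n := 0
		for rows.Next() {
			var projectID, bucket string
			var count, bytes int64
			if err := rows.Scan(&projectID, &bucket, &count, &bytes); err != nil {
				rows.Close()
				return err
			}
			lastProject, lastBucket = projectID, bucket
			n++
			// ... accumulate the tally for this bucket
		}
		rows.Close()
		if err := rows.Err(); err != nil {
			return err
		}
		if n < limit {
			return nil // no more buckets
		}
	}
}

func main() {}
```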
Change-Id: Ia26bcac0a7e2c6503df9ebbf4817a636841d3284
Change the bloom filter generation process to prefix the objects with a date and update the LATEST object with the prefix. The sender will read the LATEST object to get the prefix to process.
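A sketch of the handshake between generator and sender (the store and object names are illustrative, not the actual bucket layout):

```go
package main

import (
	"fmt"
	"time"
)

// store is a minimal stand-in for the object store holding the filters.
type store map[string][]byte

func main() {
	s := store{}

	// Generator: write this run's filters under a date prefix...
	prefix := time.Now().UTC().Format("2006-01-02T15-04-05")
	s[prefix+"/nodeA.bloom"] = []byte("...filter bytes...")
	// ...then publish the prefix by updating the LATEST object.
	s["LATEST"] = []byte(prefix)

	// Sender: read LATEST to discover which prefix to process.
	latest := string(s["LATEST"])
	fmt.Println("processing filters under prefix:", latest)
	fmt.Println("found filter:", s[latest+"/nodeA.bloom"] != nil)
}
```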
Change-Id: Iae0d3c49015d57f391d87789fb799a7d774066bf
The current deployment strategy requires that the GC bloom filter generation process execute only once and exit.
Change-Id: I952991f126596aa165d1f2e9fce6f8548c21bdba
This change adds AB testing for a new upgrade banner and sends a hit event when "Upgrade" is clicked.
Change-Id: Ie463e224af5b0cb74a601b68eedb2b34f9089fd7
Implement node events DB with Insert and GetLatestByEmailAndEvent. Get
was changed to GetLatestByEmailAndEvent so we can verify items are being
inserted into the table without needing the ID, which is not available
to us in the tests.
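Sketched as a Go interface; the two method names come from the text above, while the signatures and the NodeEvent fields are assumptions:

```go
package main

import (
	"context"
	"time"
)

// NodeEvent mirrors a row in the node_events table; exact fields are assumed.
type NodeEvent struct {
	ID        []byte
	NodeID    string
	Email     string
	Event     int
	CreatedAt time.Time
}

// DB is the node events store described above. GetLatestByEmailAndEvent
// exists so tests can confirm an insert happened without knowing the
// generated ID.
type DB interface {
	Insert(ctx context.Context, email, nodeID string, event int) (NodeEvent, error)
	GetLatestByEmailAndEvent(ctx context.Context, email string, event int) (NodeEvent, error)
}

func main() {}
```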
Change-Id: I4abe63631c44774cd7e795fbab0cbab4d801db4c
Change node_events schema to use an id column as primary key rather than
node id because there can be multiple events per node id.
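Roughly the resulting DDL (the column set beyond id and node_id is an assumption based on the surrounding commits):

```go
package main

// nodeEventsDDL sketches the new schema: a surrogate id primary key,
// since one node can have many events.
const nodeEventsDDL = `
CREATE TABLE node_events (
    id         bytea       NOT NULL,
    node_id    bytea       NOT NULL,
    email      text        NOT NULL,
    event      integer     NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY ( id )
);`

func main() { _ = nodeEventsDDL }
```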
Change-Id: I518d8ef9ea658764876483e282a4058d3c4910f4
Add new table for node events. We can use this to notify node
operators of certain node events. Further, we can squash events for
multiple nodes with the same email into a single notification.
Change-Id: Icea6dd939df8fe4a98806bd79c014e21d239c43e
Problem: when uploading large files, the session could expire while waiting for the upload to finish.
Solution: added a callback to the upload progress event that resets the session timer.
The timer is refreshed only when less than 3 minutes remain.
Change-Id: Ieab9250be61757762a884165902cd48f90048409
and clarify some related implementation details.
Most notably, this change clarifies that the verification audit workers
and reverification audit workers are meant to live in a process or
processes separate from the satellite core, and outlines an extra queue
that will be used for communication with the core.
It's not entirely clear to me that this is the right approach; we would
save some fairly significant implementation time by leaving both types
of worker in the core. That would make it necessary to reconfigure and
restart the core when we wanted to change the number of verification
and/or reverification workers, and scaling would be limited to the
computational capacity of the core vm, but maybe those are acceptable
conditions.
Another option would be to leave the Verifier workers in the core and
have a separate process for Reverifiers. That would be sort of a
middle way between the two options above.
Change-Id: Ida12e423b94ef6088733b13d5cc58bdb78f2e93f
ReverifyPiece() is not currently hooked up to anything, but is planned
to take the place of audit.(*Verifier).Reverify().
ReverifyPiece() works by downloading one piece in its entirety, rather
than pulling an entire stripe across many nodes.
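Conceptually, something like the following (only the ReverifyPiece name comes from the commit; everything else is a hypothetical stand-in): fetch the one piece whole from the one node under audit and verify it, instead of collecting a stripe's worth of shares from many nodes.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// downloadPiece stands in for fetching the full piece from the single
// node being reverified.
func downloadPiece(nodeID, pieceID string) []byte {
	return []byte("piece data for " + pieceID)
}

// ReverifyPiece checks one piece in its entirety from one node, rather
// than pulling an entire stripe across many nodes.
func ReverifyPiece(nodeID, pieceID string, expectedHash [32]byte) bool {
	data := downloadPiece(nodeID, pieceID)
	sum := sha256.Sum256(data)
	return bytes.Equal(sum[:], expectedHash[:])
}

func main() {
	expected := sha256.Sum256([]byte("piece data for piece-1"))
	fmt.Println("piece ok:", ReverifyPiece("node-1", "piece-1", expected))
}
```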
Change-Id: Ie2c680f4d3c3b65273a72466a3f9f55c115b0311
This table will be used as a queue for pieces that need to be reverified
(a regular audit timed out on the owning node, so now that node is
contained and we need to validate the piece before un-containing it).
Refs: https://github.com/storj/storj/issues/5228
Change-Id: I5dcd26b6adced8674cbd81884c1543a61ea9d4c8
BeginCopyObject checks twice for write permission in the destination
bucket. One check should be enough.
Change-Id: I3d5935d34f69cd48eaaf00d0117683edfdcefc05
Earthly is a build tool; it uses buildkitd to create reproducible and highly cacheable builds.
It is used by a new experimental nightly build to easily create storj-up images (but it can be used to create the images for any ad-hoc storj-up cluster).
To make the nightly more robust, I would prefer to commit the helper files (today I do a rebase every time, but sometimes it fails).
More detailed information about Earthly can be found at https://earthly.dev or https://www.youtube.com/watch?v=nChpMEdOaCQ
Change-Id: I683601e0558aca53b45ed3819c46c909534f8b15
The procedure responsible for comparing node reputation statuses could
return an invalid result because it compared a status timestamp against
itself rather than against the other status. This resulted in
unnecessary database updates that could otherwise have been avoided.
This change modifies the procedure to resolve the issue.
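The bug class in miniature (types and names are illustrative, not the actual procedure): once one operand accidentally refers to the same value as the other, the comparison result is constant, so the decision to write to the database stops depending on the data; in the satellite's case this surfaced as needless writes.

```go
package main

import (
	"fmt"
	"time"
)

type status struct{ UpdatedAt time.Time }

// differsBuggy accidentally compares the incoming timestamp with
// itself, so the answer is constant no matter what is stored.
func differsBuggy(stored, incoming status) bool {
	return !incoming.UpdatedAt.Equal(incoming.UpdatedAt)
}

// differsFixed compares stored against incoming, as intended.
func differsFixed(stored, incoming status) bool {
	return !stored.UpdatedAt.Equal(incoming.UpdatedAt)
}

func main() {
	stored := status{UpdatedAt: time.Unix(1, 0)}
	incoming := status{UpdatedAt: time.Unix(2, 0)}
	fmt.Println(differsBuggy(stored, incoming)) // false, despite a real change
	fmt.Println(differsFixed(stored, incoming)) // true
}
```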
Change-Id: Id147e1942e994e8bca4ced2a9358f2474927d6ec
We have had multiple experiments so far with collecting high-cardinality data (mainly in aggregated form):
1. we have a `/top` endpoint which aggregates events with an upper bound
2. we use the same API (eventstat) to publish S3 gateway-mt agents to InfluxDB
This patch starts to replace these APIs with jtolio/eventkit. Instead of aggregating locally, all events can be sent to a collector host, where we can aggregate and/or persist the data.
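Call sites then reduce to emitting events, along these lines (the eventkit API shown is from memory and should be treated as an assumption):

```go
package main

import "github.com/jtolio/eventkit"

// ek is the package-level event sender, the pattern eventkit call
// sites usually follow.
var ek = eventkit.Package()

func handleGatewayRequest(userAgent string, bytes int64) {
	// No local aggregation: emit the raw event. Aggregation and/or
	// persistence happen on the collector host. (For delivery, a
	// destination must be registered on the eventkit registry
	// elsewhere in the process.)
	ek.Event("gateway-request",
		eventkit.String("user-agent", userAgent),
		eventkit.Int64("bytes", bytes),
	)
}

func main() {
	handleGatewayRequest("aws-sdk-go/1.44", 1024)
}
```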
Change-Id: Id6df4882b51d2dbd2be9401ee4199d14f3ff7186
The threshold of piece deletions from the nodes during CommitObject
when overwriting an existing object seemed to cause a race condition in
tests.
This change makes the threshold configurable so we can set it to the
maximum, making CommitObject wait until all pieces are removed from the
nodes in the test.
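One plausible shape for the knob (the name and exact semantics are assumptions based on the description above):

```go
package main

import "fmt"

// Config sketches the new setting.
type Config struct {
	// DeleteThreshold is the fraction of the old object's pieces whose
	// deletion CommitObject waits for before returning.
	DeleteThreshold float64
}

func piecesToWaitFor(total int, cfg Config) int {
	return int(float64(total) * cfg.DeleteThreshold)
}

func main() {
	// Production might wait for only most pieces; the test sets the
	// maximum so CommitObject returns only after every piece is gone,
	// avoiding the race described above.
	fmt.Println(piecesToWaitFor(80, Config{DeleteThreshold: 0.75})) // 60
	fmt.Println(piecesToWaitFor(80, Config{DeleteThreshold: 1.0}))  // 80
}
```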
Change-Id: Idf6b52e71d0082a1cd87ad99a2edded6892d02a8
This change clears the stored wallet, coupon, and priceSummaryForSelectedProject on logout.
see: https://github.com/storj/storj/issues/5252
Change-Id: I2f51d727be97af9a0f26163bb3e93affa35dd2ee
We removed the monkit call from the "object" method because it was
using too much CPU and was visible on the CPU profile, but we should
first try an optimized version of this call.
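For context, the instrumentation pattern in question and the usual way to cheapen it (a sketch; this may not be exactly the optimization the commit has in mind):

```go
package main

import (
	"context"

	"github.com/spacemonkeygo/monkit/v3"
)

var mon = monkit.Package()

// Instrumenting a hot method like this constructs the task on every
// call, which is the kind of overhead that shows up in a CPU profile:
func objectSlow(ctx context.Context) (err error) {
	defer mon.Task()(&ctx)(&err)
	return nil
}

// The optimized variant constructs the task once instead of per call:
var objectTask = mon.Task()

func objectFast(ctx context.Context) (err error) {
	defer objectTask(&ctx)(&err)
	return nil
}

func main() {
	_ = objectSlow(context.Background())
	_ = objectFast(context.Background())
}
```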
Change-Id: Ib76d8a2968a704ce47235c6dac6edad4e40bde48