As a reminder: the latest clingy removed the requirement of having a custom context (which made the usage of context.WithValue harder) and uses a plain context instead.
Clingy saves stdin/stdout/stderr to the context (earlier to a separate context type) to make them available for unit testing.
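A minimal sketch of the pattern, assuming hypothetical helper names (this is not clingy's actual API): stdio handles are stored on a plain context.Context, so unit tests can substitute buffers.
```
package clidemo

import (
	"bytes"
	"context"
	"io"
	"os"
)

// stdioKey is a hypothetical unexported context key.
type stdioKey struct{ name string }

// WithStdout stores a writer on a plain context, mimicking the pattern the
// commit describes (stdio kept on context.Context instead of a custom type).
func WithStdout(ctx context.Context, w io.Writer) context.Context {
	return context.WithValue(ctx, stdioKey{"stdout"}, w)
}

// Stdout returns the stored writer, falling back to os.Stdout.
func Stdout(ctx context.Context) io.Writer {
	if w, ok := ctx.Value(stdioKey{"stdout"}).(io.Writer); ok {
		return w
	}
	return os.Stdout
}

// runForTest shows how a unit test can capture command output via the context.
func runForTest(ctx context.Context) string {
	var buf bytes.Buffer
	ctx = WithStdout(ctx, &buf)
	io.WriteString(Stdout(ctx), "hello from the command\n")
	return buf.String()
}
```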
Change-Id: I8896574f4670721de43a577cd4b35952e3b5d00e
Adds the USDollarsMicro currency to the billing DB, which supports fractions of a cent with decimal places for better billing amount accuracy.
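A rough illustration of the micro-unit idea (type and constant names are illustrative, not the actual billing code): amounts are stored as integer micro-dollars, so sub-cent values stay exact.
```
package billing

import "fmt"

// microPerDollar is the implied scale: 1 dollar = 1,000,000 micro units,
// so sub-cent amounts are representable exactly as integers.
const microPerDollar = 1_000_000

// USDollarsMicro sketches the idea only; the real billing DB types differ.
type USDollarsMicro int64

// String renders the amount with the implied six decimal places.
func (u USDollarsMicro) String() string {
	return fmt.Sprintf("$%.6f", float64(u)/microPerDollar)
}

// Example: 0.15 cents is an exact integer value rather than a lossy float.
var exampleFee = USDollarsMicro(1500) // $0.001500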
Change-Id: Id07dfae104d94e27c7b22ab8f5781010e16c4c8e
Removed the batch insert of payments since batch inserts do not guarantee order. The order of payments sent to the payments DB is important, because the billing chore will request new payments based on the last received payment. If the last payment inserted is not the last payment received, duplicate payments will be inserted into the billing table.
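A hedged sketch of the approach under assumed names (insertPayments and the billing_payments table are illustrative, not the real schema): each payment is inserted individually, in the order received, so the last row inserted is also the last payment received.
```
package billingdb

import (
	"context"
	"database/sql"
	"time"
)

// Payment is a simplified stand-in for the real payments record.
type Payment struct {
	Source    string
	Amount    int64
	CreatedAt time.Time
}

// insertPayments inserts payments one by one inside a transaction instead of
// using a single multi-row INSERT, whose row order is not guaranteed.
func insertPayments(ctx context.Context, db *sql.DB, payments []Payment) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback()

	for _, p := range payments {
		_, err := tx.ExecContext(ctx,
			`INSERT INTO billing_payments (source, amount, created_at) VALUES ($1, $2, $3)`,
			p.Source, p.Amount, p.CreatedAt,
		)
		if err != nil {
			return err
		}
	}
	return tx.Commit()
}
```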
Change-Id: Ic3335c89fa8031f7bc16f417ca23ed83301ef8f6
Improve styling for lists in `customHtmlDescription` for alternate
signup landing pages in `registrationViewConfig`.
Change-Id: If20831044099a1183e5dc8b41668bf2f68c272a2
We would like to have a separate process/command to collect bloom
filters from a source different than the production DBs. Such a process
will use the segment loop to build bloom filters for all storage nodes
and will send them to a Storj bucket. This change is just the initial
code with the peer which will be used to build this new process.
Updates https://github.com/storj/team-metainfo/issues/120
Change-Id: I10a52b74865ce8ec4c29b7c6a2836f9232620422
Add a new protobuf message to store retain filters before sending them
out to storagenodes.
This allows the retain filters to be generated by a separate command on
a backup of the database.
Change-Id: I8a2892d9d34bed1dbd6e5a7ded000dcdea7ec061
We are preparing to use object versions internally and to do
that we need to prepare different parts of the system to handle
object versions different than '1'. This change adjusts the code
responsible for server-side move and copy (a sketch follows the
list below).
What was done:
* begin methods for move and copy now use GetObjectLastCommitted
to find the object
* results from the begin move and copy operations now contain the
version, to be able to map the object correctly with the finish
operation
* begin methods put the version into the satellite stream id and
finish methods use this version as a parameter instead of a
hardcoded value
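The types below are illustrative stand-ins for the real satellite protobuf definitions; they only show the flow the bullets describe: the version found at begin time travels inside the stream id and is read back at finish time instead of a hardcoded 1.
```
package metainfo

import "encoding/binary"

// Version is a stand-in for the metabase version type.
type Version int64

// BeginMoveResult now carries the version of the object found via
// GetObjectLastCommitted, so it can be mapped to the finish operation.
type BeginMoveResult struct {
	StreamID []byte // satellite stream id with the version embedded
	Version  Version
}

func beginMove(lastCommitted Version) BeginMoveResult {
	return BeginMoveResult{
		StreamID: encodeStreamID(lastCommitted),
		Version:  lastCommitted,
	}
}

// finishMove reads the version back from the satellite stream id instead of
// assuming the hardcoded value 1.
func finishMove(streamID []byte) Version {
	return Version(binary.BigEndian.Uint64(streamID))
}

func encodeStreamID(v Version) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, uint64(v))
	return buf
}
```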
Fixes https://github.com/storj/storj/issues/4867
Change-Id: I1380911279c21e10a3fff0342793efd2e73eafad
Implemented the share bucket feature.
Refactored the share object modal a bit (it still has to be refactored entirely).
Issue:
https://github.com/storj/storj/issues/4945
Change-Id: Icefd4bfe3eef9173ae824eea44d30450acde8044
This concludes the saga that began with commit c053bdbd, migrating away
from using gob-encoded big.Float values in the database to using
integers with an implied number of decimal places.
Nothing is using or accessing or expecting the _gob columns to exist
anymore. The transition code and the migration chore are gone. All that
remains is to drop the columns.
Change-Id: I9b15ee52f7781510a6dc91cf7c54f7f9022b1210
Update responsiveness in billing pages to match Figma designs, including added billing header scrolling for smaller windows.
Fixes https://github.com/storj/storj/issues/5033
Change-Id: I94fc836f95e3985d6b675d3495961ec703e43044
Currently we are tracking loop time with RunOnce method
metrics, but it may give inaccurate values if the loop is waiting
for an observer for a very long time. We had such a case with GC,
which is the only loop observer in the GC peer and joins the loop
every 5 days.
Change-Id: I08546c912e00c3641488de6a5d75948fe75c8e99
Today each storagenode should have a port which is open to the internet and handles DRPC protocol calls.
When we do an HTTP call on the DRPC endpoint, it hangs until a timeout.
This patch changes the behavior: the main DRPC port of the storagenodes can accept HTTP requests and can be used to monitor the status of the node:
* it returns HTTP 200 only if the storagenode is healthy (not suspended / disqualified + online score > 0.9)
* it CAN include information about the current status (per satellite). It's opt-in; you have to configure it explicitly.
In this way it becomes extremely easy to monitor storagenodes with external uptime services.
Note: this patch exposes some information which was not easily available before (especially the node status and the used satellites). I think it should be acceptable:
* Until we have more community satellites, all storagenodes are connected to the main Storj satellites.
* With community satellites, it's a good thing to have more transparency (an easy way to check who is connected to which satellites).
The implementation is based on this line:
```
http.Serve(NewPrefixedListener([]byte("GET / HT"), publicMux.Route("GET / HT")), p.public.http)
```
This line answers the TCP requests which start with `GET / HT...` (a GET HTTP request to the route), but puts back the removed prefix.
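A rough sketch of what "puts back the removed prefix" means in practice, assuming hypothetical wrapper types (the real NewPrefixedListener may differ): the multiplexer has already consumed the first bytes to pick the route, so they are replayed ahead of the remaining stream before net/http parses the request.
```
package server

import (
	"bytes"
	"io"
	"net"
)

// prefixedConn replays the consumed prefix bytes before the rest of the
// connection, so the HTTP server still sees a complete request line.
type prefixedConn struct {
	net.Conn
	reader io.Reader
}

func newPrefixedConn(prefix []byte, conn net.Conn) net.Conn {
	return &prefixedConn{
		Conn:   conn,
		reader: io.MultiReader(bytes.NewReader(prefix), conn),
	}
}

func (c *prefixedConn) Read(p []byte) (int, error) { return c.reader.Read(p) }

// prefixedListener wraps every accepted connection the same way.
type prefixedListener struct {
	net.Listener
	prefix []byte
}

func (l *prefixedListener) Accept() (net.Conn, error) {
	conn, err := l.Listener.Accept()
	if err != nil {
		return nil, err
	}
	return newPrefixedConn(l.prefix, conn), nil
}
```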
Change-Id: I3700c7e24524850825ecdf75a4bcc3b4afcb3a74
Currently, the satellite tracks connectivity information about all nodes
that have contacted it, even if we have never successfully contacted
the node back.
This behavior was leveraged during a security audit to create hundreds
of thousands of "junk nodes" in the nodes table on one satellite, which
affected performance of queries such as node selection.
With this change, we should no longer track information about nodes that
have never been successfully contacted.
Note that it will still be possible to cause the creation of "junk
node" entries in the db; the attacker just has to set up individual
publicly-routable IP+port pairs for each node as it is created, so it
can respond to a PingBack.
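An illustrative sketch of the new order of operations (the interfaces and names below are stand-ins, not the actual satellite contact code): the node record is only persisted after the satellite managed to dial the node back.
```
package contact

import "context"

// nodesDB and pinger are illustrative interfaces for the sketch only.
type nodesDB interface {
	UpdateCheckIn(ctx context.Context, nodeID, addr string) error
}

type pinger interface {
	PingBack(ctx context.Context, addr string) error
}

// checkIn stores the node only after a successful ping back, so nodes that
// were never successfully contacted leave no row behind.
func checkIn(ctx context.Context, db nodesDB, p pinger, nodeID, addr string) error {
	if err := p.PingBack(ctx, addr); err != nil {
		return err // never contacted successfully: nothing is stored
	}
	return db.UpdateCheckIn(ctx, nodeID, addr)
}
```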
Change-Id: Ibb6da6cc908fd4fc85aae1ba00313ba2738409ab
This change fixes the duplicate navigation error on the new billing screen.
see: https://github.com/storj/storj/issues/5096
Change-Id: Ie0512de806835acfc29e86182e4c49ced6355383
Decode the JSON input string according to the Unicode encoding
indicated by the byte order mark, if that special mark is present.
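A simplified sketch of the idea for the UTF-8 case (the actual change may also handle other Unicode encodings signalled by their BOMs; the function name is illustrative): strip the mark before handing the bytes to the JSON decoder, which otherwise rejects them as invalid.
```
package consoleapi

import (
	"bytes"
	"encoding/json"
)

// utf8BOM is the byte order mark some tools prepend to JSON payloads.
var utf8BOM = []byte{0xEF, 0xBB, 0xBF}

// unmarshalSkippingBOM drops the mark, then decodes the JSON as usual.
func unmarshalSkippingBOM(data []byte, v interface{}) error {
	data = bytes.TrimPrefix(data, utf8BOM)
	return json.Unmarshal(data, v)
}
```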
Fixes https://github.com/storj/storj/issues/4950
Change-Id: If91bac22590c89b35c58bf54f6d3bdb8a67d7a4f
Metadata validation for the CommitObject request was placed in the wrong
place. There is a case (old uplink) where the encrypted key is bundled
inside the encrypted metadata bytes and we need to extract it before we
can validate it. This change moves metadata validation to a place where
we are sure we have the encrypted metadata and the encrypted metadata
encrypted key ready to be checked.
"Run Versions Test" is covering this case and it was failing without
this change.
Change-Id: Ib709ad901fbb3fa4865a393195b7b3f4c0d87e7a
For the object Version we are using different types in different places.
The satellite StreamID uses int32 but metabase accepts int64. The
metabase type is the correct one and we should align other places with it.
As a small addition, this change also passes the version correctly
between requests instead of using a hardcoded value.
Change-Id: I63634d73c0a48c009e4db5f203ff18b7f3218b02
Updated the metabase.UpdateObjectMetadata method to always set metadata for the last committed object.
Closes https://github.com/storj/storj/issues/4870
Change-Id: I060683e31efcaf3e2531fea143cf0567e5ff5f73
We couldn't use environment variables safely to configure the storagenode since we introduced the embedded updater.
For example, STORJ_DEBUG_ADDR=localhost:11111 would try to set debug port 11111 for both the storagenode and the storagenode-updater, causing a port conflict.
This small change makes it possible to configure the storagenode-updater with STORJUPDATER_... environment variables.
Tested by creating a custom image and installing it on my own storage node.
Change-Id: I6b0a601a4dc63d2d1ff3c191ae89981434e55c30
Sessions now expire after a much shorter amount of time, requiring
clients to issue API requests for session extension. This is handled
behind the scenes as the user interacts with the page, but once session
expiration is imminent, a modal appears which informs the user of their
inactivity and presents them with the choice of logging out or
preserving their session.
Change-Id: I68008d45859c814a835d65d882ad5ad2199d618e
After redirecting from the bucket page to the upload page, if we have no files/folders
related to this passphrase but have other objects in the bucket, we show a closable
reminder that we support multiple passphrases per bucket.
Change-Id: I6420aedd5605100e4aa35b598771e5298e251f91
When encoding structs into JSON, byte slices are marshalled as a base64-encoded
string using base64.StdEncoding.Encode():
ea9c3fd42d/src/encoding/json/encode.go (L833-L861)
We, however, expect API secrets to be encoded as base64URL, so when
a marshalled secret (with byte slice type) is added to the multinode
dashboard, it fails with `illegal base64 data at input byte XX`.
This change changes the type of the APISecret field in the
multinode/nodes.Nodes struct to use the multinodeauth.Secret type
instead of []byte.
multinodeauth.Secret is extended with custom MarshalJSON and
UnmarshalJSON methods which implement the json.Marshaler and
json.Unmarshaler interfaces, respectively.
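A sketch of the shape of those methods, assuming Secret is a plain byte slice (the real type may be a fixed-size array and the exact implementation may differ): the secret round-trips through URL-safe base64 instead of the StdEncoding that encoding/json would apply to a raw []byte.
```
package multinodeauth

import (
	"encoding/base64"
	"encoding/json"
)

// Secret is sketched here as a raw byte slice.
type Secret []byte

// MarshalJSON encodes the secret with URL-safe base64.
func (s Secret) MarshalJSON() ([]byte, error) {
	return json.Marshal(base64.URLEncoding.EncodeToString(s))
}

// UnmarshalJSON accepts the URL-safe base64 form produced above.
func (s *Secret) UnmarshalJSON(data []byte) error {
	var encoded string
	if err := json.Unmarshal(data, &encoded); err != nil {
		return err
	}
	decoded, err := base64.URLEncoding.DecodeString(encoded)
	if err != nil {
		return err
	}
	*s = decoded
	return nil
}
```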
Resolves https://github.com/storj/storj/issues/4949
Change-Id: Ib14b5f49ceaac109620c25d7ff83be865c698343
This change tracks signup captcha scores in the signup_captcha column in the users table.
It slightly modifies the captcha verify method to return both the score and success.
see: https://github.com/storj/storj/issues/5067
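A hedged sketch of what the modified verify signature could look like (interface and parameter names are illustrative, not the exact satellite code): the handler reports both the success flag and the provider's score, so the score can be stored in the users.signup_captcha column.
```
package console

import "context"

// CaptchaHandler verifies a captcha response and reports the provider score.
type CaptchaHandler interface {
	Verify(ctx context.Context, responseToken, userIP string) (valid bool, score *float64, err error)
}
```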
Change-Id: I7b3993e44958cfcf179806c7df19d6887fe3eda9
Use the provided ConstraintViolation method of the pgutil package rather than importing jackc pgxerrcode directly.
Change-Id: I4e86713000b3f5f0aadd54beee8ee239f0c8df8e
This change reverts satellite/metabase/iterator.go of 7390f389c to the
previous version (without optimization) but leaves added benchmarks in
place.
We noticed that this optimization doesn't work and actually increases
listing times for most buckets, hence the revert until we come up with a
better idea.
Benchmarks:
name                                                  old time/op  new time/op  delta
NonRecursiveListing/Postgres/listing_no_prefix-8      1.30ms ± 4%  4.52ms ± 4%  +246.92%  (p=0.008 n=5+5)
NonRecursiveListing/Postgres/listing_with_prefix-8    3.26ms ± 3%  4.44ms ± 2%   +36.19%  (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_no_prefix-8      618µs ± 3%  2225µs ± 2%  +259.94%  (p=0.008 n=5+5)
NonRecursiveListing/Cockroach/listing_with_prefix-8   1.81ms ± 5%  2.60ms ± 5%   +43.96%  (p=0.008 n=5+5)
Updates storj/team-metainfo#115
Change-Id: I96e4e7a563b188df478f8489027dc0042469b839
Following the changes made to fix the storage usage graph
on the storagenode dashboard, we added a new
interval_end_time column to the accounting_rollups table.
We noticed a two-day delay in the graph; it turns out the sub-query
was wrong due to the conflicting interval_end_time column
in both tables, so we have to explicitly state which table's
column we are referring to.
Also, default the interval_end_time in the accounting
rollups to the start_time if the interval_end_time is null;
this fallback will be removed once we backfill the column and alter
it to be a non-nullable column.
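An illustrative query string (not the actual one) showing both fixes described above: the ambiguous interval_end_time is qualified with its table alias, and a NULL interval_end_time falls back to start_time until the column is backfilled and made non-nullable.
```
package storagenodeaccounting

// storageUsageQuery sketches the shape of the corrected sub-query.
const storageUsageQuery = `
	SELECT r.node_id,
	       SUM(r.at_rest_total) AS at_rest_total,
	       COALESCE(r.interval_end_time, r.start_time) AS interval_end_time
	FROM accounting_rollups r
	GROUP BY r.node_id, COALESCE(r.interval_end_time, r.start_time)
`
```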
Updates https://github.com/storj/storj/issues/4178
Change-Id: Iff32b261d07b6ee219d2b6b6542377f0c54633a1
This is the longest metabase test at the moment and it would be nice to
speed it up a bit to improve overall test execution time.
Change-Id: I86da8e0e593d20024b3ec778cbeab34a4613151f
Similar to the existing snapshot-based tests of the satellite/metabase db, we make a migration here which is:
* dedicated to unit tests
* faster (with fewer steps)
* but safe: an additional unit test ensures that the snapshot-based migration and the normal prod migration have the same results.
Change-Id: Ie324b09f64b4553df02247a9461ece305a6cf832
Database migration tests are rather slow, reduce them to
only the last 10 migrations, which should be sufficient.
Change-Id: Ib9d964fe6ec86ddeeef26c66b6ea9207b7868855
Context cancellation that aborts a non-essential Redis operation must
not be logged as an error because the operation is intentionally
canceled.
We already consider these cancellations not to be an error in the
following operation for the same reason, and we return an RPC canceled
status code.
On the other hand, it doesn't make sense to continue if the context is
canceled, because although this is a non-essential operation, if this
one is canceled due to the context, the next one will be canceled for
the same reason; hence, we return earlier.
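A minimal sketch of the rule, with redisGet as a hypothetical stand-in for the real Redis client call: a failure caused only by context cancellation is not logged as an error, but we still return early because every following call would be canceled for the same reason.
```
package cache

import (
	"context"
	"errors"
	"log"
)

// lookup fetches a value and applies the logging rule described above.
func lookup(ctx context.Context, key string) (string, bool) {
	val, err := redisGet(ctx, key)
	if err != nil {
		if errors.Is(err, context.Canceled) {
			return "", false // intentional cancellation: no error log, just stop
		}
		log.Printf("redis lookup failed: %v", err)
		return "", false
	}
	return val, true
}

// redisGet stands in for the real Redis client call.
func redisGet(ctx context.Context, key string) (string, error) {
	if err := ctx.Err(); err != nil {
		return "", err
	}
	return "value-for-" + key, nil
}
```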
Change-Id: Ib3331975adeb06367d1ea0a578263ef50ae3f079
Adds the USDMicro currency, which supports fractions of a cent with decimal
places for better billing amount accuracy.
Adds JSON marshaling and unmarshaling for monetary.Amount, so that it
can be converted to/from JSON.
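A simplified sketch of what such JSON support can look like (the real monetary.Amount has more fields and full currency handling; the names below are illustrative): the value is serialized as integer base units plus a currency tag, so no float precision is lost on the round trip.
```
package monetary

import (
	"encoding/json"
	"fmt"
)

// amountJSON is the wire form used by the sketch below.
type amountJSON struct {
	Value    int64  `json:"value"`    // integer base units, e.g. micro-dollars
	Currency string `json:"currency"` // e.g. "USDMicro"
}

// Amount is a simplified stand-in for the real monetary.Amount.
type Amount struct {
	baseUnits int64
	currency  string
}

func (a Amount) MarshalJSON() ([]byte, error) {
	return json.Marshal(amountJSON{Value: a.baseUnits, Currency: a.currency})
}

func (a *Amount) UnmarshalJSON(data []byte) error {
	var aux amountJSON
	if err := json.Unmarshal(data, &aux); err != nil {
		return err
	}
	if aux.Currency == "" {
		return fmt.Errorf("monetary: missing currency")
	}
	a.baseUnits, a.currency = aux.Value, aux.Currency
	return nil
}
```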
Change-Id: I034eba120ed23b6ba00b2d81a4f1b9db5f9a203f