It turns out that running a docker image build for specific
architectures is not possible from amd64 (e.g. installing ca-certificates).
Change-Id: I8b8f002b7e532fb4a0c6542d5b573c294c501068
this change tries really hard to never have all of the storage node
rollups in memory at the same time, until the rollups are actually
summed together.
Change-Id: If67f49e7d71106798d996a6850b3e48671bd9e18
Firstly, this changes the repair functionality to return Canceled errors
when a repair is canceled during the Get phase. Previously, because we
do not track individual errors per piece, this would just show up as a
failure to download enough pieces to repair the segment, which would
cause the segment to be added to the IrreparableDB, which is entirely
unhelpful.
Then, ignore Canceled errors in the return value of the repair worker.
Apparently, when the worker returns an error, that makes Cobra exit the
program with a nonzero exit code, which causes some piece of our
deployment automation to freak out and page people. And when we ask the
repair worker to shut down, "canceled" errors are what we _expect_, not
an error case.
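A minimal sketch of the second part, using only the standard library
(Worker here is a stand-in for the real repair worker):

    func runWorker(ctx context.Context, worker *Worker) error {
        err := worker.Run(ctx)
        if errors.Is(err, context.Canceled) {
            // shutdown was requested; "canceled" is what we expect,
            // so don't let Cobra turn it into a nonzero exit code
            return nil
        }
        return err
    }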
Change-Id: Ia3eb1c60a8d6ec5d09e7cef55dea523be28e8435
This makes it possible to remove this obsolete flag from the
multi-tenant gateway.
As a consequence, displaying the GATEWAY_0_ACCESS env var will always
require a running storj-sim. Until now, it was required only the first
time; after that the value was stored in the 'access' config, but this
is no longer possible.
The changes in StripeMock are required to fix failures in integration
tests. StripeMock is in-memory and its data does not survive restarts of
storj-sim. The second and subsequent starts of storj-sim had an invalid
StripeMock state, which failed requests that were required to populate
the GATEWAY_0_ACCESS env var. The changes make StripeMock repopulate the
Stripe customers from the database.
Change-Id: I981a208172b76577f12ecdaae485f5ae4ea269bc
log.Fatal immediately terminates the program without running any defers.
We should properly close all the services and databases.
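A tiny illustration of the problem, with made-up helpers (openDB, run):

    // log.Fatal calls os.Exit, so this deferred Close never runs:
    func badRun() {
        db, err := openDB()
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close() // skipped when log.Fatal fires below
        if err := run(db); err != nil {
            log.Fatal(err)
        }
    }

    // preferred: return the error so deferred cleanup still executes
    func goodRun() error {
        db, err := openDB()
        if err != nil {
            return err
        }
        defer db.Close()
        return run(db)
    }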
Change-Id: I5e959cef3eafedeacb3a2062e3da47e8d04e8e75
Sadly the build process with this command is very, very flaky and often fails pulling down curl via apk.
As we currently do not need it anyway, it is safe to remove.
Change-Id: I8a396c560d61a7fe6324560152a68c07c6b31638
The VerifyPieceHashes method has a sanity check for the number of pieces
to be removed from the pointer after the audit for verifying the piece
hashes.
This sanity check failed when we executed the command on the production
satellites because the Verify command removes Fails and PendingAudits
nodes from the audit report if piece_hashes_verified = false.
A new temporary UsedToVerifyPieceHashes flag is added to
audits.Verifier. It is set to true only by the verify-piece-hashes
command. If the flag is true then the Verify method will always include
Fails and PendingAudits nodes in the report.
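Roughly the intended shape of the gate; the Report type and fields here
are simplified stand-ins, not the real audits types:

    // trimReport mimics how Verify decides which nodes stay in the report.
    func trimReport(report Report, pieceHashesVerified, usedToVerifyPieceHashes bool) Report {
        if usedToVerifyPieceHashes {
            // the verify-piece-hashes command needs the full picture
            return report
        }
        if !pieceHashesVerified {
            // normal audits drop these nodes while the segment's piece
            // hashes are still unverified
            report.Fails = nil
            report.PendingAudits = nil
        }
        return report
    }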
A test case is added to cover this use case.
Change-Id: I2c7cb6b12029d52b2fc565365eee0826c3de6ee8
Jira: https://storjlabs.atlassian.net/browse/PG-69
There are a number of segments with piece_hashes_verified = false in
their metadata on US-Central-1, Europe-West-1, and Asia-East-1
satellites. Most probably, this happened due to a bug we had in the
past. We want to verify them before executing the main migration to
metabase. This would simplify the main migration to metabase with one
less issue to think about.
Change-Id: I8831af1a254c560d45bb87d7104e49abd8242236
Jira: https://storjlabs.atlassian.net/browse/PG-67
There are a number of old-style objects (without the number of segments
in their metadata) on US-Central-1, Europe-West-1, and Asia-East-1
satellites. We want to migrate their metadata to contain the number of
segments before executing the main migration to metabase. This would
simplify the main migration to metabase with one less issue to think
about.
Change-Id: I42497ae0375b5eb972aab08c700048b9a93bb18f
Jira: https://storjlabs.atlassian.net/browse/USR-822
This is the last step of dropping these 2 db tables. It also deletes all
code associated with them.
Change-Id: I8be840dc2a7be255cf6308c9434b729fe4d9391e
Add a config so that some percent of users require credit cards /
account balances in order to create a project or have a promotional
coupon applied.
The UI was updated to match the needed paywall status.
At this point we decided not to use a field to store whether a user is
in an A/B test, and instead just use math to see if they're in a test.
We decided to use MD5 (because it's in Postgres too) and the user UUID
for that math.
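A minimal sketch of the bucketing idea (helper and parameter names are
invented; MD5 of the user UUID gives a stable bucket without storing an
extra column):

    package main

    import (
        "crypto/md5"
        "encoding/binary"
        "fmt"
    )

    // inPaywalledGroup deterministically places a user in a 0-99 bucket
    // based on the MD5 of their UUID bytes.
    func inPaywalledGroup(userID [16]byte, percent uint64) bool {
        sum := md5.Sum(userID[:])
        bucket := binary.BigEndian.Uint64(sum[:8]) % 100
        return bucket < percent
    }

    func main() {
        var id [16]byte // normally the user's UUID bytes
        fmt.Println(inPaywalledGroup(id, 10)) // ~10% of users hit the paywall
    }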
Change-Id: I0fcd80707dc29afc668632d078e1b5a7a24f3bb3
We passed in revocationDB and metainfoDB for no reason.
Let's remove them from the dependency list to further reduce the footprint.
Change-Id: Ic0317bb92670fbd305d4a8b0ed1cb82858e2f6d3
Jira: https://storjlabs.atlassian.net/browse/USR-821
The `migrate-credits` billing command checks the available credits
balance for all users and moves it to the Stripe balance by creating a
new credit balance transaction.
Change-Id: Iafc7b95a4edad47f7c145a22e210f8c821ac183d
When a request comes in on the satellite api and we validate the
macaroon, we now also check if any of the macaroon's tails have been
revoked.
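A rough sketch of the check, with hypothetical names (RevocationChecker,
checkRevocations) rather than the real satellite API; tails are the
chained HMACs produced as caveats were added to the macaroon, and
revoking any one of them invalidates every macaroon derived from it:

    type RevocationChecker interface {
        IsRevoked(ctx context.Context, tail []byte) (bool, error)
    }

    func checkRevocations(ctx context.Context, tails [][]byte, revocations RevocationChecker) error {
        for _, tail := range tails {
            revoked, err := revocations.IsRevoked(ctx, tail)
            if err != nil {
                return err
            }
            if revoked {
                return errors.New("macaroon has been revoked")
            }
        }
        return nil
    }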
Change-Id: I80ce4312602baf431cfa1b1285f79bed88bb4497
Reverts https://review.dev.storj.io/c/storj/storj/+/2074
That change broke all billing commands with `sql: database is closed`.
Fixing the issue does not make sense as the whole setup of billing
commands is now error-prone. It's better to revert it.
Change-Id: Id13f817b73c7313ba60181f740b0712e4bdce077
This allows seeing logs in the output of the invoice commands.
The existing ensure-stripe-customer command is moved from the 'reports'
to the new 'billing' root command.
Change-Id: I752c7ab6ca59bfac8e0f174a45d2ab45fc18e467
This makes it possible to cover more testing scenarios.
Currently, it's hard to implement a test suite for payments because
mockpayments is at too high a level and we cannot emulate many things,
e.g. adding a credit card. This change is the first step towards being
able to add a mock for the Stripe client and do more granular tests.
Change-Id: Ied85d4bd0642debdffe1161657c1e475202e9d23
See https://storjlabs.atlassian.net/browse/SM-752
These changes allow us to change the log level at runtime through a handler off of the debug endpoint.
Examples of changing the log level on storj-sim
To get the current level for the satellite api process:
curl -XGET 'http://127.0.0.1:10009/logging' --header 'Content-Type: text/plain'
To change the log level:
curl -XPUT 'http://127.0.0.1:10009/logging' --header 'Content-Type: text/plain' --data-raw '{"level":"error"}'
Change-Id: I05d164b290929fa06b6d78c01075ee41f8238044
CreateTables hasn't been quite accurate for a while now; rename it to
MigrateToLatest to be clearer about its behavior.
Change-Id: Ida48e95122a5d9b7a814e922d3698e00024a2ba7
In cases like the segment reaper script connecting to the metainfodb,
we don't want a db migration to happen automatically when we call
metainfo.NewStore. This adds a MigrateToLatest method for postgreskv
and cockroachkv, and calls MigrateToLatest in places where NewStore used
to create tables.
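A hedged sketch of the resulting call pattern (types and signatures are
approximate, not the real constructors):

    func openStore(ctx context.Context, log *zap.Logger, databaseURL string) (metainfo.PointerDB, error) {
        // opening the store no longer migrates implicitly
        db, err := metainfo.NewStore(log, databaseURL)
        if err != nil {
            return nil, err
        }
        // callers that previously relied on NewStore creating tables opt in explicitly
        if err := db.MigrateToLatest(ctx); err != nil {
            return nil, err
        }
        return db, nil
    }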
Change-Id: I682d0f26d609af0601dfdb32a24866cdf5d32a7e
currently production uses a different application suffix for gc
services, so chronograf can distinguish between gc processes and core
processes, but it'd be nice to be a bit more consistent with repairers
and api servers
Change-Id: Icb96fed006c59d7afd730317d35636a6e4573b58
uuid.UUID implements driver.Valuer and sql.Scanner, so it can be used
directly as a query argument and scan destination.
Replace uses of dbutil.BytesToUUID with uuid.FromBytes.
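A small illustration, assuming the uuid package is storj.io/common/uuid
and db is a *sql.DB:

    // scan a UUID column straight into a uuid.UUID, no BytesToUUID round trip
    func firstProjectID(ctx context.Context, db *sql.DB) (uuid.UUID, error) {
        var id uuid.UUID
        err := db.QueryRowContext(ctx, `SELECT id FROM projects LIMIT 1`).Scan(&id)
        return id, err
    }

    // converting raw bytes also goes through the uuid package now:
    //   parsed, err := uuid.FromBytes(rawBytes)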
Change-Id: I51a670185ceb3cc2199d5aa2b76bc3fc191ca8fe
The previous split into the storj.io/private repository broke the
tag-release.sh script. This is a minimal temporary fix to make things work.
This links the build information to specified variables and sets them
inline. This approach, of course, is very fragile.
Change-Id: I73db2305e6c304146e5a14b13f1d917881a7455c
* debug
* traces
* cfgstruct
* process
Package `storj/private/version` will be removed as a separate change.
Change-Id: Iadc40faa782e6225513b28218952f02d9c240a9f
Previously, we were simply discarding rows from the repair queue when
they couldn't be repaired (either because the overlay said too many
nodes were down, or because we failed to download enough pieces).
Now, such segments will be put into the irreparableDB for further
(and hopefully more focused) attention.
This change also better differentiates some error cases from Repair()
for monitoring purposes.
Change-Id: I82a52a6da50c948ddd651048e2a39cb4b1e6df5c
This peer will contain our administrative panels.
It's completely separated from our other satellite
processes because that allows better control over
restricting access to it.
Change-Id: Ifca473bee82ff6c680b346918ba32b835a7a6847
Currently we risk losing pending bandwidth rollup writes even on a clean
shutdown. This change ensures that all pending writes are actually
written to the db when shutting down the satellite.
Change-Id: Ideab62fa9808937d3dce9585c52405d8c8a0e703
this commit introduces the reported_serials table. its purpose is
to allow for blind writes into it as nodes report in so that we have
minimal contention. in order to continue to accurately account for
used bandwidth, though, we cannot immediately add the settled amount.
if we did, we would have to give up on blind writes.
the table's primary key is structured precisely so that we can quickly
find expired orders and so that we maximally benefit from rocksdb
path prefix compression. we do this by rounding the expires at time
forward to the next day, effectively giving us storagenode petnames
for free. and since there's no secondary index or foreign key
constraints, this design should use significantly less space than
the current used_serials table while also reducing contention.
after inserting the orders into the table, we have a chore that
periodically consumes all of the expired orders in it and inserts
them into the existing rollups tables. this is as if we changed
the nodes to report as the order expired rather than as soon as
possible, so the belief in correctness of the refactor is higher.
since we are able to process large batches of orders (typically
a day's worth), we can use the code to maximally batch inserts into
the rollup tables to make inserts as friendly as possible to
cockroach.
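A hedged sketch of the rounding that produces the day-granular
expiration key:

    // roundToNextDay rounds t forward to the start of the next UTC day, so
    // all orders expiring within the same day share a key prefix and the
    // expired-order scan only has to look at whole days.
    func roundToNextDay(t time.Time) time.Time {
        t = t.UTC()
        return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, time.UTC).AddDate(0, 0, 1)
    }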
Change-Id: I25d609ca2679b8331979184f16c6d46d4f74c1a6
Remove startup messages from peers. We expect all of them to start;
if they don't, they should return an error explaining why.
The only informative message is when a service is disabled.
When doing the initial database setup, each migration step isn't
informative, hence print only a single line with the final version.
Also use shorter log scopes.
Change-Id: Ic8b61411df2eeae2a36d600a0c2fbc97a84a5b93
This change updates the three satellite report commands that accept date
ranges to parse and treat those dates uniformly.
- End dates are now uniformly exclusive. Exclusive end dates help
operators avoid off-by-one errors on month boundaries: the operator
does not have to remember how many days are in that month and can just
run the report from the 1st (inclusive) through the 1st of the next
month (exclusive).
- Fixed the date range validity check, which only failed if the start
date came after the end date (it should also have failed on equal
dates, since the check happened after adjusting for inclusivity).
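A minimal sketch of the half-open range handling, assuming dates arrive
as YYYY-MM-DD strings (the real flag parsing may differ):

    func parseRange(startStr, endStr string) (start, end time.Time, err error) {
        const layout = "2006-01-02"
        if start, err = time.Parse(layout, startStr); err != nil {
            return time.Time{}, time.Time{}, err
        }
        if end, err = time.Parse(layout, endStr); err != nil {
            return time.Time{}, time.Time{}, err
        }
        // end is exclusive, so start must be strictly before it; equal
        // dates would describe an empty range and are rejected as well
        if !start.Before(end) {
            return time.Time{}, time.Time{}, errors.New("invalid range: start must be before end")
        }
        return start, end, nil
    }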
Change-Id: Ib2ee1c71ddb916c6e1906834d5ff0dc47d1a5801
for storj-sim to work, we need to avoid schemas in cockroach urls
so we have storj-sim create namespaced databases instead of schemas
and we have the migrate command create the database in the same way
that it would create a schema for postgres. then it works!
a follow up commit will move the creation of the database/schemas
into storj-sim's setup step so that we can avoid doing these icky
creations during normal migration calls. it will also make the
pointerdb have an explicit call to migrate instead of just doing
it every time it's opened.
Change-Id: If69ef5cb96b6866b0438c761bd445afb3597ae5f
* change satellite.Peer name to Core
* change to Core in testplanet
* missed a few places
* keep shared stuff in peer.go to stay consistent with storj/docs
* separate sadb migration, add version check
* update checkversion to do same validation as migration
* changes per CR
* add sa migration to storj-sim
* add different debug port in storj-sim for migration
* add wait for exit for storj-sim migration
* update sa docker entrypoint to support migration
* storj-sim satellite parts all wait for migration
* upgrade golang-migrate/migrate to v4 because of a bug
* fix go mod tidy
* rm dup api code from sa peer, update storj-sim
* fix for backwards compat tests
* use env var instead of localhost
* changes per CR
* fix env var name
* skip peer for setup
* set up satellite repair run command
* add separated repair process to storj-sim
* add repairer peer to satellite in testplanet
* move api run cmd into api.go
* add satellite run repair to entrypoint
* update lock file and add comment
* add created at and bytes transferred
* cleanup
* rename db func to GetGracefulExitNodesByTimeFrame
* fix flag
* split into two overlay functions
* := to =
* fix test
* add node not found error class
* fix overlay test
* suggested test changes
* review suggestions
* get exit status from overlay.Get()
* check rows.Err
* fix panic when ExitFinishedAt is nil
* fix comments in cmdGracefulExit
* set up redis support in live accounting
* move live.Service interface into accounting package and rename to Cache, pass into satellite
* refactor Cache to store one int64 total, add IncrBy method to redis client implementation
* add monkit tracing to live accounting
What: Change cmd/uplink to use scopes
It moves the fields that will be subsumed by scopes into an explicit legacy section and hides their configuration flags.
Why: So that it can read scopes in from files and stuff
* rename pkg/linksharing to linksharing
* rename pkg/httpserver to linksharing/httpserver
* rename pkg/eestream to uplink/eestream
* rename pkg/stream to uplink/stream
* rename pkg/metainfo/kvmetainfo to uplink/metainfo/kvmetainfo
* rename pkg/auth/signing to pkg/signing
* rename pkg/storage to uplink/storage
* rename pkg/accounting to satellite/accounting
* rename pkg/audit to satellite/audit
* rename pkg/certdb to satellite/certdb
* rename pkg/discovery to satellite/discovery
* rename pkg/overlay to satellite/overlay
* rename pkg/datarepair to satellite/repair
* added satellite partner value attribution report. WIP
* WIP
* basic attribution report test completed. still a WIP
* cleanup
* fixed projectID conversion
* report display cleanup
* cleanup, added more test data
* added partnerID to query results
* fixed lint issues
* fix import order
* suggestions from PR review
* updated doc to reflect implementation
* clarification comments in the report SQL
* Changed based on PR suggestion
* More changes based on PR suggestions
* Changes based on PR suggestions
* reordered tests to be consistent with previous 2
* small comments cleanup
* More PR suggestions
* fixed lint issue and removed printf
* fixed var name
* Updates based on PR suggestions
* fixed message
* fixed test
* changes required after merge from master
* change BindSetup to be an option to Bind
* add process.Bind to allow composite structures
* hack fix for noprefix flags
* used tagged version of structs
Before this PR, some flags were created by calling `cfgstruct.Bind` and having their fields create a flag. Once the flags were parsed, `viper` was used to acquire all the values from them and config files, and the fields in the struct were set through the flag interface.
This doesn't work very well for slices on config structs, since the flag interface can only set strings, and for a string slice it turns out that the implementation in `pflag` appends an entry rather than setting it.
This changes three things:
1. Only have a `Bind` call instead of `Bind` and `BindSetup`, and make `BindSetup` an option instead.
2. Add a `process.Bind` call that takes in a `*cobra.Cmd`, binds the struct to the command's flags, and keeps track of that struct in a global map keyed by the command.
3. Use `viper` to get the values and load them into the bound configuration structs instead of using the flags to propagate the changes.
In this way, we can support whatever rich configuration we want in the config yaml files, while still getting command-line flags where important.
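A hedged sketch of the resulting wiring (Config and run are
placeholders; signatures are from memory and may not match the real
process/cfgstruct packages exactly):

    var runCfg Config

    var runCmd = &cobra.Command{
        Use: "run",
        RunE: func(cmd *cobra.Command, args []string) error {
            // by the time RunE fires, viper has loaded flag, env and yaml
            // values into runCfg through the global command -> struct map
            return run(&runCfg)
        },
    }

    func init() {
        // binds runCfg's fields to runCmd's flags and registers the struct
        // so its values can be filled back in from viper before execution
        process.Bind(runCmd, &runCfg)
    }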
* tie defaults to releases
this change makes it so that by default, the flag defaults are
chosen based on whether the build was built as a release build or
an ordinary build. release builds by default get release defaults,
whereas ordinary builds by default get dev defaults.
any binary can have its defaults changed by specifying
--defaults=dev
or
--defaults=release
Change-Id: I6d216aa345d211c69ad913159d492fac77b12c64
* make release defaults more clear
this change extends cfgstruct structs to support either
a 'default' tag, or a pair of 'devDefault' and 'releaseDefault'
tags, but not both, for added clarity
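An illustrative config struct (field names invented) showing the two
mutually exclusive forms:

    type Config struct {
        // a single default shared by dev and release builds
        MaxRetries int `help:"how many times to retry" default:"3"`

        // split defaults: dev builds run the chore often, release rarely
        Interval time.Duration `help:"how often to run the chore" devDefault:"30s" releaseDefault:"24h"`
    }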
Change-Id: Ia098be1fa84b932fdfe90a4a4d027ffb95e249c6
* clarify cfgstruct.DefaultsFlag
Change-Id: I55f2ff9080ebbc0ce83abf956e085242a92f883e
* internal/version: do version checks much earlier in the process initialization, take 2
Change-Id: Ida8c7e3757e0deea0ec7aea867d3d27ce97dc134
* linter and test failures
Change-Id: I45b02a16ec1c0f0981227dc842e68dbdf67fdbf4
* Initial Webserver Draft for Version Controlling
* Rename type to avoid confusion
* Move Function Calls into Version Package
* Fix Linting and Language Typos
* Fix Linting and Spelling Mistakes
* Include Copyright
* Include Copyright
* Adjust Version-Control Server to return list of Versions
* Linting
* Improve Request Handling and Readability
* Add Configuration File Option
Add Systemd Service file
* Add Logging to File
* Smaller Changes
* Add Semantic Versioning and refuse outdated Software at Startup (#1612)
* implements internal Semantic Version library
* adds version logging + reporting to process
* Advance SemVer struct for easier handling
* Add Accepted Version Store
* Fix Function
* Restructure
* Type Conversion
* Handle Version String properly
* Add Note about array index
* Set temporary Default Version
* Add Copyright
* Adding Version to Dashboard
* Adding Version Info Log
* Renaming and adding CheckerProcess
* Iteration Sync
* Iteration V2
* linting
* made LogAndReportVersion a go routine
* Refactor to Go Routine
* Add Context to Go Routine and allow Operation if Lookup to Control Server fails
* Handle Unmarshal properly
* Linting
* Relocate Version Checks
* Relocating Version Check and specified default Version for now
* Linting Error Prevention
* Refuse Startup on outdated Version
* Add Startup Check Function
* Straighten Logging
* Dont force Shutdown if --dev flag is set
* Create full Service/Peer Structure for ControlServer
* Linting
* Straighten Naming
* Finish VersionControl Service Layout
* Improve Error Handling
* Change Listening Address
* Move Checker Function
* Remove VersionControl Peer
* Linting
* Linting
* Create VersionClient Service
* Renaming
* Add Version Client to Peer Definitions
* Linting and Renaming
* Linting
* Remove Transport Checks for now
* Move to Client Side Flag
* Remove check
* Linting
* Transport Client Version Intro
* Adding Version Client to Transport Client
* Add missing parameter
* Adding Version Check, to set Allowed = true
* Set Default to true, testing
* Restructuring Code
* Uplink Changes
* Add more proper Defaults
* Renaming of Version struct
* Dont pass Service use Pointer
* Set Defaults for Versioning Checks
* Put HTTP Server in go routine
* Add Versioncontrol to Storj-Sim
* Testplanet Fixes
* Linting
* Add Error Handling and new Server Struct
* Move Lock slightly
* Reduce Race Potentials
* Remove unnecessary files
* Linting
* Add Proper Transport Handling
* small fixes
* add fence for allowed check
* Add Startup Version Check and Service Naming
* make errormessage private
* Add Comments about VersionedClient
* Linting
* Remove Checks that refuse outgoing connections
* Remove release cmd
* Add Release Script
* Linting
* Update to use correct Values
* Move vars private and set minimum default versions for testing builds
* Remove VersionedClient
* Better Error Handling and naked return removal
* Straighten the Regex and string conversion
* Change Check to allow testplanet and storj-sim to run without the
need to pass an LDFlag
* Cosmetic Change to Dashboard
* Cleanup Returns and remove commented code
* Remove Version Check if no build options are passed in
* Pass in Config Values instead of Pointers
* Handle missed Error
* Update Endpoint URL
* Change Type of Release Flag
* Add additional Logging
* Remove Versions Logging of other Services
* minor fixes
Change-Id: I5cc04a410ea6b2008d14dffd63eb5f36dd348a8b
* Wiring up DumpNodes response for Inspector
* Finalize everything and test that it works
* Get Count and DumpNodes working for Overlay Cache
* WIP updating payment rollup to check statDB instead of overlay
* First pass at updating statDB to take wallet and email
* Passing tests
* use pb.NodeOperator instead of Meta struct
* remove TODO
* revert go.mod
* Get SQL migration working correctly
* Changes Meta to Operator in NodeStats struct
* Adds update operator logic for statDB
* Fix db migrate tests - added v5 snapshot
* User friendly msg for missing snapshot version
* Passing tests
* Change node update to happen in discovery instead of in overlay
* Fix logic and update function calls
* Update comment on UpdateOperator interface method
* Update name of parameter
* Change type of argument to UpdateOperator
* Updates statDB tests
* Adding dockerfile for running the web UI for Satellite
* Updating to work with Makefile and from root directory of repo
* Updating satellite ui build process to run in a more production-like mode by generating the assets then pulling those into the satellite container
* Updates to allow external traffic to UI, updates to storagenode for identity creation, and logging for bug tracking
* Adding auto cert generation for storagenode
* removing satellite-ui-image from main images flow in Makefile and adding latest tag to docker build for it
* Adding solid defaults, tuning dockerfiles, and moving to standard logging methods
* Updating logging to be more standard
* Updating to logger.Debug
* Removing unused library and unused identity creation code
Change-Id: I956453037e303693ea37f94318180af0ab7984d5
* move payments command into satellite/main.go
* flag for db connection string in paymentsCmd
* refactor payments to satellite subcommand
* reports command, add payments arg descriptions
* report data prints to stdout unless --out is set
* fix small error in csv columns
* Consolidate identity management:
Move identity creation/signing out of storagenode setup command.
* fixes
* linters
* Consolidate identity management:
Move identity creation/signing out of storagenode setup command.
* fixes
* save backups before saving signed certs
* add "-prebuilt-test-cmds" test flag
* linters
* prepare cli tests for travis
* linter fixes
* more fixes
* linter gods
* sp/sdk/sim
* remove ca.difficulty
* remove unused difficulty
* return setup to its rightful place
* wip travis
* Revert "wip travis"
This reverts commit 56834849dcf066d3cc0a4f139033fc3f6d7188ca.
* typo in travis.yaml
* remove tests
* remove more
* make it only create one identity at a time for consistency
* add config-dir for consistency
* add identity creation to storj-sim
* add flags
* simplify
* fix nolint and compile
* prevent overwrite and pass difficulty, concurrency, and parent creds
* goimports
apparently, the presence of an "Identity" attribute in a config struct
does not imply that anything will fill it in or use it.
i tested all of these executables manually and individually this time.
This change removes automatic metrics reporting for everything going
through process.Exec(), and re-adds metrics reporting for those commands
which are expected to be long-lived. Other commands (which may have been
intermittently sending metrics before this, if they ran unusually long)
will no longer send any metrics.
For commands where it makes sense, a node ID is used as the metrics ID.
now kad inspector features exist on every server that has
kademlia running. likewise, overlay and statdb.
this means kad inspection features are now available on
storage nodes
Change-Id: I343c873552341de13302bfb7a5d79cccc84fc6b8
* Adding other top level vars to be parsed into config
* Adding additional vars for parsing
* Fixing shell script conditionals in remaining entrypoints
* Adding null default for remaining env variable if statements