chore: fix typos in the documentation (#3959)

Hector Fernandez 2020-11-09 21:00:34 +01:00 committed by GitHub
parent e6dd3ecaa7
commit dc5a5df7f5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
10 changed files with 52 additions and 52 deletions

View File

@ -39,7 +39,7 @@ presents challenges when revoking a macaroon.
For example, if I hold the API key for Project A, I can create Macaroon A with a
caveat that it can only read and write files within Bucket A. I can
then share this macaroon with my own customer, Customer A. Customer A may then,
-if they wish, create Macaroon B which is futher caveated -- for example,
+if they wish, create Macaroon B which is further caveated -- for example,
restricted to read-only access in Bucket A -- and share Macaroon B with someone
else. This can occur without my knowledge.
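
To make the chaining concrete, here is a minimal sketch of how caveat chaining is typically implemented for macaroons (an illustrative HMAC construction, not the actual storj code): each added caveat re-keys the signature, so a holder can only ever narrow permissions, never widen them.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// Macaroon is a minimal, illustrative macaroon: a list of caveats plus an
// HMAC chain over them. The field layout is hypothetical.
type Macaroon struct {
	Caveats []string
	Sig     []byte
}

// AddCaveat returns a copy of m restricted by one more caveat. Anyone holding
// m can do this offline; because the new signature is keyed by the old one,
// caveats can be added but never removed.
func (m Macaroon) AddCaveat(caveat string) Macaroon {
	h := hmac.New(sha256.New, m.Sig)
	h.Write([]byte(caveat))
	return Macaroon{
		Caveats: append(append([]string{}, m.Caveats...), caveat),
		Sig:     h.Sum(nil),
	}
}

func main() {
	root := Macaroon{Sig: []byte("secret tied to Project A's API key")}
	macaroonA := root.AddCaveat("bucket=A")                      // read/write in Bucket A
	macaroonB := macaroonA.AddCaveat("bucket=A,access=readonly") // Customer A narrows it further
	fmt.Printf("A: %x\nB: %x\n", macaroonA.Sig, macaroonB.Sig)
}
```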
@ -94,7 +94,7 @@ This approach was deemed best because it:
- Creates very little load on the database.
- Is backwards compatible, and allows us to revoke existing macaroons.
- Allows us to revoke an entire "macaroon tree" while maintaining the
-distributive properies of macaroons.
+distributive properties of macaroons.
Disadvantages to this approach:

View File

@ -30,7 +30,7 @@ When selecting nodes to store files to, the following criteria must be met:
- the node is not disqualified
- the node is not suspended
- the node has not exited
-- the node has sufficent free disk space
+- the node has sufficient free disk space
- the node has been contacted recently
- the node has participated in a sufficient number of audits
- the node has sufficient uptime counts
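
As a sketch, the criteria above amount to a simple predicate over the overlay's view of a node (field names and thresholds below are hypothetical, not the actual overlay schema):

```go
import "time"

// Node is an illustrative subset of the overlay's node record.
type Node struct {
	Disqualified  bool
	Suspended     bool
	Exited        bool
	FreeDisk      int64
	LastContacted time.Time
	AuditCount    int
	UptimeCount   int
}

// Criteria holds hypothetical selection thresholds.
type Criteria struct {
	MinFreeDisk    int64
	ContactWindow  time.Duration
	MinAuditCount  int
	MinUptimeCount int
}

// meetsCriteria mirrors the checklist above, one condition per criterion.
func meetsCriteria(n Node, c Criteria, now time.Time) bool {
	return !n.Disqualified &&
		!n.Suspended &&
		!n.Exited &&
		n.FreeDisk >= c.MinFreeDisk &&
		now.Sub(n.LastContacted) <= c.ContactWindow &&
		n.AuditCount >= c.MinAuditCount &&
		n.UptimeCount >= c.MinUptimeCount
}
```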
@ -102,7 +102,7 @@ For now, lets try out using an in-memory cache for the performanace gains. If we
3) Using a postgres materialized view for cached node data
Using a postgres materialized view to store all the vetted and unvetted nodes would allow us to implement this cache in the database layer instead of the application layer. This would require the application code to handle the refresh of the materialized view which could occur when one of the events from the #Update section happened.
Pro:
- allows the db to handle the logic instead of adding a cache at the application layer
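
For option 3, a sketch of the refresh the application code would trigger (the view name and call site are hypothetical):

```go
import (
	"context"
	"database/sql"
)

// refreshNodeCache re-materializes the cached node view. It would be called
// from each of the events listed in the #Update section. CONCURRENTLY keeps
// the view readable during the rebuild, at the cost of requiring a unique
// index on the view.
func refreshNodeCache(ctx context.Context, db *sql.DB) error {
	_, err := db.ExecContext(ctx, "REFRESH MATERIALIZED VIEW CONCURRENTLY node_selection_cache")
	return err
}
```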

View File

@ -13,7 +13,7 @@ The idea is to have two main components, like for Windows: the storage node bina
The parts we need:
- storagenode as a service
- a system for updating the storagenode binary, aka the updater (with rollout versioning support)
- a system for updating the updater
- a way to collect the configuration data from the user during the installation
- packaging to ship the above
@ -36,20 +36,20 @@ The installer will be a debian package. We choose to auto-update the binary, eve
- Email
- External address/port
- Advertised storage
- Identity directory
- Storage directory
- Generate `config.yaml` file with the user configuration.
The default values for these directories can be defined using the [XDG Base Directory specification](https://wiki.archlinux.org/index.php/XDG_Base_Directory).
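
A sketch of resolving those defaults per the XDG spec (the env var names come from the spec; the storj subpaths are hypothetical):

```go
import (
	"os"
	"path/filepath"
)

// xdgDir returns the XDG base directory from the environment, falling back
// to the spec's default under $HOME.
func xdgDir(envVar, fallback string) string {
	if dir := os.Getenv(envVar); dir != "" {
		return dir
	}
	home, _ := os.UserHomeDir()
	return filepath.Join(home, fallback)
}

// Hypothetical defaults for the two directories asked for during install:
var (
	defaultIdentityDir = filepath.Join(xdgDir("XDG_CONFIG_HOME", ".config"), "storj", "identity", "storagenode")
	defaultStorageDir  = filepath.Join(xdgDir("XDG_DATA_HOME", ".local/share"), "storj", "storagenode")
)
```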
We choose to reuse the storagenode-updater and the recovery mechanism used on Windows. They will be daemonized using systemd. The storagenode-updater will auto-update. A recovery will be triggered if the updated updater service fails to restart.
We will use debconf to retrieve user data.
The debian package will NOT contain the storagenode and storagenode-updater binaries. They will be downloaded as part of the post-installation script. A separate git repository will be created for holding the debian package.
Once we get a fully working debian package, we can convert it to the RPM format using the fpm tool. There is no debconf-like tool for RPMs, so we will need to implement a post-install script to gather user input.
The debian package will be available by direct download and on an APT repository that users can add to their package manager source list. The repository will be managed using reprepro. Each time the repository is modified, it commits the static content to a dedicated git repository.
## Rationale
@ -62,7 +62,7 @@ Hence, we should use systemd for building our storagenode service.
Packaging in its simplest form would be a tar.gz with an installation binary. This solution would be simple for us, but represents an annoyance for the user, as our application would not be managed by their package manager.
#### Packages
A package is an archive file containing the application and metadata indicating to the package manager how to install it.
Its format depends on the package manager used.
The most common formats are:
- deb for debian-based distributions
@ -79,7 +79,7 @@ The process for building a package is as follows:
- make a source package
- compile it to get binary packages.
-Only the binary package is used by the user for installation. It is not a recommended pratice to directly integrate binaries.
+Only the binary package is used by the user for installation. It is not a recommended practice to directly integrate binaries.
Building the source package is the most difficult part. But once it is done, we can use tools such as [fpm](https://github.com/jordansissel/fpm/wiki) to convert it to other package formats.
@ -100,7 +100,7 @@ There are [3 major agnostic packaging system](https://www.ostechnix.com/linux-pa
##### Snap
[Snaps](https://snapcraft.io/first-snap#go) are containerised software packages. They auto-update daily and work on a variety of Linux distributions. They also revert to the previous version if an update fails. This feature would make it necessary to find out how to implement the rollout versioning.
From the [snap documentation](https://snapcraft.io/docs/go-applications), it seems pretty straightforward to package an application. Snaps are defined in a yaml file. Running an application as a service is done only by specifying "daemon: simple" in the application description.
This would save us the work of building a storage node service.
Snaps can then be published in the snapcraft [app store](https://snapcraft.io/). In the store, we would be able to monitor the number of installed snaps. It is possible to [host our own store](https://ubuntu.com/blog/howto-host-your-own-snap-store), but the snap daemon only handles one repository. Therefore, the use of Canonical's store seems mandatory. Snaps integrate well with [github](https://snapcraft.io/build).
@ -137,7 +137,7 @@ We are thinking of using native packaging for the following reasons:
- some Linux users are reluctant to use snap
- covering deb and rpm packaging would reach the most widely used distributions
- with proper packaging, we could be directly included in the distributions
## Implementation
### Debian package
- create a storj debian git
@ -184,4 +184,4 @@ We still need to support docker images. The Docker image we provide should make
## Wrapup
- As a first step and as part of the PoC, the git repository and the debian package skeleton will be created.
- The PoC will create the user and the directories, download a binary (will not check for the latest) and install a basic storagenode systemd service.
- The PoC will also contain the first Dockerfile for the reprepro repository.

View File

@ -15,7 +15,7 @@ That leaves open the question for how a root key is created. Some requirements o
These requirements allow users to be in full control of their encryption, and don't require users to safely transport high-entropy (hard to remember) secrets to bootstrap new uplinks.
-This design accomodates more requirements that allow for additional features:
+This design accommodates more requirements that allow for additional features:
3. A root key can be created for any encrypted path in a bucket, not just the bucket.
4. A table of root keys for low entropy passwords should not be possible. In other words, an attacker with knowledge of the algorithm should not be able to use a dictionary of common passwords and pre-compute what keys to check in the event of a data breach.
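
A sketch of a derivation that satisfies requirements 2 and 4: a memory-hard KDF salted with the bucket and encrypted path, so a precomputed table of common passwords is useless without the salt. Parameters and the salting scheme are illustrative, not the actual key-derivation design.

```go
import (
	"crypto/sha256"

	"golang.org/x/crypto/argon2"
)

// deriveRootKey stretches a low-entropy passphrase into a 32-byte root key.
// Salting with the bucket and encrypted path ties the key to that path
// (requirement 3) and defeats precomputed dictionaries (requirement 4).
func deriveRootKey(passphrase, bucket, encryptedPath string) []byte {
	salt := sha256.Sum256([]byte(bucket + "/" + encryptedPath))
	return argon2.IDKey([]byte(passphrase), salt[:], 1, 64*1024, 4, 32)
}
```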

View File

@ -9,9 +9,9 @@ The satellite should repair files also using piece hashes to minimize CPU and ba
The white-paper states:
> Data repair is an ongoing, costly operation that will use significant bandwidth, memory, and processing power, often impacting a single operator. As a result, repair resource usage should be aggressively minimized as much as possible.
>
> For repairing a segment to be effective at minimizing bandwidth usage, only as few pieces as needed for reconstruction should be downloaded. Unfortunately, Reed-Solomon is insufficient on its own for correcting errors when only a few redundant pieces are provided. Instead, piece hashes provide a better way to be confident that we're repairing the data correctly.
>
> To solve this problem, hashes of every piece will be stored alongside each piece on each storage node. A validation hash that the set of hashes is correct will be stored in the pointer. During repair, the hashes of every piece can be retrieved and validated for correctness against the pointer, thus allowing each piece to be validated in its entirety. This allows the repair system to correctly assess whether or not repair has been completed successfully without using extra redundancy for the same task.
Hash verification on the satellite requires understanding the current piece signing and verification workflow:
@ -41,7 +41,7 @@ Downloading for repair is significantly different enough from streaming as to wa
Using only the minimum number of pieces means that Reed-Solomon does not act as a check during repair. Hence hashing is used instead. While an uplink could potentially send signed bogus data to a storage node, the storage node would not be penalized by these actions. This requires that Audit implements a similar piece hash check instead of relying solely on Reed-Solomon encoding.
-The size of all piece hashes downloaded should be roughly equal to a default maximum segment size : 64MiB. It seems preferable to keep this in memory over dealing with persistance to disk.
+The size of all piece hashes downloaded should be roughly equal to a default maximum segment size : 64MiB. It seems preferable to keep this in memory over dealing with persistence to disk.
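
A sketch of the two-level check described above: validate each downloaded piece against its stored piece hash, then validate the set of piece hashes against the validation hash from the pointer. Names and the hash-set construction are illustrative.

```go
import (
	"bytes"
	"crypto/sha256"
	"errors"
	"fmt"
)

// verifyRepairDownload checks downloaded pieces without relying on
// Reed-Solomon as the error detector.
func verifyRepairDownload(pieces, pieceHashes [][]byte, validationHash []byte) error {
	if len(pieces) != len(pieceHashes) {
		return errors.New("piece count does not match piece hash count")
	}
	set := sha256.New()
	for i, piece := range pieces {
		h := sha256.Sum256(piece)
		if !bytes.Equal(h[:], pieceHashes[i]) {
			return fmt.Errorf("piece %d does not match its stored hash", i)
		}
		set.Write(pieceHashes[i])
	}
	// The validation hash from the pointer vouches for the set of piece hashes.
	if !bytes.Equal(set.Sum(nil), validationHash) {
		return errors.New("piece hash set does not match the pointer's validation hash")
	}
	return nil
}
```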
## Implementation

View File

@ -2,7 +2,7 @@
The satellite billing system combines the stripe and coinpayments APIs for credit card and cryptocurrency processing. It uses the `satellite/accounting` pkg for project accounting. Billing is set per account, but that is subject to future change, as we want billing to be at the project level. That requires decoupling the stripe dependency to the point where we utilize only credit card processing and maintain everything else, such as customer balances and invoicing, internally. Every satellite should have separate stripe and coinpayments accounts to prevent collision of customer-related data such as uuid and email.
# Stripe customer
Stripe operates on the basis of customers, where a customer is, per the stripe doc: "Customer objects allow you to perform recurring charges, and to track multiple charges, that are associated with the same customer. The API allows you to create, delete, and update your customers. You can retrieve individual customers as well as a list of all your customers." Satellite billing doesn't use the `customer` concept in its public API, so it is treated as an implementation detail. The stripe customer balance is automatically applied to the invoice total before charging a credit card.
The stripe billing system implementation stores a customer reference for every user:
```
@ -51,7 +51,7 @@ type Accounts interface {
```
# Customer setup
-Every satellite user has a corresponding customer entity on stripe which holds credit cards, balance which reflects the ammount of STORJ tokens, and is used for invoicing. Every time a user visits billing page on the satellite UI we try to create a customer for him if one doesn't exists.
+Every satellite user has a corresponding customer entity on stripe which holds credit cards, balance which reflects the amount of STORJ tokens, and is used for invoicing. Every time a user visits billing page on the satellite UI we try to create a customer for him if one doesn't exists.
```go
// Setup creates a payment account for the user.
// If account is already set up it will return nil.
@ -241,7 +241,7 @@ type Chore struct {
```
# STORJ tokens processing
Unlike with credit cards, the billing system uses a deposit model for STORJ tokens: the user has to deposit some amount prior to using satellite services.
Public API of token-related billing:
```go
@ -257,7 +257,7 @@ type StorjTokens interface {
```
# Making a deposit
-STORJ cryptocurrency processing is done via coinpayments API. Every time a user wants to deposit some amount of STORJ token to his account balacne, new coinpayments transaction is created. Transaction amount is set in USD, and conversion rates is beeing locked(saved) after transaction is created and stored in the db.
+STORJ cryptocurrency processing is done via coinpayments API. Every time a user wants to deposit some amount of STORJ token to his account balacne, new coinpayments transaction is created. Transaction amount is set in USD, and conversion rates is being locked(saved) after transaction is created and stored in the db.
```go
// Deposit creates new deposit transaction with the given amount returning
// ETH wallet address where funds should be sent. There is one
@ -403,7 +403,7 @@ func (tokens *storjTokens) ListTransactionInfos(ctx context.Context, userID uuid
```
# Transaction update cycle
-There is a cycle that iterates over all `pending`(`pending` and `paid` statuses of coinpayments transaction respectively) transactions, list it's infos and updates tx status and received amount. If updated is status is set to `cancelled` or `completed`, that transactions won't take part in the next update cycle. When there is a status transation to `completed` along with the update `apply_balance_transaction_intent` is created. Transaction with status `completed` and present `apply_balance_transaction_intent` with state `unapplied` defines as `UnappliedTransaction` which is later processed in update balance cycle. If the received amount is greater that 50$ a promotional coupon for 55$ is created.
+There is a cycle that iterates over all `pending`(`pending` and `paid` statuses of coinpayments transaction respectively) transactions, list it's infos and updates tx status and received amount. If updated is status is set to `cancelled` or `completed`, that transactions won't take part in the next update cycle. When there is a status transaction to `completed` along with the update `apply_balance_transaction_intent` is created. Transaction with status `completed` and present `apply_balance_transaction_intent` with state `unapplied` defines as `UnappliedTransaction` which is later processed in update balance cycle. If the received amount is greater that 50$ a promotional coupon for 55$ is created.
```go
// updateTransactions updates statuses and received amount for given transactions.
func (service *Service) updateTransactions(ctx context.Context, ids TransactionAndUserList) (err error) {
@ -501,7 +501,7 @@ func (service *Service) applyTransactionBalance(ctx context.Context, tx Transact
```
# Invoices
Invoices are statements of amounts owed by a customer, and are generated one-off.
```go
// Invoice holds all public information about invoice.
type Invoice struct {
@ -526,7 +526,7 @@ type Invoices interface {
```
# Invoice creation
Invoices include project usage cost as well as any discounts applied. Coupons and credits are applied as separate invoice line items, which reduce the total due amount. Next, any STORJ token amount, represented as credits on the customer balance, is applied. If the invoice total is greater than zero after bonuses and STORJ tokens, the default credit card at the moment of invoice creation will be charged. If the total amount is less than $1, stripe won't try to charge the credit card, but will instead increase the debt on the customer balance.
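
The charging rule reduces to a small decision over the invoice total; a sketch (amounts in cents, names hypothetical):

```go
type settlement int

const (
	noCharge    settlement = iota // total is zero or negative
	balanceDebt                   // under $1: carried as customer balance debt
	chargeCard                    // charge the default credit card
)

// settleAction applies the rule above after coupons, credits, and STORJ
// token credits have already reduced the total.
func settleAction(totalDueCents int64) settlement {
	switch {
	case totalDueCents <= 0:
		return noCharge
	case totalDueCents < 100:
		return balanceDebt
	default:
		return chargeCard
	}
}
```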
Invoice creation consists of a few steps. First, invoice project records have to be created. Each record consists of a project id, usage, and timestamps of the start and end of the billing period. This way we ensure that usage is the same during all invoice creation steps and there won't be two or more invoices created for the same period (actually, only invoice line items for a certain billing period and project are ensured not to be created more than once). Coupon usages are also created during this step; they are later used to create coupon invoice line items.
@ -548,7 +548,7 @@ prepare-invoice-records Prepares invoice project records that will be used durin
```bash
inspector payments prepare-invoice-records [mm/yyyy]
```
Create project records for all projects for the specified billing period. A billing period is defined as `[0th nanosecond of the first day of the month; 0th nanosecond of the first day of the following month)`.
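
That half-open interval is straightforward to compute; a sketch:

```go
import "time"

// billingPeriod returns [first day of the month, first day of the next month)
// in UTC, matching the definition above.
func billingPeriod(year int, month time.Month) (start, end time.Time) {
	start = time.Date(year, month, 1, 0, 0, 0, 0, time.UTC)
	end = start.AddDate(0, 1, 0) // exclusive upper bound
	return start, end
}
```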
A project record contains project usage for a particular billing period. Therefore, it is impossible to create more than one project record for the same project and billing period.
```go
// ProjectRecord holds project usage particular for billing period.
@ -608,7 +608,7 @@ func (service *Service) PrepareInvoiceProjectRecords(ctx context.Context, period
return nil
}
```
If a project record already exists, the project is skipped.
```go
// createProjectRecords creates invoice project record if none exists.
func (service *Service) createProjectRecords(ctx context.Context, projects []console.Project, start, end time.Time) (err error) {
@ -696,31 +696,31 @@ Iterate over all project records, calculating price and creating invoice line it
// applyProjectRecords applies invoice intents as invoice line items to stripe customer.
func (service *Service) applyProjectRecords(ctx context.Context, records []ProjectRecord) (err error) {
defer mon.Task()(&ctx)(&err)
for _, record := range records {
if err = ctx.Err(); err != nil {
return err
}
proj, err := service.projectsDB.Get(ctx, record.ProjectID)
if err != nil {
return err
}
cusID, err := service.db.Customers().GetCustomerID(ctx, proj.OwnerID)
if err != nil {
if err == ErrNoCustomer {
continue
}
return err
}
if err = service.createInvoiceItems(ctx, cusID, proj.Name, record); err != nil {
return err
}
}
return nil
}
```
@ -763,46 +763,46 @@ Iterate over all customers and create invoice for each.
// CreateInvoices lists through all customers and creates invoices.
func (service *Service) CreateInvoices(ctx context.Context) (err error) {
defer mon.Task()(&ctx)(&err)
const limit = 25
before := time.Now()
cusPage, err := service.db.Customers().List(ctx, 0, limit, before)
if err != nil {
return Error.Wrap(err)
}
for _, cus := range cusPage.Customers {
if err = ctx.Err(); err != nil {
return Error.Wrap(err)
}
if err = service.createInvoice(ctx, cus.ID); err != nil {
return Error.Wrap(err)
}
}
for cusPage.Next {
if err = ctx.Err(); err != nil {
return Error.Wrap(err)
}
cusPage, err = service.db.Customers().List(ctx, cusPage.NextOffset, limit, before)
if err != nil {
return Error.Wrap(err)
}
for _, cus := range cusPage.Customers {
if err = ctx.Err(); err != nil {
return Error.Wrap(err)
}
if err = service.createInvoice(ctx, cus.ID); err != nil {
return Error.Wrap(err)
}
}
}
return nil
}
```
@ -833,4 +833,4 @@ func (service *Service) createInvoice(ctx context.Context, cusID string) (err er
return nil
}
```

View File

@ -274,7 +274,7 @@ and are left with the following:
The list of trusted Satellite URLs should be recalculated daily (with some jitter).
-### Backwards Compatability
+### Backwards Compatibility
The old piecestore configuration (i.e. `piecestore.OldConfig`) currently contains a
comma-separated list of trusted Satellite URLs (`WhitelistedSatellites`). It
@ -296,7 +296,7 @@ a fixed set of trusted Satellite URLs.
* Implement a `trust.ListConfig` configuration struct which:
* Contains the list of entries (with a release default of a single list containing `https://www.tardigrade.io/trusted-satellites`)
* Contains a refresh interval
-* Maintains backwards compatability with `WhitelistedSatellites` in `piecestore.OldConfig`
+* Maintains backwards compatibility with `WhitelistedSatellites` in `piecestore.OldConfig`
* Implement `storj.io/storj/storagenode/trust.List` that:
* Consumes `trust.ListConfig` for configuration
* Performs the initial fetching and building of trusted Satellite URLs
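
A hypothetical shape for `trust.ListConfig`, following the bullets above (field names, struct tags, and defaults are illustrative):

```go
import "time"

// ListConfig configures where the trusted satellite list comes from and how
// often it is rebuilt.
type ListConfig struct {
	Sources         []string      `help:"trust list sources" default:"https://www.tardigrade.io/trusted-satellites"`
	RefreshInterval time.Duration `help:"how often to refresh the trusted satellite URLs" default:"24h0m0s"`
	// WhitelistedSatellites from piecestore.OldConfig would be merged in for
	// backwards compatibility.
}
```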

View File

@ -87,7 +87,7 @@ Create `satellites_exit_progress` tables:
```
model satellite_exit_progress (
fk satellite_id
field initiated_at timestamp ( updateable )
field finished_at timestamp ( updateable )

View File

@ -8,7 +8,7 @@ This document describes how storage node transfers its pieces during Graceful Ex
## Background
-During Graceful Exit a storage node needs to transfer pieces to other nodes. During transfering the storage node or satellite may crash, hence it needs to be able to continue after a restart.
+During Graceful Exit a storage node needs to transfer pieces to other nodes. During transferring the storage node or satellite may crash, hence it needs to be able to continue after a restart.
The satellite gathers the transferred pieces list asynchronously, as described in the [Gathering Pieces Document](pieces.md). This may consume a significant amount of time.
@ -28,11 +28,11 @@ The `worker` should continue to poll the satellite at a configurable interval un
The satellite should return pieces to transfer from the transfer queue if piece durability <= optimal. If durability > optimal, we remove the exiting node from the segment / pointer.
-The storage node should concurrently transfer pieces returned by the satellite. The storage node should send a `TransferSucceeded` message as pieces are successfuly transfered. The Storage node should send a `TransferFailed`, with reason, on failure.
+The storage node should concurrently transfer pieces returned by the satellite. The storage node should send a `TransferSucceeded` message as pieces are successfully transferred. The Storage node should send a `TransferFailed`, with reason, on failure.
The satellite should set `finished_at` on success, and respond with a `DeletePiece` message. Otherwise it should increment `failed_count` and set `last_failed_at` and `last_failed_code` for reprocessing.
The satellite should respond with an `ExitCompleted` message when all pieces have finished processing.
If the storage node has failed too many transfers overall, failed the same piece over a certain threshold, or has sent incorrect data, the satellite will send an `ExitFailed` message. This indicates that the process has ended ungracefully.
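
A sketch of the satellite-side bookkeeping for a single transfer result, using the fields named above (types and method names are hypothetical):

```go
import (
	"context"
	"time"
)

// TransferItem mirrors a row in the transfer queue.
type TransferItem struct {
	FinishedAt     *time.Time
	FailedCount    int
	LastFailedAt   *time.Time
	LastFailedCode int
}

// TransferQueue is the subset of the queue the handler needs.
type TransferQueue interface {
	Update(ctx context.Context, item *TransferItem) error
}

// recordResult updates the queue row: success finishes the item (and the
// satellite responds with DeletePiece); failure is kept for reprocessing.
func recordResult(ctx context.Context, queue TransferQueue, item *TransferItem, failedCode *int) error {
	now := time.Now()
	if failedCode == nil {
		item.FinishedAt = &now
	} else {
		item.FailedCount++
		item.LastFailedAt = &now
		item.LastFailedCode = *failedCode
	}
	return queue.Update(ctx, item)
}
```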
@ -75,7 +75,7 @@ We could have a separate initiate graceful exit RPC, however this would complica
## Implementation
1. Add protobuf definitions.
2. Update node selection to ignore exiting nodes for repairs and uploads.
3. Update repairer to repair segments for nodes that failed an exit.
4. Implement verifying a transfer on the satellite.
5. Implement transferring a single piece on storage node.
@ -143,7 +143,7 @@ when storage node prematurely exits
go func() {
for {
ensure we have only up to N inprogress at the same time
list transferred piece that is not in progress
if no pieces {
morepieces = false

View File

@ -67,11 +67,11 @@ docker run -p 8080:8080 storjlabs/satellite-ui:latest
- [unit](./unit "unit") folder: contains project unit tests.
### Configuration files
- **.env**: file for environment level variables.
-- **.gitignore**: folders, files and extentions which are ignored for git.
+- **.gitignore**: folders, files and extensions which are ignored for git.
- **babel.config.js**: [babel](https://babeljs.io/) configuration for javascript transpilation.
- **index.html**: DOM entry point.
- **jestSetup.ts**: [jest](https://jestjs.io/) configuration for unit testing.
- **package.json**: holds various metadata relevant to the project, such as version, dependencies, scripts and configurations.
- **tsconfig.json**: holds [TypeScript](https://www.typescriptlang.org/) configurations.
- **tslint.json**: holds [TypeScript](https://www.typescriptlang.org/) linter configurations.
- **vue.config.js**: holds [Vue](https://vuejs.org/) configurations.