
Migrations and You

This document is best read in the voice of one of those 50s instructional videos. You know the ones. A suitably cheerful period soundtrack playing in the background is also recommended.


Mechanics

To understand the right way to add a new migration with associated tests, you need a basic understanding of how the tests run and what they check. The migration tests work by having

  1. A single database, kept for the duration of the test, on which the individual migrations are run one at a time, in order
  2. An expected snapshot of the database after every single step
  3. An SQL file for each snapshot that generates it

For a given step, what happens is

  1. The -- OLD DATA -- section is just run on the database being migrated
  2. The main section and -- NEW DATA -- sections are run on the snapshot
  3. The migration is run against the main database
  4. The -- NEW DATA -- section is run on the database

Then, the snapshot and current database state are compared for schema and data differences.
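Putting those steps together, a snapshot file is laid out roughly as follows. The widgets table and its rows here are invented purely to illustrate where each section goes; real snapshot files use the actual satellite schema:

```sql
-- Main section: the full schema (copied from satellitedb.dbx.pgx.sql) plus
-- INSERT statements describing the expected state after this migration.
CREATE TABLE widgets (
	id bigint NOT NULL,
	name text NOT NULL,
	PRIMARY KEY (id)
);

INSERT INTO widgets (id, name) VALUES (1, 'expected value after migration');

-- NEW DATA --
-- Run on the snapshot (step 2) and on the migrated database (step 4).
INSERT INTO widgets (id, name) VALUES (2, 'row that exists from this version on');

-- OLD DATA --
-- Run only on the database being migrated, before the migration (step 1).
INSERT INTO widgets (id, name) VALUES (3, 'row the migration is expected to transform');
```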

How to Create a Migration

With that basic overview out of the way, the steps to create a new migration are

  1. Create an empty postgres.vN.sql file in this folder.
  2. Copy the contents of the satellitedb.dbx.pgx.sql file from the satellitedb/dbx folder into it. This ensures that the snapshot does not drift from the dbx sql file. We bootstrap tests from the dbx output, so the correctness of our tests depends on them matching.
  3. Copy the INSERT statements from the end of the previous migration. These lines are after the CREATE INDEX lines but before any -- NEW DATA -- or -- OLD DATA -- section.
  4. Copy the -- NEW DATA -- statements from the end of the previous migration into the main section. They are no longer -- NEW DATA --. You should only need to copy INSERT statements.

Depending on what your migration is doing, you should then do one of these:

  1. If your migration is creating new tables, add an INSERT for it in the -- NEW DATA -- section. This ensures that future migrations don't break data in that table. Our test coverage depends on these INSERTs existing and being complete. Help avoid problems in production, now. Smokey the Bear says only you can prevent production fires.
  2. If you are updating data in old tables or changing the table schema:
     a. If there is previous data (which there should be, by point 1), change the existing row in the main section to be the new updated value.
     b. If there is no previous data, add an INSERT into the -- OLD DATA -- section as well as a corresponding updated row as in point a.
     Unfortunately, someone got away with adding a table without adding a row, but no longer. The tests must forever grow.
  3. For anything more complicated, look at the Mechanics section again to see if there's a way to solve your problem. If it is unclear (it probably is), just ask for help. Maybe the brains of multiple people can figure out something to do.
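As a sketch of point 1: a hypothetical migration that creates a widgets table would add a row for it in the -- NEW DATA -- section of its own snapshot file (the table and values are invented for illustration):

```sql
-- NEW DATA --
-- Hypothetical table created by this migration; this INSERT guards the
-- table's data against being broken by future migrations.
INSERT INTO widgets (id, name, created_at)
VALUES (1, 'test widget', '2021-07-13 00:00:00+00');
```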

Best Practices and Common Mistakes

  1. Don't do in two migrations what can be done in one. The more migrations we have, the longer test times are, and the more often we may have to collapse them. Unless you have a good reason to do two migrations, just do one.

  2. There is almost no reason to have an UPDATE statement in a sql snapshot file. Each snapshot is run independently, so you should just adjust the INSERT statement to reflect the rows you expect to exist, and leave the UPDATE to the migration. This helps test that the migration runs and does what we expect. One exception is when the migration does not have deterministic output. For example,

    {
    	DB:          db.DB,
    	Description: "Backfill vetted_at with time.now for nodes that have been vetted already (aka nodes that have been audited 100 times)",
    	Version:     99,
    	Action: migrate.SQL{
    		`UPDATE nodes SET vetted_at = date_trunc('day', now() at time zone 'utc') at time zone 'utc' WHERE total_audit_count >= 100;`,
    	},
    },
    

    The above migration sets a value to the current day. This can be solved in one of two ways:

    1. Have each of the INSERT statements for the matching nodes in the main section use date_trunc('day', now() at time zone 'utc') at time zone 'utc' for the vetted_at column
    2. Have an UPDATE "nodes" SET vetted_at = 'fixed date' WHERE id = 'expected id' in the -- NEW DATA -- section (so that it runs both in the snapshot and after the migration has run), and update the main section's INSERT for that node. Future snapshot files do not need to retain the UPDATE in that case; their INSERT statements can just use the fixed date.

    See migration 99 for the specifics, where it chose option 2.
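For option 2, the snapshot for that one migration step would carry something like the following in its -- NEW DATA -- section; the id value here is a placeholder, and in practice it must match the id used by the main section's INSERT for that node:

```sql
-- NEW DATA --
-- Pin the non-deterministic vetted_at to a fixed date on both sides of the
-- comparison; later snapshots can bake this date directly into their INSERTs.
UPDATE nodes SET vetted_at = '2020-03-18 12:00:00+00' WHERE id = '\x0001';
```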

  3. Cockroach performs schema changes asynchronously with respect to a transaction. This means that if you need to add a column and then fill it with some data, these must be done in separate migration steps, using SeparateTx:

    {
    	DB:          db.DB,
    	Description: "Add project bandwidth limit",
    	Version:     999,
    	Action: migrate.SQL{
    		`ALTER TABLE projects ADD COLUMN bandwidth_limit bigint NOT NULL DEFAULT 0;`,
    		`UPDATE projects SET bandwidth_limit = usage_limit;`,
    	},
    },
    

    This will fail with the error column "bandwidth_limit" is missing. To make it work, it needs to be written as:

    {
    	DB:          db.DB,
    	Description: "add separate bandwidth column",
    	Version:     107,
    	Action: migrate.SQL{
    		`ALTER TABLE projects ADD COLUMN bandwidth_limit bigint NOT NULL DEFAULT 0;`,
    	},
    },
    {
    	DB:          db.DB,
    	Description: "backfill bandwidth column with previous limits",
    	Version:     108,
    	SeparateTx:  true,
    	Action: migrate.SQL{
    		`UPDATE projects SET bandwidth_limit = usage_limit;`,
    	},
    },
    
  4. Removing a DEFAULT value for a column can be tricky. Old rows inserted in the snapshot files may not specify every column, relying instead on those defaults. In the migration that removes the DEFAULT value, you must also change any INSERT statements to include the previously unspecified columns, setting them to the dropped DEFAULT. This is because the main database inserted those rows while the DEFAULT was still in place, but the new snapshot inserts them after the DEFAULT is gone.
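For example, if a hypothetical migration drops a DEFAULT from a widgets.status column, the snapshot's INSERT statements must start spelling that column out (names and values invented for illustration):

```sql
-- Before: the column had DEFAULT 'active', so older snapshots could write
--   INSERT INTO widgets (id, name) VALUES (1, 'w');
-- After the DEFAULT is dropped, the snapshot for this and later versions
-- must supply the old default explicitly, because the main database filled
-- it in at insert time while the DEFAULT still existed:
INSERT INTO widgets (id, name, status) VALUES (1, 'w', 'active');
```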