Strong Migrations

Catch unsafe migrations in development

  ✓  Detects potentially dangerous operations
  ✓  Prevents them from running by default
  ✓  Provides instructions on safer ways to do what you want

Supports PostgreSQL, MySQL, and MariaDB

🍊 Battle-tested at Instacart

Installation

Add this line to your application’s Gemfile:

gem "strong_migrations"

And run:

bundle install
rails generate strong_migrations:install

Strong Migrations sets a long statement timeout for migrations so you can set a short statement timeout for your application.

How It Works

When you run a migration that’s potentially dangerous, you’ll see an error message like:

=== Dangerous operation detected #strong_migrations ===

Active Record caches attributes, which causes problems
when removing columns. Be sure to ignore the column:

class User < ApplicationRecord
  self.ignored_columns += ["name"]
end

Deploy the code, then wrap this step in a safety_assured { ... } block.

class RemoveColumn < ActiveRecord::Migration[8.0]
  def change
    safety_assured { remove_column :users, :name }
  end
end

An operation is classified as dangerous if it either:

  • Blocks reads or writes for more than a few seconds (after a lock is acquired)
  • Has a good chance of causing application errors

Checks

Potentially dangerous operations:

  • removing a column
  • changing the type of a column
  • renaming a column
  • renaming a table
  • creating a table with the force option
  • adding an auto-incrementing column
  • adding a stored generated column
  • adding a check constraint
  • executing SQL directly
  • backfilling data

Postgres-specific checks:

  • adding an index non-concurrently
  • adding a reference
  • adding a foreign key
  • adding a unique constraint
  • adding an exclusion constraint
  • adding a json column
  • setting NOT NULL on an existing column
  • adding a column with a volatile default value

Config-specific checks:

  • changing the default value of a column

Best practices:

  • keeping non-unique indexes to three columns or fewer

You can also add custom checks or disable specific checks.

Removing a column

Bad

Active Record caches database columns at runtime, so if you drop a column, it can cause exceptions until your app reboots.

class RemoveSomeColumnFromUsers < ActiveRecord::Migration[8.0]
  def change
    remove_column :users, :some_column
  end
end

Good

  1. Tell Active Record to ignore the column from its cache

class User < ApplicationRecord
  self.ignored_columns += ["some_column"]
end

  2. Deploy the code
  3. Write a migration to remove the column (wrap it in a safety_assured block)

class RemoveSomeColumnFromUsers < ActiveRecord::Migration[8.0]
  def change
    safety_assured { remove_column :users, :some_column }
  end
end

  4. Deploy and run the migration
  5. Remove the line added in step 1

Changing the type of a column

Bad

Changing the type of a column causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.

class ChangeSomeColumnType < ActiveRecord::Migration[8.0]
  def change
    change_column :users, :some_column, :new_type
  end
end

Some changes don’t require a table rewrite and are safe in Postgres:

Type         Safe Changes
cidr         Changing to inet
citext       Changing to text if not indexed, changing to string with no :limit if not indexed
datetime     Increasing or removing :precision, changing to timestamptz when session time zone is UTC in Postgres 12+
decimal      Increasing :precision at same :scale, removing :precision and :scale
interval     Increasing or removing :precision
numeric      Increasing :precision at same :scale, removing :precision and :scale
string       Increasing or removing :limit, changing to text, changing to citext if not indexed
text         Changing to string with no :limit, changing to citext if not indexed
time         Increasing or removing :precision
timestamptz  Increasing or removing :limit, changing to datetime when session time zone is UTC in Postgres 12+

And some in MySQL and MariaDB:

Type    Safe Changes
string  Increasing :limit from under 63 up to 63, increasing :limit from over 63 to the max (the threshold can be different if using an encoding other than utf8mb4 - for instance, it’s 85 for utf8mb3 and 255 for latin1)
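
If a change is listed as safe, Strong Migrations allows it without safety_assured. For example, a minimal sketch of a safe change in Postgres, assuming some_column is currently a string:

class ChangeSomeColumnToText < ActiveRecord::Migration[8.0]
  def change
    # string -> text requires no table rewrite in Postgres, so this passes the check
    change_column :users, :some_column, :text
  end
end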

Good

A safer approach, with steps 1 and 3 sketched below, is to:

  1. Create a new column
  2. Write to both columns
  3. Backfill data from the old column to the new column
  4. Move reads from the old column to the new column
  5. Stop writing to the old column
  6. Drop the old column
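
A minimal sketch of steps 1 and 3, using illustrative names (some_column_new stands in for the new column and :new_type for its type):

class AddSomeColumnNewToUsers < ActiveRecord::Migration[8.0]
  def change
    # step 1: add the new column (no default, so no table rewrite)
    add_column :users, :some_column_new, :new_type
  end
end

class BackfillSomeColumnNew < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def up
    # step 3: backfill in batches, outside a transaction (see Backfilling data below)
    User.unscoped.in_batches do |relation|
      relation.update_all("some_column_new = some_column")
      sleep(0.01) # throttle
    end
  end
end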

Renaming a column

Bad

Renaming a column that’s in use will cause errors in your application.

class RenameSomeColumn < ActiveRecord::Migration[8.0]
  def change
    rename_column :users, :some_column, :new_name
  end
end

Good

A safer approach, with the dual-write step sketched below, is to:

  1. Create a new column
  2. Write to both columns
  3. Backfill data from the old column to the new column
  4. Move reads from the old column to the new column
  5. Stop writing to the old column
  6. Drop the old column
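
For the dual-write step, one approach is a model callback while the transition is in progress (a sketch; new_name mirrors the example above):

class User < ApplicationRecord
  # keep the old and new columns in sync until reads move over
  before_save :sync_new_name

  private

  def sync_new_name
    self.new_name = some_column if will_save_change_to_some_column?
  end
end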

Renaming a table

Bad

Renaming a table that’s in use will cause errors in your application.

class RenameUsersToCustomers < ActiveRecord::Migration[8.0]
  def change
    rename_table :users, :customers
  end
end

Good

A safer approach is to:

  1. Create a new table
  2. Write to both tables
  3. Backfill data from the old table to the new table
  4. Move reads from the old table to the new table
  5. Stop writing to the old table
  6. Drop the old table

Creating a table with the force option

Bad

The force option can drop an existing table.

class CreateUsers < ActiveRecord::Migration[8.0]
  def change
    create_table :users, force: true do |t|
      # ...
    end
  end
end

Good

Create tables without the force option.

class CreateUsers < ActiveRecord::Migration[8.0]
  def change
    create_table :users do |t|
      # ...
    end
  end
end

If you intend to drop an existing table, run drop_table first.
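
For example, a sketch of an explicit drop in its own migration (dropping a table is irreversible, so it defines up and down):

class DropUsers < ActiveRecord::Migration[8.0]
  def up
    drop_table :users
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end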

Adding an auto-incrementing column

Bad

Adding an auto-incrementing column (serial/bigserial in Postgres and AUTO_INCREMENT in MySQL and MariaDB) causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.

class AddIdToCitiesUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :cities_users, :id, :primary_key
  end
end

With MySQL and MariaDB, this can also generate different values on replicas if using statement-based replication.

Good

Create a new table and migrate the data with the same steps as renaming a table.

Adding a stored generated column

Bad

Adding a stored generated column causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.

class AddSomeColumnToUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :some_column, :virtual, type: :string, as: "...", stored: true
  end
end

Good

Add a non-generated column and use callbacks or triggers instead (or a virtual generated column with MySQL and MariaDB).
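
For example, a sketch of the callback approach (full_name and its source columns are hypothetical):

class User < ApplicationRecord
  # keep a plain column in sync instead of using a stored generated column
  before_save :set_full_name

  private

  def set_full_name
    self.full_name = [first_name, last_name].compact.join(" ")
  end
end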

Adding a check constraint

🐢 Safe by default available

Bad

Adding a check constraint blocks reads and writes in Postgres and blocks writes in MySQL and MariaDB while every row is checked.

class AddCheckConstraint < ActiveRecord::Migration[8.0]
  def change
    add_check_constraint :users, "price > 0", name: "price_check"
  end
end

Good - Postgres

Add the check constraint without validating existing rows:

class AddCheckConstraint < ActiveRecord::Migration[8.0]
  def change
    add_check_constraint :users, "price > 0", name: "price_check", validate: false
  end
end

Then validate them in a separate migration.

class ValidateCheckConstraint < ActiveRecord::Migration[8.0]
  def change
    validate_check_constraint :users, name: "price_check"
  end
end

Good - MySQL and MariaDB

Let us know if you have a safe way to do this (check constraints can be added with NOT ENFORCED, but enforcing blocks writes).

Executing SQL directly

Strong Migrations can’t ensure safety for raw SQL statements. Make really sure that what you’re doing is safe, then use:

class ExecuteSQL < ActiveRecord::Migration[8.0]
  def change
    safety_assured { execute "..." }
  end
end

Backfilling data

Note: Strong Migrations does not detect dangerous backfills.

Bad

Active Record creates a transaction around each migration, and backfilling in the same transaction that alters a table keeps the table locked for the duration of the backfill.

class AddSomeColumnToUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :some_column, :text
    User.update_all some_column: "default_value"
  end
end

Also, running a single query to update data can cause issues for large tables.

Good

There are three keys to backfilling safely: batching, throttling, and running it outside a transaction. Use the Rails console or a separate migration with disable_ddl_transaction!.

class BackfillSomeColumn < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def up
    User.unscoped.in_batches do |relation|
      relation.update_all some_column: "default_value"
      sleep(0.01) # throttle
    end
  end
end

Note: If backfilling with a method other than update_all, use User.reset_column_information to ensure the model has up-to-date column information.
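
For example, a sketch of a per-row backfill (compute_value is a hypothetical helper):

User.reset_column_information # pick up the newly added column
User.unscoped.in_batches do |relation|
  relation.each do |user|
    user.update!(some_column: compute_value(user))
  end
  sleep(0.01) # throttle
end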

Adding an index non-concurrently

🐢 Safe by default available

Bad

In Postgres, adding an index non-concurrently blocks writes.

class AddSomeIndexToUsers < ActiveRecord::Migration[8.0]
  def change
    add_index :users, :some_column
  end
end

Good

Add indexes concurrently.

class AddSomeIndexToUsers < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def change
    add_index :users, :some_column, algorithm: :concurrently
  end
end

If you forget disable_ddl_transaction!, the migration will fail. Also, note that indexes on new tables (those created in the same migration) don’t require this.

With gindex, you can generate an index migration instantly with:

rails g index table column

Adding a reference

🐢 Safe by default available

Bad

Rails adds an index non-concurrently to references by default, which blocks writes in Postgres.

class AddReferenceToUsers < ActiveRecord::Migration[8.0]
  def change
    add_reference :users, :city
  end
end

Good

Make sure the index is added concurrently.

class AddReferenceToUsers < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def change
    add_reference :users, :city, index: {algorithm: :concurrently}
  end
end

Adding a foreign key

🐢 Safe by default available

Bad

In Postgres, adding a foreign key blocks writes on both tables.

class AddForeignKeyOnUsers < ActiveRecord::Migration[8.0]
  def change
    add_foreign_key :users, :orders
  end
end

or

class AddReferenceToUsers < ActiveRecord::Migration[8.0]
  def change
    add_reference :users, :order, foreign_key: true
  end
end

Good

Add the foreign key without validating existing rows:

class AddForeignKeyOnUsers < ActiveRecord::Migration[8.0]
  def change
    add_foreign_key :users, :orders, validate: false
  end
end

Then validate them in a separate migration.

class ValidateForeignKeyOnUsers < ActiveRecord::Migration[8.0]
  def change
    validate_foreign_key :users, :orders
  end
end

Adding a unique constraint

Bad

In Postgres, adding a unique constraint creates a unique index, which blocks reads and writes.

class AddUniqueConstraint < ActiveRecord::Migration[8.0]
  def change
    add_unique_constraint :users, :some_column
  end
end

Good

Create a unique index concurrently, then use it for the constraint.

class AddUniqueConstraint < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def up
    add_index :users, :some_column, unique: true, algorithm: :concurrently
    add_unique_constraint :users, using_index: "index_users_on_some_column"
  end

  def down
    remove_unique_constraint :users, :some_column
  end
end

Adding an exclusion constraint

Bad

In Postgres, adding an exclusion constraint blocks reads and writes while every row is checked.

class AddExclusionConstraint < ActiveRecord::Migration[8.0]
  def change
    add_exclusion_constraint :users, "number WITH =", using: :gist
  end
end

Good

Let us know if you have a safe way to do this (exclusion constraints cannot be marked NOT VALID).

Adding a json column

Bad

In Postgres, there’s no equality operator for the json column type, which can cause errors for existing SELECT DISTINCT queries in your application.

class AddPropertiesToUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :properties, :json
  end
end

Good

Use jsonb instead.

class AddPropertiesToUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :properties, :jsonb
  end
end

Setting NOT NULL on an existing column

🐢 Safe by default available

Bad

In Postgres, setting NOT NULL on an existing column blocks reads and writes while every row is checked.

class SetSomeColumnNotNull < ActiveRecord::Migration[8.0]
  def change
    change_column_null :users, :some_column, false
  end
end

Good

Instead, add a check constraint.

class SetSomeColumnNotNull < ActiveRecord::Migration[8.0]
  def change
    add_check_constraint :users, "some_column IS NOT NULL", name: "users_some_column_null", validate: false
  end
end

Then validate it in a separate migration. Once the check constraint is validated, you can safely set NOT NULL on the column and drop the check constraint.

class ValidateSomeColumnNotNull < ActiveRecord::Migration[8.0]
  def up
    validate_check_constraint :users, name: "users_some_column_null"
    change_column_null :users, :some_column, false
    remove_check_constraint :users, name: "users_some_column_null"
  end

  def down
    add_check_constraint :users, "some_column IS NOT NULL", name: "users_some_column_null", validate: false
    change_column_null :users, :some_column, true
  end
end

Adding a column with a volatile default value

Bad

Adding a column with a volatile default value to an existing table causes the entire table to be rewritten. During this time, reads and writes are blocked.

class AddSomeColumnToUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :some_column, :uuid, default: "gen_random_uuid()"
  end
end

Good

Instead, add the column without a default value, then change the default.

class AddSomeColumnToUsers < ActiveRecord::Migration[8.0]
  def up
    add_column :users, :some_column, :uuid
    change_column_default :users, :some_column, from: nil, to: "gen_random_uuid()"
  end

  def down
    remove_column :users, :some_column
  end
end

Then backfill the data.
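
A sketch of that backfill, reusing the batched approach from the Backfilling data section:

class BackfillSomeColumn < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def up
    User.unscoped.where(some_column: nil).in_batches do |relation|
      relation.update_all("some_column = gen_random_uuid()")
      sleep(0.01) # throttle
    end
  end
end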

Changing the default value of a column

Bad

Rails < 7 enables partial writes by default, which can cause incorrect values to be inserted when changing the default value of a column.

class ChangeSomeColumnDefault < ActiveRecord::Migration[6.1]
  def change
    change_column_default :users, :some_column, from: "old", to: "new"
  end
end

User.create!(some_column: "old") # can insert "new"

Good

Disable partial writes in config/application.rb. For Rails < 7, use:

config.active_record.partial_writes = false

For Rails 7+, use:

config.active_record.partial_inserts = false

Keeping non-unique indexes to three columns or fewer

Bad

Adding a non-unique index with more than three columns rarely improves performance.

class AddSomeIndexToUsers < ActiveRecord::Migration[8.0]
  def change
    add_index :users, [:a, :b, :c, :d]
  end
end

Good

Instead, start an index with columns that narrow down the results the most.

class AddSomeIndexToUsers < ActiveRecord::Migration[8.0]
  def change
    add_index :users, [:d, :b]
  end
end

For Postgres, be sure to add them concurrently.

Assuring Safety

To mark a step in the migration as safe, despite using a method that might otherwise be dangerous, wrap it in a safety_assured block.

class MySafeMigration < ActiveRecord::Migration[8.0]
  def change
    safety_assured { remove_column :users, :some_column }
  end
end

Certain methods like execute and change_table can’t be inspected, so they are prevented from running by default. Make really sure what you’re doing is safe, then use this pattern.

Safe by Default

Strong Migrations can make certain operations safe by default, so you can write the code under the “Bad” sections above and the migration will be performed as if you had written the “Good” version. This applies to:

  • adding and removing an index
  • adding a foreign key
  • adding a check constraint
  • setting NOT NULL on an existing column

Add to config/initializers/strong_migrations.rb:

StrongMigrations.safe_by_default = true

Custom Checks

Add your own custom checks with:

StrongMigrations.add_check do |method, args|
  if method == :add_index && args[0].to_s == "users"
    stop! "No more indexes on the users table"
  end
end

Use the stop! method to stop migrations.

Note: Since remove_column always requires a safety_assured block, it’s not possible to add a custom check for remove_column operations.

Opt-in Checks

Removing an index non-concurrently

Postgres supports removing indexes concurrently, but removing them non-concurrently shouldn’t be an issue for most applications. You can enable this check with:

StrongMigrations.enable_check(:remove_index)
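
With the check enabled, remove indexes concurrently (a minimal sketch):

class RemoveSomeIndexFromUsers < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def change
    remove_index :users, :some_column, algorithm: :concurrently
  end
end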

Disable Checks

Disable specific checks with:

StrongMigrations.disable_check(:add_index)

Check the source code for the list of keys.

Skip Databases

Skip checks and other functionality for specific databases with:

StrongMigrations.skip_database(:catalog)

Note: This does not affect alphabetize_schema.

Down Migrations / Rollbacks

By default, checks are disabled when migrating down. Enable them with:

StrongMigrations.check_down = true

Custom Messages

To customize specific messages, create an initializer with:

StrongMigrations.error_messages[:add_column_default] = "Your custom instructions"

Check the source code for the list of keys.

Migration Timeouts

It’s extremely important to set a short lock timeout for migrations. This way, if a migration can’t acquire a lock in a timely manner, other statements won’t be stuck behind it. We also recommend setting a long statement timeout so migrations can run for a while.

Create config/initializers/strong_migrations.rb with:

StrongMigrations.lock_timeout = 10.seconds
StrongMigrations.statement_timeout = 1.hour

Or set the timeouts directly on the database user that runs migrations. For Postgres, use:

ALTER ROLE myuser SET lock_timeout = '10s';
ALTER ROLE myuser SET statement_timeout = '1h';

Note: If you use PgBouncer in transaction mode, you must set timeouts on the database user.

App Timeouts

We recommend adding timeouts to config/database.yml to prevent connections from hanging and individual queries from taking up too many resources in controllers, jobs, the Rails console, and other places.

For Postgres:

production:
  connect_timeout: 5
  variables:
    statement_timeout: 15s
    lock_timeout: 10s

Note: If you use PgBouncer in transaction mode, you must set the statement and lock timeouts on the database user as shown above.

For MySQL:

production:
  connect_timeout: 5
  read_timeout: 5
  write_timeout: 5
  variables:
    max_execution_time: 15000 # ms
    lock_wait_timeout: 10 # sec

For MariaDB:

production:
  connect_timeout: 5
  read_timeout: 5
  write_timeout: 5
  variables:
    max_statement_time: 15 # sec
    lock_wait_timeout: 10 # sec

For HTTP connections, Redis, and other services, check out this guide.

Invalid Indexes

In Postgres, adding an index non-concurrently can leave behind an invalid index if the lock timeout is reached. Running the migration again can result in an error.

To automatically remove the invalid index when the migration runs again, use:

StrongMigrations.remove_invalid_indexes = true

Note: This feature is experimental.

Lock Timeout Retries

Note: This feature is experimental.

You can automatically retry statements in migrations when the lock timeout is reached. Here’s how it works:

  • If a lock timeout happens outside a transaction, the statement is retried
  • If it happens inside the DDL transaction, the entire migration is retried (only applicable to Postgres)

Add to config/initializers/strong_migrations.rb:

StrongMigrations.lock_timeout_retries = 3

Set the delay between retries with:

StrongMigrations.lock_timeout_retry_delay = 10.seconds

Existing Migrations

To mark migrations created before installing this gem as safe, create an initializer with:

StrongMigrations.start_after = 20250101000000

Use the version from your latest migration.

Target Version

If your development database version is different from production, you can specify the production version so the right checks run in development.

StrongMigrations.target_version = 10 # or 8.0, 10.5, etc

The major version works well for Postgres, while the major and minor version is recommended for MySQL and MariaDB.

For safety, this option only affects development and test environments. In other environments, the actual server version is always used.

If your app has multiple databases with different versions, you can use:

StrongMigrations.target_version = {primary: 13, catalog: 15}

Analyze Tables

Analyze tables automatically (to update planner statistics) after an index is added. Create an initializer with:

StrongMigrations.auto_analyze = true

Faster Migrations

Only dump the schema when adding a new migration. If you use Git, add to config/environments/development.rb:

config.active_record.dump_schema_after_migration = `git status db/migrate/ --porcelain`.present?

Schema Sanity

Columns can flip order in db/schema.rb when you have multiple developers. One way to prevent this is to alphabetize them. Add to config/initializers/strong_migrations.rb:

StrongMigrations.alphabetize_schema = true

Permissions

We recommend using a separate database user for migrations when possible so you don’t need to grant your app user permission to alter tables.

Smaller Projects

You probably don’t need this gem for smaller projects, as operations that are unsafe at scale can be perfectly safe on smaller, low-traffic tables.

Credits

Thanks to Bob Remeika and David Waller for the original code and Sean Huber for the bad/good readme format.

History

View the changelog

Contributing

Everyone is encouraged to help improve this project. Here are a few ways you can help:

  • Report bugs
  • Fix bugs and submit pull requests
  • Write, clarify, or fix documentation
  • Suggest or add new features

To get started with development:

git clone https://github.com/ankane/strong_migrations.git
cd strong_migrations
bundle install

# Postgres
createdb strong_migrations_test
bundle exec rake test

# MySQL and MariaDB
mysqladmin create strong_migrations_test
ADAPTER=mysql2 bundle exec rake test