
Goose Migrations for Smooth Database Changes


Hello, mate!

Today, let’s talk about what database migrations are and why they’re so important. Any change to a database should be made carefully and through a well-defined process. Ideally, these steps are integrated into our CI/CD pipeline so that everything runs automatically.

Here’s our agenda:

  1. What’s the problem?
  2. How do we fix it?
  3. A simple example
  4. A more complex example
  5. Recommendations
  6. Results
  7. Conclusion

What’s the Problem?

If your team has never dealt with database migrations and you’re not entirely sure why they’re needed, let’s sort that out. If you already know the basics, feel free to skip ahead.

Main Challenge

When we make “planned” and “smooth” changes to the database, we need to maintain service availability and meet SLA requirements (so that users don’t suffer from downtime or lag). Imagine you want to change a column type in a table with 5 million users. If you do this “head-on” (e.g., simply run ALTER TABLE without prep), the table could get locked for a significant amount of time — and your users would be left without service.

To avoid such headaches, follow two rules:

  1. Apply migrations in a way that doesn’t lock the table (or at least minimizes locks).
  2. If you need to change a column type, it’s often easier to create a new column with the correct type first and then drop the old one afterward.
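As a sketch of rule 1 in PostgreSQL terms: some operations have lock-friendly variants. For example, building an index normally blocks writes, but the CONCURRENTLY variant avoids that (the `users` table and `email` column here are hypothetical; note that CONCURRENTLY cannot run inside a transaction, which in Goose means annotating the migration accordingly):

```sql
-- +goose NO TRANSACTION
-- +goose Up
-- Builds the index without holding a long write lock on the table.
-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
-- hence the NO TRANSACTION annotation above.
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);

-- +goose Down
DROP INDEX CONCURRENTLY idx_users_email;
```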

Another Problem: Version Control and Rollbacks

Sometimes you need to roll back a migration.

Doing this manually — going into the production database and fiddling with data — is not only risky but also likely impossible if you don’t have direct access. That’s where dedicated migration tools come in handy. They let you apply changes cleanly and revert them if necessary.

How Do We Fix It? Use the Right Tools

Each language and ecosystem has its own migration tools:

  • For Java, Liquibase or Flyway are common.
  • For Go, a popular choice is goose (the one we’ll look at here).
  • And so on.

Goose: What It Is and Why It’s Useful


Goose is a lightweight Go utility that helps you manage migrations automatically. It offers:

  • Simplicity. Minimal dependencies and a transparent file structure for migrations.
  • Versatility. Supports various DB drivers (PostgreSQL, MySQL, SQLite, etc.).
  • Flexibility. Write migrations in SQL or Go code.

Installing Goose

go install github.com/pressly/goose/v3/cmd/goose@latest

How It Works: Migration Structure

Goose looks for migration files in the directory you pass via the -dir flag (in this article, db/migrations). Each migration follows this format:

NNN_migration_name.(sql|go)

  • NNN is the migration number (e.g., 001, 002, etc.).
  • After that, you can have any descriptive name, for example init_schema.
  • The extension can be .sql or .go.

Example of an SQL Migration

File: 001_init_schema.sql:

-- +goose Up
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(255) NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT now()
);

-- +goose Down
DROP TABLE users;

Our First Example

Changing a Column Type (String → Int)

Suppose we have a users table with a column age of type VARCHAR(255). Now we want to change it to INTEGER. Here’s what the migration might look like (file 005_change_column_type.sql):

-- +goose Up
ALTER TABLE users ALTER COLUMN age TYPE INTEGER USING (age::INTEGER);

-- +goose Down
ALTER TABLE users ALTER COLUMN age TYPE VARCHAR(255) USING (age::TEXT);

What’s happening here:

  1. Up migration

    • We change the age column to INTEGER. The USING (age::INTEGER) clause tells PostgreSQL how to convert existing data to the new type.
    • Note that this migration will fail if there’s any data in age that isn’t numeric. In that case, you’ll need a more complex strategy (see below).
  2. Down migration

    • If we roll back, we return age to VARCHAR(255).
    • We again use USING (age::TEXT) to convert from INTEGER back to text.
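Before running the Up migration, it’s worth checking for rows that would break the cast. A query like this (reusing the `users`/`age` names from above) lists the offenders:

```sql
-- Rows whose age value is not purely numeric; each of these would make
-- the ALTER ... USING (age::INTEGER) cast fail.
SELECT id, age
FROM users
WHERE age !~ '^[0-9]+$';
```

If the query returns anything, you need the multi-step strategy described next.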

A More Complex Case: Multi-Step Migrations

If the age column might contain messy data (not just numbers), it’s safer to do this in several steps:

  1. Add a new column (age_int) of type INTEGER.
  2. Copy valid data into the new column, dealing with or removing invalid entries.
  3. Drop the old column.
-- +goose Up
-- Step 1: Add a new column
ALTER TABLE users ADD COLUMN age_int INTEGER;

-- Step 2: Try to move data over
UPDATE users 
SET age_int = CASE
    WHEN age ~ '^[0-9]+$' THEN age::INTEGER
    ELSE NULL
END;

-- (optional) remove rows where data couldn’t be converted
-- DELETE FROM users WHERE age_int IS NULL;

-- Step 3: Drop the old column
ALTER TABLE users DROP COLUMN age;

-- +goose Down
-- Step 1: Recreate the old column
ALTER TABLE users ADD COLUMN age VARCHAR(255);

-- Step 2: Copy data back
UPDATE users 
SET age = age_int::TEXT;

-- Step 3: Drop the new column
ALTER TABLE users DROP COLUMN age_int;

To allow a proper rollback, the Down section just mirrors the actions in reverse. Note that this rollback is not fully lossless: any values that couldn’t be converted were discarded during Up, so Down can only restore the data that survived the conversion.

Automation is Key

To save time, it’s really convenient to add migration commands to a Makefile (or any other build system). Below is an example Makefile with the main Goose commands for PostgreSQL.

Let’s assume:

  • The DSN for the database is postgres://user:password@localhost:5432/dbname?sslmode=disable.
  • Migration files are in db/migrations.
# File: Makefile

# ?= lets an environment variable (e.g., one set by CI) override this default
DB_DSN ?= "postgres://user:password@localhost:5432/dbname?sslmode=disable"
MIGRATIONS_DIR = db/migrations

# Install Goose (run once)
install-goose:
	go install github.com/pressly/goose/v3/cmd/goose@latest

# Create a new SQL migration file
new-migration:
ifndef NAME
	$(error Usage: make new-migration NAME=your_migration_name)
endif
	goose -dir $(MIGRATIONS_DIR) create $(NAME) sql

# Apply all pending migrations
migrate-up:
	goose -dir $(MIGRATIONS_DIR) postgres $(DB_DSN) up

# Roll back the last migration
migrate-down:
	goose -dir $(MIGRATIONS_DIR) postgres $(DB_DSN) down

# Roll back all migrations (be careful in production!)
migrate-reset:
	goose -dir $(MIGRATIONS_DIR) postgres $(DB_DSN) reset

# Check migration status
migrate-status:
	goose -dir $(MIGRATIONS_DIR) postgres $(DB_DSN) status

How to Use It?

1. Create a new migration (SQL file):

make new-migration NAME=add_orders_table

This generates a file like db/migrations/002_add_orders_table.sql.
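The freshly created file contains an empty skeleton for you to fill in; recent versions of goose generate roughly this shape (the StatementBegin/StatementEnd markers delimit statements that may span multiple lines):

```sql
-- +goose Up
-- +goose StatementBegin
SELECT 'up SQL query';
-- +goose StatementEnd

-- +goose Down
-- +goose StatementBegin
SELECT 'down SQL query';
-- +goose StatementEnd
```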

2. Apply all pending migrations:

make migrate-up

Goose will create a goose_db_version table in your database (if it doesn’t already exist) and apply any new migrations in ascending order.

3. Roll back the last migration:

make migrate-down

4. Roll back all migrations for a full reset (use caution in production):

make migrate-reset

5. Check migration status:

make migrate-status

Output example:

$ goose status
    Applied At                  Migration
    =======================================
    Sun Jan  6 11:25:03 2013 -- 001_basics.sql
    Sun Jan  6 11:25:03 2013 -- 002_next.sql
    Pending                  -- 003_and_again.go

Summary

By using migration tools and a Makefile, we can:

  1. Restrict direct access to the production database, making changes only through migrations.
  2. Easily track database versions and roll them back if something goes wrong.
  3. Maintain a single, consistent history of database changes.
  4. Perform “smooth” migrations that won’t break a running production environment, even in a microservices world.
  5. Gain extra validation: every change goes through a PR and code review (assuming your repository requires them).

Another advantage is that it’s easy to integrate all these commands into your CI/CD pipeline. And remember — security above all else. 

For instance:

jobs:
  migrate:
    runs-on: ubuntu-latest

    steps:
      # Without a checkout step, the Makefile wouldn't be available to the job
      - name: Check out the repository
        uses: actions/checkout@v4

      - name: Install Goose
        run: |
          make install-goose
          # go install puts the binary in $HOME/go/bin, which is not
          # on the runner's PATH by default
          echo "$HOME/go/bin" >> "$GITHUB_PATH"

      - name: Run database migrations
        env:
          DB_DSN: ${{ secrets.DATABASE_URL }}
        run: |
          make migrate-up

Conclusion and Tips

The main ideas are simple:

  • Keep your migrations small and frequent. They’re easier to review, test, and revert if needed.
  • Use the same tool across all environments so dev, stage, and prod are in sync.
  • Integrate migrations into CI/CD so you’re not dependent on any one person manually running them.

In this way, you’ll have a reliable and controlled process for changing your database structure — one that doesn’t break production and lets you respond quickly if something goes wrong. 

Good luck with your migrations!

Thanks for reading!
