Tutorial - Fleet rollout with migrations-based deployment
Published 12 May 2026
This tutorial walks through setting up a Flyway project that uses Flyway Desktop's schema-model authoring flow and deploys versioned V__ migrations to a fleet of production databases. The migrations may be authored by hand or generated from changes to the schema model — the rollout itself is identical either way.
For the principles and rollout strategy behind each step, see the parent guide: Rolling out updates from a single schema to multiple production databases.
If you want the schema model to be the deploy source of truth and have deployment scripts generated per target, follow the state-based tutorial instead.
What you'll have at the end
- A single Flyway project with a schema-model/ directory and a migrations/ directory of versioned scripts.
- A flyway.toml defining every production target as a named environment.
- An orchestrator that runs flyway migrate against each target in turn.
- A pre-flight, canary, and wave rollout sequence wired into your pipeline.
Prerequisites
- A version-control repository for the project.
- Two or more reachable production target databases of the same engine.
- A staging database that mirrors production topology.
- A secrets store for per-target credentials (Vault, AWS Secrets Manager, Azure Key Vault, etc.).
Step 1: Initialise the project
Create a new project using Flyway Desktop or flyway init. The scaffold looks like this:
flyway-project/
├── flyway.toml
├── schema-model/ # declarative DDL maintained by Flyway Desktop
└── migrations/ # versioned scripts (V__) and repeatables (R__)
Commit it to version control.
Step 2: Define each target as an environment
Edit flyway.toml. Under [flyway], set the project-wide defaults that are identical across the fleet. Under [environments.<name>], add one entry per target with its connection details:
[flyway]
locations = ["filesystem:migrations"]
validateMigrationNaming = true
[environments.eu-west-1]
url = "jdbc:postgresql://eu-west-1.db.example.com:5432/app"
schemas = ["app"]
user = "${env.EU_WEST_1_DB_USER}"
password = "${env.EU_WEST_1_DB_PASSWORD}"
[environments.us-east-1]
url = "jdbc:postgresql://us-east-1.db.example.com:5432/app"
schemas = ["app"]
user = "${env.US_EAST_1_DB_USER}"
password = "${env.US_EAST_1_DB_PASSWORD}"
[environments.ap-southeast-2]
url = "jdbc:postgresql://ap-southeast-2.db.example.com:5432/app"
schemas = ["app"]
user = "${env.AP_SOUTHEAST_2_DB_USER}"
password = "${env.AP_SOUTHEAST_2_DB_PASSWORD}"

Give each environment its own pair of environment variables for user and password. Resolve them at runtime from your secrets store and inject them as the named variables. Never commit credentials to version control. For more information, see Storing and retrieving credentials.
Per-environment variables keep each target's credentials in their own scope — a leak or misconfiguration on one target can't accidentally authenticate as another, and a single Flyway invocation only ever has the one target's credentials in its environment.
Step 3: Author a change
In Flyway Desktop, connect to a development database and make the change you want — add a column, create a table, alter a view. Flyway Desktop will:
- Update the schema-model/ files with the new desired state.
- Diff the model against the migration baseline.
- Generate a versioned migration in migrations/, e.g. V002__add_invoice_status.sql.
Review the generated script and commit both the schema-model/ changes and the new migration in the same commit.
You can also hand-author V__ files directly if you prefer; Flyway treats them identically.
Step 4: Test in dev and staging
Before considering production, the new migration should have been applied successfully to:
- A scratch database in the build pipeline (catches syntax errors early).
- A test database seeded with production-like data.
- A staging database that mirrors production topology.
The same flyway -environment=<name> migrate command runs against each of these.
Step 5: Wire up the orchestrator
Flyway is invoked once per target. For a sequential rollout, a small shell loop works. Populate each target's credentials as its own named pair of environment variables first, then iterate.
Each invocation chains info validate migrate in a single call so that the history-table validation and the apply happen together as one deployment unit. info is included to log the target's state into the pipeline output as an audit trail; validate halts the deploy on checksum or ordering mismatches before any change is applied.
Using the official Flyway actions:
# .github/workflows/deploy.yml
name: Deploy to production fleet
on:
workflow_dispatch:
jobs:
deploy:
strategy:
fail-fast: true
max-parallel: 1 # sequential rollout
matrix:
target: [eu-west-1, us-east-1, ap-southeast-2]
runs-on: ubuntu-latest
environment: ${{ matrix.target }} # per-target secrets and approvals
steps:
- uses: actions/checkout@v4
- uses: red-gate/setup-flyway@v3
with:
edition: enterprise
i-agree-to-the-eula: true
email: ${{ secrets.FLYWAY_EMAIL }}
token: ${{ secrets.FLYWAY_TOKEN }}
- name: Deploy to ${{ matrix.target }}
uses: red-gate/flyway-actions/migrations/deploy@v2
with:
target-environment: ${{ matrix.target }}
target-user: ${{ secrets.DB_USER }}
target-password: ${{ secrets.DB_PASSWORD }}
        working-directory: .

environment: ${{ matrix.target }} attaches each matrix job to a GitHub environment of the same name. Configure each environment with its own scoped DB_USER/DB_PASSWORD secrets and required-reviewer rules — this gives you per-target credential isolation and a manual approval gate between waves without changing the workflow.
targets=("eu-west-1" "us-east-1" "ap-southeast-2")
for target in "${targets[@]}"; do
prefix=$(echo "$target" | tr 'a-z-' 'A-Z_')
export "${prefix}_DB_USER=$(get-secret "$target/db-user")"
export "${prefix}_DB_PASSWORD=$(get-secret "$target/db-password")"
done
for target in "${targets[@]}"; do
echo "Deploying $target..."
if ! flyway "-environment=$target" info validate migrate; then
echo "Deployment failed for $target; halting rollout." >&2
exit 1
fi
done

In a CI/CD pipeline the credential loop usually isn't needed — your secret store integration (e.g. CI variables backed by AWS Secrets Manager or Vault) injects the per-environment variables directly.
$targets = @("eu-west-1", "us-east-1", "ap-southeast-2")
foreach ($target in $targets) {
$prefix = ($target -replace "-", "_").ToUpper()
Set-Item "env:${prefix}_DB_USER" (Get-Secret "$target/db-user")
Set-Item "env:${prefix}_DB_PASSWORD" (Get-Secret "$target/db-password")
}
foreach ($target in $targets) {
Write-Host "Deploying $target..."
flyway "-environment=$target" info validate migrate
if ($LASTEXITCODE -ne 0) {
throw "Deployment failed for $target; halting rollout."
}
}

In a CI/CD pipeline the credential loop usually isn't needed — your secret store integration (e.g. CI variables backed by AWS Secrets Manager or Vault) injects the per-environment variables directly.
For non-GitHub CI systems, the parent guide's Driving the fleet section links to the matrix-job documentation for every major CI provider — Azure DevOps, GitLab, Jenkins, CircleCI, Bitbucket Pipelines, TeamCity, Harness, and Octopus Deploy. Configure the matrix to fail-fast unless you have a strong reason not to.
Step 6: Pre-deployment checks
Before any change is applied, the pipeline runs a suite of flyway check commands to verify the fleet's state and catch issues that wouldn't show up until deploy time otherwise. The official Flyway template is documented at Common migrations-based deployment scripts; the script below adapts it for a fleet rollout.
check -code is project-wide — it analyses the migration scripts themselves and doesn't need a target environment. The other three (check -drift, check -changes, check -dryrun) run per target.
Using the migrations/checks action:
# .github/workflows/pre-deployment-checks.yml
name: Pre-deployment checks
on:
workflow_dispatch:
workflow_call:
jobs:
checks:
strategy:
fail-fast: false # surface findings across the fleet
matrix:
target: [eu-west-1, us-east-1, ap-southeast-2]
runs-on: ubuntu-latest
environment: ${{ matrix.target }}
steps:
- uses: actions/checkout@v4
- uses: red-gate/setup-flyway@v3
with:
edition: enterprise
i-agree-to-the-eula: true
email: ${{ secrets.FLYWAY_EMAIL }}
token: ${{ secrets.FLYWAY_TOKEN }}
- name: Pre-deployment checks against ${{ matrix.target }}
uses: red-gate/flyway-actions/migrations/checks@v2
with:
target-environment: ${{ matrix.target }}
target-user: ${{ secrets.DB_USER }}
target-password: ${{ secrets.DB_PASSWORD }}
build-environment: build
build-user: ${{ secrets.BUILD_DB_USER }}
build-password: ${{ secrets.BUILD_DB_PASSWORD }}
        working-directory: .

fail-fast: false lets every target's checks run to completion, so you see all findings across the fleet in a single workflow run rather than just the first. The build credentials are repository-level secrets (not environment-scoped); GitHub Actions falls back to repo-level secrets when an environment-scoped value isn't found.
targets=("eu-west-1" "us-east-1" "ap-southeast-2")
flyway check -code "-code.failOnError=true"
for target in "${targets[@]}"; do
flyway "-environment=$target" check -drift -failOnDrift=true
flyway "-environment=$target" check -changes -buildEnvironment=build
flyway "-environment=$target" check -dryrun
done

In a CI/CD pipeline the credential loop usually isn't needed — your secret store integration (e.g. CI variables backed by AWS Secrets Manager or Vault) injects the per-environment variables directly.
$targets = @("eu-west-1", "us-east-1", "ap-southeast-2")
flyway check -code "-code.failOnError=true"
foreach ($target in $targets) {
flyway "-environment=$target" check -drift -failOnDrift=true
flyway "-environment=$target" check -changes -buildEnvironment=build
flyway "-environment=$target" check -dryrun
}

What each check does:
- check -code — static analysis of the migration scripts. Project-wide, no target environment needed. -code.failOnError=true halts the pipeline on any rule violation. Customise the rule set under [flyway.check] in flyway.toml.
- check -drift — compares each target's live state to its schema history. -failOnDrift=true halts on any finding, so a single drifted target can't ride into the fleet via the next migration.
- check -changes — reports what migrations will run against each target by comparing it to a build environment representing the schema state about to be deployed. Define [environments.build] in flyway.toml pointing at an ephemeral database the build stage has applied the migrations to. The pending set should be identical across the fleet in a healthy rollout; anything else is worth investigating.
- check -dryrun — simulates applying each target's pending migrations without committing, catching issues that only show up when the script meets the target's live data and schema. A non-zero exit halts the pipeline.
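check -changes depends on that build environment being defined alongside the production targets. A minimal sketch, mirroring the production entries earlier in flyway.toml — the hostname and variable names here are assumptions; point it at whatever ephemeral database your build stage provisions:

```toml
[environments.build]
url = "jdbc:postgresql://build.db.example.com:5432/app"
schemas = ["app"]
user = "${env.BUILD_DB_USER}"
password = "${env.BUILD_DB_PASSWORD}"
```

Because the build database is rebuilt from the migrations on every pipeline run, it always represents the exact schema state about to be deployed.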
Save the generated report for each check as a pipeline artefact — together they're the audit trail for the rollout. The Flyway GitHub Actions upload the report natively.
Step 7: Canary
Apply the migration to a single low-risk production target first, using the same chained invocation as the rest of the rollout so validation runs before the apply:
flyway "-environment=eu-west-1" info validate migrate
Smoke-test the application against the canary. Watch logs and dashboards for at least one full monitoring window before continuing.
Step 8: Wave rollout
Apply to the remaining targets in waves — for example 10%, 50%, 100% — with a pause between waves long enough to surface problems via monitoring and customer reports. The same loop or matrix job from step 5 can be partitioned across waves.
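The partitioning itself is plain array slicing. A sketch, using placeholder target names rather than real environments; the percentages are cumulative, so each wave deploys only the targets not yet covered:

```shell
# Placeholder fleet; substitute your real environment names.
targets=(t01 t02 t03 t04 t05 t06 t07 t08 t09 t10)
total=${#targets[@]}
done_count=0

for pct in 10 50 100; do
  # Ceiling division, so a 10% wave of a small fleet still gets one target.
  upto=$(( (total * pct + 99) / 100 ))
  wave=("${targets[@]:done_count:upto-done_count}")
  echo "Wave ${pct}%: ${wave[*]}"
  # Deploy the wave, then pause for monitoring before the next one:
  # for t in "${wave[@]}"; do flyway "-environment=$t" info validate migrate; done
  done_count=$upto
done
```

With ten targets this yields waves of 1, 4, and 5 targets; the commented-out inner loop is where the step 5 deployment command slots in.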
Step 9: Post-flight verification
Run flyway info against every target after the rollout. Every target should report the same Current Version. Save the output as a pipeline artefact — it's the cheapest audit trail you have when a question comes up six months later.
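The consistency check can be automated rather than eyeballed. A sketch: current_version is a stub standing in for however your pipeline extracts the Current Version field from flyway info output for a given environment; replace it with your real extraction.

```shell
# Fail the pipeline if any target reports a different schema version.
# current_version is a stub for illustration; in a real pipeline it would
# parse the output of: flyway "-environment=$1" info
current_version() {
  echo "V002"
}

targets=("eu-west-1" "us-east-1" "ap-southeast-2")
expected=$(current_version "${targets[0]}")
for target in "${targets[@]}"; do
  actual=$(current_version "$target")
  if [ "$actual" != "$expected" ]; then
    echo "Version mismatch on $target: $actual (expected $expected)" >&2
    exit 1
  fi
done
echo "Fleet consistent at $expected"
```

A non-zero exit here is exactly the signal you want before declaring the rollout complete.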
Variants: targets that need different migrations
If a small number of targets need migrations the others don't (a regional column for one tenant, for example), layer in variant scripts via Flyway's locations mechanism rather than branching the project:
migrations/
├── common/ # applied everywhere
└── variants/
├── eu/ # applied only to EU targets
└── regulated/ # applied only to regulated tenants
Note that locations lives under [flyway], not directly on the environment, so the override goes in the environment's .flyway sub-namespace:
[environments.eu-west-1]
url = "..."
[environments.eu-west-1.flyway]
locations = ["filesystem:migrations/common", "filesystem:migrations/variants/eu"]
The same rule applies to any other [flyway] setting you want to vary per target (placeholders, validation rules, etc.) — override them under [environments.<name>.flyway]. Environment-level keys like url, schemas, user, and password stay directly on [environments.<name>].
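For instance, a per-target placeholder override would follow the same pattern. This is a sketch only: the placeholder name and value are invented for illustration, and it assumes placeholder overrides follow the same .flyway sub-namespace rule and that your migrations reference ${region}:

```toml
[environments.eu-west-1.flyway.placeholders]
region = "eu"
```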
The schema history table records exactly which migrations ran against each database, so drift remains visible. Resist the urge to introduce variants for cosmetic differences — every variant multiplies the test matrix.
When something goes wrong
- Failed migration. If the target engine supports transactional DDL (PostgreSQL, SQL Server), the failed migration rolls back and the target stays at the previous version. Halt the rollout, fix the issue forward as a new V__, test it, and resume. Oracle, MySQL, and MariaDB cannot roll back DDL, so design migrations for those engines to be small and individually recoverable.
- Need to roll a change back. Pair each non-trivial migration with an undo script (U__, a Teams/Enterprise feature) authored at the same time. Test the undo path in a non-production environment before you need it.
- Single-target drift. If a target is out of sync because of an emergency hotfix applied manually, capture the change as a new versioned migration, apply it across the fleet, and flyway repair the affected target's history table to realign checksums. Don't edit the history table directly.
See the parent guide's Handling failures and Proactive drift checks sections for the strategy behind these.