Tutorial - Fleet rollout with state-based deployment

This tutorial walks through setting up a Flyway project where a declarative schema model is the deploy source of truth. At deploy time, Flyway compares the model against the live state of each target and generates a target-specific deployment script. Different targets may receive different scripts.

For the principles and rollout strategy behind each step, see the parent guide: Rolling out updates from a single schema to multiple production databases.

State-based deployment is more flexible than migrations-based but less forgiving of drift, because any drift on a target directly changes the script that runs against it. Don't start here unless you have:

  • Continuous drift detection running across the fleet.
  • A review process — both human and automated — for the generated scripts.
  • Automated guards that fail the pipeline on destructive operations in generated scripts.

If you're new to fleet rollouts, follow the migrations-based tutorial instead.
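
The third prerequisite, an automated guard against destructive operations, can start as a pattern scan over each generated script before anyone approves it. A minimal sketch in bash; the pattern list is an example, so align it with your own policy:

```shell
# Fail when a generated deployment script contains statements
# we never want to wave through automatically.
guard_script() {
    local script=$1
    if grep -Eiq 'DROP (TABLE|COLUMN)|TRUNCATE' "$script"; then
        echo "destructive statement found in $script" >&2
        return 1
    fi
}
```

Run it over each target's generated script and let a nonzero exit fail the job. flyway check -code in step 4 performs a deeper version of the same check with configurable rules.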

What you'll have at the end

  • A Flyway project where schema-model/ is the deploy source of truth.
  • A flyway.toml defining every production target as a named environment.
  • A pipeline that, for each target, runs a drift check, generates a deployment script, reviews it, and applies it.
  • A drift-detection sweep that runs before any deployment script is generated.

Prerequisites

  • Flyway Enterprise edition.
  • A version-control repository for the project.
  • Two or more reachable production target databases of the same engine.
  • A staging database that mirrors production topology.
  • A secrets store for per-target credentials.
  • Continuous drift detection wired up — covered in step 4.

Step 1: Initialise the project

Create a new Flyway project with a schema-model/ directory.
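
A minimal way to lay out the skeleton by hand (Flyway Desktop can also scaffold the project for you):

```shell
# Create the project layout from step 1; flyway.toml is filled in during step 2.
mkdir -p flyway-project/schema-model
touch flyway-project/flyway.toml
```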

flyway-project/
├── flyway.toml
└── schema-model/                # declarative DDL — the deploy source of truth

Step 2: Define each target as an environment

Edit flyway.toml. Under [flyway], set the project-wide defaults that are identical across the fleet. Under [environments.<name>], add one entry per target with its connection details:

[flyway]
locations = ["filesystem:migrations"]
validateMigrationNaming = true

[environments.eu-west-1]
url = "jdbc:postgresql://eu-west-1.db.example.com:5432/app"
schemas = ["app"]
user = "${env.EU_WEST_1_DB_USER}"
password = "${env.EU_WEST_1_DB_PASSWORD}"

[environments.us-east-1]
url = "jdbc:postgresql://us-east-1.db.example.com:5432/app"
schemas = ["app"]
user = "${env.US_EAST_1_DB_USER}"
password = "${env.US_EAST_1_DB_PASSWORD}"

[environments.ap-southeast-2]
url = "jdbc:postgresql://ap-southeast-2.db.example.com:5432/app"
schemas = ["app"]
user = "${env.AP_SOUTHEAST_2_DB_USER}"
password = "${env.AP_SOUTHEAST_2_DB_PASSWORD}"

Give each environment its own pair of environment variables for user and password. Resolve them at runtime from your secrets store and inject them as the named variables. Never commit credentials to version control. For more information, see Storing and retrieving credentials.

Per-environment variables keep each target's credentials in their own scope — a leak or misconfiguration on one target can't accidentally authenticate as another, and a single Flyway invocation only ever has the one target's credentials in its environment.
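
If the variable names are derived mechanically from the environment name, the pipeline can resolve each target's credential pair without a lookup table. A sketch of the naming convention used above:

```shell
# Map an environment name to its credential-variable prefix:
# "eu-west-1" becomes "EU_WEST_1", giving EU_WEST_1_DB_USER / EU_WEST_1_DB_PASSWORD.
target_prefix() {
    printf '%s' "$1" | tr 'a-z-' 'A-Z_'
}

echo "$(target_prefix eu-west-1)_DB_USER"   # prints EU_WEST_1_DB_USER
```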

Step 3: Make a schema change

Make the change in a development database, then either update schema-model/ with the flyway model command or use Flyway Desktop to update it. Commit the change.

You're committing the desired state, not a procedural script — the script that runs in production will be generated per target at deploy time.
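
On the CLI, the update is a diff-then-model pair. The snippet below prints the two commands rather than executing them, so the sequence is easy to inspect; the development environment name and the short flag spellings are assumptions to adapt to your setup:

```shell
# Emit the two commands that fold dev-database changes into schema-model/.
model_update_cmds() {
    printf '%s\n' \
        'flyway diff -source=development -target=schemaModel' \
        'flyway model'
}
model_update_cmds
```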

Step 4: Pre-deployment pipeline

Before any change is applied, the pipeline runs a sequence of flyway commands that verify the fleet's state, generate the per-target deployment scripts, and check those scripts before they run. The official Flyway template is documented at Common state-based deployment scripts; the script below adapts it for a fleet rollout.

The check sequence is interleaved with prepare: drift and changes are checked against the live target, then prepare generates the deployment script, then check -code and check -dryrun run against the generated script. Every command selects its target with the -environment argument; check -code additionally runs with -scope=script, pointing it at the specific generated file rather than the project's locations.


Using the state/prepare action:

# .github/workflows/prepare.yml
name: Generate deployment scripts

on:
  workflow_dispatch:
  workflow_call:

jobs:
  prepare:
    strategy:
      fail-fast: false
      matrix:
        target: [eu-west-1, us-east-1, ap-southeast-2]
    runs-on: ubuntu-latest
    environment: ${{ matrix.target }}
    steps:
      - uses: actions/checkout@v4
      - uses: red-gate/setup-flyway@v3
        with:
          edition: enterprise
          i-agree-to-the-eula: true
          email: ${{ secrets.FLYWAY_EMAIL }}
          token: ${{ secrets.FLYWAY_TOKEN }}
      - name: Prepare deployment for ${{ matrix.target }}
        uses: red-gate/flyway-actions/state/prepare@v2
        with:
          target-environment: ${{ matrix.target }}
          target-user: ${{ secrets.DB_USER }}
          target-password: ${{ secrets.DB_PASSWORD }}
          working-directory: .
      - uses: actions/upload-artifact@v4
        with:
          name: deployment-script-${{ matrix.target }}
          path: ./generated/${{ matrix.target }}/

environment: ${{ matrix.target }} attaches each matrix job to a GitHub environment of the same name. Configure each environment with its own scoped DB_USER/DB_PASSWORD secrets and required-reviewer rules — this gives you per-target credential isolation and a manual approval gate between waves without changing the workflow.

The same sequence as a plain CLI loop, in bash:

targets=("eu-west-1" "us-east-1" "ap-southeast-2")

for target in "${targets[@]}"; do
    flyway "-environment=$target" check -drift -failOnDrift=true
    flyway "-environment=$target" check -changes -changesSource=schemaModel

    flyway "-environment=$target" \
        prepare -source=schemaModel \
        "-scriptFilename=./generated/$target/D__deployment.sql"

    flyway "-environment=$target" \
        check -code -scope=script \
        "-scriptFilename=./generated/$target/D__deployment.sql" \
        -failOnError=true
done

The same loop in PowerShell:

$targets = @("eu-west-1", "us-east-1", "ap-southeast-2")

foreach ($target in $targets) {
    flyway "-environment=$target" check -drift -failOnDrift=true
    flyway "-environment=$target" check -changes -changesSource=schemaModel

    flyway "-environment=$target" `
        prepare -source=schemaModel `
        "-scriptFilename=./generated/$target/D__deployment.sql"

    flyway "-environment=$target" `
        check -code -scope=script `
        "-scriptFilename=./generated/$target/D__deployment.sql" `
        -failOnError=true
}

The uploaded artefacts are consumed by the deploy workflow in step 6 — that's the boundary between the automated pre-deployment pipeline and the gated deployment.

What each command does:

  • check -drift — compares each target's live state to the schema model. Drift is especially dangerous in state-based deployment because the diff is regenerated against the live state, so any drift directly changes what would be applied. -failOnDrift=true halts on any finding. This check should also be running on a schedule outside of deployments — nightly across the whole fleet at minimum.
  • check -changes — reports what the schema-model deployment will do to each target. The output is what the human reviewer in step 5 scans first before reading the generated SQL.
  • prepare — generates the deployment script for each target, saved under ./generated/<target>/. Two targets whose live states differ slightly will get slightly different scripts; that's expected.
  • check -code — static analysis of each generated script (-scope=script -scriptFilename=...). Catches destructive statements, rule violations, and policy breaches that the schema model itself wouldn't surface. -failOnError=true halts on any violation. Customise the rule set under [flyway.check] in flyway.toml.
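
As an illustration, the rule configuration sits under [flyway.check]; the rule IDs and tolerance below are placeholders, so check your edition's code-analysis reference for the keys and rules that apply to you:

```toml
[flyway.check]
# Treat these rules as blocking; the IDs here are illustrative.
majorRules = ["RX001", "RX005"]
majorTolerance = 0
```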

Save the generated report for each check as a pipeline artefact — together they're the audit trail for the rollout. The Flyway GitHub Actions upload these artefacts natively.

Step 5: Review

Each generated script gets a human review before it runs against production, informed by the automated check outputs from step 4:

  • Code analysis flags rule violations in the SQL.
  • Changes preview summarises what each target will receive.
  • Dry run confirms the script can actually run against the target's current state.

A reviewer signs off on each target's script. A surprising DROP COLUMN or large data movement usually indicates either drift on the target or an unintended change in the model — investigate before approving.

Step 6: Apply to the canary

Apply the canary target's generated script:

flyway "-environment=eu-west-1" migrate "-locations=filesystem:./generated/eu-west-1"

Smoke-test the application against the canary. Watch logs and dashboards for at least one full monitoring window before continuing.
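
Mechanically, "smoke-test and watch" is a polling loop around whatever probe you trust. The probe command and timings below are placeholders:

```shell
# Poll a health probe until it passes or attempts run out.
wait_healthy() {
    local probe=$1 attempts=${2:-5} delay=${3:-1} i
    for ((i = 1; i <= attempts; i++)); do
        if eval "$probe"; then
            echo "healthy after $i attempt(s)"
            return 0
        fi
        sleep "$delay"
    done
    echo "unhealthy after $attempts attempts" >&2
    return 1
}

# e.g. wait_healthy 'curl -fsS https://canary.example.com/health' 10 30
```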

In GitHub Actions, the same workflow handles the canary and the subsequent waves — sequential matrix execution plus required-reviewer rules on each GitHub environment provide the per-target approval gate:

# .github/workflows/deploy.yml
name: Apply deployment scripts

on:
  workflow_dispatch:

jobs:
  deploy:
    strategy:
      fail-fast: true
      max-parallel: 1                       # sequential canary then waves
      matrix:
        target: [eu-west-1, us-east-1, ap-southeast-2]
    runs-on: ubuntu-latest
    environment: ${{ matrix.target }}       # per-target approval gate
    steps:
      - uses: actions/checkout@v4
      - uses: red-gate/setup-flyway@v3
        with:
          edition: enterprise
          i-agree-to-the-eula: true
          email: ${{ secrets.FLYWAY_EMAIL }}
          token: ${{ secrets.FLYWAY_TOKEN }}
      - uses: actions/download-artifact@v4
        with:
          name: deployment-script-${{ matrix.target }}
          path: ./generated/${{ matrix.target }}/
      - name: Deploy to ${{ matrix.target }}
        uses: red-gate/flyway-actions/state/deploy@v2
        with:
          target-environment: ${{ matrix.target }}
          target-user: ${{ secrets.DB_USER }}
          target-password: ${{ secrets.DB_PASSWORD }}
          working-directory: .

Set the matrix order so the canary target is first. Configure required-reviewer protection rules on every GitHub environment — max-parallel: 1 plus those rules means each target's deploy waits for the previous target's reviewer to approve.

Step 7: Wave rollout

Apply the remaining targets' scripts in waves — for example 10%, 50%, 100%. If the previous wave took long enough that further drift could have appeared on the remaining targets, re-run the drift check and re-generate those targets' scripts before continuing.
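
The wave sizes are cumulative slices of the ordered target list. A sketch of the arithmetic with a hypothetical ten-target fleet and the 10%, 50%, 100% split:

```shell
# Split an ordered target list into cumulative waves.
targets=(db-01 db-02 db-03 db-04 db-05 db-06 db-07 db-08 db-09 db-10)
percentages=(10 50 100)

start=0
for pct in "${percentages[@]}"; do
    # Ceiling of total*pct/100, so a 10% wave of ten targets is one target.
    end=$(( (${#targets[@]} * pct + 99) / 100 ))
    echo "wave ${pct}%: ${targets[*]:start:end-start}"
    start=$end
done
```

The first wave is the canary from step 6; each later wave only starts once the previous wave's targets have been verified.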

The deploy workflow shown in step 6 also handles the wave rollout — the matrix iterates through every target, gated by each environment's approval rules.

When something goes wrong

State-based deployment has a different recovery model from migrations-based:

  • Drift discovered during pre-flight. Stop the rollout for the affected target. Either capture the unexpected change into the schema model (if it should be kept) or revert it on the target (if it shouldn't), then re-generate and re-check before resuming.
  • Generated script does something unexpected. Don't edit the script to "fix" it — fix the schema model or the live state and re-generate. The script is derived, not authoritative; the next regeneration would undo any hand edits anyway.
  • Partial failure mid-deployment. Don't try to complete a failed script manually. Restore the affected target from backup or PITR, fix the cause (in the model or by resolving drift), re-generate, and re-apply.
  • Need to revert a change. Revert the change in the schema model and re-run the rollout. The generated script for each target will produce the inverse.

See the parent guide's Handling failures and Proactive drift checks sections for the strategy behind these.

