Rolling out updates from a single schema to multiple production databases
Published 12 May 2026
This page describes how to use Flyway to deliver schema changes consistently to a fleet of production databases — for example, a per-tenant SaaS estate, regional replicas of the same application, or a group of environments that must all stay in lockstep.
The goal is the same in every case: every target database ends up at the same schema version, derived from the same source of truth, with the same result. The differences between targets live in configuration, not in what's being applied.
This page is the how-to: the principles, controls, configuration model, and rollout shape that apply regardless of how you author and deploy schema changes. Two child tutorials walk through the specific mechanics end-to-end:
- Tutorial: Fleet rollout with migrations-based deployment — versioned V__ migrations are the deploy artefact, applied in order to every target.
- Tutorial: Fleet rollout with state-based deployment — the schema model is the deploy source of truth; per-target deployment scripts are generated, reviewed, and applied at deploy time.
The two flows have the same strategic shape — same principles, same rollout sequence, same drift discipline — but differ in the artefact that runs against production and where the review gate sits. If you're new to fleet rollouts, prefer the migrations-based flow.
Before you start: target consistency
The rollout mechanics described below assume every target database starts from a known, consistent state. How you achieve that depends on which flow you're using.
Migrations-based relies on the flyway_schema_history table to track what's been applied. Every target needs that table with a known baseline version before the rollout makes sense. If you're rolling Flyway out to an existing fleet — some targets at different versions, some not Flyway-managed at all — baseline them first. See Baselining your downstream environments for the full process, including the case where targets are not in sync with each other and need separate baselines.
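As a rough illustration of that one-off step from the orchestrator's side, assuming you have already established each target's actual version by inspection (the environment names and version numbers here are hypothetical):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical map of target environment -> the schema version it is already
# at, established manually before bringing it under Flyway management.
declare -A baseline_versions=(
  ["eu-west-1"]="005"
  ["us-east-1"]="003"
)

for target in "${!baseline_versions[@]}"; do
  # baseline creates the schema history table on the target and records
  # its starting version, so later migrates only apply newer changes
  flyway -environment="${target}" baseline \
    -baselineVersion="${baseline_versions[$target]}"
done
```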
State-based doesn't use the schema history table — every deployment diffs the live state of each target against the schema model and generates a per-target script. There's no baseline step. But target consistency matters just as much: if your fleet's targets have drifted from each other or from the model, the generated scripts will include those differences and may run destructive statements you didn't author. Drift-check the fleet before your first state-based deployment and either pull unexpected changes into the model (if they should be kept) or revert them on the target (if they shouldn't).
Until the fleet is in a consistent starting state, none of the rollout mechanics described below will behave as expected.
Core principles
Before you set up a fleet rollout, make sure these are in place. They are not fleet-specific, but a fleet amplifies the cost of getting them wrong.
- One source of truth. A single Flyway project, stored in version control, owns the definition of the schema. Every target database — dev, test, staging, and each production node — derives its state from the same source.
- Immutability of applied changes. Once a change has been applied to any production database, never edit the artefact that produced it; add a new change instead. Editing an applied migration causes a checksum mismatch on the next validate; editing an already-applied generated script is just as confusing six months later.
- Forward-compatible schema changes. Changes should be safe to apply while the previous version of the application is still running. Use the expand/contract pattern (add new columns nullable, dual-write, backfill, switch reads, then drop) so a partial rollout — application v1 against schema v2 — keeps working.
- Configuration, not code, varies between targets. Connection details and per-target overrides belong in flyway.toml environments. The schema definition itself is identical across the fleet.
Project layout
A typical project for a fleet rollout looks like this:
```
flyway-project/
├── flyway.toml
│     [flyway]                        ← shared defaults
│     [environments.eu-west-1]       ─┐
│     [environments.us-east-1]        ├─ every target as a named environment,
│     [environments.ap-southeast-2]   │  overriding only what it needs to
│     …                              ─┘
├── schema-model/    ← declarative DDL — source of truth or authoring aid, depending on flow
│     Tables/  Views/  …
└── migrations/      ← versioned scripts and repeatables (migrations-based deployment only)
      V001__initial_schema.sql
      …
```
The role of schema-model/ and migrations/ depends on which deployment flow you use:
- In the migrations-based flow, migrations/ is the deploy artefact. schema-model/ is maintained alongside it as an authoring aid — Flyway Desktop diffs the model when you make a change and generates the corresponding V__ script.
- In the state-based flow, schema-model/ is the deploy source of truth. migrations/ typically contains only a baseline; the script that runs against each target is generated at deploy time.
The shared [flyway] block holds settings that are identical across the fleet — locations, encoding, placeholder defaults, validation rules. Each target is a named entry under [environments] in the same file, overriding only the settings it needs to (typically url, schemas, and credentials):
```toml
[flyway]
locations = ["filesystem:migrations"]
validateMigrationNaming = true

[environments.eu-west-1]
url = "jdbc:postgresql://eu-west-1.db.example.com:5432/app"
schemas = ["app"]
user = "${env.EU_WEST_1_DB_USER}"
password = "${env.EU_WEST_1_DB_PASSWORD}"

[environments.us-east-1]
url = "jdbc:postgresql://us-east-1.db.example.com:5432/app"
schemas = ["app"]
user = "${env.US_EAST_1_DB_USER}"
password = "${env.US_EAST_1_DB_PASSWORD}"

[environments.ap-southeast-2]
url = "jdbc:postgresql://ap-southeast-2.db.example.com:5432/app"
schemas = ["app"]
user = "${env.AP_SOUTHEAST_2_DB_USER}"
password = "${env.AP_SOUTHEAST_2_DB_PASSWORD}"
```

Select the target at runtime, e.g. flyway -environment=eu-west-1 migrate.
Credentials never go in version control. Give each environment its own pair of environment variables — as above — and resolve them at runtime from your secret store (Vault, AWS Secrets Manager, Azure Key Vault, etc.). Per-environment variables keep each target's credentials in their own scope, so a leak or misconfiguration on one target can't accidentally authenticate as another. If you prefer, you can also override credentials on the command line, which takes precedence over the file.
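For example, one way the runtime resolution can look with a Vault-backed store (the secret paths and field names here are hypothetical; adjust to your own store's layout):

```bash
# Resolve one target's credentials from HashiCorp Vault just before the run.
# The variables match the ${env.*} references in flyway.toml.
export EU_WEST_1_DB_USER="$(vault kv get -field=username secret/fleet/eu-west-1)"
export EU_WEST_1_DB_PASSWORD="$(vault kv get -field=password secret/fleet/eu-west-1)"

flyway -environment=eu-west-1 migrate
```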
Per-target overrides for non-environment settings
Environment-level keys like url, schemas, user, and password sit directly on [environments.<name>]. Anything that normally lives under [flyway] — locations, placeholders, validation rules — must be overridden under the environment's flyway sub-namespace, [environments.<name>.flyway]. The migrations-based tutorial shows a worked example for the locations case.
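As a sketch of the shape, for a target that needs an extra migrations folder (the environment name and extra location are illustrative):

```toml
[environments.eu-west-1]
url = "jdbc:postgresql://eu-west-1.db.example.com:5432/app"
schemas = ["app"]

# [flyway]-level keys overridden for this target only live under the
# environment's flyway sub-namespace, not on the environment itself
[environments.eu-west-1.flyway]
locations = ["filesystem:migrations", "filesystem:migrations-eu-west-1"]
```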
Handling deliberate differences between targets
For genuine schema differences between targets — a regional table on one tenant, an object that exists only in one region — the override mechanism depends on the flow:
- Migrations-based: override locations per environment in flyway.toml to point at a different combination of migration directories. See the migrations tutorial for a worked example.
- State-based: create a filter file per target and override the filter file used on the command line. The orchestrator maps each target to its filter and threads the override into the relevant Flyway invocations.
Resist the urge to introduce per-target variants for cosmetic differences — every variant multiplies the test matrix and the diagnostic surface when a rollout misbehaves.
Driving the fleet
Flyway is invoked once per target database. There is no built-in "deploy to many" command; you wrap Flyway in an orchestrator — a shell script, a CI/CD pipeline matrix job, or a small program.
Sequential rollout is the safe default. A failure on the first node halts the rollout before the rest of the fleet diverges. For a small fleet, a foreach loop over your environment names is enough; the tutorials show a worked example.
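A minimal sequential driver can look like this (the target list is illustrative; the names must match the [environments.*] entries in flyway.toml):

```bash
#!/usr/bin/env bash
set -euo pipefail  # any failing command halts the whole rollout

targets=("eu-west-1" "us-east-1" "ap-southeast-2")

for target in "${targets[@]}"; do
  echo "==> ${target}"
  flyway -environment="${target}" validate   # pre-flight history check
  flyway -environment="${target}" migrate    # apply pending changes
done
```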
Parallel rollout suits larger fleets. Use your CI/CD system's matrix or parallel-job feature so each job runs against one target:
- GitHub Actions — matrix strategy
- Azure DevOps — strategy: matrix
- GitLab CI/CD — parallel:matrix
- Jenkins — declarative pipeline matrix
- CircleCI — matrix jobs
- Bitbucket Pipelines — parallel steps
- TeamCity — matrix build
- Harness — matrix looping strategy
- Octopus Deploy — tenanted deployments
Configure the matrix to fail-fast unless you have a strong reason not to.
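If you are driving the fleet from a shell script rather than a CI system, a rough stand-in for a parallel matrix job is xargs -P. A sketch, with the caveat that xargs does not cancel already-running jobs on failure (it only exits non-zero at the end), which is one reason a real CI matrix with fail-fast is preferable for production:

```bash
# Run up to four targets at a time; one flyway invocation per environment name.
printf '%s\n' eu-west-1 us-east-1 ap-southeast-2 | \
  xargs -P 4 -I {} flyway -environment={} migrate
```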
In migrations-based deployments, Flyway acquires a database-level lock on the schema history table for the duration of any change it applies, so concurrent invocations against the same database are safe — only one will proceed. State-based deployments don't use the history table, so coordinate concurrent runs to the same target at the pipeline level (e.g. GitHub environments with serialised matrix execution). Across different databases, invocations run independently in both flows.
Proactive drift checks
Drift is the gap between what the project says should be on a database and what's actually there. In a single-database project drift is annoying; in a fleet it's the single most likely cause of a failed rollout. Catch it continuously, not just when you're about to deploy.
There are two distinct things to check, and they catch different failures:
- History drift (migrations-based only) — the schema history table disagrees with the project's recorded changes (a checksum changed, an entry is missing, entries appear out of order). flyway validate is the cheap, fast check for this and should be part of every pipeline run. State-based deployments don't use the history table, so this check doesn't apply.
- Schema drift (both flows) — the live database object definitions disagree with what the project's source of truth describes (a column was added by hand, an index was dropped during an incident, a stored procedure was patched in prod). Catch it with flyway check -drift. This matters in both flows, but is especially critical for state-based: the deployment script is regenerated from the live state every time, so any drift directly changes what would be applied.
What to run, and when
| Check | Command | What it catches | Run it… |
|---|---|---|---|
| History validation | flyway validate | Edited or missing entries, checksum mismatches, out-of-order applies | Every CI build; every pre-flight against every production target |
| Fleet version survey | flyway info | Targets stuck on an old version; unexpected pending changes on one node | Before every rollout; on a schedule (hourly or daily) |
| Schema drift | flyway check -drift | Hand-edited tables, hotfixed procedures, indexes added by a DBA | Nightly across the whole fleet; before every rollout |
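A nightly sweep can be a few lines of shell around those per-target commands. A sketch, with the caveat that exit-code behaviour (particularly whether check -drift fails the command when drift is found) should be confirmed for your Flyway version and edition:

```bash
#!/usr/bin/env bash
# Nightly fleet sweep: run each check against every target, keep going on
# failure, and report which targets need triage at the end.
targets=("eu-west-1" "us-east-1" "ap-southeast-2")
failed=()

for target in "${targets[@]}"; do
  flyway -environment="${target}" validate     || failed+=("${target}: validate")
  flyway -environment="${target}" info         || failed+=("${target}: info")
  flyway -environment="${target}" check -drift || failed+=("${target}: drift")
done

if [ "${#failed[@]}" -gt 0 ]; then
  printf 'Needs triage:\n'
  printf '  %s\n' "${failed[@]}"
  exit 1
fi
```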
Treat drift findings as alerts
A drift check is only useful if someone sees the result. Wire the output into the same alerting channel you'd use for a failed build:
- Fail the pipeline on any validate error.
- Post a summary to your team chat when the nightly drift job finds anything — including which target, which object, and the diff (see the sketch after this list).
- Keep the previous run's report so you can tell new drift from drift that's already been triaged.
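A minimal shape for the chat notification, assuming a generic incoming-webhook URL held in a secret. The ALERT_WEBHOOK_URL variable is hypothetical, and the assumption that check -drift exits non-zero when drift is found should be verified for your Flyway version:

```bash
# Alert team chat when the drift check for one target finds anything.
target="eu-west-1"   # one iteration of the nightly per-target loop
if ! flyway -environment="${target}" check -drift; then
  curl -fsS -H 'Content-Type: application/json' \
    -d "{\"text\": \"Schema drift detected on ${target} - see the nightly drift report for the diff.\"}" \
    "${ALERT_WEBHOOK_URL}"
fi
```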
When drift is found
The triage is the same regardless of deployment style:
- Stop the rollout to the affected target. Don't apply further changes until drift is resolved.
- Identify the change. Use a schema diff to see exactly which objects differ (see the sketch after this list). Check change-management logs and recent incident tickets for who touched it and why.
- Decide whether to keep or discard the change. If the change should live on, capture it into the source of truth and converge the rest of the fleet to match. If not, revert the target to match the source of truth.
- Re-run the full pre-flight before resuming the rollout.
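For the identification step, recent Flyway versions ship a diff command that can compare a live environment against the project's source of truth. Treat the exact parameters below as something to confirm for your version and edition:

```bash
# Compare the drifted target against the schema model to see exactly which
# objects differ; swap the source/target values to suit your flow.
flyway diff -diff.source=schemaModel -diff.target=eu-west-1
```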
The mechanics of the resolution differ by flow. The migrations-based tutorial covers flyway repair and corrective V__ migrations; the state-based tutorial covers re-syncing via the schema model.
For more detail on drift detection and resolution, see Checking production environments for drift.
Make drift hard to introduce
Detection is the safety net; the goal is to make drift rare in the first place.
- Remove direct production write access for everyone except a small break-glass group. Day-to-day changes go through the project and the pipeline.
- Treat any emergency hotfix as a debt to be paid the same day: capture the follow-up change into the project before the incident ticket is closed.
- Block manual edits to the schema history table in your access policy (migrations-based only — state-based doesn't read from the history table, but locking it down does no harm).
- Run the same pipeline against staging that you run against production, so staging stays a faithful preview rather than slowly drifting itself.
A safe rollout sequence
Roll out in waves rather than to the whole fleet at once.
- Pre-flight on every target. Before applying anything, run the full drift check sweep — flyway info, flyway validate, and flyway check -drift — against every production target (see Proactive drift checks). Fail the rollout if any target reports drift or an unexpected pending state. Don't paper over drift findings to meet a release window.
- Backup. Trigger your standard backup or snapshot mechanism on each target. For managed cloud databases, a point-in-time-recovery window is usually enough; for self-managed, take an explicit snapshot.
- Canary. Apply the change to a single, low-risk production target (often an internal tenant or the smallest region). Smoke-test the application against it.
- Wave rollout. Apply to the remaining targets in waves — for example, 10%, 50%, 100% — with a pause between waves long enough to surface problems via monitoring and customer reports.
- Post-flight. Run flyway info against every target after the rollout. Every target should report the same Current Version. Anything else indicates a partial rollout that needs investigating before the next change. A verification sketch follows this list.
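One way to script that post-flight sweep, assuming your Flyway version supports JSON output for info and exposes the current version as a schemaVersion field (confirm the field name for your version):

```bash
# Collect the version each target reports and fail the pipeline on skew.
targets=("eu-west-1" "us-east-1" "ap-southeast-2")
versions=$(for target in "${targets[@]}"; do
  flyway -environment="${target}" info -outputType=json | jq -r '.schemaVersion'
done | sort -u)

if [ "$(printf '%s\n' "${versions}" | wc -l)" -ne 1 ]; then
  echo "Version skew across fleet: ${versions}" >&2
  exit 1
fi
```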
Wrap all of this in your CI/CD pipeline so the sequence is reproducible and the artefacts (logs, info output) are retained.
Handling failures
A failure on one target while others have already succeeded is the hardest case to recover from. Plan for it before it happens.
- Prefer transactional changes. Some databases (PostgreSQL, SQL Server) can run a change inside a transaction, so a failure leaves the database at the previous version. MySQL, MariaDB, and Oracle cannot roll back DDL — design changes there to be small and individually recoverable.
- Have a rollback path for any non-trivial change. Test it in a non-production environment before you need it. The mechanics differ by flow — paired U__ undo scripts for migrations-based; reverting the schema model and re-generating for state-based — so see the tutorial for your flow. For more information on rollbacks, see Implementing a roll back strategy.
- Decide your stop rule up front. If a change fails on target 3 of 20, do you halt and roll back targets 1 and 2, or hold the rollout and fix forward? The right answer depends on the change; agree it before the release window, not during the incident.
- Capture single-target drift into the source of truth. If a target ends up out of sync because of an emergency hotfix applied manually, pull the change back into the project and converge the fleet — don't let one target stay an exception. The tutorial for your flow covers the specific commands.
Backward-compatible changes (expand/contract)
A fleet rollout takes longer than a single deployment, and during the rollout some targets will be on the new schema while others are still on the old. The application servers in front of those databases may also be on mixed versions. Design every change to tolerate that mix.
The expand/contract pattern splits a breaking change into a sequence of non-breaking ones, each released independently:
- Expand. Add the new structure (column, table, index) without removing the old. Make it nullable or default-valued so existing writes still work.
- Migrate behaviour. Update application code to write to both old and new structures, then to read from the new one. Backfill historical data with a one-off change.
- Contract. Once every application instance is on the new code path, release a change that removes the old structure.
Renames, type changes, and NOT NULL additions on existing columns should all follow this pattern. Avoid scheduling expand and contract in the same release.
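In migration terms, a column rename done this way spans three separate releases. A sketch in PostgreSQL-flavoured SQL, with hypothetical table, column, and version names:

```sql
-- V012 (expand): add the new column alongside the old one; nullable, so
-- application instances still writing only the old column are unaffected
ALTER TABLE customer ADD COLUMN full_name text;

-- V013 (backfill): released once the application dual-writes both columns
UPDATE customer SET full_name = display_name WHERE full_name IS NULL;

-- V014 (contract): released only after every application instance reads
-- and writes full_name exclusively
ALTER TABLE customer DROP COLUMN display_name;
```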
CI/CD integration
A production fleet rollout should be a pipeline run, not an operator typing commands. A workable pipeline looks like:
- Build stage. Validate the project. Apply the change to an ephemeral database to catch syntax errors and dependency mistakes before any real target is touched.
- Test stage. Apply the change to a database seeded with production-like data. Run application integration tests against the result.
- Staging stage. Apply to a staging environment that mirrors production topology — same database engine, same version, same extensions. Run a full regression and any performance checks for large-object changes.
- Production stage. Gated by manual approval. Runs the canary + wave rollout described above, with explicit approval between waves for changes you've flagged as risky.
Store the flyway info output from each target as a pipeline artefact. It's the cheapest audit trail you have when a question comes up six months later. For state-based deployments, also store every generated deployment script — that's the only record of what actually ran against each target.
Pre-deployment checklist
Before triggering a fleet rollout:
- The change has been applied successfully in staging against production-like data.
- flyway validate passes on every production target.
- flyway info on every production target shows the same Current Version and the same set of pending changes.
- The most recent schema drift check is clean on every target, or any drift found has been resolved and re-verified.
- Backups or PITR coverage are confirmed for every target.
- A rollback path has been tested for any change that isn't trivially reversible by a forward fix.
- The change is backward-compatible with the application version that will still be running during the rollout.
- The rollout order (canary, waves) and the stop rule are written down.
- The pipeline that will run the rollout is the same one that ran successfully against staging.
Tutorials
- Fleet rollout with migrations-based (hybrid) deployment — end-to-end walkthrough using versioned V__ migrations as the deploy artefact.
- Fleet rollout with state-based deployment — end-to-end walkthrough using the schema model as the deploy source of truth, with per-target deployment scripts generated at deploy time.
Related reading
- Baselining your downstream environments — the one-off process for bringing an existing fleet under Flyway management, including out-of-sync targets.
- Using Flyway as a multi-database migration system — orchestrating Flyway across heterogeneous targets.
- Supporting monolithic database deployment with Flyway — sequencing federated deployments and undo strategy.
- Handling multiple schemas in the same database with Flyway — applies directly to per-tenant schema fleets.
- Exploring the Flyway schema history table — what the history table records and how to interpret drift.