SQL Change Automation 4

Rollback guidance

Assessing rollback complexity

In his blog post on this subject, Paul Stovell uses the example of a column rename to illustrate the potential complexity you might face with handling database rollbacks.

Certainly, a tool can help by generating a down script that reverts the column to its original name, alongside the up script that performed the rename in the first place. However, this adds another software asset that must be peer reviewed and tested prior to Production deployment.
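For instance, a minimal sketch of such an up/down pair might look like this (the dbo.Customer table and its column names are hypothetical):

-- Up script: perform the rename (hypothetical table and column names)
EXEC sp_rename 'dbo.Customer.Surname', 'LastName', 'COLUMN';

-- Down script: revert the column to its original name on rollback
EXEC sp_rename 'dbo.Customer.LastName', 'Surname', 'COLUMN';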

Even presuming that your team has the capacity to test both the up and down scenarios for all of your migrations, imagine a migration that drops a column during deployment. If the table is very large, the deployment could take hours, because the up script would first need to take a copy of the data so that the down script could recover it in the event of a rollback.
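As a rough sketch of what that implies (again with hypothetical object names), the up migration copies the column into a holding table before dropping it, so that the down migration can put the data back:

-- Up script: preserve the column's data in a holding table, then drop it
SELECT CustomerId, LegacyCode
INTO dbo.Customer_LegacyCode_Backup
FROM dbo.Customer;
GO
ALTER TABLE dbo.Customer DROP COLUMN LegacyCode;
GO

-- Down script: re-add the column and restore its data from the holding table
ALTER TABLE dbo.Customer ADD LegacyCode nvarchar(20) NULL;
GO
UPDATE c
SET c.LegacyCode = b.LegacyCode
FROM dbo.Customer AS c
JOIN dbo.Customer_LegacyCode_Backup AS b
    ON b.CustomerId = c.CustomerId;
GO
DROP TABLE dbo.Customer_LegacyCode_Backup;

On a very large table, that initial copy is what turns an otherwise quick column drop into an hours-long deployment.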

And if you do decide to roll back, what if you only want to revert a subset of the deployed changes? Will you know in advance how to organize your migrations so that individual changes can be cherry-picked for removal, without getting tripped up by inter-object dependencies?

And how do you close the loop in version control to make sure your database doesn't drift from your trunk baseline?

The questions seem endless.

A rollback alternative: focus on moving forward

When planning contingencies for your deployments, and keeping in mind the broader goal of maintaining compatibility between application and database, perhaps a better question to ask is: how do I release (or roll back) new versions of my app while mitigating the risk of data loss?

Rather than investing time and energy into rollback planning, an alternative is to follow an approach that keeps you moving forward, like the deprecating schema with transition period technique from Pramod Sadalage and Peter Schuh (2002).

Figure: deprecating a schema with a transition period. Source: Scott W. Ambler & Associates

Michelle Steel summarizes the benefits of this technique nicely:

Instead of having to manage a load of backward migration scripts in addition to the forward ones, you have a transition period where both schemas exist together. At some defined point in the future, the old schema is deprecated. This gives time to bed in all applications.

(Just a side note: when we talk about a schema here, we don't necessarily mean the entire schema, for example, [dbo]. It can simply mean preserving the structural veneer of the specific objects included in the migration.)
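To make that concrete, here is a minimal sketch of staging a column rename with this technique, using hypothetical object names and assuming that, during the transition period, applications continue to write to the deprecated Surname column while reads gradually move to the new LastName column:

-- Release N: introduce the new column alongside the deprecated one
ALTER TABLE dbo.Customer ADD LastName nvarchar(50) NULL;
GO
UPDATE dbo.Customer SET LastName = Surname;
GO
-- One-way sync during the transition period: writes still target Surname,
-- so keep the new column populated from it.
-- (Assumes RECURSIVE_TRIGGERS is OFF, the default, so the trigger does not re-fire itself.)
CREATE TRIGGER dbo.trg_Customer_SyncLastName
ON dbo.Customer
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE c
    SET c.LastName = i.Surname
    FROM dbo.Customer AS c
    JOIN inserted AS i ON i.CustomerId = c.CustomerId;
END;
GO

-- Release N+1, after the transition period: remove the deprecated objects
-- DROP TRIGGER dbo.trg_Customer_SyncLastName;
-- ALTER TABLE dbo.Customer DROP COLUMN Surname;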

However, as Michelle rightly points out, it does take a certain level of commitment from a project/team perspective to ensure that deprecated objects are removed in subsequent releases.

Additionally, techniques that employ views or triggers to enforce system behavior carry operational overhead of their own.

Create a deployment pipeline

In the spirit of continuous delivery, consider putting in place a process that encourages frequent, incremental change to your systems, rather than falling into the trap of doing big bang releases.

Big releases tend to heighten the anxiety around deployments, prompting calls for heavy-handed rollback plans.

A good way to mitigate the risk of deployment failure is to test your release in full via one or more intermediate environments, for example:

Dev → Test → Staging → Production

Tooling provides one piece of the puzzle, but only a team working closely together can make a process work.

Keep focused on removing the friction in your processes so that moving forward becomes the default option in any of your contingency plans.

