T-SQL Tuesday #147 – SQL Server Upgrade Strategies


Thanks to Steve Jones for hosting February's T-SQL Tuesday! This month's T-SQL Tuesday is all about SQL Server upgrades.

Thank you, Kevin and Deepthi, for the reminder. I had forgotten about T-SQL Tuesday this time, but hopefully the below, although written in a rush, will make sense and may even help you!


A SQL Server upgrade is like getting a new car. You ditch your old rusty one that is out of warranty and keeps breaking down for a shiny new one. The new server, like the new car, should ensure smooth operation for years to come.

All the things that we had in the old glove box will likely be fine in the new glove box, but there could be a few that don't quite fit. By analogy, some of the things (databases and data) we had on our old SQL Server will be fine, but some will not. How do we find those that will not be fine?

The Risks of SQL Server Upgrades

There are a few significant risks associated with upgrades:

Behavioural and breaking changes

The database may behave differently because of underlying changes to the SQL Server engine. The code may start returning different results, or, if there have been changes to the optimiser, queries may start generating completely different execution plans with degraded performance. This is quite difficult to capture because nothing fails outright; the code simply returns different data or runs much slower.

Discontinued features

These are features that were deprecated in the past and have now been removed. If we have a piece of code that relies on such a feature, it will stop working immediately.

Deprecated features

Deprecated means the feature is no longer being actively developed and, whilst still available in the current version, will be removed in a future version. These features don't immediately break our code, but we should not be deploying production applications that rely on them.
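A quick way to see whether anything is already leaning on deprecated features is the "Deprecated Features" performance counter object, which counts every hit since the last instance restart. A minimal sketch (the column aliases are mine):

```sql
-- List deprecated features that have actually been used since the last
-- instance restart; instance_name identifies the feature.
SELECT instance_name AS deprecated_feature,
       cntr_value    AS usage_count
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0
ORDER BY cntr_value DESC;
```

Anything that shows up here is worth tracking down before the new version removes it for good.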

Compatibility Levels

Features are linked to compatibility levels. We can "trick" the new SQL Server into behaving like the old one by setting the old compatibility level. It's like buying a new car but putting the old engine in it. Bear in mind that it is a database-level setting, so it does not cover behaviour changes made at the server level.

Using an old compatibility level is a bit like sweeping dirt under the carpet. If you are migrating to the new version, you should make sure your application is compatible. In a few years' time, your old compatibility level may no longer be supported, and your technical debt will have built up. You will then face a big development task to bring your database up to speed.
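If you do fall back on the old compatibility level as a temporary measure, checking and setting it is straightforward; [MyDatabase] and the level 150 (SQL Server 2019) below are placeholders for your own database and source version:

```sql
-- See where every database on the instance currently sits.
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;

-- Pin one database to the old behaviour while you work through the issues.
-- 150 = SQL Server 2019; substitute the level of your source instance.
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 150;
```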

SQL Server Upgrade Strategies

Ok, let’s focus on the subject. We have talked about the risks, so let’s talk about how to avoid them.

In-place upgrades

This is simply running the new installer on the old server and upgrading the old SQL Server in place. It is for people who don't get enough adrenaline out of bungee jumping. In fact, it is like bungee jumping, but without the bungee.

There is no way back, and you will be testing after you have upgraded. What if you find a problem or a bunch of breaking changes? You can keep the old compatibility level to start with, but you will then face the headache of figuring it all out whilst the production database is down.

That’s a no-no for me because of the risks involved.

Upgrade by migration

The safest way is to provision a new instance of SQL Server running the latest version and migrate the database to it. If something goes wrong, we can go back to the old server and the drawing board.

This method also allows for near-zero-downtime migration, as we can set up replication such as log shipping, Availability Groups or, for the bungee jumpers, actual transactional replication. We can have both databases, old and new, running in parallel with the same data for testing.

The way you upgrade SQL Server is dictated by three things:

1. Downtime Requirements

This is how long your databases can be down for. Some companies will require no more than 1 minute, 5 minutes, maybe 1 hour. Others will be happy with one day or even five days.

For those that require only a few minutes, the best migration path is to set up data replication to keep the new instance up to date with the old one. Once the business has switched over to the new server, you can disable the replication and decommission the old server.
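Log shipping is the simplest of those to sketch. Assuming a backup share both servers can reach (the UNC path and database name below are placeholders of mine), the whole dance looks roughly like this, with the downtime shrinking to the final tail-log backup and restore:

```sql
-- On the OLD server: full backup while the database stays online.
BACKUP DATABASE [MyDatabase]
    TO DISK = N'\\backupshare\MyDatabase_full.bak' WITH COMPRESSION;

-- On the NEW server: restore WITH NORECOVERY so log backups can still be applied.
RESTORE DATABASE [MyDatabase]
    FROM DISK = N'\\backupshare\MyDatabase_full.bak' WITH NORECOVERY;

-- Keep shipping log backups (BACKUP LOG ... / RESTORE LOG ... WITH NORECOVERY)
-- until you are ready to cut over.

-- At cutover, on the OLD server: the tail-log backup leaves the source in
-- RESTORING state, so nobody can keep writing to it.
BACKUP LOG [MyDatabase]
    TO DISK = N'\\backupshare\MyDatabase_tail.trn' WITH NORECOVERY;

-- On the NEW server: apply the tail and open the database for business.
RESTORE LOG [MyDatabase]
    FROM DISK = N'\\backupshare\MyDatabase_tail.trn' WITH RECOVERY;
```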

2. Your Environment

If you only have one physical server and absolutely no way of provisioning a second SQL Server, you will have no choice but to do an in-place upgrade. However, if that's the case, you probably also have a long RTO, which means you can be down for longer and can therefore restore the whole server from a backup should something go wrong.

3. Testing Approach

This is how you make sure that the upgraded databases will work in the new environment.

Testing is the key

No matter how we do the upgrade, we need a solid way of testing it. Sadly, simply deploying the database and “running” the application is not good enough. We need to make sure that every piece of code works and performs well. This is how I like to test upgrade compatibility:

1. Compatibility testing

First of all, make sure that the code in the database (procedures and functions) can run on the new version. I like to use Visual Studio for that: generate a DacPac from the existing database, load it into a Visual Studio database project, change the deployment target to your new SQL Server version and let Visual Studio run its analysis on the database schema. It will highlight anything that will not work in the new world. As the analysis happens offline, against the generated DacPac, there is no impact on the production database.

The next step is to run the Data Migration Assistant (DMA) and generate a compatibility report.

Technically, you should only need one of them, either Visual Studio or DMA, but I like to use both.

If you find blockers, you have to sort those out before moving on to the next phase.

2. Workload testing

The first test will only capture the code that sits in the database. It will not capture dynamically generated SQL, such as that produced by an ORM. We need to analyse the actual workload to catch these, as well as any data-related problems.

We can use the Database Experimentation Assistant (DEA) for that, or the fantastic WorkloadTools by Spaghettidba.

These tools will first create a baseline of the workload (so that, if you get any errors, we can tell whether they are pre-existing issues or issues caused by the new version). They will then replay this workload on the new server, capturing any errors and problems.

I like to run the analysis for a few days to cover a good, representative sample of the usual business workload.
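Under the hood, these tools typically capture the workload with Extended Events (or SQL Trace on older versions). Purely as an illustration of what the capture side looks like, and not a replacement for DEA or WorkloadTools, a minimal hand-rolled session might be (the session name and file path are my own placeholders):

```sql
-- Capture completed ad-hoc batches and RPC calls (which is what ORMs send).
CREATE EVENT SESSION [workload_capture] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed,
    ADD EVENT sqlserver.rpc_completed
    ADD TARGET package0.event_file
        (SET filename = N'D:\xe\workload_capture.xel', max_file_size = (512))
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION [workload_capture] ON SERVER STATE = START;

-- ...let it run for a representative period, then:
ALTER EVENT SESSION [workload_capture] ON SERVER STATE = STOP;
```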

3. Performance Baseline

Before you do any migration, you need a performance baseline so that you can filter out issues caused by changes to the optimiser or the SQL engine in general. The free WorkloadTools can do this for you, but you can also use SQLWATCH for this purpose.
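If you want a quick, tool-free snapshot to complement those, you can persist the plan-cache statistics into a table before the migration and run the same query into a second table afterwards to compare; the table name below is a placeholder of mine:

```sql
-- Snapshot the 50 most CPU-hungry statements into a baseline table.
SELECT TOP (50)
       qs.query_hash,
       qs.execution_count,
       qs.total_worker_time,
       qs.total_elapsed_time,
       qs.total_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
INTO dbo.perf_baseline_before   -- placeholder table name
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Bear in mind the plan cache resets on restart, so take the snapshot after the instance has been running a typical workload for a while.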

4. Environment Configuration

Get a snapshot of the configuration of both the old and the new environments, and make sure configuration differences do not influence any of the above scenarios.
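The system catalogs make this easy to diff. Run the same queries on both instances and compare the output:

```sql
-- Server-level settings.
SELECT name, value_in_use
FROM sys.configurations
ORDER BY name;

-- Database-scoped settings such as MAXDOP or legacy cardinality estimation
-- (run inside each migrated database; SQL Server 2016 and later).
SELECT name, value
FROM sys.database_scoped_configurations
ORDER BY name;
```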

Once you are satisfied with all of the above, you can prepare for the migration and switch over to the new server.

Going forward

Ideally, you want to make sure all your databases are ready for the next version when it comes out. I recommend using Visual Studio for database development; this way you can let VS flag any issues right away. Contrary to common opinion, moving to the cloud will not solve any of that. Sure, PaaS databases such as Azure SQL can do automated patching and upgrades, but if you write code that Microsoft later decides to break, they will simply let you know about it and disable it in the future (breaking your application), or lock you into a deprecated compatibility level.

Patching is not the same as upgrading.

Patching is like getting new tyres for your old car, but an upgrade is a brand new car.
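If you are ever unsure which one you are looking at, SERVERPROPERTY will tell you both; note that ProductUpdateLevel may return NULL on older builds:

```sql
-- The version is the car; the patch level is the tyres.
SELECT SERVERPROPERTY('ProductVersion')     AS product_version,  -- e.g. 15.0.x = SQL Server 2019
       SERVERPROPERTY('ProductLevel')       AS product_level,    -- RTM or SP level
       SERVERPROPERTY('ProductUpdateLevel') AS update_level,     -- e.g. CU16
       SERVERPROPERTY('Edition')            AS edition;
```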
