Delayed Durability - What is it?
Published: 16th March 2015
This is one of my favourite new features of SQL Server 2014 as it can make an absolutely huge difference to your servers when performing inserts, updates, and deletes.

And best of all, this is available in ALL Editions rather than being Enterprise-only.

There are a couple of downsides though, so this isn’t something you can blindly enable in every environment. However, I’m currently working in Data Warehousing, and that makes it superb.

So what is Delayed Durability?

Well, in a nutshell, SQL Server is allowing you to relax the ACID properties of the database by changing the way that Durability works.

SQL Server ensures transaction durability by using write-ahead logging: when you perform an insert, for example, SQL Server must harden the corresponding log record to disk before the commit is acknowledged and it can proceed to the next transaction.

Therefore, if you were streaming 1 million rows into SQL Server and performing inserts one at a time, you could quickly find that your system slows down, because SQL Server has to write each insert’s log record to disk, wait for that write to complete, and then move on to the next insert.

So how does Delayed Durability help? Well, instead of hardening each log record to disk, SQL Server simply writes it to an in-memory log buffer and can instantly return a “carry on” to the caller. This is obviously MUCH faster and, in turn, your inserts suddenly speed up as a result.

Then, every so often, SQL Server will flush these log records to disk. This means that, instead of 1 million log records written individually to disk as per my contrived example above, SQL would asynchronously perform just a few writes, each containing batches of log records.

Hopefully that all made sense.

Now for an example…

We’ll create a very simple and thin table and then fire 100,000 individual inserts at it. Whilst this happens we can watch Perfmon to see the Log Flushes per Second against the Transactions per Second.

Halfway through the test we’ll turn on Delayed Durability and see what happens:

use AdventureWorks2012

-- Ensure we're in 2014 Compatibility
alter database AdventureWorks2012 set compatibility_level = 120

-- Make sure Delayed Durability is off
alter database AdventureWorks2012 set delayed_durability = disabled

if object_id('testDelayedDurability') is not null drop table testDelayedDurability

create table testDelayedDurability
(
    id int identity,
    myValue char(10) default('a'),
    myGUID uniqueidentifier default(newid()),
    constraint pk_testDelayedDurability primary key clustered(id)
)
go

insert into testDelayedDurability default values
go 100000
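If you prefer, the same counters can be polled from T-SQL instead of Perfmon. A rough sketch using the performance counter DMV (note that 'Log Flushes/sec' and 'Transactions/sec' are per-second counters, so you need to sample twice and compare values to derive a rate):

```sql
-- Poll the log flush and transaction counters for our database
-- while the insert loop runs in the other window.
select counter_name,
       cntr_value
from sys.dm_os_performance_counters
where object_name like '%:Databases%'
  and instance_name = 'AdventureWorks2012'
  and counter_name in ('Log Flushes/sec', 'Transactions/sec')
```

Run this a few times during the test and you should see the relationship between the two counters flip once Delayed Durability is forced.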

Now we’ll use another window to turn on Delayed Durability:

alter database AdventureWorks2012 set delayed_durability = forced

So what was the result?

You can clearly see that my transactions were constrained by my log flushes. When I turned on Delayed Durability, my log flushes dropped, as log records were written to memory and then flushed out in batches, and my transactions per second rose accordingly.

Note that this is just my single-disk test machine and a single inserting application… I’ve seen much more dramatic increases in transaction throughput on production servers with good RAID and multiple parallel connections, so I urge you to try this yourself.

To activate this we have the following options:

-- Turn Delayed Durability Off
alter database AdventureWorks2012 set delayed_durability = disabled

-- Force every transaction to use Delayed Durability
alter database AdventureWorks2012 set delayed_durability = forced

-- Allow the use of Delayed Durability at a statement level
alter database AdventureWorks2012 set delayed_durability = allowed

-- Activate at the statement level (combined with ALLOWED above)
-- (table and column names here are just placeholders)
begin transaction
   insert into myTable (myValue)
   select 'myValues'
   from anotherTable
commit transaction with (delayed_durability = on)
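If you lose track of which databases have this enabled, you can check the current setting via sys.databases:

```sql
-- delayed_durability_desc returns DISABLED, ALLOWED, or FORCED
select name, delayed_durability_desc
from sys.databases
where name = 'AdventureWorks2012'
```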

So what are the downsides?

Well hopefully that’s obvious from the explanation… SQL Server is not hardening every transaction to disk.

Therefore, let’s say that you’re running an insert with Delayed Durability ON and suddenly your server goes down… what happens?

Well, you can only recover what SQL Server can find recorded as committed in the transaction log. Therefore, any inserts, updates, and deletes whose log records were only in memory at the time your server crashed are lost.
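It’s worth knowing that you can force any outstanding delayed durable log records to disk yourself, for example before a planned shutdown or failover, using the sys.sp_flush_log procedure:

```sql
-- Hardens all outstanding delayed durable log records
-- for the current database to disk
exec sys.sp_flush_log
```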

Because of this I would be VERY unlikely to ever recommend this in an OLTP environment, but the reason I like it so much is that, as mentioned above, I’m working in a Data Warehouse environment at the moment, and that means ETL.

Within the current ETL we have a lot of data being streamed in, manipulated, and pumped out again… BUT, and this is the key part here, it is ALL re-runnable, and therefore I really don’t mind if data loss occurs. That’s the bit you NEED to consider: if you have re-runnable datasets or you can afford data loss then this is a superb feature, but if that doesn’t apply to you then please don’t turn this on or you could end up needing a new job!!!