Shrink Log Without Breaking a Backup Chain
Published: 10th December 2014
This is something I see asked all over the internet, and it’s also something for which there are a LOT of incorrect responses, dodgy solutions, and poor advice (including on the MSDN website). Therefore, as I’ve had to resize a few transaction logs recently, I thought I’d share the code and process I’ve used.

Let’s presume that your database usage has changed and your transaction log has grown and become fragmented. You now want to shrink the log to remove the fragmentation and then resize it to accommodate future use.

So how can we do this without breaking the backup chain?

Well, if we’re going to demo this from scratch we’ll need a demo database (purposely created with poor log settings and growth)…

create database testLogShrink on primary
(
    name = 'testLogShrinkData',
    filename = 'D:\Data\testLogShrink.mdf',
    size = 100MB,
    maxsize = unlimited,
    filegrowth = 100MB
)
log on
(
    name = 'testLogShrinkLog',
    filename = 'E:\Logs\testLogShrinkLog.ldf',
    size = 1MB,
    maxsize = unlimited,
    filegrowth = 1MB
)
go


And we need to place this into the FULL recovery model…

alter database testLogShrink set recovery full
go


We need to ensure the database is actually in FULL recovery, and not still operating in pseudo-simple mode, by performing a full backup…

backup database testLogShrink to disk = 'Z:\backups\testLogShrink.bak'
go
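
As a quick sanity check (not part of the process itself), you can confirm the database really has left pseudo-simple mode by looking at last_log_backup_lsn in sys.database_recovery_status; while it’s NULL the log is still auto-truncating and there’s no chain to break:

-- NULL last_log_backup_lsn = still pseudo-simple, no log chain yet
select db_name(database_id) dbName, last_log_backup_lsn
from sys.database_recovery_status
where database_id = db_id('testLogShrink')
go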


We’ll also take a log backup just to give us a chain we don’t want to break.

backup log testLogShrink to disk = 'Z:\backups\testLogShrink.trn'
go


Now we need to generate some transaction log data, forcing the log to grow and leaving us with excessive VLF numbers and fragmentation…

use testLogShrink
go

create table testTable
(
    id uniqueidentifier default(newID()),
    id1 uniqueidentifier default(newID()),
    id2 uniqueidentifier default(newID()),
    id3 uniqueidentifier default(newID()),
    id4 uniqueidentifier default(newID()),
    id5 uniqueidentifier default(newID()),
    id6 uniqueidentifier default(newID()),
    id7 uniqueidentifier default(newID()),
    id8 uniqueidentifier default(newID()),
    id9 uniqueidentifier default(newID()),
    id10 uniqueidentifier default(newID()),
    id11 uniqueidentifier default(newID())
)
go

set nocount on

insert into testTable default values
go 25000


This took about 30 seconds to complete on my test machine, but times may vary.

Anyway, once this is done we can have a look at the new file sizes in the database.

select db_name(m.database_id) dbName,
    convert(varchar(10), m.size/128) + 'MB' dbSize,
    case when m.is_percent_growth = 1 then convert(varchar(10), m.growth) + '%' else convert(varchar(10), m.growth/128) + 'MB' end dbGrowth,
    convert(varchar(10), m1.size/128) + 'MB' logSize,
    case when m1.is_percent_growth = 1 then convert(varchar(10), m1.growth) + '%' else convert(varchar(10), m1.growth/128) + 'MB' end logGrowth,
    recovery_model_desc
from sys.master_files m
join sys.master_files m1 on m.database_id = m1.database_id
join sys.databases d on m.database_id = d.database_id
where m.file_id != m1.file_id
and d.name = 'testLogShrink'
and m.file_id = 1
order by m.size desc
go


We can see immediately that the log has grown a good few times to get from 1MB to 26MB and therefore we should have some good fragmentation.

dbcc loginfo
go


Running the above against my database shows there are 104 VLFs, which is, for the database size we’re running, far too many.
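
As an aside, if you want that count as a single number rather than counting rows by eye, you can capture the dbcc loginfo output into a temp table (a quick sketch; the column list below matches SQL Server 2012/2014, as older versions don’t have the RecoveryUnitId column):

-- capture one row per VLF and count them
create table #vlfInfo
(
    recoveryUnitId int,
    fileId int,
    fileSize bigint,
    startOffset bigint,
    fSeqNo bigint,
    status bigint,
    parity bigint,
    createLSN numeric(38,0)
)

insert into #vlfInfo exec('dbcc loginfo(''testLogShrink'')')

select count(*) vlfCount from #vlfInfo

drop table #vlfInfo
go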

We now have our starting position. We have a database in the FULL recovery model with a log that we want to resize and defrag.

So let’s get started.

Firstly we want to take a log backup so that SQL Server clears the unused portion of the log.

backup log testLogShrink to disk = 'Z:\backups\testLogShrink1.trn'
go


Next we need to remove any empty space at the end of the log. This uses a shrink, but it doesn’t move any data around; it literally just removes the empty space at the end of the file and therefore does NOT break the log chain.

dbcc shrinkfile('testLogShrinkLog', truncateonly)
go


Once done we know that the very next transaction to run will cause the log to wrap around, starting at the beginning again.

Therefore we are going to run a transaction to achieve just that.

create table dbo.wombleTest(id int)
go
insert into dbo.wombleTest select 1
go
drop table dbo.wombleTest
go
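
If you want to see the wrap-around for yourself, run dbcc loginfo again at this point and look at the Status column: a status of 2 means the VLF is active, and after the wrap-around the active VLFs should now be sitting at the start of the file rather than the end.

dbcc loginfo
go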


Now that the log has wrapped around we take a log backup.

backup log testLogShrink to disk = 'Z:\backups\testLogShrink1.trn'
go


Once again SQL Server clears the unused portion of the log (which happens to be nearly all of it as we have literally just wrapped around back to the beginning of the log file).

Therefore we now shrink again and remove the unused space.

dbcc shrinkfile('testLogShrinkLog', truncateonly)
go


Let’s check on our file sizes and you’ll see that the log is now tiny.

select db_name(m.database_id) dbName,
    convert(varchar(10), m.size/128) + 'MB' dbSize,
    case when m.is_percent_growth = 1 then convert(varchar(10), m.growth) + '%' else convert(varchar(10), m.growth/128) + 'MB' end dbGrowth,
    convert(varchar(10), m1.size/128) + 'MB' logSize,
    case when m1.is_percent_growth = 1 then convert(varchar(10), m1.growth) + '%' else convert(varchar(10), m1.growth/128) + 'MB' end logGrowth,
    recovery_model_desc
from sys.master_files m
join sys.master_files m1 on m.database_id = m1.database_id
join sys.databases d on m.database_id = d.database_id
where m.file_id != m1.file_id
and d.name = 'testLogShrink'
and m.file_id = 1
order by m.size desc
go


All that’s left is to resize the log to our new preferred size (I’m going to go with 1000MB).

alter database testLogShrink modify file (name = 'testLogShrinkLog', size = 1000MB)
go


NB - This will take a few seconds as SQL Server has to zero-initialise the log (transaction log files can’t use instant file initialization)… therefore you may want to schedule this out of hours if you can’t accept the delay it will cause. It took 7 seconds on my test machine.
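
While we’re modifying the file it’s also worth fixing the deliberately terrible 1MB auto-growth we created at the start, otherwise any future growth will fragment the log all over again (256MB is just a suggestion; pick whatever suits your environment):

alter database testLogShrink modify file (name = 'testLogShrinkLog', filegrowth = 256MB)
go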

Quick look at our VLF count.

dbcc loginfo
go


We’re now down to just 12 VLFs, which is much better.

One final backup and we’re good to go. We now have a newly resized log file, minimal VLF fragmentation, and a complete backup chain.
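
For completeness, that final backup is just another log backup (the filename here simply follows my numbering pattern; use whatever your backup routine dictates):

backup log testLogShrink to disk = 'Z:\backups\testLogShrink2.trn'
go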

Additional note - If you’re running this in a live environment then you might find that the wrap-around-then-shrink doesn’t immediately work. This is usually down to long running transactions keeping the end of the log active and preventing the shrink. You may therefore need to run these steps a few times, pick a period in which there are no long running transactions, or use scheduled downtime.
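
A handy way to see exactly what’s holding the log hostage is log_reuse_wait_desc in sys.databases; anything other than NOTHING or LOG_BACKUP (ACTIVE_TRANSACTION, REPLICATION, and so on) needs dealing with before the shrink will succeed:

select name, log_reuse_wait_desc
from sys.databases
where name = 'testLogShrink'
go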

Either way, I still find this safer and more acceptable than any option which breaks the chain.
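
And if you’ve been following along on a test instance, don’t forget to tidy up afterwards:

use master
go
drop database testLogShrink
go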

Hope this helps.