What a Difference Caching Makes
Published: 7th November 2013
I was visited the other day by an irate developer who complained that he couldn’t get consistent results from his testing: the same queries were running in anywhere from 5 to 30 seconds, and he thought there was something wrong with SQL Server.

As it turns out, there was nothing wrong with SQL Server; it was simply down to caching. He was running his tests on a variety of servers, including a few with very fast storage but relatively little RAM… some servers could therefore keep his test data in cache, while others simply couldn’t.

He was sceptical and insisted that it must be something more than that, and that he couldn’t tune without consistent data, so I knocked up a simple demo for him…

This was easy enough to do using my version of AdventureWorks with enlarged Sales Order tables.

Firstly, I suggested turning off the results grid so that rendering it didn’t interfere with our timings…
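On top of that, if you want timings more precise than the status-bar clock, SET STATISTICS TIME will report CPU and elapsed time for each statement in the Messages tab (this is standard T-SQL, not specific to my setup):

```sql
-- Report CPU time and elapsed time for each statement in the Messages tab
set statistics time on;
```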

Next, I’m going to drastically reduce the available RAM on my test instance (do NOT do this in production) to better demonstrate the effect:

sp_configure 'max server memory (MB)', 1500
reconfigure
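Note that 'max server memory (MB)' is an advanced option, so if your instance doesn’t already show those you’ll need to expose them first, and remember to put the setting back once you’ve finished testing (2147483647 is the default, i.e. effectively unlimited):

```sql
-- 'max server memory (MB)' is an advanced option, so expose those first
sp_configure 'show advanced options', 1;
reconfigure;

-- ...and after testing, restore the default (2147483647 MB = effectively unlimited)
sp_configure 'max server memory (MB)', 2147483647;
reconfigure;
```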

Now I’ll wipe the plan cache and the buffer cache (data in memory) so that we start with a blank canvas (again, NOT in production please). Note the CHECKPOINT first: DBCC DROPCLEANBUFFERS only drops clean pages, so we write any dirty pages to disk beforehand:

use AdventureWorks2012

checkpoint
dbcc freeproccache
dbcc dropcleanbuffers

In my enlarged AdventureWorks database my largest table is just over 1GB in size, therefore simply selecting the entire table will bring the whole table into memory…
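If you want to check the size for yourself, sp_spaceused will report it (the enlarged table name comes from my modified copy of AdventureWorks; substitute your own table):

```sql
-- Report reserved, data and index space for the table
exec sp_spaceused 'Sales.SalesOrderDetailEnlarged';
```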

select *
from Sales.SalesOrderDetailEnlarged

As you can see, my Management Studio reported it completed in 9 seconds.

So now let’s run that exact same query again now that the whole table is in cache:

Immediately this knocks our query time down to just 6 seconds.
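If you’d like to verify how much of the database really is sitting in the buffer pool at this point, the sys.dm_os_buffer_descriptors DMV will tell you (data pages are 8KB each, hence the arithmetic):

```sql
-- Approximate MB of this database currently held in the buffer pool
select count(*) * 8 / 1024 as cached_mb
from sys.dm_os_buffer_descriptors
where database_id = db_id('AdventureWorks2012');
```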

Now let’s run a couple of other random queries just to pollute our cache a little:

select *
from Sales.SalesOrderHeaderEnlarged

select *
from Person.Person

Now we’ll re-run our original query:

8 seconds… this is because not ALL of the data from our large table was evicted from cache by the two other queries we ran, so the run wasn’t as slow as with an empty cache, but it was slower than with the entire table in memory.

As such, it’s pretty easy to see how much timings can vary when tuning very large queries spanning numerous tables and joins, especially on a busy, shared system.

I would always suggest testing on as isolated a system as you can, and wiping the buffer cache completely before each run, so that you are benchmarking the worst-case scenario for your query.
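Putting the pieces together, a minimal cold-cache test harness, assuming a non-production box, might look like this (CHECKPOINT first so that dirty pages can actually be dropped):

```sql
-- NOT for production: flush caches so every run starts cold
use AdventureWorks2012;

checkpoint;             -- write dirty pages to disk so they become droppable
dbcc dropcleanbuffers;  -- empty the buffer cache
dbcc freeproccache;     -- empty the plan cache

set statistics time on; -- report CPU/elapsed time for the query under test

select *
from Sales.SalesOrderDetailEnlarged;
```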