John McCormack DBA

SQL Server Databases and Cloud


John McCormack's blogs

Optimising a slow stored procedure

14th July 2020 By John McCormack 1 Comment


In this blog post, I describe optimising a slow stored procedure. It’s part of a series of posts by members of the SQL Server community in relation to T-SQL Tuesday. For this month’s T-SQL Tuesday, Kerry Tyler asks:

Tell me (us all, obviously) about something recently that broke or went wrong, and what it took to fix it. Of course, the intent here would be for this to be SQL Server-related, but it doesn’t have to be. We can all learn from something going wrong in infrastructure-land, or how there was a loophole in some business process that turned around and bit somebody’s arm. It doesn’t even have to be all that recent–maybe you’ve got a really good story about modem banks catching on fire and that’s how you found out the fire suppression system hadn’t been inspected in years. Just spitballin’ here. If you’ve got an incident whose resolution can help someone else avoid the same problem in the future or improve a policy as a preventative measure, let us hear about it.

The situation

I received a call out of hours that a key web page in the application was suffering from timeouts. It was a page for managing important finance data, and our team in the US was unable to work. I needed to work out what had changed and how to fix it.

Identifying the problem

All HTTP requests are logged, so I took a quick look at the logs to see if there were any problems. It stood out that one stored procedure in particular was timing out. I quickly ran a trace (sorry, XE fans) and found that one particular proc was running for 30 seconds every time (this is the application timeout value). I took the proc name and parameters, ran it from SSMS, and found that the procedure took 4 minutes to complete.
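For anyone who would rather skip the apology to XE fans, the same capture can be done with a lightweight Extended Events session. This is only a sketch; the session name and the 10-second duration threshold are my own choices, not what was used on the night.

[sql]
-- Capture completed RPC calls running 10 seconds or longer
CREATE EVENT SESSION [LongRunningProcs] ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration >= 10000000 -- duration is in microseconds
)
ADD TARGET package0.event_file (SET filename = N'LongRunningProcs');

ALTER EVENT SESSION [LongRunningProcs] ON SERVER STATE = START;
[/sql]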

How to fix

So why had this procedure suddenly gone bad? The fact is, it was poorly performing anyway, and finance users were already frustrated with it. My first thought was to recompile it quickly and see what happened. Like flicking a switch, it was back to around 20 seconds and the finance page would load again, albeit slowly. So, the issue here was that a bad execution plan had been cached and was being used for each subsequent execution.
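The quick fix itself can be as simple as marking the procedure for recompilation, so that a fresh plan is compiled on its next execution. The procedure name below is made up for illustration.

[sql]
-- Causes a new execution plan to be compiled the next time the proc runs
EXEC sp_recompile N'dbo.usp_GetFinanceData';
[/sql]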

This is, however, a very unsatisfactory fix. First of all, you are not stopping the problem from reoccurring. Secondly, you are only improving the situation from broken to slow. Hardly something for the CV.

The next morning, I took a look at the stored procedure code and could see it needed to be optimised. I ran the proc on a test system and collected the key metrics, such as logical reads and CPU time, by running [SQL]SET STATISTICS TIME, IO ON[/SQL]. To simplify the output, I always copy it into Richie Rump’s Statistics Parser.

This output showed me logical reads into the millions for multiple tables. Starting with the tables with the highest reads, I worked through them, looking at which columns were being queried and which predicates were being used. I used this information to design and test alternative indexes, each time remeasuring the output from SET STATISTICS TIME, IO ON. Once I had the correct indexes in place, I was able to submit a PR with the changes. When it went to production, it resulted in sub-second page loads, which made the finance team a lot happier.
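As an illustration of the kind of change involved, a covering index matching the queried columns and predicates might look like the one below. The table, column and index names are invented; the real ones can’t be shared.

[sql]
-- Supports equality searches on CustomerId with a range on InvoiceDate,
-- and covers the selected column to avoid key lookups
CREATE NONCLUSTERED INDEX IX_Invoice_CustomerId_InvoiceDate
ON dbo.Invoice (CustomerId, InvoiceDate)
INCLUDE (TotalDue);
[/sql]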

One other thing worth mentioning is that I used SentryOne Plan Explorer (it’s free). The query plan was one of those intimidating ones, with hundreds of nodes. When I look at these in SSMS, it’s sometimes difficult to know where to start. However, in Plan Explorer, there is an option to ‘Show Cumulative Costs’ which helps you see which branches of the plan can be minimised because they have little cumulative impact, rather than looking at the impact of each node within the plan. This makes reading the rest of the plan much easier because it gets a lot smaller.

SentryOne plan explorer cumulative costs

Whilst you’re here, you may find the other posts interesting.

How DBATools can help with performance tuning

A successful performance tuning project

Filed Under: front-page, Guides Tagged With: Performance tuning, t-sql, t-sql tuesday


A successful performance tuning project

5th June 2020 By John McCormack Leave a Comment

Performance tuning project

I’m coming to the end of what has been a successful performance tuning project for SQL Server. I wanted to share some of the things that made it a success.

Corporate buy in

The company had a goal to improve the page load times of a number of key pages within our application, to improve the user experience for our customers. They acknowledged that the database code, both indexes and stored procedures, needed optimisation, but so too did aspects of the application code. It was good to see this acknowledged, as I’ve been in many situations where the database takes all the blame.

The company approved a considerable amount of personnel resource to work solely on this stability and optimisation project. It included senior developers, testers and a project manager. I was brought in to look at the database performance. Whilst some business as usual (BAU) priorities did come in from time to time, a large core of the team was protected from this and allowed to get on with the work of making the system go faster, thus improving the customer experience.

Daily standups

We held daily standups where we covered what we had done since the last meeting, what we were working on, and whether anything was blocking our progress. These were kept short so as not to get in the way of the development work, but allowed everyone an overview of what the other team members were working on. Often, as a result of these, side conversations spun up and team members helped out others who were looking for a bit of assistance, or simply to bounce ideas around.

Collaboration

The team were willing to help each other. When pull requests (PRs) were submitted, these were swiftly approved where there were no objections, or challenged in a positive way which helped get the best overall result. When the API calls were showing as slow, but nothing was obvious on the SQL Server, we put our heads together and used the tools at our disposal to get to the root cause. This often included Azure App Insights, which I had not previously used, and which helped us get the end-to-end transaction details. We could pull out the SQL for any areas which were slow and work on making it perform better.

Measuring improvements

The Azure Instance class for the SQL Server had previously been doubled so there was no appetite to scale it up again. The hope was that we may eventually be able to scale back down after a period of stability.

The system previously had issues with blocking, high CPU and slow durations, so I wanted to reduce page reads, CPU and duration for all of the SQL calls I was working on. I wouldn’t submit a PR unless at least two of these metrics were improved. My main focus was on reducing the duration of calls, but I didn’t want to improve one thing and make others worse as a consequence. In my own tickets, I always made sure to document the before and after metrics from my standalone testing, so as to give confidence that the changes should be included in upcoming releases.

CPU graph showing performance over time.

We also used Apdex which is a standardised way of measuring application performance. It ranks page views on whether the user is satisfied, tolerating or frustrated. The more we move users out of the frustrated and tolerating groups, and in to satisfied, the higher the Apdex score will be. As our project moved through release cycles, we were able to see steady increases in our Apdex scores. Apdex also allowed us to identify what was hurting us most and create tickets based on this information.
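The Apdex score itself is simple to compute: responses at or under a target threshold T count as satisfied, responses between T and 4T count as tolerating (at half weight), and anything slower counts as frustrated (zero weight). A sketch against a hypothetical page view log follows; the table name and the 500 ms threshold are assumptions for illustration.

[sql]
-- Apdex = (satisfied + tolerating / 2) / total samples
DECLARE @T int = 500; -- target threshold in milliseconds

SELECT (SUM(CASE WHEN DurationMs <= @T THEN 1.0 ELSE 0 END)
      + SUM(CASE WHEN DurationMs > @T AND DurationMs <= 4 * @T THEN 0.5 ELSE 0 END))
      / COUNT(*) AS ApdexScore
FROM dbo.PageViews;
[/sql]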

Top Quality Load Test Environment

We had a top quality load test environment which used masked production backups for the databases. I set up the availability groups to match production, and the servers were all sized the same as production with the same internal settings, such as tempdb size, sp_configure settings and trace flags. We were able to replay the same tests over and over again using Gatling, and our testers made really useful reports available to help us analyse the performance of each hotfix. If it was a proven fix, it was promoted to a release branch; if it wasn’t, it was binned.

End Game

This intensity was kept up for almost 2 months and it was ultimately transformative for the business. Whilst there are still many further improvements that can be made, the specialised squad is being disbanded and team members are being reallocated to other squads. Performance should be a way of life now, rather than an afterthought or another performance tuning project.

We can be happy that we improved the Apdex scores, sped up a huge number of regularly used SQL transactions, and removed the large CPU peaks that dominated our core business hours.

If you enjoyed this, you may also enjoy some of these other posts.

  • https://johnmccormack.it/2020/05/how-dbatools-can-help-with-performance-tuning/
  • https://johnmccormack.it/2019/03/put-tempdb-files-on-d-drive-in-azure-iaas/

Filed Under: front-page, Guides, SQL Server Tagged With: Performance tuning, project, Scrum, sql, SQL Performance, SQL server

How DBATools can help with performance tuning

20th May 2020 By John McCormack 3 Comments


DBATools is well known in the SQL Server community for the ease with which it allows you to automate certain tasks, but did you know that DBATools can also help with performance tuning your SQL Server? As my job recently changed to have more of a performance tilt, I wanted to investigate which commands would be useful in helping me with performance tuning. It turned out there are quite a few.

Compression

Page and row compression is a hidden gem when it comes to performance tuning SQL Server. Simply compressing some large indexes can sometimes be enough on its own to get you over the line in terms of performance improvement. With compression, it shouldn’t be a guessing game. SQL Server has a built-in stored procedure to let you know whether a table or index would benefit from page, row or no compression.

[sql]
EXEC sp_estimate_data_compression_savings 'dbo', 'DB1', NULL, NULL, 'ROW';

EXEC sp_estimate_data_compression_savings 'dbo', 'DB1', NULL, NULL, 'PAGE';
[/sql]

But if you want to test a whole database or instance at a time, DBATools makes this a lot easier.
[shell]
Test-DbaDbCompression -SqlInstance localhost -Database WideWorldImporters | Select-Object TableName, IndexName, IndexId, PercentScan, PercentUpdate, RowEstimatePercentOriginal, PageEstimatePercentOriginal, CompressionTypeRecommendation, SizeCurrent, SizeRequested, PercentCompression | Out-GridView
[/shell]

Piping to Out-GridView is optional, but I love the way the output is tabulated and easy to read. I use this output to create a list of objects to be compressed. I then do some load testing before and after the changes and compare the results. This gives me the confidence to roll out the changes to production.
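Applying a recommendation from that list is then a one-line change per index. An illustrative example follows; the table and index names are made up.

[sql]
-- Rebuild one index with the recommended PAGE compression
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE);
[/sql]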

Hint: Having a way of replaying a consistent load and measuring the results before and after is essential when making wholesale performance tuning changes. If you don’t have a way of doing this, I recommend looking into WorkloadTools.

Test-DBACompression example output
Output of Test-DBADBCompression

Find unwanted code hints

When I took over a new environment, there was A LOT of blocking. Part of this was down to code that was in need of optimisation, but it was also due to a lot of hints peppered throughout the stored procedures that made blocking inevitable. I found several examples of REPEATABLE READ and SERIALIZABLE throughout the code base. When I asked the developers why we needed this level of isolation, no one thought we did; the hints had been added by developers who had since moved on.

So, what is the easiest way to find every example of REPEATABLE READ in your code base? DBATools, of course.

[shell]
Find-DbaStoredProcedure -SqlInstance SQL123 -Database WideWorldImporters -Pattern 'REPEATABLE READ' | Select-Object * | Out-GridView
[/shell]

Just like before, if you can get a replay running that simulates production, then you can measure whether your blocking and response times go down.

Find-DBAStoredProcedure example output
Output of Find-DBAStoredProcedure

Check disk speeds

Knowing that your disks are performing well gives you the confidence to go out and work on your code to seek performance improvements. But if you have slow disks, this can also be a bottleneck. Knowing you have a problem and rectifying it means you can see performance improvements before you have even looked at your indexes or stored procedures.

A common bottleneck is tempdb, so it’s always good to know that tempdb is on high performing disks. (If using Azure VMs, you can utilise the D drive for tempdb.)

[shell]
Test-DbaDiskSpeed -SqlInstance SQL123 -Database tempdb | Out-GridView
[/shell]

Test-DBADiskSpeed example output
Output of Test-DBADiskSpeed

Community tools

Easily install some of the best free community tools around which will help you evaluate and monitor your SQL Server.

[shell]

# Install the First Responder Kit from Brent Ozar Unlimited. Includes sp_Blitz, sp_BlitzCache and sp_BlitzIndex, which all help with performance tuning. See https://www.brentozar.com/ for more info.
Install-DbaFirstResponderKit -SqlInstance SQL123 -Database dba

# Installs sp_whoisactive by Adam Machanic, a priceless tool for evaluating what is running on your instance right now. See http://whoisactive.com/ for more info.
Install-DbaWhoIsActive -SqlInstance SQL123 -Database dba

# Install the free community monitoring tool SQLWatch. See https://sqlwatch.io/ for more info.
Install-DbaSqlWatch -SqlInstance SQL123

[/shell]

External Links

A full list of commands can be reviewed at https://dbatools.io/commands/.

  • First Responder Kit
  • sp_whoisactive
  • SQLWatch

Filed Under: front-page, Performance Tuning Tagged With: DBATools, Performance, Performance tuning, SQL server

When should I stripe SQL Server database backups?

23rd March 2020 By John McCormack Leave a Comment

What is a striped backup?

First of all, a definition of what we mean when we say stripe SQL Server database backups. When we talk about striping a backup, it simply means the overall backup is distributed across several files, often on different drives. Microsoft’s description.

What are the benefits?

  1. Time
    1. Writing to multiple files can save time. This is because the files can be written to the file system in parallel. This is useful if your backups are taking too long and you would like/need them to complete faster.
  2. Storage
    1. If a backup drive is getting full, you can split your backup across multiple drives, thus saving space on each drive. The total space used will be similar to keeping the backup in a single file, though you may see a bit more overall space taken up.

Striped vs Single File

It’s a balancing act. My advice is stick with the defaults until you have an issue or would like something to perform better. Then it’s a good time to experiment and see how you get on. As a rule, the bigger your backups become, the more useful you may find striping your database backups.

How do I stripe my backups?

We will discuss T-SQL and PowerShell. For PowerShell, I’m using the DBATools module. DBATools covers far more scenarios than Microsoft’s own default module for SQL Server and is being used and tested all day, every day. If you wish to run the PowerShell commands below, you will need to have DBATools installed. The T-SQL commands all run natively in SSMS.

T-SQL

-- Split file across two drives as not enough space on E:
BACKUP DATABASE StackOverFlow2010
TO DISK = N'E:\Backup\StackOverFlow2010\StackOverFlow2010_1.bak',
DISK = N'F:\Backup\StackOverFlow2010\StackOverFlow2010_2.bak'
WITH COMPRESSION;

T-SQL (Using Ola Hallengren’s DatabaseBackup procedure)

EXECUTE dbo.DatabaseBackup
@Databases = 'StackOverFlow2010',
@Directory = 'E:\Backup\StackOverFlow2010\,F:\Backup\StackOverFlow2010\',
@BackupType = 'FULL',
@Compress = 'Y',
@NumberOfFiles = 2

PowerShell (DBATools)

Backup-DbaDatabase -SqlInstance localhost -Database StackOverflow2010 -Path E:\Backup\StackOverflow2010\,F:\Backup\StackOverflow2010\ -CompressBackup

Restoring striped backups

To restore a striped backup, every file in the set must be available. It is therefore worth testing your backups and restore scripts every so often. A backup that cannot be restored is not really a backup.

T-SQL

RESTORE DATABASE StackOverFlow2010
FROM DISK = N'E:\Backup\StackOverFlow2010\StackOverFlow2010_1.bak',
DISK = N'F:\Backup\StackOverFlow2010\StackOverFlow2010_2.bak'

PowerShell (DBATools)

Restore-DbaDatabase -SqlInstance localhost -DatabaseName StackOverflow2010 -Path E:\Backup\StackOverflow2010\,F:\Backup\StackOverflow2010\ -WithReplace
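A lightweight check is RESTORE VERIFYONLY, which reads every stripe of the set and confirms the backup is complete and readable without performing a full restore. It is not a substitute for an occasional full test restore, but it will catch a missing or damaged stripe.

RESTORE VERIFYONLY
FROM DISK = N'E:\Backup\StackOverFlow2010\StackOverFlow2010_1.bak',
DISK = N'F:\Backup\StackOverFlow2010\StackOverFlow2010_2.bak';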

Filed Under: front-page, Guides, PowerShell, T-SQL


John McCormack · Copyright © 2025

 
