John McCormack DBA

SQL Server Databases and Cloud


Your SQL Server indexes can’t always be perfect

27th November 2020 By John McCormack 1 Comment

Perfection is not the aim

Your SQL Server indexes can't always be perfect. What I mean by that is you can't index for every query on a busy SQL Server. In this post, I'm going to describe just one table from a server I have been tuning recently. It had 26 indexes, and that was not unusual for this database. This most likely happened over time as a result of developers creating an index per query using the table, as well as following some perceived best practices (with the best of intentions).

Did you say 26 indexes?

Yes, one of the heavily used tables on this database had 26 indexes. It had one clustered index on an ID column, which is fairly standard. There were 25 further non clustered indexes, including an index per foreign key. A shared code base runs across several customers. Some customers use all the functionality, some only use parts of it, but they all get the same database schema. And all of those indexes.

Current usage and how to fix

Remove unused indexes

9 of the non clustered indexes on this table were read from less than once per day but were written to up to 18,000 times per day. The servers had been online for 180 days at the time of checking, and some of the index usage counts were in single or double figures. These needed to be dropped as they were not bringing anything to the party.

I got this info by running sp_blitzindex and comparing the index usage stats against the server uptime. (I took care to run on all production servers for all customers, including read only secondaries).
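If you don't have sp_blitzindex to hand, you can get a rough equivalent straight from the DMVs. This is a minimal sketch (the table name is a placeholder) that compares reads and writes per index since the last restart:

[sql]
-- Reads vs writes per index since the last SQL Server restart (placeholder table name).
SELECT  i.name AS index_name,
        ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) AS total_reads,
        ISNULL(us.user_updates, 0) AS total_writes,
        (SELECT sqlserver_start_time FROM sys.dm_os_sys_info) AS counting_since
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
        ON  us.object_id = i.object_id
        AND us.index_id = i.index_id
        AND us.database_id = DB_ID()
WHERE   i.object_id = OBJECT_ID(N'dbo.MyBusyTable') -- placeholder
ORDER BY total_reads;
[/sql]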

Combine similar indexes

This left 16 remaining non clustered indexes to review.

  • 9 of which had been used several million times with a maximum of 32 million seeks.
    • 5 out of those 9 appeared to be heavily overlapping other indexes.
      • I needed to see if these could be combined into one or two indexes.
    • 4 out of those 9 are fairly unique.
      • Those 4 will be retained.
  • 7 have been used hundreds of thousands of times and don’t appear to have much overlap.

This leaves us with 4 non clustered indexes being retained, 9 being dropped and 12 to be reviewed in more detail. This is the most difficult stage of the process, as I need to decide which indexes can be merged and which can be dropped. It's always best to be cautious with this part, because removing the wrong index could slow down an important process.

Of the 12 indexes that needed a more detailed look, I decided to drop 8, and create 3 new ones. A net reduction of 5 indexes for this stage. Multiple groups of indexes had the same keys but different include columns. Some of the include column ranges were very wide and included large data types such as large NVARCHARs. Replacement indexes used the same keys, in the same order, but had a more targeted range of include columns.
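As an illustration (the object names below are invented, not the client's schema), a pair of overlapping indexes with the same key but different wide INCLUDE lists can be replaced with a single, more targeted index:

[sql]
-- Two overlapping indexes: identical key, different wide INCLUDE lists.
DROP INDEX IX_Orders_CustomerId_WideIncludesA ON dbo.Orders;
DROP INDEX IX_Orders_CustomerId_WideIncludesB ON dbo.Orders;

-- One replacement: same key, same order, only the include columns queries actually need.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, OrderStatus);
[/sql]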

Perfection? No. Better? Yes.

In total, we have a net reduction of 14 non clustered indexes. That still leaves us with 11 non clustered indexes. Your SQL Server indexes can't always be perfect, but 11 is better than 25. I'd really have loved to get that count down to 5, but it is a lot better than it was. This process should be repeated after a month of the new index structure being in place, with the aim of further consolidation if possible. The new indexes won't be perfect; some queries may not be served as well by the consolidated indexes, or by having a rarely used index removed, but I was trying to improve the overall health of the SQL Server. Reducing the number of writes that were hammering the storage is a step in the right direction.

How did we get so many indexes?

Index creation was all developer-led, so when new functionality was rolled out, indexes were created for each new process. This resulted in a lot of overlapping indexes. I don't blame the developers; there should simply have been a process for reviewing index requests and comparing them to what we already had.

How to avoid in future

I see two key approaches to helping avoid a repeat of this death by indexing.

  1. Don’t hoard the analysis. Sharing the results with the development team will help increase awareness of this death by indexing scenario.
  2. Implement an approval process for pull requests, with a DBA in the approvers list whenever the change includes a new index.

Verify improvements

Index changes can affect multiple queries all using the same table so just testing one query or stored procedure is not enough to release with confidence. Where possible, use a load testing environment which mirrors production, and replay a repeatable workload. By baselining a typical workload, then making the changes and taking a new benchmark, you can measure the improvements overall and for specific database calls.

With this client, there was a load testing environment with synthetic API calls to the application that attempted to mirror production. Not everyone has this luxury when testing but capturing and replaying a workload, and measuring the differences is vital for this type of performance tuning. It can help you catch any serious regressions as well as giving you the confidence that your performance will improve.

If you don't already have a suitable load testing environment, the tool I recommend to get you going is an open source project called WorkloadTools by Gianluca Sartori.

More handy links

A successful performance tuning project
How DBATools can help with performance tuning

Filed Under: front-page, Guides, Performance Tuning Tagged With: indexes, Performance tuning, SQL Performance, SQL server

Optimising a slow stored procedure

14th July 2020 By John McCormack 1 Comment



In this blog post, I describe optimising a slow stored procedure. It’s part of a series of posts by members of the SQL Server community in relation to T-SQL Tuesday. For this month’s T-SQL Tuesday, Kerry Tyler asks:

Tell me (us all, obviously) about something recently that broke or went wrong, and what it took to fix it. Of course, the intent here would be for this to be SQL Server-related, but it doesn’t have to be. We can all learn from something going wrong in infrastructure-land, or how there was a loophole in some business process that turned around and bit somebody’s arm. It doesn’t even have to be all that recent–maybe you’ve got a really good story about modem banks catching on fire and that’s how you found out the fire suppression system hadn’t been inspected in years. Just spitballin’ here. If you’ve got an incident whose resolution can help someone else avoid the same problem in the future or improve a policy as a preventative measure, let us hear about it.

The situation

I received a call out of hours that a key web page in the application was suffering from timeouts. It was a page for managing important finance data, and our team in the US were unable to work. I needed to work out what had changed and how to fix it.

Identifying the problem

All HTTP requests are logged, so I took a quick look at the logs to see if there were any problems. It stood out that one stored procedure in particular was timing out. I quickly ran a trace (sorry, XE fans) and found that one particular proc was running for 30 seconds every time (this is the application timeout value). I took the proc name and parameters, ran it from SSMS and found that the procedure took 4 minutes to complete.
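If you prefer DMVs to a trace, something like the query below (a sketch, not the exact query I ran) will surface the worst offenders from the plan cache:

[sql]
-- Cached stored procedures ranked by average elapsed time.
SELECT TOP (10)
        OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
        ps.execution_count,
        ps.total_elapsed_time / ps.execution_count / 1000 AS avg_elapsed_ms,
        ps.total_logical_reads / ps.execution_count AS avg_logical_reads
FROM    sys.dm_exec_procedure_stats AS ps
ORDER BY avg_elapsed_ms DESC;
[/sql]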

How to fix

So why had this procedure suddenly gone bad? Well, the fact is it was performing poorly anyway, and finance users were frustrated on the whole. My first thought was to recompile it quickly and see what happened. Like flicking a switch, it was back to around 20 seconds and the finance page would load again, albeit slowly. So, the issue here was that a bad execution plan had been cached and was being used for each subsequent execution.
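The quick fix itself is a one-liner. sp_recompile marks the procedure so that a fresh plan is compiled on its next execution (the procedure name here is a placeholder):

[sql]
-- Throw away the cached plan; a new one is compiled on the next execution.
EXEC sp_recompile N'dbo.usp_GetFinanceData'; -- placeholder procedure name
[/sql]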

This is however a very unsatisfactory fix. First of all, you are not stopping the problem from reoccurring. Secondly, you are only improving the situation from broken to slow. Hardly something for the CV.

The next morning, I took a look at the stored procedure code and could see it needed to be optimised. I ran the proc on a test system and collected the key metrics, such as logical reads, CPU time and duration, by running [SQL]SET STATISTICS TIME,IO ON[/SQL]. To simplify the output, I always copy this into Richie Rump's Statistics Parser.
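The measurement itself is just a wrapper around the call, something like this (the procedure and parameter are placeholders):

[sql]
SET STATISTICS TIME, IO ON;

-- Placeholder procedure and parameter, taken from the application logs.
EXEC dbo.usp_GetFinanceData @CustomerId = 12345;

SET STATISTICS TIME, IO OFF;
[/sql]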

This output showed me logical reads into the millions for multiple tables. Starting with the tables with the highest reads, I worked through them, looking at which columns were being queried and which predicates were being used. I used this information to design and test alternative indexes, each time remeasuring the output from SET STATISTICS TIME,IO ON. Once I had the correct indexes in place, I was able to submit a PR with the changes. When it went to production, it resulted in sub-second page loads, which made the finance team a lot happier.
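For example, where a query filtered on a couple of columns and returned a handful more, a narrow covering index along these lines (names are illustrative only) removed most of the logical reads:

[sql]
-- Covering index for a frequently used predicate; object names are illustrative only.
CREATE NONCLUSTERED INDEX IX_Invoice_CustomerId_InvoiceStatus
ON dbo.Invoice (CustomerId, InvoiceStatus)
INCLUDE (InvoiceDate, Amount);
[/sql]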

One other thing I did that is worth mentioning is that I used SentryOne Plan Explorer (it's free). The query plan was one of those intimidating ones, with hundreds of nodes. When I look at these in SSMS, it's sometimes difficult to know where to start. However, in Plan Explorer, there is an option to 'Show Cumulative Costs' which helps you see which branches of the plan can be minimised, as they have little cumulative impact, rather than looking at the impact of each node within the plan. This makes reading the rest of the plan much easier because it gets a lot smaller.

SentryOne plan explorer cumulative costs

Whilst you’re here, you may find the other posts interesting.

How DBATools can help with performance tuning

A successful performance tuning project

Filed Under: front-page, Guides Tagged With: Performance tuning, t-sql, t-sql tuesday

A successful performance tuning project

5th June 2020 By John McCormack Leave a Comment

Performance tuning project

I’m coming to the end of what has been a successful performance tuning project for SQL Server. I wanted to share some of the things that made it a success.

Corporate buy in

The company had a goal to improve the page load times of a number of key pages within our application. This was to improve the user experience for our customers. They acknowledged that the database code, both indexes and stored procedures, needed optimisation, but so too did aspects of the application code. It was good to see this acknowledged, as I've been in many situations where the database takes all the blame.

The company approved a considerable amount of resource in terms of personnel to work solely on this stability and optimisation project. It included senior developers, testers and a project manager. I was brought in to look at the database performance. Whilst some business as usual (BAU) priorities did come in from time to time, a large core of the team was protected from this and allowed to get on with the work of making the system go faster, thus improving the customer experience.

Daily standups

We held daily standups where we covered what we had done since the last meeting, what we were working on, and whether anything was blocking our progress. These were kept short so as not to get in the way of the development work, but allowed everyone an overview of what the other team members were working on. Often, as a result of these, side conversations spun up and team members helped out others who were looking for a bit of assistance (or simply to bounce ideas around).

Collaboration

The team were willing to help each other. When Pull Requests (PRs) were submitted, these were swiftly approved where there were no objections, or challenged in a positive way which helped get the best overall result. When the API calls were showing as slow, but nothing was obvious on the SQL Server, heads were put together to use the tools at our disposal to get to the root cause. This often included Azure App Insights, which I had not previously used, and this helped us get the end-to-end transaction details. We could pull out the SQL for any areas which were slow and work on making it perform better.

Measuring improvements

The Azure instance class for the SQL Server had previously been doubled, so there was no appetite to scale it up again. The hope was that we might eventually be able to scale back down after a period of stability.

The system previously had issues with blocking, high CPU and slow durations, so I wanted to reduce page reads, CPU and duration for all of the SQL calls I was working on. I wouldn't put forward a PR unless at least 2 of these metrics improved. My main focus was on reducing the duration of calls, but I didn't want to improve one thing and make others worse as a consequence. In my own tickets, I always made sure to document the before and after metrics from my standalone testing, to give confidence that the changes would be included in upcoming releases.

CPU graph showing performance over time.

We also used Apdex, which is a standardised way of measuring application performance. It ranks page views on whether the user is satisfied, tolerating or frustrated. The more we move users out of the frustrated and tolerating groups and into satisfied, the higher the Apdex score will be. As our project moved through release cycles, we were able to see steady increases in our Apdex scores. Apdex also allowed us to identify what was hurting us most and create tickets based on this information.
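For reference, the Apdex score is (satisfied + tolerating / 2) / total samples against a target response time T, where satisfied means at or under T and tolerating means under 4T. A rough T-SQL sketch over a hypothetical page load log table:

[sql]
-- Apdex = (satisfied + tolerating / 2) / total samples, for a target time T.
-- dbo.PageLoadLog and its columns are hypothetical.
DECLARE @T_ms int = 500; -- target (satisfied) response time in milliseconds

SELECT CAST(SUM(CASE WHEN duration_ms <= @T_ms     THEN 1.0
                     WHEN duration_ms <= @T_ms * 4 THEN 0.5
                     ELSE 0.0 END) / COUNT(*) AS decimal(4,3)) AS apdex_score
FROM   dbo.PageLoadLog
WHERE  load_time >= DATEADD(DAY, -1, SYSUTCDATETIME());
[/sql]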

Top Quality Load Test Environment

We had a top quality load test environment which used masked production backups for the databases. I set up the availability groups to match production; the servers were all sized the same as production and had the same internal settings, such as tempdb size, sp_configure settings and trace flags. We were able to replay the same tests over and over again using Gatling, and our testers made really useful reports available to help us analyse the performance of each hotfix. If it was a proven fix, it was promoted to a release branch; if it wasn't, it was binned.

End Game

This intensity was kept up for almost 2 months, and it was ultimately transformative for the business. Whilst there are still many further improvements that can be made, the specialised squad is being disbanded and team members are being reallocated to other squads. Performance should be a way of life now, rather than an afterthought or another performance tuning project.

We can be happy that we improved the Apdex scores, sped up a huge number of regularly used SQL transactions, and removed the large CPU peaks that dominated our core business hours.

If you enjoyed this, you may also enjoy some of these other posts.

  • https://johnmccormack.it/2020/05/how-dbatools-can-help-with-performance-tuning/
  • https://johnmccormack.it/2019/03/put-tempdb-files-on-d-drive-in-azure-iaas/

Filed Under: front-page, Guides, SQL Server Tagged With: Performance tuning, project, Scrum, sql, SQL Performance, SQL server

How DBATools can help with performance tuning

20th May 2020 By John McCormack 3 Comments


DBATools is well known in the SQL Server community for the ease with which it allows you to automate certain tasks, but did you know that DBATools can help with performance tuning your SQL Server? As my job recently changed to have more of a performance tilt, I wanted to investigate which commands would be useful in helping me with performance tuning. It turned out there are quite a few.

Compression

Page and row compression is a hidden gem when it comes to performance tuning SQL Server. Simply compressing some large indexes can sometimes be enough on its own to get you over the line in terms of performance improvement. With compression, it shouldn't be a guessing game. SQL Server has a built-in stored procedure to let you know whether a table or index would benefit from page, row or no compression.

[sql]

EXEC sp_estimate_data_compression_savings 'dbo', 'DB1', NULL, NULL, 'ROW';

EXEC sp_estimate_data_compression_savings 'dbo', 'DB1', NULL, NULL, 'PAGE';

[/sql]

But if you want to test a whole database or instance at a time, DBATools makes this a lot easier.
[shell]
Test-DbaDbCompression -SqlInstance localhost -Database WideWorldImporters | Select-Object TableName, IndexName, IndexId, PercentScan, PercentUpdate, RowEstimatePercentOriginal, PageEstimatePercentOriginal, CompressionTypeRecommendation, SizeCurrent, SizeRequested, PercentCompression | Out-GridView
[/shell]

Piping to Out-GridView is optional, but I love the way the output is tabulated and easy to read. I use this output to create a list of objects to be compressed. I then do some load testing before and after the changes and compare the results. This gives me the confidence to roll out the changes to production.
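Applying a recommendation is then plain T-SQL, for example rebuilding an index with page compression (object names are placeholders; ONLINE = ON needs Enterprise Edition or Azure SQL):

[sql]
-- Apply page compression to an index flagged by Test-DbaDbCompression.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders -- placeholder object names
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);
[/sql]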

Hint: Having a way of replaying a consistent load and measuring results before and after is essential when performance tuning by making wholesale changes. If you don’t have a way of doing this, I recommend looking into WorkloadTools.

Output of Test-DbaDbCompression

Find unwanted code hints

When I took over a new environment, there was A LOT of blocking. Part of this was down to the code, which was in need of optimisation, but it was also due to a lot of hints peppered throughout the stored procedures that made blocking inevitable. I found several examples of REPEATABLE READ and SERIALIZABLE throughout the code base. When I asked the developers why we needed this level of isolation, no one thought we did; it had been added by developers who had since moved on.

So, what is the easiest way to find every example of REPEATABLE READ in your code base? DBATools, of course.

[shell]
Find-DbaStoredProcedure -SqlInstance SQL123 -Database WideWorldImporters -Pattern 'REPEATABLE READ' | Select-Object * | Out-GridView
[/shell]

Just like before, if you can get a replay running that simulates production, then you can measure whether your blocking and response times go down.

Output of Find-DbaStoredProcedure
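While a replay is running, a quick way to spot blocking with nothing more than the built-in DMVs is a query like this (sp_whoisactive, covered further down, gives far richer detail):

[sql]
-- Sessions that are currently blocked, and who is blocking them.
SELECT  r.session_id,
        r.blocking_session_id,
        r.wait_type,
        r.wait_time,
        t.text AS running_sql
FROM    sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE   r.blocking_session_id <> 0;
[/sql]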

Check disk speeds

Knowing that your disks are performing well gives you the confidence to go out and work on your code to seek performance improvements. But if you have slow disks, this can also be a bottleneck. Knowing you have a problem and rectifying it means you can see performance improvements before you have even looked at your indexes or stored procedures.

A common bottleneck can be tempdb, so it's always good to know that tempdb is on high-performing disks. (If you are using Azure VMs, you can utilise the D drive for tempdb.)

[shell]
Test-DbaDiskSpeed -SqlInstance SQL123 -Database tempdb | Out-GridView
[/shell]

Output of Test-DbaDiskSpeed

Community tools

Easily install some of the best free community tools around which will help you evaluate and monitor your SQL Server.

[shell]

# Install the First responder kit from Brent Ozar Unlimited. Includes sp_blitz, sp_blitzcache and sp_blitzIndex which all help with performance tuning. See https://www.brentozar.com/ for more info.
Install-DbaFirstResponderKit -SqlInstance SQL123 -Database dba

# Installs sp_whoisactive by Adam Machanic which is a priceless tool in terms of evaluating what is running on your instance right now. See http://whoisactive.com/ for more info.
Install-DbaWhoIsActive -SqlInstance SQL123 -Database dba

# Install the free community monitoring tool SQLWatch. See https://sqlwatch.io/ for more info.
Install-DbaSqlWatch -SqlInstance SQL123

[/shell]

External Links

A full list of commands can be reviewed on https://dbatools.io/commands/.

  • First Responder Kit
  • sp_whoisactive
  • SQLWatch

Filed Under: front-page, Performance Tuning Tagged With: DBATools, Performance, Performance tuning, SQL server

Privacy & Cookies: This site uses cookies. By continuing to use this website, you agree to their use.
To find out more, including how to control cookies, see here: Cookie Policy

John McCormack · Copyright © 2025

 

Loading Comments...