John McCormack DBA

SQL Server Databases and Cloud


Search Results for: sp_whoisactive

Automate Your Stress Away – Run sp_whoisactive on a schedule

8th September 2020 By John McCormack 1 Comment

Automate collection of sp_whoisactive on a schedule

This post is part of the September 2020 edition of #tsql2sday. Thanks to Elizabeth Noble for hosting (blog|twitter) and for coming up with a great topic. SQL Server has a great tool for scheduling repeatable tasks called SQL Agent. SQL Server also has a great community, who have written SQL scripts so established and well known that some might think they come with SQL Server (or at least should). This post combines SQL Agent with one of those invaluable community scripts to help solve a real world problem.

When I took over a fairly troublesome SQL Server, I was always being asked what was running when the server was slow. That was fine when it happened during the day and I could look. But often I would be asked, why was the server slow at 9pm last Friday, or at some other time in the past? That's not always easy to answer, as SQL Server doesn't natively keep this information for you.

A quick but effective way to automate this is to download and schedule Adam Machanic’s sp_whoisactive stored procedure. I run this every 10 minutes but I have seen it scheduled as often as every 30-60 seconds. It depends on your system and your needs.

How to automate running sp_whoisactive on a schedule

  1. Download sp_whoisactive from GitHub if you don’t have it already
  2. Run the code to create the stored procedure on your SQL Server
  3. Use my T-SQL below to create an agent job that collects the data (and deletes older rows)
  4. Change any of the parameters to suit your own needs
  5. Next time you are asked “What was running at X time?”, simply query the table
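For step 5, a query along these lines answers the question. It assumes the DBA database and sp_whoisactive_log table used by the agent job in this post, and the date range is just an example:

```sql
-- What was running between 9pm and 10pm last Friday?
-- (database/table names match the agent job below; the dates are placeholders)
SELECT collection_time,
       session_id,
       [dd hh:mm:ss:mss] AS running_for,
       login_name,
       wait_info,
       blocking_session_id,
       sql_text
FROM dba.dbo.sp_whoisactive_log
WHERE collection_time >= '2020-09-04 21:00'
  AND collection_time < '2020-09-04 22:00'
ORDER BY collection_time, session_id;
```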

Create SQL Agent Job

! Important ! I use a utility database called DBA for stored procedures such as sp_whoisactive. If you don’t have a database called DBA, please either create one or change the reference to the name of the database you want to create your table in.

USE [msdb]

GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'sp_whoisactive_log_to_table',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Log output of sp_whoisactive every x minutes',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Write output to table',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=3,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'IF NOT EXISTS ( SELECT 1 FROM dba.sys.tables WHERE NAME = ''sp_whoisactive_log'')
BEGIN
CREATE TABLE dba.dbo.sp_whoisactive_log
(
id INT IDENTITY(1,1) PRIMARY KEY,
[dd hh:mm:ss:mss] VARCHAR(20) NULL,
session_id SMALLINT NOT NULL,
sql_text XML NULL,
login_name sysname NULL,
wait_info NVARCHAR(4000) NULL,
CPU VARCHAR(30) NULL,
tempdb_allocations VARCHAR(30) NULL,
tempdb_current VARCHAR(30) NULL,
blocking_session_id SMALLINT NULL,
blocked_session_count VARCHAR(30) NULL,
reads VARCHAR(30) NULL,
writes VARCHAR(30) NULL,
physical_reads VARCHAR(30) NULL,
query_plan XML NULL,
locks XML NULL,
used_memory VARCHAR(30) NULL,
[status] VARCHAR(30) NULL,
open_tran_count VARCHAR(30) NULL,
percent_complete VARCHAR(30) NULL,
[host_name] sysname NULL,
[database_name] sysname NULL,
[program_name] sysname NULL,
start_time datetime NULL,
login_time datetime NULL,
request_id SMALLINT NULL,
collection_time datetime NULL
)
CREATE INDEX idx_collection_time ON dba.dbo.sp_whoisactive_log (collection_time)
END

/* Load data into table.
If you want to change parameters, you will likely need to add columns to table
*/
EXEC sp_whoisactive @get_locks = 1, @find_block_leaders = 1, @get_plans = 1, @destination_table = ''sp_whoisactive_log''',
@database_name=N'DBA',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Delete old rows',
@step_id=2,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'DELETE dba.dbo.sp_whoisactive_log
WHERE collection_time < DATEADD(MONTH,-1,GETDATE())',
@database_name=N'master',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Every x minutes',
@enabled=1,
@freq_type=4,
@freq_interval=1,
@freq_subday_type=4,
@freq_subday_interval=10,
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20200209,
@active_end_date=99991231,
@active_start_time=0,
@active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

This job will run sp_whoisactive on a schedule every 10 minutes and record the output in the table dba.dbo.sp_whoisactive_log. It also deletes any rows older than 1 month.

Hope this helps
John

Filed Under: front-page, T-SQL Tagged With: #tsql2sday, sp_whoisactive, tsqltuesday

Firefighting – When your SQL Server is in big trouble

6th January 2021 By John McCormack Leave a Comment

When your SQL Server is in big trouble

It’s so important to stay calm when your SQL Server is in big trouble. I’m talking about when blocking is through the roof, when CPU is pinned and people are yelling from all different directions. Staying calm isn’t just a state of mind; you need a process that you work through, that you have practised, and that you know inside out.

https://youtu.be/oWSj_Jy0YMY
Part 1

How bad is it?


In this post, I want to describe what I call a priority 2 problem. It is serious and business impacting, but the server is online. However, it needs immediate attention or it could escalate to a P1.

P1 – Business critical system(s) are unavailable
P2 – Serious impairment of functionality on a critical system
P3 – Performance is slower than usual but requests are completing
P4 – Performance is generally sub optimal and should be improved

Get a colleague on comms

One thing that is guaranteed to happen is that people who are not directly involved in fixing the issue will want updates. They have good reason to want them and are well meaning, but the constant need to reply to emails or Teams messages will seriously slow down your work.

Back in the good old pre-covid days of office working, this could easily be achieved by someone sitting near you and the two of you communicating verbally. With remote working more and more common now, I recommend having some kind of audio channel open so you can speak when you need to; silence is OK too. This could be a phone call on speaker or a Teams call. The technology isn’t the main thing here; the idea is that you can give updates vocally to someone capable of communicating with the wider organisation.

Where possible, your colleague should be technically capable of answering related questions. They should open a chat in Teams or Slack or whatever software your organisation prefers and provide regular updates there. They should answer questions, keep to the update schedule and most importantly, divert people away from you.

A pre-practised routine

The scenarios can differ, but in this one I want to talk about how I would handle a SQL Server which appears to be functioning (barely) but is extremely slow. In other words, it is a priority 2 incident as defined above. Helpdesk supervisors are calling tech support for advice because customers are calling them as the website is unusable. On top of that, the MI team and data scientists are contacting the DBAs directly because their queries won’t run.

Have a script or checklist

In my script, I tend to use well established community stored procedures; the sort that most DBAs know about and many use. If you start trying to write new queries in the middle of a slowdown, that is going to cost you time. Stick with what works, what is established, and what you have used before and know well.

I’ve mentioned these in previous posts but the main things I am going to run are:

  1. sp_whoisactive – https://github.com/amachanic/sp_whoisactive
  2. sp_blitzfirst – https://www.brentozar.com/first-aid/
  3. sp_blitzlock – https://www.brentozar.com/first-aid/

sp_whoisactive

I always run this first because it is designed specifically for showing you current database activity, and it has saved my bacon so many times in the past. Simply running it parameter-free will show you what is running on your SQL Server right now, ordered by query duration descending so the long runners are at the top. It can give you blocking information, wait types and open transactions as well. For advanced features, you need to use some parameters.

sp_blitzfirst

If you can find what you need with sp_whoisactive, you may not even need this stored procedure. Where sp_blitzfirst comes into its own is telling you when certain things have recently changed or are out of the ordinary.

e.g. sp_blitzfirst will tell you:

  1. If the plan cache has been recently erased
  2. If you have high CPU utilization from a program other than SQL Server
  3. How long each wait type has been waiting during the previous 5 seconds

If something stands out that you don’t understand, there will be data in the URL column that you can copy/paste into a web browser. The web page will give you an explanation of the problem.

sp_blitzlock

This proc is all about deadlocks. You might not need to run it if the first two have given you enough information to fix the issue in hand. However, if deadlocks are at the root of your problems, this will tell you which queries are involved and when. It’s also easier to read than deadlock graphs.

There are more things that I’ll run in the fullness of time but we’ll leave them for later. I only want to focus on these 3 things for now to zero in on the immediate problem.

The script

USE dba

/*
	To run this, you also need to install:
	- sp_whoisactive
	- sp_blitzfirst
	- sp_blitzlock
*/

-- Raises an error if you run the whole script by mistake
RAISERROR ('Don''t run it all at once',20,-1) WITH LOG

-- sp_whoisactive with various parameters
EXEC sp_whoisactive @find_block_leaders = 1 --,@get_locks = 1
EXEC sp_whoisactive @sort_order = 'sql_text' -- Tells at a glance if you have a lot of the same query running. For the F5 report runner troublemakers


-- What has been hurting us in the last 5 seconds. Look for wait stats, and anything out of the ordinary, such as the plan cache has been recently erased.
EXEC dbo.sp_BlitzFirst @expertmode = 1


-- Are we experiencing deadlocks 
EXEC sp_BlitzLock

-- Deadlocks in last hour
DECLARE	@StartDateBlitz datetime = (SELECT DATEADD(HH,-1,GETDATE())), @EndDateBlitz DATETIME = (SELECT GETDATE())
EXEC sp_BlitzLock @EndDate = @EndDateBlitz, @StartDate = @StartDateBlitz
GO



/*  Some other things to consider

	Have the usual optimisation jobs run as expected? Stats/indexes etc.
	If one proc has regressed badly, could it help to clear only that plan from the cache, or to recompile it?
	EXEC sp_blitz -- Although not as handy as the others for real time issues. Better used as a health check
	EXEC sp_blitzcache -- More useful for helping you identify the resource hungry queries, allowing you to prioritise tuning work
	EXEC sp_readerrorlog 0,1, 'memory'
*/

The calm after the storm

I will dedicate a full post to this. For now, it is important to say that slow or resource intensive queries should be identified and added to a backlog for fixing. If one particular query is likely to go bad again, fixing it should be treated as a priority so that we don’t see the same issue escalating again.

You should also do a SQL Server health check to satisfy yourself that you don’t have a sub-optimal configuration causing issues.


Related to: When your SQL Server is in big trouble

https://johnmccormack.it/2020/12/locks-blocks-and-deadlocks-in-sql-server/
https://johnmccormack.it/2020/10/zero-budget-dba-sqlbits-2020/

Filed Under: front-page, Performance Tuning, SQL Server Tagged With: firefighting, sql server troubleshooting, triage

3 things a new DBA should learn first

6th November 2019 By John McCormack Leave a Comment

3 things a new DBA should learn first

Why would I put a post together about the 3 things a new DBA should learn first? Well, a colleague asked me what she should focus on learning and mastering as a newish SQL DBA, looking to become more experienced and confident in her role. Obviously this is an opinion piece, but I felt it was worth sharing. The 3 suggestions are not in any particular order, and of course not everyone will agree, but it’s an opinion based on my own experience as a SQL DBA. (Feel free to disagree in the comments; I’m sure people will feel strongly about some items I’ve omitted.)

  • What is happening on the server right now
  • Backup and restore of SQL Databases
  • Scripting, i.e. T-SQL and PowerShell

What is happening on the SQL server right now

Help, the SQL Server is on fire and my really important process is not working. Why is the server so slow?

This scenario will happen to you. Invariably, the person (often a manager) will stand over you and expect you to knock out one or two lines of T-SQL wizardry to get things running smoothly again. First of all, I should say that in a perfect world, you would ask them to raise a ticket and you would work on it according to its priority against your other tasks. Then you could let them know what the issue was and what can be done to prevent it happening again. But we rarely work in a perfect world. In this scenario, you need one primary line of T-SQL to get started.

EXEC sp_whoisactive

The procedure sp_whoisactive doesn’t come with SQL Server. It is a community script created by Adam Machanic. You can download it from GitHub. If you don’t have it installed on your SQL Servers, it is something you should really consider, as it gives much more useful and readable information than sp_who2 and it’s a lot easier than pulling together your own code. It’s mature and very safe and has been installed on SQL Servers worldwide since 2007.

sp_whoisactive offers loads of optional parameters that allow you to customise and sort the output according to your own preference but just running the proc on its own without parameters will give you an ordered list of everything that is executing, at that point in time. (Ordered by duration descending). If you see things running in the minutes that usually take seconds, maybe you need to see if they are blocking other transactions.

example output from sp_whoisactive

One parameter I find really useful during an incident is @find_block_leaders. By running EXEC sp_whoisactive @find_block_leaders = 1, you can see exactly how many sessions are being blocked by each blocking session. In the example below, you can see that an INSERT transaction in session_id 52 is blocking 5 other sessions. Each of these is trying to read from the table with an open insert transaction, so they are blocked. You either need to wait for 52 to finish, or you need to kill it in order for the other transactions to move on.

example output from sp_whoisactive @find_block_leaders

A quick note on killing spids: I really only recommend this if you know what the process is and you have an idea of how long it will take to roll back. (Remember, those other 5 spids are still blocked until the rollback completes, and rollback is a single-threaded process.)

Of course, it might not be blocking, and in that case you will need more scripts to analyse what runs regularly, how queries are performing and whether any optimisations are needed. That will need to be another blog post, as I want to keep this one fairly succinct.

Backup and restore of SQL Databases

Knowing how to back up a database is an essential skill of a DBA, certainly one of the top required skills. Equally important is knowing how to restore your database backups. This is something you should regularly practice to ensure that when/if the time comes that you need to act quickly, you are well rehearsed. Trying to restore a database to a point in time in the middle of a P1 emergency, is the stuff of nightmares if you haven’t done it before.

Learn the different types of backups available and how often you should be doing each of them. This will vary depending on your business needs. Even on the same instance, some databases may need point in time recovery and others won’t. e.g. It might be acceptable to back up your small master database once per day, but if you cannot afford to lose more than 5 minutes of orders in the event of your instance going down, your orders database will need a transaction log backup every 5 minutes. Depending on the size of your database, you will be looking at a combination of regular full and differential backups, or just regular full backups (as well as your transaction log backups of course).
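As a sketch of what that schedule looks like in T-SQL (the database name and backup paths here are hypothetical, and in practice each command would be wrapped in its own SQL Agent job):

```sql
-- Nightly full backup of the orders database
BACKUP DATABASE [Orders]
TO DISK = N'X:\Backups\Orders_FULL.bak'
WITH COMPRESSION, CHECKSUM;

-- Transaction log backup, scheduled every 5 minutes, giving a 5 minute recovery point
BACKUP LOG [Orders]
TO DISK = N'X:\Backups\Orders_LOG.trn'
WITH COMPRESSION, CHECKSUM;
```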

If you are on a PaaS database such as Azure SQL Database or AWS RDS, the backups can be done automatically without any administration effort, but you will still want to practise restoring to a new instance, possibly in a new region or availability zone, in the event of a disaster.

Other backup related topics to look into are compression, encryption, striping and retention management.

Scripting, i.e. T-SQL and PowerShell

This may be a strong opinion, but I believe point and click DBAs are going the way of the dinosaurs. SQL Server Management Studio (SSMS) still makes it possible to do lots of work without knowing much T-SQL or PowerShell, but this does not scale. If you end up managing tens or even hundreds of SQL Servers, scripting will be your friend. The more tedious work you can automate, the more time you have for more interesting tasks.

Learn T-SQL

Despite me saying that point and click is not the way to go, SSMS is very useful for showing you the T-SQL code for how to do something. Say I want to create a new SQL login that doesn’t expire, and I want to give it sysadmin permissions. The first time I do this, I can step through the GUI, but instead of clicking OK, I can click Script and the code opens in SSMS for me to review and run. I can also save this as a .sql script and either use it as a template for creating future SQL logins, or refer to it often enough that I learn the syntax.

script out create login from ssms

USE [master]
GO
CREATE LOGIN [john] WITH PASSWORD=N'ae34bhijkfgcd5', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [john]
GO

Learn PowerShell

If this seems daunting, please don’t worry. There is a huge community project called dbatools that has more functionality for managing SQL Server than the official Microsoft SQL Server module. It’s a great way to start running PowerShell commands and building your knowledge as you go.

You can start with commands as and when you need them and build up your knowledge from there. The website is extremely useful, and by tagging @psdbatools on twitter, you will usually get help pretty quickly from someone involved. As your skills and confidence increase, you may even choose to contribute code to the project by making a pull request. As I write this post, dbatools has over 15,000 commits from 175 contributors. (I’m one of them, in a VERY small way.)
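To give a flavour, a couple of read-only dbatools commands like these are a safe way to start (SQL123 is a placeholder instance name):

```powershell
# Install the module from the PowerShell Gallery
Install-Module dbatools -Scope CurrentUser

# List databases and their recovery models on one instance
Get-DbaDatabase -SqlInstance SQL123 | Select-Object Name, Status, RecoveryModel

# Check when each database was last backed up
Get-DbaLastBackup -SqlInstance SQL123 | Select-Object Database, LastFullBackup, LastLogBackup
```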

Summary

There are more than 3 things a new DBA should learn (a lot more), but these 3 will help you remain calm in a crisis and let you automate work, meaning you can be more efficient and have time to spend on interesting tasks.

Further reading

  • A brief history of activity monitoring
  • DBATools in a month of lunches
  • Backup and restore of SQL Server Databases

Filed Under: front-page, Guides, SQL Server

Locks, blocks and deadlocks in SQL Server

8th December 2020 By John McCormack 1 Comment

Terminology matters: Locks, blocks and deadlocks

I’ve lost count of how many times people have told me there is deadlocking in the database, only to check and see no deadlocks have occurred. In this scenario, it is usually blocking they are trying to describe. As DBAs or developers, it is important to know the difference between locks, blocks and deadlocks.

Read on, or just skip to the video

https://www.youtube.com/watch?v=nzehJ00Be5c
Locks, blocks and deadlocks YouTube video

What are SQL Server locks?

Locks are essential for ensuring the ACID properties of a transaction. Various SELECT, DML and DDL commands generate locks on resources. e.g. In the course of updating a row within a table, a lock is taken out to ensure the same data cannot be read or modified at the same time. This ensures that only data that is committed to the database can be read or modified. A further update can take place after the initial one, but they cannot be concurrent. Each transaction must complete in full or roll back, there are no half measures.

It should be noted that isolation levels can have an impact on the behaviour of reads and writes, but this is generally how it works when the default isolation level is in use.
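You can observe these locks for yourself with the sys.dm_tran_locks DMV. For example, while an update transaction is open in another session:

```sql
-- Which sessions hold or are waiting for locks in the current database?
SELECT request_session_id,
       resource_type,   -- e.g. OBJECT, PAGE, KEY
       request_mode,    -- e.g. S, X, IX
       request_status   -- GRANT or WAIT
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
ORDER BY request_session_id;
```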

Lock types

I don’t want to write a full post about lock types, mainly because the ultimate guide already exists, along with a matrix showing lock compatibility across all possible lock combinations. For a simple explanation of the basics:

  1. If data is not being modified, concurrent users can read the same data.
    1. As long as the isolation level is the SQL Server default (Read Committed)
    2. This behaviour changes however if a higher isolation level such as serializable is being used.
  2. If data is being modified, the select query will have to wait on acquiring the shared lock it needs to read data.

What is blocking

Blocking is the real world impact of locks being taken on resources and other lock types being requested which are incompatible with the existing lock. You need to have locks in order to have blocking. In the scenario where a row is being updated, the lock type of IX or X means that a simultaneous read operation will be blocked until the data modification lock has been released. Similarly, data being read blocks data from being modified. Again, there are exceptions to these based on the isolation level used.

Blocking then is a perfectly natural occurrence within SQL Server. In fact, it is vital to maintain ACID transactions. On a well optimised system, it can be hard to notice and doesn’t cause problems.

Problems occur when blocking is sustained for a longer period of time, as this leads to slower transactions. A typical connection timeout from a web app is 30 seconds, so anything above this leads to lots of exceptions. Even at 10 or 15 seconds, it can lead to frustrated users. Very long blocking can bring whole servers to a standstill until the lead blockers have cleared.

Identifying blocking

I simply use Adam Machanic’s sp_whoisactive stored procedure. You could use sp_who2 if you absolutely can’t use 3rd party scripts, but sp_whoisactive is pure T-SQL, so argue your case.

EXEC sp_whoisactive @find_block_leaders = 1

To kill or not to kill

Sometimes you may have no option but to kill spids in order to clear blocking but it is not desirable. I’m generally a bit happier killing a select query if it is causing blocking, because it won’t result in a DML transaction failing. It might just mean that a report or user query fails.
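If you do decide to kill, the session_id comes from your sp_whoisactive output (52 below is just an example), and you can check on the rollback afterwards:

```sql
KILL 52;                  -- rolls back the transaction in session 52
KILL 52 WITH STATUSONLY;  -- reports estimated rollback completion for a spid already being killed
```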

Multiple identical blockers

If you have multiple blockers and they are all similar or identical, it could mean that an end user is rerunning something that keeps timing out at the app layer. These app timeouts don’t correlate to SQL timeouts, so it can be the case that the user just keeps hitting F5, oblivious that this is making the problem worse. I’m a lot happier killing these spids, but it’s important to tell the end user where possible, so they don’t keep doing the same thing.

It could also be that a piece of code which is called regularly has regressed and no longer completes quickly. You’ll need to fix this or the blocking headache won’t go away.

What are deadlocks?

Deadlocks occur when two or more processes each hold a resource the other needs, and each is waiting on the other to finish before it can move on. In a scenario like this, something has got to give or they will be in a stand off until the end of time. Deadlocks are resolved by SQL Server picking a victim, usually the least expensive transaction to roll back. This is like having one of your blocking queries automatically killed to get things moving again. It’s far from ideal, leads to exceptions, and may mean that some data intended for your database never got there.

How to check for deadlocks

I like to use sp_blitzlock from Brent Ozar’s first responder kit. If I’m in firefighting mode, I’ll just check for the previous hour. You can also pick out deadlocks from the SQL Server Error Log, or you can set up extended events to capture them.

-- Deadlocks in last hour
DECLARE @StartDateBlitz datetime = (SELECT DATEADD(HH,-1,GETDATE())),@EndDateBlitz DATETIME = (SELECT GETDATE())
EXEC sp_BlitzLock @EndDate = @EndDateBlitz, @StartDate = @StartDateBlitz

Simulating blocking

If you want to simulate blocking, you can try this on the WideWorldImporters database.

/*
   Run each of these, in order, in a different SSMS window.
*/
-- Query 1 (This naughty person went to lunch and never committed their update)
BEGIN TRANSACTION
UPDATE [WideWorldImporters].[Sales].[Customers]
SET CustomerName = 'SpinTail Toys (Head Office)'
WHERE customerID  = 1
-- COMMIT
-- Only run the commit above after all the queries have been run and you have observed blocking. Query 2 will finish instantly.

-- Query 2 (I just want my select results, but there is an uncommitted transaction blocking me)
SELECT *
  FROM [WideWorldImporters].[Sales].[Customers]
WHERE customerID  = 1

-- Query 3 (Check the wait_info)
USE DBA

EXEC sp_whoisactive @find_block_leaders = 1

-- You should see a wait type of LCK_M_S on your select query. This means the thread is waiting to acquire a shared lock.

The image below shows the output of the 3 queries side by side. Query 1 completes quickly, but notice it is uncommitted. Query 2 will not complete until Query 1 is committed or rolled back. Running Query 3 (sp_whoisactive) lets you know which spids are causing the blocking and which are being blocked.

t-sql blocking example

I’ve tried to keep the post on locks, blocks and deadlocks about the differences. I haven’t gone too technical with isolation levels, wait types or lock compatibility. The post is aimed at newer DBAs and developers to help them grasp the technology and understand the distinct purposes of locks, blocks and deadlocks.


Popular posts on johnmccormack.it

https://johnmccormack.it/2020/10/how-do-i-set-up-database-mail-for-azure-sql-db-managed-instance/
https://johnmccormack.it/2019/03/put-tempdb-files-on-d-drive-in-azure-iaas/

Filed Under: front-page, Guides, SQL Server Tagged With: blocking, blocks, deadlocks, locking, locks, SQL server, sql server blocking

How DBATools can help with performance tuning

20th May 2020 By John McCormack 3 Comments

How DBATools can help with performance tuning.

DBATools is well known in the SQL Server community for the ease with which it allows you to automate certain tasks, but did you know that DBATools can also help with performance tuning your SQL Server? As my job recently changed to have more of a performance tilt, I wanted to investigate which commands would be useful for performance tuning. It turned out there are quite a few.

Compression

Page and row compression is a hidden gem when it comes to performance tuning SQL Server. Simply compressing some large indexes can sometimes be enough on its own to get you over the line in terms of performance improvement. With compression, it shouldn’t be a guessing game. SQL Server has a built in stored procedure to let you know whether a table or index would benefit from page, row or no compression.

EXEC sp_estimate_data_compression_savings 'dbo', 'DB1', NULL, NULL, 'ROW';

EXEC sp_estimate_data_compression_savings 'dbo', 'DB1', NULL, NULL, 'PAGE';

But if you want to test a whole database or instance at a time, DBATools makes this a lot easier.

Test-DbaDbCompression -SqlInstance localhost -Database WideWorldImporters | Select-Object TableName, IndexName, IndexId, PercentScan, PercentUpdate, RowEstimatePercentOriginal, PageEstimatePercentOriginal, CompressionTypeRecommendation, SizeCurrent, SizeRequested, PercentCompression | Out-GridView

Piping to Out-Gridview is optional but I love the way the output is tabulated and easy to read. I use this output to create a list of objects to be compressed. I then do some load testing, before and after the changes and compare the results. This gives me the confidence to roll out the changes to production.

Hint: Having a way of replaying a consistent load and measuring results before and after is essential when performance tuning by making wholesale changes. If you don’t have a way of doing this, I recommend looking into WorkloadTools.

Test-DBACompression example output
Output of Test-DBADBCompression

Find unwanted code hints

When I took over a new environment, there was A LOT of blocking. Part of this was down to code in need of optimisation, but it was also due to a lot of hints peppered throughout the stored procedures that made blocking inevitable. I found several examples of REPEATABLE READ and SERIALIZABLE throughout the code base. When I asked developers why we needed this level of isolation, no one thought we did; it had been added by developers who had since moved on.

So, what is the easiest way to find every example of REPEATABLE READ in your code base? DBATools, of course.

Find-DbaStoredProcedure -SqlInstance SQL123 -Database WideWorldImporters -Pattern 'REPEATABLE READ' | Select-Object * | Out-GridView

Just like before, if you can get a replay running that simulates production, then you can measure whether your blocking and response times go down.

Find-DBAStoredProcedure example output
Output of Find-DBAStoredProcedure

Check disk speeds

Knowing that your disks are performing well gives you the confidence to go out and work on your code to seek performance improvements. But if you have slow disks, this can also be a bottleneck. Knowing you have a problem and rectifying it means you can see performance improvements before you have even looked at your indexes or stored procedures.

A common bottleneck can be tempdb, so it’s always good to know you have tempdb on high performing disks. (If using Azure VMs, you can utilise the D drive for tempdb.)

Test-DbaDiskSpeed -SqlInstance SQL123 -Database tempdb | Out-GridView

Test-DBADiskSpeed example output
Output of Test-DBADiskSpeed

Community tools

Easily install some of the best free community tools around which will help you evaluate and monitor your SQL Server.

# Install the First Responder Kit from Brent Ozar Unlimited. Includes sp_blitz, sp_blitzcache and sp_blitzindex, which all help with performance tuning. See https://www.brentozar.com/ for more info.
Install-DbaFirstResponderKit -SqlInstance SQL123 -Database dba

# Install sp_whoisactive by Adam Machanic, a priceless tool for evaluating what is running on your instance right now. See http://whoisactive.com/ for more info.
Install-DbaWhoIsActive -SqlInstance SQL123 -Database dba

# Install the free community monitoring tool SQLWatch. See https://sqlwatch.io/ for more info.
Install-DbaSqlWatch -SqlInstance SQL123

External Links

A full list of commands can be reviewed on https://dbatools.io/commands/.

  • First Responder Kit
  • sp_whoisactive
  • SQLWatch

Filed Under: front-page, Performance Tuning Tagged With: DBATools, Performance, Performance tuning, SQL server


John McCormack · Copyright © 2025

 
