John McCormack DBA

SQL Server Databases and Cloud


How do I find the agent service account for Azure SQL Database Managed Instance

25th September 2020 By John McCormack 1 Comment


Normally, if you want to know which service account runs your SQL Server Agent, you can just run:

SELECT servicename, service_account FROM sys.dm_server_services

However, this draws a blank when you run it on Azure SQL DB Managed Instance. So how do you find the agent service account for Azure SQL Database Managed Instance?

I’ve come up with three suggestions so far:

  1. Run an agent job and check which user ran it
  2. Check the error log
  3. Check sys.dm_exec_sessions

Run an agent job and check which user ran it

[Image: Output of SQL Server Agent Job]

This is not exactly an efficient process.
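
If you want to script the whole check, here is a minimal sketch of the same idea, assuming the step output carries the usual 'Executed as user:' prefix. 'MyTestJob' is a placeholder for any T-SQL job on your instance.

EXEC msdb.dbo.sp_start_job @job_name = N'MyTestJob'; -- run any existing job
WAITFOR DELAY '00:00:10'; -- give it a moment to finish

SELECT TOP (1) message -- starts 'Executed as user: <service account>. ...'
FROM msdb.dbo.sysjobhistory
WHERE step_id = 1
ORDER BY instance_id DESC;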

Check the error log

EXEC sp_readerrorlog 0,1,'service account'

Even if you cycle the error log, the entry will always be there, but you’ll need to do some string manipulation to pull out just the account name you need.
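
As a rough sketch of that string manipulation, you could capture the output into a temp table and carve out the account name. The parsing below assumes the account is the last word of the message, which may not match your log's exact wording, so adjust to suit.

CREATE TABLE #errlog (LogDate DATETIME, ProcessInfo NVARCHAR(100), LogText NVARCHAR(MAX));

INSERT INTO #errlog (LogDate, ProcessInfo, LogText)
EXEC sp_readerrorlog 0, 1, 'service account';

SELECT LTRIM(RIGHT(LogText, CHARINDEX(' ', REVERSE(LogText)))) AS service_account -- last word of the message
FROM #errlog;

DROP TABLE #errlog;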

Check sys.dm_exec_sessions

(Thanks to David Alcock for this suggestion)

SELECT login_name FROM sys.dm_exec_sessions
WHERE program_name = 'SQLAgent - Generic Refresher'

This might be the best option, because it returns a single row containing only the login name, rather than a bunch of extra text as well.

Do you have a better way? Please let me know in the comments if you do.

John

Filed Under: Azure SQL DB Managed Instance, front-page Tagged With: azure, azure sql db, azure sqldb managed instance, SQL Agent service account

Automate Your Stress Away – Run sp_whoisactive on a schedule

8th September 2020 By John McCormack 1 Comment

Automate collection of sp_whoisactive on a schedule

[Image: T-SQL Tuesday logo]

This post is part of the September 2020 edition of #tsql2sday. Thanks to Elizabeth Noble (blog|twitter) for hosting and for coming up with a great topic. SQL Server has a great tool for scheduling repeatable tasks called SQL Agent. SQL Server also has a great community, who have written scripts so established and well known that some might think they come with SQL Server (or at least should). This post combines SQL Agent with one of those invaluable community scripts to help solve a real-world problem.

When I took over a fairly troublesome SQL Server, I was always being asked what was running when the SQL Server was slow. That was fine when it happened during the day and I could look. But often I would be asked why the server was slow at 9pm last Friday, or at some other time in the past. It’s not always the easiest thing to answer, as SQL Server doesn’t natively keep this information for you.

A quick but effective way to automate this is to download and schedule Adam Machanic’s sp_whoisactive stored procedure. I run this every 10 minutes but I have seen it scheduled as often as every 30-60 seconds. It depends on your system and your needs.

How to automate running sp_whoisactive on a schedule

  1. Download sp_whoisactive from GitHub if you don’t have it already
  2. Run the code to create the stored procedure on your SQL Server
  3. Use my T-SQL below to create an agent job that collects the data (and deletes older rows)
  4. Change any of the parameters to suit your own needs
  5. Review the data the next time you are asked "What was running at X time?" by simply querying the table (see the example query after the job script)

Create SQL Agent Job

! Important ! I use a utility database called DBA for stored procedures such as sp_whoisactive. If you don’t have a database called DBA, please either create one or change the reference to the name of the database you want to create your table in.
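
If you need to create one, a bare-bones version is enough:

CREATE DATABASE DBA;
GO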

USE [msdb]

GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'sp_whoisactive_log_to_table',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Log output of sp_whoisactive every x minutes',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Write output to table',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=3,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'IF NOT EXISTS ( SELECT 1 FROM dba.sys.tables WHERE NAME = ''sp_whoisactive_log'')
BEGIN
CREATE TABLE dba.dbo.sp_whoisactive_log
(
id INT IDENTITY(1,1) PRIMARY KEY,
[dd hh:mm:ss:mss] VARCHAR(20) NULL,
session_id SMALLINT NOT NULL,
sql_text XML NULL,
login_name sysname NULL,
wait_info NVARCHAR(4000) NULL,
CPU VARCHAR(30) NULL,
tempdb_allocations VARCHAR(30) NULL,
tempdb_current VARCHAR(30) NULL,
blocking_session_id SMALLINT NULL,
blocked_session_count VARCHAR(30) NULL,
reads VARCHAR(30) NULL,
writes VARCHAR(30) NULL,
physical_reads VARCHAR(30) NULL,
query_plan XML NULL,
locks XML NULL,
used_memory VARCHAR(30) NULL,
[status] VARCHAR(30) NULL,
open_tran_count VARCHAR(30) NULL,
percent_complete VARCHAR(30) NULL,
[host_name] sysname NULL,
[database_name] sysname NULL,
[program_name] sysname NULL,
start_time datetime NULL,
login_time datetime NULL,
request_id SMALLINT NULL,
collection_time datetime NULL
)
CREATE INDEX idx_collection_time ON dba.dbo.sp_whoisactive_log (collection_time)
END

/* Load data into table.
If you want to change parameters, you will likely need to add columns to table
*/
EXEC sp_whoisactive @get_locks = 1, @find_block_leaders = 1, @get_plans = 1, @destination_table = ''sp_whoisactive_log''',
@database_name=N'DBA',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Delete old rows',
@step_id=2,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'DELETE dba.dbo.sp_whoisactive_log
WHERE collection_time < DATEADD(MONTH,-1,GETDATE())',
@database_name=N'master',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Every x minutes',
@enabled=1,
@freq_type=4,
@freq_interval=1,
@freq_subday_type=4,
@freq_subday_interval=10,
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20200209,
@active_end_date=99991231,
@active_start_time=0,
@active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

This job will run sp_whoisactive on a schedule every 10 minutes and record the output in the table dba.dbo.sp_whoisactive_log. It also deletes any rows older than 1 month.
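
The payoff: the next time someone asks what was running at 9pm last Friday, it’s a simple query against the log table. Substitute your own time window:

SELECT collection_time, session_id, login_name, [database_name],
       [program_name], [status], blocking_session_id, wait_info, sql_text
FROM dba.dbo.sp_whoisactive_log
WHERE collection_time >= '2020-09-04 20:30' -- around 9pm last Friday
  AND collection_time < '2020-09-04 21:30'
ORDER BY collection_time, session_id;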

Hope this helps
John

Filed Under: front-page, T-SQL Tagged With: #tsql2sday, sp_whoisactive, tsqltuesday

SQL Server on Amazon RDS (Free Course)

23rd July 2020 By John McCormack Leave a Comment

SQL Server on Amazon RDS Course

I’ve put together a short video series on running SQL Server on Amazon RDS. It covers some basics and can be completed in around an hour, via several very short videos. It’s intended for SQL professionals who are just getting started with AWS, but might also be enjoyed by AWS users looking to get started with a relational database management system.

It’s my first attempt at putting together a training plan, so any feedback will be gratefully received, either in the blog comments or the YouTube video comments.

Contents

1. Introduction to SQL Server on Amazon RDS
2. Benefits and limitations of AWS RDS
3. Create your first RDS instance using the AWS console
4. Connecting to your AWS RDS instance
5. Advanced Configurations using Parameter Groups
6. Securing your SQL Server AWS RDS Instance
7. Automating Deployments using PowerShell
8. Backup and restore for AWS RDS
9. Providing High-Availability through Multiple Availability Zones
10. Monitoring Your Instances using CloudWatch

[Video: Introduction to SQL Server on Amazon RDS — slides available to download]

Filed Under: AWS RDS, AWS SQL Server, front-page, Training

Optimising a slow stored procedure

14th July 2020 By John McCormack 1 Comment


[Image: T-SQL Tuesday logo]

In this blog post, I describe optimising a slow stored procedure. It’s part of a series of posts by members of the SQL Server community in relation to T-SQL Tuesday. For this month’s T-SQL Tuesday, Kerry Tyler asks:

Tell me (us all, obviously) about something recently that broke or went wrong, and what it took to fix it. Of course, the intent here would be for this to be SQL Server-related, but it doesn’t have to be. We can all learn from something going wrong in infrastructure-land, or how there was a loophole in some business process that turned around and bit somebody’s arm. It doesn’t even have to be all that recent–maybe you’ve got a really good story about modem banks catching on fire and that’s how you found out the fire suppression system hadn’t been inspected in years. Just spitballin’ here. If you’ve got an incident whose resolution can help someone else avoid the same problem in the future or improve a policy as a preventative measure, let us hear about it.

The situation

I received a call out of hours that a key web page in the application was suffering from timeouts. It was a page for managing important finance data, and our team in the US was unable to work. I needed to work out what had changed and how to fix it.

Identifying the problem

All HTTP requests are logged, so I took a quick look at the logs to see if there were any problems. It stood out that one stored procedure in particular was timing out. I quickly ran a trace (sorry, XE fans) and found that this proc was running for 30 seconds every time (30 seconds being the application timeout value). I took the proc name and parameters, ran it from SSMS, and found that the procedure took 4 minutes to complete.

How to fix

So why had this procedure suddenly gone bad? The fact is, it was performing poorly anyway, and finance users were frustrated on the whole. My first thought was to recompile it quickly and see what happened. Like flicking a switch, it was back to around 20 seconds and the finance page would load again, albeit slowly. So the issue here was that a bad execution plan had been cached and was being used for each subsequent execution.
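
For anyone who hasn’t had to do this in anger, marking a procedure for recompilation is a one-liner. The procedure name here is made up for illustration:

EXEC sp_recompile N'dbo.GetFinanceData'; -- a fresh plan is compiled on the next execution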

This is, however, a very unsatisfactory fix. First of all, you are not stopping the problem from reoccurring. Secondly, you are only improving the situation from broken to slow. Hardly something for the CV.

The next morning, I took a look at the stored procedure code and could see it needed to be optimised. I ran the proc on a test system and collected the key metrics, such as logical reads and CPU time, by running SET STATISTICS TIME, IO ON. To simplify the output, I always copy it into Richie Rump’s Statistics Parser.
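
A minimal sketch of that measurement step, with the same hypothetical procedure name and parameter standing in for the real ones:

SET STATISTICS TIME, IO ON;
EXEC dbo.GetFinanceData @RegionId = 1; -- the slow call, using production-like parameter values
SET STATISTICS TIME, IO OFF;
-- Copy the Messages tab output into Statistics Parser to summarise it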

This output showed me logical reads into the millions for multiple tables. Starting from the tables with the highest reads, I worked through them, looking at which columns were being queried and which predicates were being used. I used this information to design and test alternative indexes, each time remeasuring the output from SET STATISTICS TIME, IO ON. Once I had the correct indexes in place, I submitted a PR with the changes. When it went to production, it resulted in sub-second page loads, which made the finance team a lot happier.
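
By way of illustration only (the table and column names are invented, not the real schema), each fix followed the same pattern: a nonclustered index keyed on the query’s predicates, covering the selected columns.

CREATE NONCLUSTERED INDEX IX_InvoiceLine_InvoiceId_Status
ON dbo.InvoiceLine (InvoiceId, [Status]) -- equality predicates first
INCLUDE (Amount, CreatedDate); -- covers the SELECT list, avoiding key lookups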

One other thing worth mentioning is that I used SentryOne Plan Explorer (it’s free). The query plan was one of those intimidating ones, with hundreds of nodes. When I look at these in SSMS, it’s sometimes difficult to know where to start. However, in Plan Explorer, there is an option to ‘Show Cumulative Costs’ which helps you see which branches of the plan can be minimised because they have little cumulative impact, rather than looking at the impact of each node within the plan. This makes reading the rest of the plan much easier because it gets a lot smaller.

[Screenshot: SentryOne Plan Explorer cumulative costs]

Whilst you’re here, you may find the other posts interesting.

How DBATools can help with performance tuning

A successful performance tuning project

Filed Under: front-page, Guides Tagged With: Performance tuning, t-sql, t-sql tuesday

A successful performance tuning project

5th June 2020 By John McCormack Leave a Comment

Performance tuning project

I’m coming to the end of what has been a successful performance tuning project for SQL Server. I wanted to share some of the things that made it a success.

Corporate buy in

The company had a goal to improve the page load times of a number of key pages within our application, to improve the user experience for our customers. They acknowledged that the database code, both indexes and stored procedures, needed optimisation, but so too did aspects of the application code. It was good to see this acknowledged, as I’ve been in many situations where the database takes all the blame.

The company approved a considerable amount of resource in terms of personnel to work solely on this stability and optimisation project. It included senior developers, testers and a project manager. I was brought in to look at the database performance. Whilst some business as usual (BAU) priorities did come in from time to time, a large core of the team was protected from this and allowed to get on with the work of making the system go faster, thus improving the customer experience.

Daily standups

We held daily standups where we covered what we had done since the last meeting, what we were working on and if anything was blocking our progress. These were kept short so as to not get in the way of the development work, but allowed everyone an overview of what the other team members were working on. Often, as a result of these, side conversations spun up and team members helped out others who were looking for a bit of assistance. (Or simply to bounce ideas around)

Collaboration

The team were willing to help each other. When Pull Requests (PRs) were submitted, these were swiftly approved where there were no objections, or challenged in a positive way which helped get the best overall result. When the API calls were showing as slow, but nothing was obvious on the SQL server, heads were put together to use the tools at our disposal to get to the root cause. This often included Azure App Insights which I had not previously used, and this helped us get the end to end transaction details. We could pull out the SQL for any areas which were slow and work on making it perform better.

Measuring improvements

The Azure Instance class for the SQL Server had previously been doubled so there was no appetite to scale it up again. The hope was that we may eventually be able to scale back down after a period of stability.

The system previously had issues with blocking, high CPU and slow durations, so I wanted to reduce page reads, CPU and duration for all of the SQL calls I was working on. I wouldn’t submit a PR unless at least two of these metrics were improved. My main focus was on reducing the duration of calls, but I didn’t want to improve one thing and make others worse as a consequence. In my own tickets, I always made sure to document the before and after metrics from my standalone testing, to give confidence that the changes should be included in upcoming releases.

[Image: CPU graph showing performance over time]

We also used Apdex which is a standardised way of measuring application performance. It ranks page views on whether the user is satisfied, tolerating or frustrated. The more we move users out of the frustrated and tolerating groups, and in to satisfied, the higher the Apdex score will be. As our project moved through release cycles, we were able to see steady increases in our Apdex scores. Apdex also allowed us to identify what was hurting us most and create tickets based on this information.
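
For reference, the standard Apdex calculation is (satisfied + tolerating/2) / total samples. For example, if 1,000 page views break down as 700 satisfied, 200 tolerating and 100 frustrated, the score is (700 + 200/2) / 1,000 = 0.80.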

Top Quality Load Test Environment

We had a top quality load test environment, which used masked production backups for the databases. I set up the availability groups to match production, and the servers were all sized the same as production with the same internal settings, such as tempdb size, sp_configure settings and trace flags. We were able to replay the same tests over and over again using Gatling, and our testers made really useful reports available to help us analyse the performance of each hotfix. If it was a proven fix, it was promoted to a release branch; if it wasn’t, it was binned.

End Game

This intensity was kept up for almost 2 months, and it was ultimately transformative for the business. Whilst there are still many further improvements that can be made, the specialised squad is being disbanded and team members are being reallocated to other squads. Performance should be a way of life now, rather than an afterthought or another performance tuning project.

We can be happy that we improved the Apdex scores, sped up a huge number of regularly used SQL transactions, and removed the large CPU peaks that dominated our core business hours.

If you enjoyed this, you may also enjoy some of these other posts.

  • https://johnmccormack.it/2020/05/how-dbatools-can-help-with-performance-tuning/
  • https://johnmccormack.it/2019/03/put-tempdb-files-on-d-drive-in-azure-iaas/

Filed Under: front-page, Guides, SQL Server Tagged With: Performance tuning, project, Scrum, sql, SQL Performance, SQL server
