John McCormack DBA

SQL Server Databases and Cloud


Test read intent connections to an AG Listener

19th April 2019 By John McCormack 4 Comments

To test read intent connections to an AG Listener, I prefer to use SQLCMD but you can also test easily using SSMS.

SQLCMD

The -Kreadonly switch is the key to success here, but remember to also specify the database using -d. When the database wasn't specified (and with an initial catalog of master for my login), I always got the primary instance back during my check. This simple omission cost me hours of troubleshooting, because I was convinced my listener wasn't working correctly. In fact, I just wasn't testing it correctly.

Read/Write Connections
sqlcmd -S "SQL01-AG1-list" -d WideWorldImporters -E -q "SELECT @@SERVERNAME;"

Read only Connections
sqlcmd -S "SQL01-AG1-list" -d WideWorldImporters -E -q "SELECT @@SERVERNAME;" -Kreadonly

The instance that you are connected to will show in the command prompt. Type exit to leave sqlcmd.
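If you want to double check that the read intent connection really landed on a readable secondary, and not just a differently named server, a small sanity check like the one below can be run from inside the sqlcmd session. It uses the WideWorldImporters database from the examples above; the Updateability property should come back as READ_ONLY when routing has sent you to a readable secondary.

[sql]
-- Run inside the read intent sqlcmd session (uses the example database from above)
SELECT @@SERVERNAME AS connected_replica,
       DATABASEPROPERTYEX('WideWorldImporters', 'Updateability') AS updateability;
GO
[/sql]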

SSMS

In object explorer, click Connect and choose Database Engine.

Then, in the bottom right hand side of the dialog box, click on Options >>

In Connection Properties, Click Connect to database and then <Browse server ..>. Choose a DB that is in your availability group.

[Image: Connect to Server dialog in SSMS]

Then click on Additional Connection Parameters and type in ApplicationIntent=ReadOnly

[Image: ApplicationIntent=ReadOnly entered under Additional Connection Parameters]

Click Connect and run SELECT @@SERVERNAME. You should see the instance name of your secondary replica, provided you have set up read-only routing correctly.

If you change the connection and remove ApplicationIntent=ReadOnly from the Additional Connection Parameters, the result should be the name of your primary instance.
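If the read only test keeps returning the primary even with the correct database context, it's worth confirming that read-only routing itself is configured. The sketch below shows the general shape of that configuration; the availability group name [AG1] and the replica names SQL01 (primary) and SQL02 (secondary) are just illustrative, they aren't taken from my environment.

[sql]
-- Illustrative read-only routing setup; AG and replica names are placeholders
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQL02' WITH
(SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQL02' WITH
(SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQL02.yourdomain.local:1433'));

ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQL01' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQL02', N'SQL01')));
[/sql]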

Summary

Hopefully these 2 simple techniques to test read intent connections to an AG Listener will be useful and save you time. It's a simple blog post, but I wanted to write it because I was looking at the problem in too much depth and missing the obvious mistake: I hadn't set my database context.


Filed Under: Always On Availability Groups, front-page Tagged With: AG Listener, Always On, Availability Groups, SQL server

Running SQL Server on RDS SQLBits presentation

2nd March 2019 By John McCormack Leave a Comment

Running SQL Server on RDS

I presented on Running SQL Server on RDS at SQLBits. It was a lightning talk, so it was only 5 minutes long.

You can download my slides from here:

Running SQL Server on RDS

Further interesting links:

My more detailed post on running SQL Server on AWS (RDS vs EC2)

Amazon RDS for SQL Server FAQs

SQL Server Agent Information

Filed Under: AWS RDS, AWS SQL Server, front-page Tagged With: presentation, rds, SQL server, sqlbits

SQL Server Threadpool Waits

22nd April 2016 By John McCormack Leave a Comment

Threadpool Waits

I encountered the following error this week:

Event Description: SQL Server failed with error code 0xc0000000 to spawn a thread to process a new login or connection. Check the SQL Server error log and the Windows event logs for information about possible related problems.

This SQL Server hosts the databases for a few utility apps and had always ticked along without any concerns. However, more and more databases had been added over time; most recently Altiris was added, and this is when the problems began.

Searching online, I came across this question and answer in which Jonathan Kehayias suggested the problem could be caused by threadpool starvation. It doesn't seem to be a common problem, so I had to look into it to find out more.

In his first answer, Jonathan suggested that scaling out (splitting the workload across more SQL Servers) would work better than scaling up, because the increase in the maximum number of worker threads gained by scaling up would not be sufficient. I looked at the server and realised it had a very low spec of only 2 CPUs and 4GB of memory to support 20 databases of varying sizes, so in this case maybe scaling up would help. Had I read on, I would have seen that Jonathan didn't expect the poster's server to have such a low spec (1,041 databases, 1 CPU, 2GB RAM) and that he also advised scaling up.
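As an aside (this check wasn't part of my original troubleshooting), you can see the worker thread ceiling SQL Server has calculated, and how many workers are currently in use, with a couple of quick DMV queries:

[sql]
-- Configured maximum number of worker threads
SELECT max_workers_count FROM sys.dm_os_sys_info;

-- Workers currently allocated across the visible online schedulers
SELECT SUM(current_workers_count) AS current_workers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
[/sql]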

Measuring the threadpool waits

Querying the DMV sys.dm_os_wait_stats returns a count of waiting tasks since the last restart. I ran the query below and found that this system had experienced around 18,000 threadpool waits per day since its last restart, roughly 300,000 in total.

[sql]
SELECT *
FROM sys.dm_os_wait_stats
WHERE wait_type = 'THREADPOOL';
[/sql]

As the stats are cumulative, I wanted to know how often the threadpool waits were happening right now. I set up a SQL Agent job to collect the waiting_tasks_count from sys.dm_os_wait_stats, along with the date of collection, and scheduled it to run every minute (a sketch of this approach is below). I found that I was receiving threadpool waits at a rate of 822 per hour, roughly 14 per minute.
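A minimal sketch of that collection might look like this; the table name is illustrative, and the INSERT statement is the body of the job step that runs every minute.

[sql]
-- One-off setup: somewhere to store the snapshots (table name is illustrative)
CREATE TABLE dbo.ThreadpoolWaitSnapshots
(
    collection_time     datetime2(0) NOT NULL DEFAULT SYSDATETIME(),
    waiting_tasks_count bigint       NOT NULL
);

-- SQL Agent job step, scheduled to run every minute
INSERT INTO dbo.ThreadpoolWaitSnapshots (waiting_tasks_count)
SELECT waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type = 'THREADPOOL';
[/sql]

Comparing consecutive rows then gives the rate of new waits per minute.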

How to fix

Fortunately for me, this box was a virtual machine (VM) and as such, adding CPU and memory was a fairly seamless process. I doubled the CPUs and increased memory to 16GB to give SQL Server a fighting chance. I could go further if needed.

Once this was added, I continued to monitor the count of threadpool waits every minute and found they had reduced from 822 per hour to <1 per hour.

Summary

To summarise, although I'm sure it's not always the answer, I found that beefing up a very low spec server had a dramatic effect on threadpool waits. I should add this was a 64-bit SQL Server 2008 R2 instance.

Other resources

Whilst deciding what to do and getting more information on threadpool starvation, I read a number of other resources which are listed below.

https://blogs.msdn.microsoft.com/psssql/2009/11/24/doctor-this-sql-server-appears-to-be-sick/ (Part 6)

http://sqlblog.com/blogs/jonathan_kehayias/archive/2009/11/12/the-benefits-of-attending-pass-realized.aspx 

https://www.brentozar.com/archive/2014/05/connections-slow-sql-server-threadpool/

As always, your comments and suggestions for improvement are welcome.

Filed Under: front-page, Guides Tagged With: SQL server, threadpool starvation, threadpool waits, worker_threads

Disk space monitoring with SQL Server

30th March 2016 By John McCormack 2 Comments

In this post, I discuss 3 methods (of varying complexity) to carry out some disk space monitoring with SQL Server. This can be ad-hoc, or you can build it into SQL Agent jobs to run on a schedule. The first thing I'd like to say is that this isn't a replacement for good monitoring software; it's more of a complement to it, a double check in case something goes wrong with your monitoring. Having some handy scripts is useful as well, because they let you find information on an ad-hoc basis rather than relying on alerts or checking manually. [Read more…]
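The full post continues after the link above. As a flavour of the simplest kind of ad-hoc check it covers (this particular query is a general example rather than a quote from the post), free space per volume can be pulled from sys.dm_os_volume_stats on SQL Server 2008 R2 SP1 and later:

[sql]
-- Free space on every volume that hosts a database file (2008 R2 SP1+)
SELECT DISTINCT
       vs.volume_mount_point,
       vs.total_bytes     / 1073741824.0 AS total_gb,
       vs.available_bytes / 1073741824.0 AS free_gb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
[/sql]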

Filed Under: Guides Tagged With: free space, powershell, SQL server, sql server drive space, SQLNEWBLOGGER, t-sql, xp_fixeddrives

Compressing backups with Litespeed for SQL Server

11th March 2016 By John McCormack Leave a Comment

I recently looked at an issue where free space on a backup drive was running low. One option was to increase the storage. Another was to look at the composition of the backups and see if I could reduce their size by compressing backups with Litespeed for SQL Server.

Two backups in particular stood out: Database1 was 93GB and Database2 was 297GB. Incidentally, these figures are only half of the total size, as we stripe across 2 drives, so the totals were actually 186GB and 594GB.

Example backup script for Litespeed.

[sql]
exec master.dbo.xp_backup_database
@database = 'Database1',
@filename = 'D:\Backup\Database1\Database1_1.BKP',
@filename = 'G:\Backup\Database1\Database1_2.BKP',
@backupname = 'Database1 backup',
@desc = 'Backup of Database1',
@encryptionkey = 'LiteSp1234567',
@cryptlevel = 8,
@init = 1,
@logging = 0,
@throttle = 99,
@comment = '',
@with = 'SKIP',
@with = 'STATS = 5',
@compressionlevel = 1
[/sql]

For the purposes of this post, the line we are most interested in is @compressionlevel = 1. By increasing @compressionlevel on a scale from one to eight, we can reduce the size of our backups. One thing to note is that the higher the level of compression, the higher the CPU utilisation will be. More highly compressed backups will usually take more time. It is worth testing the backups using incremental increases in @compressionlevel to find the one that best suits your server.

LiteSpeed for SQL Server 7.5 allows a compression level of 1 – 8:

  • 1 = Medium Compression
  • 2 = Medium High Compression
  • 3-6 = High Compression
  • 7-8 = Extreme Compression

(Previous versions allowed levels 9-11, but these are now classed as level 8.)

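One rough way to run that test is to loop over the compression levels, back the database up once at each level to a scratch location, and note the elapsed time; the file sizes can then be compared on disk. The sketch below assumes a scratch path of D:\BackupTest and trims the parameter list to the essentials from the example script above, so adjust both to suit your environment.

[sql]
-- Illustrative test harness; D:\BackupTest is an assumed scratch path
DECLARE @level int = 1,
        @start datetime,
        @file  nvarchar(260);

WHILE @level <= 8
BEGIN
    SET @start = GETDATE();
    SET @file  = N'D:\BackupTest\Database1_level' + CAST(@level AS nvarchar(2)) + N'.BKP';

    EXEC master.dbo.xp_backup_database
        @database = N'Database1',
        @filename = @file,
        @init = 1,
        @compressionlevel = @level;

    PRINT 'Level ' + CAST(@level AS varchar(2)) + ': '
        + CAST(DATEDIFF(SECOND, @start, GETDATE()) AS varchar(10)) + ' seconds';

    SET @level += 1;
END
[/sql]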

My results:

[Image: my results when compressing backups with Litespeed for SQL Server]

In the end, I opted for @compressionlevel = 5 for Database1 and @compressionlevel = 3 for Database2. The increased backup time at @compressionlevel = 5 for Database1 is justified, given the 64GB reduction in backup size. For the Database2 backup, there was too large a jump in processing time between levels 3 and 5. If we had no other option we could live with it, but the reduction in size at @compressionlevel = 3 was already sufficient.

Side note: as an added benefit I hadn't considered, our backup team advised that the tape backup duration reduced by 1.5 hours, so there was a positive knock-on effect for another team too.

Source: http://documents.software.dell.com/litespeed-for-sql-server/7.5/netvault-litespeed-for-sql-server-user-guide/back-up-databases/compression-levels

Filed Under: front-page, Guides Tagged With: Database backups, litespeed, SQL server, SQLNEWBLOGGER

