John McCormack DBA

SQL Server Databases and Cloud


Locks, blocks and deadlocks in SQL Server

8th December 2020 By John McCormack

Terminology matters: Locks, blocks and deadlocks

I’ve lost count of how many times people have told me there is deadlocking in the database, only for me to check and find that no deadlocks have occurred. In this scenario, it is usually blocking they are trying to describe. As DBAs or developers, it is important to know the difference between locks, blocks and deadlocks.

Read on, or just skip to the video

https://www.youtube.com/watch?v=nzehJ00Be5c
Locks, blocks and deadlocks YouTube video

What are SQL Server locks?

Locks are essential for ensuring the ACID properties of a transaction. Various SELECT, DML and DDL commands generate locks on resources. For example, in the course of updating a row within a table, a lock is taken out to ensure the same data cannot be read or modified at the same time. This ensures that only data that is committed to the database can be read or modified. A further update can take place after the initial one, but they cannot be concurrent. Each transaction must complete in full or roll back; there are no half measures.

It should be noted that isolation levels can have an impact on the behaviour of reads and writes, but this is generally how it works when the default isolation level is in use.
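
If you want to see the locks an open transaction is holding, you can query sys.dm_tran_locks. A minimal sketch, assuming the WideWorldImporters sample database used later in this post:

-- Take out locks with an uncommitted update, then inspect them
BEGIN TRANSACTION
UPDATE WideWorldImporters.Sales.Customers
SET CustomerName = 'Tailspin Toys (Head Office)' -- example value
WHERE CustomerID = 1

-- Expect an X (exclusive) lock on the key, plus IX locks on the page and table
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID

ROLLBACK TRANSACTION -- release the locks when finished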

Lock types

I don’t want to write a full post about lock types, mainly because the ultimate guide already exists, along with a matrix showing lock compatibility across all possible lock combinations. For a simple explanation of the basics:

  1. If data is not being modified, concurrent users can read the same data.
    1. As long as the isolation level is the SQL Server default (Read Committed).
    2. This behaviour changes, however, if a higher isolation level such as serializable is being used.
  2. If data is being modified, a select query will have to wait to acquire the shared lock it needs to read the data.

What is blocking?

Blocking is the real-world impact of locks being taken on resources, and other lock types being requested which are incompatible with the existing lock. You need locks in order to have blocking. In the scenario where a row is being updated, a lock type of IX or X means that a simultaneous read operation will be blocked until the data modification lock has been released. Similarly, data being read blocks data from being modified. Again, there are exceptions to these behaviours based on the isolation level used.

Blocking, then, is a perfectly natural occurrence within SQL Server. In fact, it is vital for maintaining ACID transactions. On a well-optimised system, it can be hard to notice and doesn’t cause problems.

Problems occur when blocking is sustained for a longer period of time, as this leads to slower transactions. A typical connection timeout from a web app is 30 seconds, so anything above this leads to lots of exceptions. Even at 10 or 15 seconds, it can lead to frustrated users. Very long blocking can bring whole servers to a standstill until the lead blockers have cleared.

Identifying blocking

I simply use Adam Machanic’s sp_whoisactive stored procedure. You could use sp_who2 if you absolutely can’t use third-party scripts, but sp_whoisactive is pure T-SQL, so argue your case.

EXEC sp_whoisactive @find_block_leaders = 1
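
On a busy server, it helps to sort the lead blockers to the top of the output. A sketch combining @find_block_leaders with sp_whoisactive’s @sort_order parameter (check the sp_whoisactive documentation for your version):

EXEC sp_whoisactive @find_block_leaders = 1, @sort_order = '[blocked_session_count] DESC'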

To kill or not to kill

Sometimes you may have no option but to kill spids in order to clear blocking, but it is not desirable. I’m generally a bit happier killing a select query if it is causing blocking, because it won’t result in a DML transaction failing. It might just mean that a report or user query fails.
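
Killing a session is a one line command. The spid below is hypothetical; take the real one from the sp_whoisactive output:

-- 72 is a hypothetical spid identified as a lead blocker
KILL 72

-- If the rollback is slow, check its progress
KILL 72 WITH STATUSONLY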

Multiple identical blockers

If you have multiple blockers and they are all similar or identical, it could mean that an end user is rerunning something that keeps timing out at the app layer. These app timeouts don’t correlate to SQL timeouts, so it can be the case that the user just keeps hitting F5, oblivious that this is making the problem worse. I’m a lot happier killing these spids, but it’s important to tell the end user where possible, so they don’t keep doing the same thing.

It could also be that a piece of code which is called regularly has regressed and no longer completes quickly. You’ll need to fix this, or the blocking headache won’t go away.

What are deadlocks?

A deadlock occurs when two or more processes each hold a lock that the other needs, while also waiting on the other process to finish before they can move on. With a scenario like this, something has got to give or they will be in a standoff until the end of time. Deadlocks are resolved by SQL Server picking a victim, usually the least expensive transaction to roll back. This is like having one of your blocking queries automatically killed to get things moving again. It’s far from ideal: it leads to exceptions, and it may mean that some data intended for your database never got there.
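
If you want to see a deadlock happen, this sketch (assuming the same WideWorldImporters database as the blocking demo later in this post) updates two rows in opposite orders from two SSMS windows. Within seconds, one session will be chosen as the victim and receive error 1205:

-- Session 1
BEGIN TRANSACTION
UPDATE WideWorldImporters.Sales.Customers SET CustomerName = CustomerName WHERE CustomerID = 1
-- Now run session 2's first update, then come back here and run:
UPDATE WideWorldImporters.Sales.Customers SET CustomerName = CustomerName WHERE CustomerID = 2
COMMIT

-- Session 2
BEGIN TRANSACTION
UPDATE WideWorldImporters.Sales.Customers SET CustomerName = CustomerName WHERE CustomerID = 2
-- Finally run this; the two sessions are now waiting on each other
UPDATE WideWorldImporters.Sales.Customers SET CustomerName = CustomerName WHERE CustomerID = 1
COMMIT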

How to check for deadlocks

I like to use sp_BlitzLock from Brent Ozar’s First Responder Kit. If I’m in firefighting mode, I’ll just check the previous hour. You can also pick deadlocks out of the SQL Server Error Log, or you can set up extended events to capture them.

-- Deadlocks in last hour
DECLARE @StartDateBlitz datetime = DATEADD(HOUR, -1, GETDATE())
DECLARE @EndDateBlitz datetime = GETDATE()
EXEC sp_BlitzLock @StartDate = @StartDateBlitz, @EndDate = @EndDateBlitz
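
If you can’t install third-party scripts, the built-in system_health extended events session already captures deadlock graphs, and you can pull recent ones from its ring buffer:

-- Extract deadlock graphs captured by the system_health session
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (SELECT CAST(target_data AS xml) AS TargetData
        FROM sys.dm_xe_session_targets st
        INNER JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
       WHERE s.name = 'system_health'
         AND st.target_name = 'ring_buffer') AS Data
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent)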

Simulating blocking

If you want to simulate blocking, you can try this on the WideWorldImporters database.

/*
   Run each of these, in order, in a different SSMS window.
*/
-- Query 1 (This naughty person went to lunch and never committed their update)
BEGIN TRANSACTION
UPDATE [WideWorldImporters].[Sales].[Customers]
SET CustomerName = 'SpinTail Toys (Head Office)'
WHERE CustomerID = 1
-- COMMIT
-- Only run the commit above after all the queries have been run and you have observed blocking. Query 2 will finish instantly.

-- Query 2 (I just want my select results, but there is an uncommitted transaction blocking me)
SELECT *
  FROM [WideWorldImporters].[Sales].[Customers]
WHERE CustomerID = 1

-- Query 3 (Check the wait_info)
USE DBA -- or whichever database sp_whoisactive is installed in

EXEC sp_whoisactive @find_block_leaders = 1

-- You should see a wait type of LCK_M_S on your select query. This means the thread is waiting to acquire a shared lock.

The image below shows the output of the 3 queries side by side. Query 1 completes quickly, but notice it is uncommitted. Query 2 will not complete until Query 1 is committed or rolled back. Running Query 3 (sp_whoisactive) lets you know which spids are causing the blocking and which are being blocked.

[Image: T-SQL blocking example]

I’ve tried to keep the post on locks, blocks and deadlocks about the differences. I haven’t gone too technical with isolation levels, wait types or lock compatibility. The post is aimed at newer DBAs and developers to help them grasp the technology and understand the distinct purposes of locks, blocks and deadlocks.


Popular posts on johnmccormack.it

https://johnmccormack.it/2020/10/how-do-i-set-up-database-mail-for-azure-sql-db-managed-instance/
https://johnmccormack.it/2019/03/put-tempdb-files-on-d-drive-in-azure-iaas/

Filed Under: front-page, Guides, SQL Server Tagged With: blocking, blocks, deadlocks, locking, locks, SQL server, sql server blocking

7 ways for data teams to save money in Azure

19th January 2021 By John McCormack

Save money in Azure


In this series of posts, I list 7 ways to quickly save money in Azure by using cost optimisation principles. Get on top of your cloud costs and start saving money by putting these into action. Whilst this series is specific to Azure, most of the principles can be applied to other public cloud providers; only the products differ. Think of AWS RDS as an equivalent for SQL DB, or S3 for Storage Accounts.

It’s important to keep on top of your cloud costs. Operating in the cloud means it is easy to spin up new instances in minutes. Whilst that is great and allows a company to work in an agile manner, the switch to an Operational Expenditure (OPEX) model means that cost increases can be gradual and go unnoticed.

1. Review your backup retention policy

https://johnmccormack.it/2021/01/azure-iaas-sql-backups-stop-burning-money/

2. Right size your Azure SQL DBs and managed instances

https://johnmccormack.it/2021/01/your-azure-sql-database-and-managed-instance-is-too-big/

3. Turn non production instances off out of hours

https://johnmccormack.it/2021/01/turn-the-cloud-off-at-bedtime-to-save-70/

4. Right Size your VMs to save money in Azure

https://johnmccormack.it/2021/01/your-azure-sql-virtual-machine-might-be-too-big-cost-optimisation/

5. Consider moving to Azure SQL DB serverless

https://johnmccormack.it/2021/01/save-money-with-azure-sql-db-serverless/

6. Reserved instances

https://johnmccormack.it/2021/01/save-73-with-reserved-instances-for-stable-workloads/

7. Delete unused instances

https://johnmccormack.it/2021/01/delete-unused-instances-to-save-money-in-azure/

Bonus steps:

These steps are also important.

  1. Implement Elastic Pools to share resources (see the sketch after this list)
  2. Look to see if you are eligible for Azure Hybrid Benefit
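
Moving an existing database into an elastic pool is a single T-SQL statement. The database and pool names below are hypothetical, and the pool must already exist:

-- Move a database into an existing elastic pool
ALTER DATABASE [MyDatabase] MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = MyPool ) )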

Everybody wins

By optimising your costs in Azure and ensuring you are paying the right price for all of your services, everybody wins. Not only are you helping your company save money; it is good for your career, and it could even save jobs in your company. Not to mention that turning off what you don’t use is good for the environment too.

Need some help? – Book a call

Please get in touch if you would like to schedule a free introductory 15 minute call for some help in reducing your Azure bill.

Filed Under: Azure, cost-optimization Tagged With: azure, azure billing, azure iaas, azure sql db, cost optimisation, cost optimization

Your Azure SQL Database and Managed Instance is too big

15th January 2021 By John McCormack

Don’t buy for the future

With the cloud, it’s important to understand the concepts of scaling and elasticity. Your cloud services should be scaled at the right size to handle current workloads, with a small buffer. With Azure SQL Database, there is no need to buy that 32 vCore service when all you need right now is 4; the rest will be available when you need them. You should be ready to scale up or out when the time is right, but not before. With the cloud, you should shift your thinking from a CAPEX operating model to an OPEX operating model.

CAPEX vs OPEX

Capital Expenditure (CAPEX) describes where an up-front purchase is made to provide value into the future, e.g. a server. Once it is purchased, you don’t need to buy it again and can use it until it outlives its usefulness. It’s a big upfront expense to provide long-term value. For something like a database server, you might forecast what size you will need in 5 years and buy accordingly.

Operational Expenditure (OPEX), on the other hand, describes ongoing costs and operating expenses. Items such as electrical power, rent and staff salaries come under OPEX. Pay As You Go cloud solutions also fall under OPEX, as they are ongoing costs. With this operating model, you should only pay for what you need, and adjust your scale as driven by demand, not future projections.

Resource utilisation and purchase models

[Image: example resource utilisation graph for Azure SQL Database]

DTU Model

Small to medium single databases can use the DTU purchase model, which is a blend of CPU, memory and IO. DTU instance types come with fixed storage. When monitoring performance, you should be looking mainly at DTU percentage used and storage percentage used.

If you are averaging above 85% DTU usage and have regular peaks higher than that, you should scale up. You can adjust this figure based on how relaxed or nervous you are about the DTU getting pinned at 100%.

If you are averaging below 50% and have few or no peaks above this, you can scale down (provided the lower instance type has enough storage). Again, adjust this to suit your own needs, but there is no need to pay double unless you absolutely need to bake in a lot of spare capacity.
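
You can check these averages from inside the database itself. A minimal sketch, run against the Azure SQL DB in question; the DTU figure is approximated as the highest of the CPU, data IO and log write percentages at each sample:

-- sys.dm_db_resource_stats keeps one row per 15 seconds for roughly the last hour
SELECT end_time,
       (SELECT MAX(v)
          FROM (VALUES (avg_cpu_percent),
                       (avg_data_io_percent),
                       (avg_log_write_percent)) AS metrics(v)) AS approx_dtu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC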

vCore

The vCore purchase model provides more flexibility than the DTU model: you can choose your compute and storage resources separately. There is no way to allocate extra memory, however; memory is bound to the number of vCores you have. Managed Instances only use the vCore purchase model.

I would apply a similar rule to scaling up or down here based on needs. Any instance using < 50% of CPU is a good candidate for scaling down.
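
Scaling up or down is a single statement. The database name and service objective below are example values; the change is an online operation with a brief connection drop at the end:

-- Scale a single database to a smaller service objective
ALTER DATABASE [MyDatabase] MODIFY ( SERVICE_OBJECTIVE = 'GP_Gen5_4' )

-- Monitor progress from the master database
SELECT operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
ORDER BY start_time DESC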

Azure SQL Database Service Tiers

Ensure you are on the right service tier. With the DTU model, Azure SQL Single DB comes with a choice of Basic, Standard and Premium; the vCore model offers General Purpose, Business Critical and Hyperscale.

Basic (DTU)

If you are on the Basic service tier, then from a cost optimisation perspective there is not too much more to say. It has the cheapest instance types and the lowest resources, so if it is doing a job for you, then there is no need to change. Be aware that it only offers 7 days of backup retention.

Standard (DTU)

A large number of workloads will be running on Standard. If you are on the Standard tier, you can scale down based on utilisation; however, you should bear a couple of things in mind.

  • S2 and below does not allow columnstore indexing (a quick check for this is sketched below)
  • S1 instance types and below use HDD storage, and this might have an impact on your query durations
    • S2 and above use SSD
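
Before dropping to S2 or below, it is worth confirming that nothing in the database depends on columnstore. A minimal check:

-- Any rows returned mean columnstore indexes exist in this database
SELECT OBJECT_NAME(object_id) AS table_name, name AS index_name, type_desc
FROM sys.indexes
WHERE type_desc LIKE '%COLUMNSTORE%'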

Premium (DTU)

Moving to the Premium service tier should be well thought out, as it includes the most expensive instances and may mean that you are over-provisioned. If you are on a Premium instance, look closely at your utilisation, particularly DTU percentage used. For seemingly over-provisioned instances, you should also take note of a couple of factors before making a decision. Premium is particularly good for IO-intensive queries due to a higher average IOPS and a lower IO latency.

Scaling down from premium service tier means:

  • A decrease in average IOPS from 25 to (1-4) IOPS per DTU
  • An increase in IO latency from 2ms to 5ms
  • A new maximum database size of 1 TB

* Changing instance size but staying with premium service tier affects resources only

If you can handle the drop in instance size and/or reduction in service tier, I would recommend scaling down. Remember to keep a closer eye on newly changed instances, and scale back up if performance is no longer acceptable.

https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-dtu

General Purpose (vCore)

General purpose is the budget option but is suitable for a lot of production workloads. If you are currently on GP and you are happy with your performance, then do not change to a higher service tier.

Business Critical (vCore)

Most production workloads can run in Business Critical, but you might be over-provisioned. One read-only replica is built into the price, so you can offload read-only workloads to it. Business Critical doesn’t come cheap though, and is more than double the price of a General Purpose instance due to increased hardware and licensing. Scaling down to General Purpose will take away this replica, so beware if you use it. A good option may be to scale to a lower Business Critical instance, so that you still have a read replica built into the price.
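
If you do rely on the built-in replica, read-only sessions reach it by adding ApplicationIntent=ReadOnly to the connection string. A quick way to confirm which replica a session has landed on:

-- Returns READ_ONLY on the read-only replica, READ_WRITE on the primary
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS replica_role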

Hyperscale (vCore)

Hyperscale is not currently available with Managed Instance (January 2021).

Your instances on Hyperscale are roughly double the cost of General Purpose but less than Business Critical. The main considerations are database size and log write throughput. On an exceptionally busy and large database, hyperscale is often the best choice.

Be aware of the reduced performance and features if you think you can scale down from Hyperscale to General Purpose. Changing down from Hyperscale is not a native or easy task; it isn’t supported, so you will need to export and import your data to a new database.

Scaling down to GP means:

  • Your database must be less than 4TB
  • You will move to premium remote storage, rather than local SSD
  • You cannot continue to use In-Memory OLTP
  • It’s a lot of work

https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-general-purpose-business-critical

Summary

The main thing is to regularly review your instances and make sure you are not paying for too much headroom. Make sure that by scaling down, you still have the functionality you need. If you scale down and performance is not what you expected, you should scale back up. Cost optimization shouldn’t come at the expense of performance. It is about ensuring that you are paying for what you need, and nothing more.

If you missed post 1 in this cost optimization series, please have a read about Azure IaaS SQL Backups.


Filed Under: Azure, cost-optimization, front-page Tagged With: azure billing, cost optimisation, cost optimization, managed instance, SQL Database, SQL DB

Why I certified to become more than a SQL DBA

11th May 2021 By John McCormack


For this month’s T-SQL Tuesday, Andy Leonard (b|t) asks How Do You Respond When Technology Changes Under You?

It’s a great topic and it inspired me to write a T-SQL Tuesday post this month. I’ve missed the last couple due to a combination of just not feeling like it and the previous topics not being well suited to me. Hopefully this can inspire me to get writing again.

Hey, I thought I was the SQL DBA!

When I got my start with SQL Server, I was just that: a SQL DBA. I worked as a junior member of a big team, we had a large SQL Server estate, and I just wanted to become as experienced as the senior folk. I was able to learn and be mentored, and I was enjoying making progress. Then I heard we were going to start using “The Cloud”. Not only that, but I was going to be responsible for the operation of our Kafka clusters, EMR and Elasticsearch.

A lot of these technologies were developer-led; the developers knew why they wanted to use them, and from an operations perspective it was about making sure things didn’t fail and errors were handled. I had no background in them and didn’t really understand them well. Simply learning to use PuTTY to log into a Linux VM was one of my early successes.

AWS Certifications FTW

The move to the cloud, in this case AWS, was going ahead with or without me, so I had to embrace it. Of course, I learned and collaborated with colleagues and got to an acceptable standard, but it was only when I decided to do AWS certifications that things really took off. I found that by studying for the AWS Solutions Architect certification, I got an appreciation for all cloud services, not just the data ones like RDS. Studying for the AWS certifications improved my networking skills; I learned about object vs block storage, and of course I learned all of the different data products (often managed services, but not always).

I approached the certification process from a genuine perspective of learning the technology well, rather than just passing the exam. Simply passing the exam does not do much for you. Even if it gets you a job, you will likely be found out pretty quickly.

A further benefit to learning one cloud technology well is that when we started using Azure, I found the concepts were the same. I understood the networking, the security groups and PaaS vs IaaS. It made this transition much easier and meant that when I moved to a job that was predominantly based in Azure, I didn’t have the huge knowledge gap that I would have had, if I had not done my AWS certifications.

So for me, a structured learning approach is best. It’s not for everyone but I am certainly glad that my job is no longer just “SQL DBA”.

Filed Under: Certification, front-page
