John McCormack DBA

SQL Server Databases and Cloud


Your Azure SQL Virtual Machine might be too big

19th January 2021 By John McCormack

Cost optimisation

This is post #4 in my series on 7 ways for data teams to save money in Azure. Cost optimisation is crucial to any organisation that operates in the cloud, as costs can and do run away from you without regular attention.

To avoid repeating my post regarding the size of your Azure SQL DBs, please read that first to understand the concepts of elasticity, scalability and CAPEX vs OPEX.

One great thing about the cloud is that you can change your mind, or adjust as you go along. Unlike days gone by, where you would have bought a giant SQL Server and run it at about 10% utilisation (just to allow room for future growth), you can be much smarter now and only pay for what you need.

Is my instance oversized?

You should be collecting metrics on CPU and memory, as well as looking at your throughput and disk latencies, to see how well you are sized. If your CPU is always below 50%, including spikes, and you haven’t identified memory pressure, you can look at scaling down. In an ideal world, you should run load tests against the old and new instance types and compare them before making a decision. Cutting CPUs will give you the biggest financial saving in Azure due to the way that licensing works.
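
If you want a quick way to check this, here is a minimal PowerShell sketch using the Az module’s Get-AzMetric cmdlet. The resource group and VM names are placeholders; substitute your own.

```powershell
# Sketch: pull 14 days of hourly peak CPU for a VM (requires the Az module).
# "my-rg" and "sql-vm-01" are placeholder names.
$vm = Get-AzVM -ResourceGroupName "my-rg" -Name "sql-vm-01"

$cpu = Get-AzMetric -ResourceId $vm.Id `
    -MetricName "Percentage CPU" `
    -StartTime (Get-Date).AddDays(-14) `
    -EndTime (Get-Date) `
    -TimeGrain 01:00:00 `
    -AggregationType Maximum

# If even the hourly peaks rarely pass 50%, the VM is a scale-down candidate.
$busyHours = ($cpu.Data | Where-Object { $_.Maximum -gt 50 }).Count
"{0} of {1} hours peaked above 50% CPU" -f $busyHours, $cpu.Data.Count
```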

Is my instance undersized?

If your CPU is regularly pinned above 85%, you need to take action. Scaling up is one option; improving your queries is another. Some people will choose to scale up and then plan to fix their queries, with the ambition of scaling back down again later. In my experience, this rarely happens and they just accept the new instance size as normal. Other work becomes the priority. If you can hold off scaling your instance to give yourself time to fix your expensive queries first, I would recommend it.

Try not to rush a decision; there are so many instance types and sizes to choose from, each one optimised for a different use case. Do your research and choose the most appropriate instance type, not just the next one up.

I do need more compute power, what’s next?

SQL Server is licensed by the core: the more cores, the higher your licensing costs. Although instance classes tend to double cores and memory as you step up to the next level, there is a smarter way to do it. If you have determined that you don’t need more CPU but you do need more memory, look into the memory optimised instances such as the E-Series.

Looking at the table below, imagine you are on a D16 V4, which provides 64 GB of RAM, but you need to increase your RAM to 128 GB. If you choose the next highest D-Series machine (D32 V4), you double your costs. You get the extra cores, and they may well help performance, but the expense is unjustified.

Consider instead changing to an E-Series machine (E16s V4), where you keep your cores at 16, meaning no change to Windows or SQL licensing costs, but you still get the desired 128 GB of RAM. You only pay £158 more per month.*

Instance Type   CPUs   Memory (RAM)   Compute      OS (Windows)   SQL Licence   Monthly total
D16 V4          16     64 GB          £505.45      £400.44        £3,264.45     £4,170.34
D32 V4          32     128 GB         £1,011.44    £800.88        £6,528.91     £8,341.22
E16s V4         16     128 GB         £663.77      £400.44        £3,264.45     £4,328.67

* Prices taken from Azure Calculator on 18/1/2021. Subject to change.
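
If you have done the comparison and want to make the change, the resize itself is quick in PowerShell. A rough sketch, again with placeholder names; note that the VM restarts as part of a resize, so do it in a maintenance window.

```powershell
# Confirm the target size is available to this VM on its current cluster.
Get-AzVMSize -ResourceGroupName "my-rg" -VMName "sql-vm-01" |
    Where-Object { $_.Name -eq "Standard_E16s_v4" }

# Apply the new size; the VM restarts during Update-AzVM.
$vm = Get-AzVM -ResourceGroupName "my-rg" -Name "sql-vm-01"
$vm.HardwareProfile.VmSize = "Standard_E16s_v4"
Update-AzVM -ResourceGroupName "my-rg" -VM $vm
```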

Constrained CPU Instances

For memory intensive applications such as SQL Server, constrained CPU instances provide another tool in the battle for cost optimisation.

Say you actually needed 256 GB of RAM but would still be fine with 16 cores. Sticking with the D-Series, you would need to go up to 64 cores to get the desired memory. Constrained CPU instances take the idea of memory optimisation a step further: the underlying instance has the extra CPUs, but they are disabled and not available to you. This gives you an instance with even more RAM than the standard memory optimised sizes. It means that whilst your compute costs increase, your licence costs don’t, as you only pay SQL Server licences based on the available CPUs. This is a significant saving as you move into higher instance tiers.

Instance Type   CPUs           Memory (RAM)   Compute      OS (Windows)   SQL Licence   Monthly total
D64s V4         64             256 GB         £2,022.33    £1,601.76      £13,057.81    £16,681.90
E32-16s V4      16 available   256 GB         £1,327.00    £800.88        £3,264.45     £5,392.33

* Prices taken from Azure Calculator on 18/1/2021. Subject to change.

Cost optimisation summary

Don’t provision for what you will need in the future; size your instance for what you need now and keep on top of it. Changing is quick and easy. Be aware of non-standard instance types like memory optimised and constrained CPU to maximise savings on your SQL licensing costs.

Play about with the Azure pricing calculator in order to compare instance types.


Filed Under: cost-optimization, front-page Tagged With: azure, azure iaas, cost optimisation, cost optimization

Turn the cloud off at bedtime to save 70%

18th January 2021 By John McCormack

Turn the cloud off at bedtime

When you operate in a public cloud such as Azure or AWS, you pay compute costs for all of the time that your instances are online. For your business facing and business critical systems, that is usually 24/7. But what about all those non production instances that run all night and all weekend? If these don’t need to be online 24/7, you can save up to 70% of your running costs by switching them off. This is what is meant by turn the cloud off at bedtime.

Example SQL Server VM

Your development hours are the standard 9am to 5pm. To allow some flexibility, you wish to make your development instances available to developers from 8am until 6pm.

Weekly hours   Price     Saving %
168            £152.76   –
50             £45.46    70.24%

* E16s V4 SQL Server VM in UK West, with Dev/Test licensing on the Pay As You Go pricing model

Deallocate is important

Ensure that your VM status shows as deallocated in the Azure portal, so that you don’t receive ongoing billing for your compute. Stop the VM via the Azure Portal, CLI or PowerShell; if you just shut the machine down from inside the OS, your compute resources won’t be deallocated and you will still be charged.
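
For reference, Stop-AzVM deallocates by default. A quick sketch with placeholder names:

```powershell
# Deallocates the VM, which is what stops the compute billing. -Force skips the prompt.
Stop-AzVM -ResourceGroupName "my-rg" -Name "dev-sql-vm-01" -Force

# By contrast, -StayProvisioned stops the OS but keeps the compute allocated
# (and billed), so it is the wrong choice for saving money:
# Stop-AzVM -ResourceGroupName "my-rg" -Name "dev-sql-vm-01" -StayProvisioned -Force

# Bring it back in the morning.
Start-AzVM -ResourceGroupName "my-rg" -Name "dev-sql-vm-01"
```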

Compute savings only

The savings to be made are on compute resources as well as any associated licenses when they are included in your PAYG pricing. These are usually your biggest expenses anyway, but it’s worth noting that you will still be charged for any storage attached to your instance.

PAYG vs Reserved Instances

This post is primarily, but not exclusively, focussed on PAYG instances. There is a whole post to follow on reserved instances (RI), but I’d like to say this for now: you may still make savings by shutting down a reserved instance out of hours, if you have another instance (perhaps used by a dev team in a different time zone) that can pick up the freed-up allocation. The billing for these is complex, however. This page explains how RI reservations work against VMs in more detail.

How to turn the cloud off

When I first did this, it involved a mixture of Automation runbooks, PowerShell and instance tagging. There also wasn’t any flexibility to keep an instance on for a bit longer without deleting the tag. Thankfully, Azure has made this process much easier now with this guide on how to start/stop VMs during off-hours.
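
For a flavour of the old tag-based approach, here is a rough runbook sketch that deallocates every VM carrying a schedule tag. It assumes the Automation account runs with a managed identity that has rights to stop VMs; the tag name and value are illustrative.

```powershell
# Runbook sketch: deallocate all VMs tagged Schedule=OfficeHours.
Connect-AzAccount -Identity   # sign in as the Automation account's managed identity

$vms = Get-AzVM | Where-Object { $_.Tags["Schedule"] -eq "OfficeHours" }

foreach ($vm in $vms) {
    # -NoWait queues each deallocation rather than blocking on every VM in turn.
    Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force -NoWait
}
```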

What can’t be turned off

Certain PaaS services can’t be stopped. Azure SQL Database and Managed Instance do not allow stopping or pausing of the service, which takes this money-saving option away from you. You could still save out of hours by scaling down your instance on a schedule using automation runbooks, as sketched below. This is a fairly quick operation for SQL DB; for Managed Instance, it can take up to 4 hours, so whilst it isn’t usually a great option for overnights, you could consider it for weekends.
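
A scheduled scale-down could look something like this minimal sketch (placeholder names; pair it with a matching scale-up runbook for the morning):

```powershell
# Drop a single database to S0 overnight; a sibling runbook scales it back up at 8am.
Set-AzSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "my-db" `
    -RequestedServiceObjectiveName "S0"
```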

The alternative is dropping and recreating your SQL DBs or managed instances each time using Infrastructure as Code. It’s more work but potentially worth the effort if you are running many and/or large PaaS instances.

What can be turned off

These posts are aimed at SQL professionals, so I will stop short of listing everything in Azure; however, VMs, as mentioned extensively in this post, are the main choice. They are easy to stop and start manually, and the automation of this process is now easier than ever.

There is one PaaS service that allows pausing of compute: Synapse Analytics. SQL pools can be brought online for ETL purposes and paused the rest of the time. This can offer significant savings on Synapse Analytics.
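
Pausing and resuming a dedicated SQL pool is a single cmdlet each way in the Az.Synapse module; the workspace and pool names below are placeholders.

```powershell
# Pause the dedicated SQL pool once the overnight ETL completes...
Suspend-AzSynapseSqlPool -WorkspaceName "my-workspace" -Name "my-sqlpool"

# ...and resume it before the next load window.
Resume-AzSynapseSqlPool -WorkspaceName "my-workspace" -Name "my-sqlpool"
```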

Summary

Review all of your non-production data services in Azure. If you have a lot, automation is the only realistic way to ensure you are paying for services only when you need them. Optimizing costs isn’t a one-time event; you need to stay on top of your bill to ensure that new instances adhere to your cost optimization plans.

As always, please feel free to leave a comment if you wish to join the discussion.

Filed Under: cost-optimization, front-page Tagged With: azure billing, cost optimisation, cost optimization, SQL VMs

Your Azure SQL Database and Managed Instance is too big

15th January 2021 By John McCormack

Don’t buy for the future

With the cloud, it’s important to understand the concepts of scaling and elasticity. Your cloud services should be sized to handle current workloads, with a small buffer. With Azure SQL Database, there is no need to buy that 32 vCore service when all you need right now is 4; the rest will be available when you need them. You should be ready to scale up or out when the time is right, but not before. With the cloud, you should shift your thinking from a CAPEX operating model to an OPEX operating model.

CAPEX vs OPEX

Capital Expenditure (CAPEX) describes an up-front purchase made to provide value into the future, e.g. a server. Once it is purchased, you don’t need to buy it again and can use it until it outlives its usefulness. It’s a big upfront expense to provide long-term value. For something like a database server, you might forecast what size you will need in 5 years and buy accordingly.

Operational Expenditure (OPEX) on the other hand describes ongoing costs and operating expenses. Items such as electrical power, rent and staff salaries come under OPEX. Pay As You Go cloud solutions fall under operational expenditure (OPEX) as they are ongoing costs. With this operating model, you should only pay for what you need, and adjust your scale as driven by demand, not future projections.

Resource utilisation and purchase models

Example graph for Azure SQL Database

DTU Model

Small to medium single databases can use the DTU purchase model, which is a blend of CPU, memory and IO. DTU instance types come with fixed storage. When monitoring performance, you should be looking mainly at DTU percentage used and storage percentage used.

If you are averaging above 85% DTU usage and have regular peaks higher than that, you should scale up. You can adjust this figure based on how relaxed or nervous you are about the DTU getting pinned at 100%.

If you are averaging below 50% and have few or no peaks above this, you can scale down (providing the lower tier has enough storage). Again, adjust this to suit your own needs, but there is no need to pay double unless you absolutely need to bake in a lot of spare capacity.
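
To put numbers against those thresholds, you can pull the DTU consumption metric with the Az module. A sketch with placeholder names; dtu_consumption_percent is the platform metric for DTU-based databases.

```powershell
# Average hourly DTU usage over the last 14 days for one database.
$db = Get-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" -DatabaseName "my-db"

$dtu = Get-AzMetric -ResourceId $db.ResourceId `
    -MetricName "dtu_consumption_percent" `
    -StartTime (Get-Date).AddDays(-14) `
    -EndTime (Get-Date) `
    -TimeGrain 01:00:00 `
    -AggregationType Average

# Compare the result against the 85% / 50% rules of thumb above.
($dtu.Data | Measure-Object -Property Average -Average).Average
```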

vCore

The vCore purchase model provides more flexibility than the DTU model: you can choose your compute and storage resources separately. There is no way to allocate extra memory, however; memory is bound to the number of vCores you have. Managed Instances only use the vCore purchase model.

I would apply a similar rule to scaling up or down here based on needs. Any instance using < 50% of CPU is a good candidate for scaling down.
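
Scaling between vCore sizes uses Set-AzSqlDatabase with the vCore-specific parameters. A minimal sketch, assuming Gen5 hardware and placeholder names:

```powershell
# Halve an 8 vCore General Purpose database down to 4 vCores.
Set-AzSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "my-db" `
    -Edition "GeneralPurpose" `
    -ComputeGeneration "Gen5" `
    -VCore 4
```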

Azure SQL Database Service Tiers

Ensure you are on the right service tier. Azure SQL Single DB comes with a choice of Basic, Standard and Premium.

Basic (DTU)

If you are on the Basic service tier, then from a cost optimization perspective there is not much more to say. It has the cheapest instance types and the lowest resources, so if it is doing a job for you, there is no need to change. Be aware that it only offers 7 days of backup retention.

Standard (DTU)

A large number of workloads will be running on Standard. If you are on the Standard tier, you can scale down based on utilisation; however, you should bear a couple of things in mind.

  • S2 and below does not allow columnstore indexing
  • S1 instance types and below use HDD storage and this might have an impact on your query durations
    • S2 and above uses SSD

Premium (DTU)

Moving to the Premium service tier should be well thought out, as it includes the most expensive instances and may mean that you are over provisioned. If you are on a Premium instance, look closely at your utilisation, particularly DTU percent used. For seemingly over provisioned instances, you should also take note of a couple of factors before making a decision: Premium is particularly good for IO-intensive queries, due to higher average IOPS and lower IO latency.

Scaling down from premium service tier means:

  • A decrease in average IOPS, from 25 IOPS per DTU down to 1–4 IOPS per DTU
  • An increase in IO latency from 2ms to 5ms
  • A new maximum database size of 1 TB

* Changing instance size but staying with premium service tier affects resources only

If you can handle the drop in instance size and/or reduction in service tier, I would recommend scaling down. Remember to keep a closer eye on newly changed instances, and scale back up if performance is no longer acceptable.

https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-dtu

General Purpose (vCore)

General purpose is the budget option but is suitable for a lot of production workloads. If you are currently on GP and you are happy with your performance, then do not change to a higher service tier.

Business Critical (vCore)

Most production workloads can run in Business Critical, but you might be over provisioned. One read-only replica is built into the price, so you can offload read-only workloads to it. Business Critical doesn’t come cheap, though, and is more than double the price of a General Purpose instance due to increased hardware and licensing. Scaling down to General Purpose will take away this replica, so beware if you use it. A good option may be to scale to a lower Business Critical instance, so that you still have a read replica built into the price.
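
To actually land read-only work on that replica, add ApplicationIntent=ReadOnly to the connection string. A rough sketch; the server, database and credentials are placeholders.

```powershell
# Connections flagged ReadOnly are routed to the built-in readable replica.
$connectionString = "Server=tcp:my-sqlserver.database.windows.net,1433;" +
                    "Database=my-db;User ID=reporting_user;Password=<secret>;" +
                    "ApplicationIntent=ReadOnly;Encrypt=True;"

$conn = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$conn.Open()
# Run reporting queries on this connection; the primary replica never sees them.
$conn.Close()
```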

Hyperscale (vCore)

Hyperscale is not currently available for Managed Instance (January 2021).

Hyperscale instances are roughly double the cost of General Purpose, but less than Business Critical. The main considerations are database size and log write throughput. For an exceptionally busy and large database, Hyperscale is often the best choice.

Be aware of the reduced performance and features if you think you can scale down from Hyperscale to General Purpose. Changing down from Hyperscale is not a native or easy task; it isn’t supported, so you will need to export and import your data into a new database.

Scaling down to GP means:

  • Your database must be less than 4TB
  • You will move to premium remote storage, rather than local SSD
  • You cannot continue to use In-Memory OLTP
  • It’s a lot of work

https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-general-purpose-business-critical

Summary

The main thing is to regularly review your instances and make sure you are not paying for too much headroom. Make sure that by scaling down, you still have the functionality you need. If you scale down and performance is not what you expected, you should scale back up. Cost optimization shouldn’t come at the expense of performance. It is about ensuring that you are paying for what you need, and nothing more.

If you missed post 1 in this cost optimization series, please have a read about Azure IaaS SQL Backups.


Filed Under: Azure, cost-optimization, front-page Tagged With: azure billing, cost optimisation, cost optimization, managed instance, SQL Database, SQL DB

Azure IaaS SQL Backups – Stop burning money

14th January 2021 By John McCormack

This post is part of a planned series on 7 ways for data professionals to save money in Azure. It’s so easy for costs to escalate in the cloud, and being proactive is the best way to optimise your costs, and make sure you aren’t spending more than you need to. In this post, I look at how to cut costs by implementing an Azure IaaS SQL Backups retention policy and ensuring you are not paying for more than you need.

Azure IaaS SQL Backups

This advice is specific to on-premises or IaaS backups which are going directly into an Azure blob storage account. Whilst blob storage can be reasonably cheap, if you keep every backup with no retention policy, the costs will soon escalate.

Scenario

You have 5 databases on a SQL Server instance. You take daily full backups of each database, and log backups every 15 minutes as each database is in full recovery mode. This means that in 1 week you will have 35 full backups and 3,360 transaction log backups, which multiplies to 1,820 full and 174,720 t-log backups over 52 weeks. Keep this up for 7 years or more and the costs become very significant.
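
The arithmetic, as a quick sanity check:

```powershell
# 5 databases, daily fulls, log backups every 15 minutes (96 per day).
$dbs = 5
$fullsPerWeek = $dbs * 7          # 35 full backups per week
$logsPerWeek  = $dbs * 96 * 7     # 3,360 log backups per week
$fullsPerWeek * 52                # 1,820 full backups per year
$logsPerWeek * 52                 # 174,720 log backups per year
```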

Your company policy only requires point in time restore functionality for 2 weeks of data. On top of that, daily full backups for the previous 3 months and a weekly full backup for between 3 months and 7 years must be kept.

For simplicity, let’s assume an average full backup size of 25 GB and a log backup size of 100 MB.

Costs with no planned retention policy

Timescale                  Full backups   Log backups   Storage tier   Monthly cost
Previous 7 years (5 DBs)   12,775         1,226,400     Hot            £4,784.79 + £1,856.92 = £6,641.71

* Blob storage: East US – GPv2 – LRS – Standard

Costs with a planned retention policy

Timescale               Full backups   Log backups   Storage tier   Monthly cost
Previous 14 days        70             6,720         Hot            £27.13 + £10.42 = £38.55
15 days to 93 days      390            0             Cool           £110.45
94 days to 2,555 days   1,758          0             Archive        £45.40
TOTAL                   2,218          6,720         Mixed          £194.40

* Blob storage: East US – GPv2 – LRS – Standard

So if we optimise the storage to keep only the full and t-log backups that we need, and we use cool and archive storage correctly, we can save over £6,000 per month. There are four main ways to actively manage data retention so that you optimise your cloud costs without breaching your data retention policy.

1. Azure Blob Storage lifecycle management

Lifecycle management rules are a simple way to manage files within blob storage. You can either delete files or move them to a cheaper storage class. The rules are fairly customisable, so this is a basic option which works well without too much configuration.

There are huge savings to be made by moving storage to the archive tier. If you’re holding on to backups for regulatory purposes or to adhere to company policies but won’t be using them regularly, the archive tier provides great value for money. Your data may take several hours to be available when you make a request so it’s not suitable for any disaster recovery scenario but providing you don’t mind waiting, you will cut your Azure bill substantially.

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-portal
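
As a rough sketch of what that could look like in PowerShell (the storage account, resource group and container prefix are placeholders), the rule below cools backups after 14 days, archives them after 93, and deletes them after 7 years. Note that a lifecycle rule can’t thin daily fulls down to weeklies for the 3-month-to-7-year window; for that level of control, see option 4 below.

```powershell
# Build the tiering/deletion actions for base blobs.
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool -DaysAfterModificationGreaterThan 14
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action -BaseBlobAction TierToArchive -DaysAfterModificationGreaterThan 93
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action -BaseBlobAction Delete -DaysAfterModificationGreaterThan 2555

# Limit the rule to the container/prefix that holds the SQL backups.
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "sqlbackups/"
$rule   = New-AzStorageAccountManagementPolicyRule -Name "sql-backup-retention" -Action $action -Filter $filter

Set-AzStorageAccountManagementPolicy -ResourceGroupName "my-rg" -StorageAccountName "mybackupsacct" -Rule $rule
```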

2. SQL Server IaaS Agent extension

If you have SQL Server running in Azure VMs, you can opt to have them automatically backed up by the SQL Server IaaS Agent extension. Whilst there is some customisation available, you can’t retain your backups beyond 30 days. This makes it unsuitable for our scenario here, but if all you need is 30 days or fewer, then this is a great option. Having Azure automatically handle the backups gets you one step closer to Platform as a Service, whilst still retaining the flexibility of running a full SQL Server instance.


Automated backups for SQL Server 2014

Automated backups for SQL Server 2016+

3. Azure Backup (SQL Server VMs)

You can use Azure Backup, via a Recovery Services vault, to configure backups on multiple servers. You can choose to back up specific databases, or all of them (including any new ones added after configuration) by selecting AUTOPROTECT. There are detailed retention options which allow you to set a fine-grained retention policy.

4. Roll your own solution

This isn’t needed as much as it used to be. The lifecycle rules used to be very inflexible, so to keep certain backups and change storage options you needed to use your own script. My preferred choice was PowerShell and Azure Automation runbooks. With one client, this solution saved hundreds of thousands of pounds per year.

* I would only recommend this if the options above don’t provide the flexibility you need to manage your storage retention according to your own complex requirements.

P.S. Don’t forget backup compression

With all of these solutions, don’t forget to enable compression on your backups. This will help you save even more money in addition to the retention policy you have just put in place.


Filed Under: Azure, cost-optimization, front-page Tagged With: azure billing, backups, cost optimisation, cost optimization, sql backups
