John McCormack DBA

SQL Server Databases and Cloud


Search Results for: iaas

IaaS++ (Azure SQL Server IaaS Agent Extension)

11th December 2020 By John McCormack


Most DBAs or cloud practitioners have seen a graph similar to this one ⬇. It shows the trade-off between flexibility and responsibility across the different ways of adopting SQL Server in Azure. SQL Server on VMs gives you the most flexibility but also the most administrative work. Azure SQL DB single database handles almost all of the “heavy lifting” (backups, OS patching, installation etc.) but gives you the least flexibility. Azure SQL DB Managed Instance lies somewhere in between. SQL Server on VMs is known as Infrastructure as a Service (IaaS). SQL DB (single database or managed instance) is known as Platform as a Service (PaaS).

[Image: flexibility vs responsibility graph]

But now there is another option, called the SQL Server IaaS Agent extension. I think of it as IaaS++, as it extends your SQL VMs to give them some of that heavy-lifting functionality that the PaaS offerings provide, whilst still allowing you full control over the instance.

What do you get with SQL Server IaaS Agent extension?

The two main features I will go into here are automated backups and automated patching. These are standard on most PaaS products with all cloud providers, but it is only with the introduction of this “IaaS++” extension that you can now get them for SQL Server on VMs.

You can also configure storage, high availability, Azure Key Vault integration and R Services, and enable a subscription-wide view of all your instances and license types; however, this post focuses only on automated backups and patching.

Real world scenarios

Patching

My client had fallen behind with patching and needed to ensure that important servers were patched regularly. Enabling automated patching meant that only the important patches were applied during an agreed window, leaving other patches and cumulative updates for a time that suited them. They had a test environment that mirrored production, with a 3 node availability group cluster (automatic failover enabled), so I was able to test the solution there before going anywhere near production. The plan was as simple as this:

  1. Add a 90 minute window at 12:00 for Server1
  2. Add a 90 minute window at 02:00 for Server2
  3. Add a 90 minute window at 04:00 for Server3.

This approach left a 30 minute gap at the end of each window for VMs to be restarted before the next VM's window started. To set it up in the portal (a PowerShell alternative is sketched below):

  • Click on Automated Patching on the SQL virtual machine resource in the Azure Portal.
  • Update the toggles to set your patching window.
  • Daily or weekly schedules can be chosen.
  • If patches are applied, your VM will be restarted.
[Image: IaaS extension automated patching settings]
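
If you prefer to script this rather than click through the portal, the same settings can be applied with the classic Az PowerShell cmdlets for the SQL Server IaaS extension. A minimal sketch, assuming you are signed in with Connect-AzAccount; the VM name Server1 and resource group rg-sql are placeholders:

```powershell
# Build the auto-patching settings: a weekly 90 minute window on Sundays at
# 02:00, applying Important patches only. (A daily schedule is also possible.)
$patchConfig = New-AzVMSqlServerAutoPatchingConfig -Enable `
    -DayOfWeek "Sunday" `
    -MaintenanceWindowStartingHour 2 `
    -MaintenanceWindowDuration 90 `
    -PatchCategory "Important"

# Apply the settings via the SQL Server IaaS extension on the VM.
# "rg-sql" and "Server1" are placeholder names - substitute your own.
Set-AzVMSqlServerExtension -ResourceGroupName "rg-sql" `
    -VMName "Server1" `
    -AutoPatchingSettings $patchConfig
```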

This approach allowed them to go from 44 outstanding patches to 4 across 3 servers, without manual intervention. Failovers happened seamlessly. I'd just urge a word of caution with critical production systems, as this will restart your VMs. Are you ready for that? My advice is to get comfortable with it on non-production systems before starting on production.

I think it's a great feature. It's not always possible to just move to Managed Instance, so for those of us who need a full SQL Server install, this is a handy hybrid.

Backups

Another client was using the Ola Hallengren solution for managing backups. It's the best solution out there when you need to configure your own backups, but what if your cloud provider will do it for you? This client also didn't have an experienced DBA, so in this case it was better to let Microsoft do it. What's more, you can configure a retention period of between 1 and 30 days to stop your storage costs from ever increasing.

Before starting, make sure you don’t have your own backup solution running in parallel.

  • Click on automated backups
  • Configure the toggles to suit your needs
  • Link it to a storage account
  • Check your backups are working as expected and can be restored

These tasks can also be automated using PowerShell or the Azure CLI; a rough sketch follows below. I may cover this in more depth in a future post.
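
As a starting point, here is a minimal PowerShell sketch using the classic Az cmdlets for the SQL Server IaaS extension. The storage account URI, the key placeholder and the VM/resource group names are all assumptions to adapt:

```powershell
# The storage account key is a placeholder - retrieve yours securely rather
# than hard-coding it in a script.
$storageKey = ConvertTo-SecureString "<storage-account-key>" -AsPlainText -Force

# Enable automated backup with the maximum 30 day retention, writing to a
# placeholder storage account.
$backupConfig = New-AzVMSqlServerAutoBackupConfig -Enable `
    -RetentionPeriodInDays 30 `
    -StorageUri "https://mysqlbackups.blob.core.windows.net/" `
    -StorageKey $storageKey

# Apply the settings via the SQL Server IaaS extension on the VM.
Set-AzVMSqlServerExtension -ResourceGroupName "rg-sql" `
    -VMName "Server1" `
    -AutoBackupSettings $backupConfig
```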

Popular posts on johnmccormack.it

https://johnmccormack.it/2020/10/how-do-i-set-up-database-mail-for-azure-sql-db-managed-instance/
https://johnmccormack.it/2019/03/put-tempdb-files-on-d-drive-in-azure-iaas/

Filed Under: Azure, Azure SQL DB, Azure VM, front-page Tagged With: azure, azure iaas, IaaS++, SQL server

Azure IaaS SQL Backups – Stop burning money

14th January 2021 By John McCormack

This post is part of a planned series on 7 ways for data professionals to save money in Azure. It's easy for costs to escalate in the cloud, and being proactive is the best way to optimise them. In this post, I look at how to cut costs by implementing a retention policy for Azure IaaS SQL backups, ensuring you are not paying for more storage than you need.

Azure IaaS SQL Backups

This advice is specific to on-premises or IaaS backups which go directly into an Azure blob storage account. Whilst blob storage can be reasonably cheap, if you keep every backup with no retention policy, the costs will soon escalate.

Scenario

You have 5 databases on a SQL Server instance. You take daily full backups of each database, and log backups every 15 minutes as each database is in full recovery mode. This means that in 1 week you will have 35 full backups and 3,360 transaction log backups. Over 52 weeks, that multiplies to 1,820 full and 174,720 t-log backups. Keep that up for 7 years or more and the storage costs get very expensive.

Your company policy only requires point-in-time restore functionality for 2 weeks. On top of that, you must keep daily full backups for the previous 3 months, and weekly full backups for everything between 3 months and 7 years.

For simplicity, let's assume an average full backup size of 25GB and a log backup size of 100MB.
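
To show where the backup counts in the tables below come from, here is a quick back-of-envelope calculation, using PowerShell purely as a calculator with the assumptions above:

```powershell
# Backup counts and approximate storage volume over 7 years, no retention policy.
$databases  = 5
$logsPerDay = 96          # one log backup every 15 minutes
$days       = 7 * 365     # 7 years, ignoring leap days

$fullBackups = $databases * $days                 # 12,775 (one full per DB per day)
$logBackups  = $databases * $logsPerDay * $days   # 1,226,400

# 25GB per full backup, 100MB per log backup
$fullTB = $fullBackups * 25 / 1024
$logTB  = $logBackups * 100 / 1024 / 1024
"{0:N0} full backups (~{1:N0} TB), {2:N0} log backups (~{3:N0} TB)" -f `
    $fullBackups, $fullTB, $logBackups, $logTB
```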

Costs with no planned retention policy

| Timescale | Full backups | Log backups | Blob storage tier (East US – GPV2 – LRS – Standard) | Monthly cost |
| --- | --- | --- | --- | --- |
| Previous 7 years (5 DBs) | 12,775 | 1,226,400 | Hot | £4,784.79 + £1,856.92 = £6,641.71 |

Costs with a planned retention policy

| Timescale | Full backups | Log backups | Blob storage tier (East US – GPV2 – LRS – Standard) | Monthly cost |
| --- | --- | --- | --- | --- |
| Previous 14 days | 70 | 6,720 | Hot | £27.13 + £10.42 = £38.55 |
| 15 days to 93 days | 390 | 0 | Cool | £110.45 |
| 94 days to 2,555 days | 1,758 | 0 | Archive | £45.40 |
| TOTAL | 2,218 | 6,720 | Mixed | £194.40 |

So if we optimise the storage to keep only the full and t-log backups that we need, and we use cool and archive storage correctly, we can save over £6,000 per month. There are four main ways to actively manage data retention so that you optimise your cloud costs without breaching your data retention policy.

1. Azure Blob Storage lifecycle management

Lifecycle management rules are a simple way to manage files within blob storage. You can either delete files or move them to a cheaper storage tier. The rules are fairly customisable, so this is a basic option which works well without too much configuration.

There are huge savings to be made by moving storage to the archive tier. If you're holding on to backups for regulatory purposes or to adhere to company policies, but won't be using them regularly, the archive tier provides great value for money. Your data may take several hours to become available when you request it, so it's not suitable for any disaster recovery scenario, but provided you don't mind waiting, you will cut your Azure bill substantially.

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-portal
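
As an illustration, a policy matching the scenario above (cool after 14 days, archive after 93 days, delete after 7 years) could be scripted with the Az.Storage management policy cmdlets. This is only a sketch: the rg-storage resource group, mysqlbackups account and sqlbackups container prefix are hypothetical, and in practice you would want separate prefixes or rules for full and log backups, since they have different retention needs:

```powershell
# Chain three actions onto one action group: cool, archive, then delete.
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool `
    -DaysAfterModificationGreaterThan 14
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action `
    -BaseBlobAction TierToArchive -DaysAfterModificationGreaterThan 93
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action `
    -BaseBlobAction Delete -DaysAfterModificationGreaterThan 2555

# Apply the rule only to blobs under a placeholder "sqlbackups" container.
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "sqlbackups/"

$rule = New-AzStorageAccountManagementPolicyRule -Name "sql-backup-retention" `
    -Action $action -Filter $filter

# Placeholder resource group and storage account names.
Set-AzStorageAccountManagementPolicy -ResourceGroupName "rg-storage" `
    -StorageAccountName "mysqlbackups" -Rule $rule
```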

2. SQL Server IaaS Agent extension

If you have SQL Server running in Azure VMs, you can opt to have them automatically backed up by the SQL Server IaaS Agent extension. Whilst there is some customisation available, you can't retain your backups beyond 30 days. This makes it unsuitable for our scenario here, but if all you need is 30 days or fewer, then this is a great option. Having Azure automatically handle the backups gets you one step closer to Platform as a Service, whilst still retaining the flexibility of running a full SQL Server instance.

[Image: SQL Server IaaS Agent extension backup settings]

Automated backups for SQL Server 2014

Automated backups for SQL Server 2016+

3. Azure Backup (SQL Server VMs)

You can use Azure Backup, via a Recovery Services vault, to configure backups on multiple servers. You can choose to back up specific databases or all of them, including any new ones added after configuration, by selecting AUTOPROTECT. The retention options are very detailed, allowing you to set a fine-grained retention policy. A rough PowerShell sketch of enabling auto-protection follows the video below.

https://youtu.be/wmbANpHos_E
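
This can be scripted too. The sketch below is my rough interpretation of the Az.RecoveryServices flow, assuming the SQL VM has already been registered with the vault; the vault, policy and resource group names are all placeholders:

```powershell
# Placeholder vault and resource group names.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-sql" -Name "sql-backup-vault"
Set-AzRecoveryServicesVaultContext -Vault $vault

# Reuse a backup policy already defined in the vault (placeholder name).
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "sql-daily-policy" `
    -VaultId $vault.ID

# Find the SQL instance on the registered VM and switch on auto-protection,
# so databases created later are picked up automatically (AUTOPROTECT).
$instance = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL `
    -ItemType SQLInstance -VaultId $vault.ID
Enable-AzRecoveryServicesBackupAutoProtection -InputObject $instance[0] `
    -BackupManagementType AzureWorkload -WorkloadType MSSQL -Policy $policy
```
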
4. Roll your own solution

This isn't needed as much as it used to be. It used to be the case that the lifecycle rules were very inflexible, and to keep certain backups and change storage options you needed your own script. My preferred choice was PowerShell with Azure Automation runbooks. With one client, this solution saved hundreds of thousands of pounds per year.

* I would only recommend this if the options above don't provide the flexibility you need to manage your storage retention according to your own complex requirements.

P.S. Don't forget backup compression

With all of these solutions, don’t forget to enable compression on your backups. This will help you save even more money in addition to the retention policy you have just put in place.


Filed Under: Azure, cost-optimization, front-page Tagged With: azure billing, backups, cost optimisation, cost optimization, sql backups

Put tempdb files on D drive in Azure IAAS

21st March 2019 By John McCormack

[Image: data loss warning on an Azure VM]

Tempdb files on D drive you say?

Azure warns you not to store data on the D drive in Azure VMs, but following this advice could mean you are missing out on some very fast local storage. It's good general advice, because this local storage is not permanently attached to your instance, meaning you could lose data or log files if your VM is stopped and restarted. But what if you could afford to lose certain files? Say, files that are recreated during startup anyway.

TempDB is the ideal candidate for this. No other database is suitable! Putting the tempdb data and log files onto the D drive can be achieved with a little bit of effort, and you will most likely see a big improvement in tempdb read/write latency.

Just look at these results! Even someone like me, who often looks at charts and can't see the obvious, can't deny when the change took place here.

[Image: tempdb latency in Azure IaaS – blazing fast now]


Overview of how to configure SQL Server to use D drive for tempdb on Azure VM

  1. Run the ALTER DATABASE command for each of the files you need to move, e.g.
    • ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev', FILENAME = 'D:\TempDB\tempdev.mdf')
    • ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'templog', FILENAME = 'D:\TempDB\templog.ldf')
  2. Create Folder D:\TempDB or something similar
  3. Restart SQL Server and make sure your files are created in D: as expected
  4. Manually remove any old tempdb files still lying around

Now to make sure tempdb is created when the VM restarts

  1. Create a PowerShell script which will create the D:\TempDB folder on startup and then start the SQL Server services (a minimal sketch follows below this list)
    • You could use this script at https://community.idera.com/database-tools/blog/b/community_blog/posts/configuring-tempdb-on-azure-iaas-for-sql-server
    • Schedule this script with a Task Scheduler startup job
    • Change your execution policy to RemoteSigned if it isn't already: Set-ExecutionPolicy RemoteSigned
  2. Change the startup type of your SQL Services to Automatic (Delayed Start)
  3. Test this thoroughly in dev before you rely on it in production. If it doesn’t work, you’ll need to remember how you configured tempdb and manually create the folders, then start SQL Server. Not a fun task under pressure.
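
For reference, a minimal version of such a startup script might look like this. It assumes a default instance, so the service names MSSQLSERVER and SQLSERVERAGENT apply; adjust them for a named instance:

```powershell
# Recreate the tempdb folder on the ephemeral D drive, then start SQL Server.
$tempDbPath = "D:\TempDB"

# D: can be wiped when the VM is deallocated, so the folder may be missing.
if (-not (Test-Path $tempDbPath)) {
    New-Item -Path $tempDbPath -ItemType Directory | Out-Null
}

# Start the engine first, then the agent.
Start-Service -Name "MSSQLSERVER"
Start-Service -Name "SQLSERVERAGENT"
```
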
More tempdb goodness

https://johnmccormack.it/2016/12/configuring-tempdb-sql-server-2016-instances/


Filed Under: Azure, Azure VM, front-page Tagged With: azure, azure iaas, sql server on Azure, tempdb

DP-900 – Exam prep reading list

17th August 2020 By John McCormack

DP-900 Exam prep reading list  (further reading)

[Image: data visualisation example]

This DP-900 exam prep reading list comes from Microsoft Learn. Links will be added as I go through the course material and suggested reading on the Microsoft Learn site until I have completed the full course. I am currently working my way through the material.

If you are thinking of sitting DP-900 Azure Data Fundamentals, I recommend following the free course on the Microsoft Learn site (scroll down past the exam info) and also reading the recommended links. The links may add a bit of time to your learning, and you can be the judge of how intensively you need to study the extra material; however, the more prepared you are, the greater your chance of success in the certification exam. Links duplicated in Microsoft Learn are only shown once in the list below.

It's great that Microsoft is making these resources available for free these days. Some video learning would also be nice, but this is a great start.

Azure Data Fundamentals: Explore core data concepts

Explore core data concepts

  • https://docs.microsoft.com/en-us/office/troubleshoot/access/database-normalization-description
  • https://docs.microsoft.com/en-us/azure/cosmos-db/introduction
  • https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction
  • https://docs.microsoft.com/en-us/azure/azure-sql/database/sql-database-paas-overview

Explore roles and responsibilities in the world of data

  • https://docs.microsoft.com/en-us/azure/databricks/scenarios/what-is-azure-databricks
  • https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-overview
  • https://docs.microsoft.com/en-us/power-bi/fundamentals/power-bi-overview
  • https://docs.microsoft.com/en-us/azure/data-factory/introduction

Describe concepts of relational data

  • https://docs.microsoft.com/en-us/office/troubleshoot/access/database-normalization-description
  • https://docs.microsoft.com/en-us/sql/odbc/reference/structured-query-language-sql?view=sql-server-ver15
  • https://docs.microsoft.com/en-us/azure/azure-sql/azure-sql-iaas-vs-paas-what-is-overview

Explore concepts of non-relational data

  • https://docs.microsoft.com/en-us/azure/architecture/guide/technology-choices/data-store-overview
  • https://docs.microsoft.com/en-us/azure/cosmos-db/index-overview
  • https://docs.microsoft.com/en-us/azure/cosmos-db/table-introduction
  • https://docs.microsoft.com/en-us/azure/cosmos-db/graph-introduction

Explore concepts of data analytics

  • https://docs.microsoft.com/en-us/power-bi/create-reports/
  • https://azure.microsoft.com/en-us/services/databricks/
  • https://azure.microsoft.com/en-gb/services/cognitive-services/  (Broken link on site)
  • https://docs.microsoft.com/en-us/azure/architecture/data-guide/relational-data/etl

Azure Data Fundamentals: Explore relational data in Azure

Explore relational data offerings in Azure

Explore provisioning and deploying relational database offerings in Azure

Query relational data in Azure

Azure Data Fundamentals: Explore non-relational data in Azure

Explore non-relational data offerings in Azure

Explore provisioning and deploying non-relational data services in Azure

Manage non-relational data stores in Azure

Azure Data Fundamentals: Explore modern data warehouse analytics in Azure

Examine components of a modern data warehouse

Explore data ingestion in Azure

Explore data storage and processing in Azure

Get started building with Power BI

 

Recommended posts:

SQL Server on Amazon RDS 

Put tempdb files on d drive in Azure IAAS

Filed Under: Azure, Certification, DP-900 Tagged With: azure data fundamentals, dp-900

A successful performance tuning project

5th June 2020 By John McCormack

Performance tuning project

I’m coming to the end of what has been a successful performance tuning project for SQL Server. I wanted to share some of the things that made it a success.

Corporate buy in

The company had a goal to improve the page load times of a number of key pages within our application, to improve the user experience for our customers. They acknowledged that the database code, both indexes and stored procedures, needed optimisation, but so too did aspects of the application code. It was good to see this acknowledged, as I've been in many situations where the database takes all the blame.

The company approved a considerable amount of resource in terms of personnel to work solely on this stability and optimisation project. It included senior developers, testers and a project manager. I was brought in to look at the database performance. Whilst some business as usual (BAU) priorities did come in from time to time, a large core of the team was protected from this and allowed to get on with the work of making the system go faster, thus improving the customer experience.

Daily standups

We held daily standups where we covered what we had done since the last meeting, what we were working on, and whether anything was blocking our progress. These were kept short so as not to get in the way of the development work, but they gave everyone an overview of what the other team members were working on. Often, as a result of these, side conversations spun up and team members helped out others who were looking for a bit of assistance, or simply to bounce ideas around.

Collaboration

The team were willing to help each other. When pull requests (PRs) were submitted, they were swiftly approved where there were no objections, or challenged in a positive way which helped get the best overall result. When API calls were showing as slow but nothing was obvious on the SQL Server, heads were put together to use the tools at our disposal to get to the root cause. This often meant Azure Application Insights, which I had not previously used, and which gave us the end-to-end transaction details. We could pull out the SQL for any slow areas and work on making it perform better.

Measuring improvements

The Azure instance class for the SQL Server had previously been doubled, so there was no appetite to scale it up again. The hope was that we might eventually be able to scale back down after a period of stability.

The system previously had issues with blocking, high CPU and slow durations, so I wanted to reduce page reads, CPU and duration for all of the SQL calls I was working on. I wouldn't submit a PR unless at least 2 of these metrics improved. My main focus was on reducing the duration of calls, but I didn't want to improve one thing and make others worse as a consequence. In my own tickets, I always documented the before and after metrics from my standalone testing, to give confidence that they should be included in upcoming releases.

[Image: CPU graph showing performance over time]

We also used Apdex, which is a standardised way of measuring application performance. It ranks page views on whether the user is satisfied, tolerating or frustrated. The more users we move out of the frustrated and tolerating groups and into satisfied, the higher the Apdex score. As our project moved through release cycles, we saw steady increases in our Apdex scores. Apdex also allowed us to identify what was hurting us most and create tickets based on that information.
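
For concreteness, the standard Apdex formula is (satisfied + tolerating/2) / total samples, which gives a score between 0 and 1. A toy calculation with invented sample counts:

```powershell
# Invented sample counts, purely for illustration.
$satisfied  = 7000
$tolerating = 2000
$frustrated = 1000
$total = $satisfied + $tolerating + $frustrated

# Tolerating visits count half; frustrated visits count zero.
$apdex = ($satisfied + $tolerating / 2) / $total
"{0:N2}" -f $apdex   # 0.80
```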

Top Quality Load Test Environment

We had a top quality load test environment which used production-masked backups for the databases. I set up the availability groups to match production, and the servers were all sized the same as production, with the same internal settings such as tempdb size, sp_configure settings and trace flags. We were able to replay the same tests over and over again using Gatling, and our testers produced really useful reports to help us analyse the performance of each hotfix. If a fix was proven, it was promoted to a release branch; if it wasn't, it was binned.

End Game

This intensity was kept up for almost 2 months and it was ultimately transformative for the business. Whilst there are still many further improvements that can be made, the specialised squad is being disbanded and team members are being reallocated to other squads. Performance should be a way of life now, rather than an afterthought or another performance tuning project.

We can be happy that we improved the Apdex scores, sped up a huge number of regularly used SQL transactions, and removed the large CPU peaks that dominated our core business hours.

If you enjoyed this, you may also enjoy some of these other posts.

  • https://johnmccormack.it/2020/05/how-dbatools-can-help-with-performance-tuning/
  • https://johnmccormack.it/2019/03/put-tempdb-files-on-d-drive-in-azure-iaas/

Filed Under: front-page, Guides, SQL Server Tagged With: Performance tuning, project, Scrum, sql, SQL Performance, SQL server
