
Search Results for: availability groups

SQL Server on Amazon RDS (Free Course)

23rd July 2020 By John McCormack


I’ve put together a short video series on running SQL Server on Amazon RDS. It covers the basics and can be completed in around an hour via several very short videos. It’s intended for SQL Server professionals who are just getting started with AWS, but it might also be useful for AWS users looking to get started with a relational database management system.

It’s my first attempt at putting together a training plan, so any feedback will be gratefully received, either in the blog comments or the YouTube video comments.

Contents

1. Introduction to SQL Server on Amazon RDS
2. Benefits and limitations of AWS RDS
3. Create your first RDS instance using the AWS console
4. Connecting to your AWS RDS instance
5. Advanced Configurations using Parameter Groups
6. Securing your SQL Server AWS RDS Instance
7. Automating Deployments using PowerShell (see the sketch after this list)
8. Backup and restore for AWS RDS
9. Providing High-Availability through Multiple Availability Zones
10. Monitoring Your Instances using CloudWatch
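
As a flavour of what topic 7 covers, here’s a minimal sketch of creating an RDS SQL Server instance with the AWS Tools for PowerShell. It isn’t taken from the course videos; all identifiers and sizes below are placeholders, so check the New-RDSDBInstance documentation for the options that suit you.

# Requires the AWS.Tools.RDS module and configured AWS credentials.
Import-Module AWS.Tools.RDS

# All identifiers and sizes below are illustrative placeholders.
$params = @{
    DBInstanceIdentifier = 'my-sql-rds'
    Engine               = 'sqlserver-se'        # SQL Server Standard Edition
    DBInstanceClass      = 'db.m5.large'
    AllocatedStorage     = 100                   # GiB
    LicenseModel         = 'license-included'
    MasterUsername       = 'rdsadmin'
    MasterUserPassword   = (Read-Host 'Master password')
}
New-RDSDBInstance @params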

Introduction to SQL Server on Amazon RDS


Filed Under: AWS RDS, AWS SQL Server, front-page, Training

A successful performance tuning project

5th June 2020 By John McCormack


I’m coming to the end of what has been a successful performance tuning project for SQL Server. I wanted to share some of the things that made it a success.

Corporate buy-in

The company had a goal to improve the page load times of a number of key pages within our application, to improve the user experience for our customers. They acknowledged that the database code, both indexes and stored procedures, needed optimisation, but so too did aspects of the application code. It was good to see this acknowledged, as I’ve been in many situations where the database takes all the blame.

The company approved a considerable amount of resource, in terms of personnel, to work solely on this stability and optimisation project. It included senior developers, testers and a project manager. I was brought in to look at the database performance. Whilst some business as usual (BAU) priorities did come in from time to time, a large core of the team was protected from them and allowed to get on with the work of making the system go faster, thus improving the customer experience.

Daily standups

We held daily standups where we covered what we had done since the last meeting, what we were working on, and anything that was blocking our progress. These were kept short so as not to get in the way of the development work, but they gave everyone an overview of what the other team members were working on. Often, side conversations spun up as a result, with team members helping others who were looking for a bit of assistance or simply wanting to bounce ideas around.

Collaboration

The team were willing to help each other. When pull requests (PRs) were submitted, they were swiftly approved where there were no objections, or challenged in a positive way which helped get the best overall result. When API calls were showing as slow but nothing was obvious on the SQL Server side, we put our heads together and used the tools at our disposal to get to the root cause. This often included Azure Application Insights, which I had not previously used, and which gave us the end-to-end transaction details. We could pull out the SQL for any areas which were slow and work on making it perform better.

Measuring improvements

The Azure instance class for the SQL Server had previously been doubled, so there was no appetite to scale it up again. The hope was that we might eventually be able to scale back down after a period of stability.

The system previously had issues with blocking, high CPU and slow durations, so I wanted to reduce page reads, CPU and duration for all of the SQL calls I was working on. I wouldn’t submit a PR unless at least two of these metrics improved. My main focus was on reducing the duration of calls, but I didn’t want to improve one thing and make others worse as a consequence. In my own tickets, I always documented the before and after metrics from my standalone testing, to give confidence that the changes should be included in upcoming releases.
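
If you want to capture similar before and after numbers, here’s a minimal sketch using dbatools and Query Store. It isn’t the exact process I used; the instance and database names are placeholders, and it assumes Query Store is enabled on the database (durations and CPU in sys.query_store_runtime_stats are reported in microseconds, hence the division).

# Placeholder instance/database; assumes Query Store is enabled on the database.
$sql = @"
SELECT TOP (10)
       qt.query_sql_text,
       SUM(rs.count_executions)      AS executions,
       AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms,
       AVG(rs.avg_cpu_time) / 1000.0 AS avg_cpu_ms,
       AVG(rs.avg_logical_io_reads)  AS avg_logical_reads
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY AVG(rs.avg_duration) DESC;
"@
# Run before and after a change, then compare reads, CPU and duration.
Invoke-DbaQuery -SqlInstance 'SQL01' -Database 'MyAppDb' -Query $sql | Format-Table -AutoSize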

CPU graph showing performance over time.

We also used Apdex, which is a standardised way of measuring application performance. It ranks page views on whether the user is satisfied, tolerating or frustrated. The more users we move out of the frustrated and tolerating groups and into satisfied, the higher the Apdex score will be. As our project moved through release cycles, we were able to see steady increases in our Apdex scores. Apdex also allowed us to identify what was hurting us most and create tickets based on this information.
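
For anyone who hasn’t come across it, the standard Apdex formula is (satisfied + tolerating / 2) / total samples. A quick sketch with made-up numbers:

# Made-up sample counts, purely for illustration.
$satisfied  = 800
$tolerating = 150
$frustrated = 50
$total = $satisfied + $tolerating + $frustrated
$apdex = ($satisfied + ($tolerating / 2)) / $total
'{0:N2}' -f $apdex   # 0.88 - 0 means everyone is frustrated, 1 means everyone is satisfied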

Top Quality Load Test Environment

We had a top-quality load test environment which used masked production backups for the databases. I set up the availability groups to match production, and the servers were all sized the same as production, with the same internal settings such as tempdb size, sp_configure settings and trace flags. We were able to replay the same tests over and over again using Gatling, and our testers made really useful reports available to help us analyse the performance of each hotfix. If it was a proven fix, it was promoted to a release branch; if it wasn’t, it was binned.
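
One quick way to check that kind of configuration parity is with dbatools, sketched below. The instance names are placeholders, and the property names (ConfigName, ConfiguredValue) reflect recent dbatools versions, so check the Get-DbaSpConfigure output on your own build.

# Placeholder instance names; flags any sp_configure values that differ.
$prod = Get-DbaSpConfigure -SqlInstance 'ProdSQL01'
$test = Get-DbaSpConfigure -SqlInstance 'LoadTestSQL01'

Compare-Object -ReferenceObject $prod -DifferenceObject $test `
    -Property ConfigName, ConfiguredValue |
    Sort-Object ConfigName |
    Format-Table ConfigName, ConfiguredValue, SideIndicator -AutoSize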

End Game

This intensity was kept up for almost two months, and it was ultimately transformative for the business. Whilst there are still many further improvements that can be made, the specialised squad is being disbanded and team members are being reallocated to other squads. Performance should be a way of life now, rather than an afterthought or another performance tuning project.

We can be happy that we improved the Apdex scores, sped up a huge number of regularly used SQL transactions, and removed the large CPU peaks that dominated our core business hours.

If you enjoyed this, you may also enjoy some of these other posts.

  • https://johnmccormack.it/2020/05/how-dbatools-can-help-with-performance-tuning/
  • https://johnmccormack.it/2019/03/put-tempdb-files-on-d-drive-in-azure-iaas/

Filed Under: front-page, Guides, SQL Server Tagged With: Performance tuning, project, Scrum, sql, SQL Performance, SQL server

IaaS++ (Azure SQL Server IaaS Agent Extension)

11th December 2020 By John McCormack


Most DBAs or cloud practitioners have seen a graph similar to the one below. It shows the trade-off between flexibility and responsibility across the different methods of adopting SQL Server in Azure. SQL Server on VMs gives you the most flexibility but also the most administrative work. SQL DB single instance handles almost all of the “heavy lifting” (things like backups, OS patching and installation), but gives you the least flexibility. Azure SQL DB managed instance lies somewhere in between. SQL Server on VMs is known as Infrastructure as a Service (IaaS). SQL DB (single DB or managed instance) is known as Platform as a Service (PaaS).

flexibility vs responsibility graph

But now there is another option, called the SQL Server IaaS Agent extension. I think of it as IaaS++, as it extends your SQL Server VMs to give them some of the heavy-lifting functionality that the PaaS offerings provide, whilst still allowing you full control over the instance.

What do you get with SQL Server IaaS Agent extension?

The two main items I will go into here are automated backups and automated patching. These are standard on most PaaS products across all cloud providers, but it is only with the introduction of this “IaaS++” extension that you can now get them for SQL Server on VMs.

You can also configure storage, high availability, Azure Key Vault integration and R Services, as well as enabling a subscription-wide view of all your instances and license types; however, this post only focuses on automated backups and patching.

Real world scenarios

Patching

My client had fallen behind with patching and needed to ensure that important servers were patched regularly. Enabling automated patching meant that they could have only the important patches applied during an agreed window, and then look at other patches and cumulative updates when it suited them. They had a test environment that mirrored production, with a three-node availability group cluster (automatic failover was enabled), so I was able to test the solution there before going anywhere near production. The plan was as simple as this:

  1. Add a 90 minute window at 12:00 for Server1
  2. Add a 90 minute window at 02:00 for Server2
  3. Add a 90 minute window at 04:00 for Server3.

This approach allowed 30 minutes at the end of each window for VMs to be restarted before the next VM’s window started.

  • Click on automated patching from the SQL Virtual Machine in Azure Portal.
  • Update the toggles to set your patching window.
  • Daily or weekly schedules can be chosen.
  • If patches are applied, your VM will be restarted.
IaaS Extension Automated Patching

This approach allowed them to move from 44 outstanding patches to 4, across 3 servers, without manual intervention. Failovers happened seamlessly. I’d just urge a word of caution with critical production systems, as this will restart your VMs. Are you ready for that? My advice is to get comfortable with it on non-prod systems before starting on production.
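
If you’d rather script the patching windows than click through the portal, a minimal sketch using the Az.SqlVirtualMachine module is below. The resource names are placeholders, and the flattened parameter names reflect recent module versions, so verify them with Get-Help Update-AzSqlVM before relying on this.

# Placeholder names; verify parameter names against your Az.SqlVirtualMachine version.
Import-Module Az.SqlVirtualMachine

# Server1's window: daily at midnight for 90 minutes.
Update-AzSqlVM -ResourceGroupName 'rg-sql' -Name 'Server1' `
    -AutoPatchingSettingEnable `
    -AutoPatchingSettingDayOfWeek 'Everyday' `
    -AutoPatchingSettingMaintenanceWindowStartingHour 0 `
    -AutoPatchingSettingMaintenanceWindowDuration 90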

I think it’s a great feature. It’s not always possible to just move to Managed Instance, so for those of us who need a full SQL Server install, this is a handy hybrid.

Backups

Another client was using the Ola Hallengren solution for managing backups. It’s the best solution out there when you need to configure your own backups, but what if your cloud provider will do it for you? This client also didn’t have an experienced DBA, so in this case it was better to let Microsoft do it. What’s more, you can configure a retention period of between 1 and 30 days to stop your storage costs from growing indefinitely.

Before starting, make sure you don’t have your own backup solution running in parallel.

  • Click on automated backups
  • Configure the toggles to suit your needs
  • Link it to a storage account
  • Check your backups are working as expected and can be restored

These tasks can also be automated using PowerShell or the Azure CLI; a rough sketch is below, and I may cover this in more detail in a future post.
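
As a starting point, here’s a minimal sketch along the same lines as the patching example. Again, all names are placeholders and the flattened parameter names are my assumption based on recent Az.SqlVirtualMachine versions, so verify them with Get-Help Update-AzSqlVM before use.

# Placeholder names; links automated backup to an existing storage account.
$rg = 'rg-sql'
$sa = 'sqlbackupsa'
$key = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name $sa)[0].Value
$url = (Get-AzStorageAccount -ResourceGroupName $rg -Name $sa).PrimaryEndpoints.Blob

Update-AzSqlVM -ResourceGroupName $rg -Name 'Server1' `
    -AutoBackupSettingEnable `
    -AutoBackupSettingRetentionPeriod 30 `
    -AutoBackupSettingStorageAccessKey $key `
    -AutoBackupSettingStorageAccountUrl $url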

Popular posts on johnmccormack.it

https://johnmccormack.it/2020/10/how-do-i-set-up-database-mail-for-azure-sql-db-managed-instance/
https://johnmccormack.it/2019/03/put-tempdb-files-on-d-drive-in-azure-iaas/

Filed Under: Azure, Azure SQL DB, Azure VM, front-page Tagged With: azure, azure iaas, IaaS++, SQL server

Troubleshooting Transactional Replication in SQL Server

4th February 2016 By John McCormack

This might make me the odd one out, but I actually really like replication. It took me a while to get comfortable with it, but when I did, and when I learned how to troubleshoot transactional replication confidently, I became a fan. Since I exclusively use transactional replication, and not snapshot replication or merge replication, this post is only about transactional replication and, in particular, how to troubleshoot transactional replication errors.

In the production system I work on, replication is highly reliable and rarely, if ever, gives the DBAs headaches. It can be less reliable in our plethora of dev and QA boxes, probably down to the rate of change in these environments with regular refreshes. Due to this, I’ve had to fix it many times. As I explain how I troubleshoot replication errors, I assume you know the basics of how replication works. If you don’t, a really good place to start is Books Online, which describes how replication uses a publishing metaphor and covers all the component parts in detail.

[Read more…]

Filed Under: front-page, Guides Tagged With: replication error 1205, replication error 21074, replication error 3729, SQL server, SQLNEWBLOGGER, transactional replication

Database recovery models

26th August 2019 By John McCormack


This post is about database recovery models for SQL Server databases. Having the correct recovery model is crucial to your backup and restore strategy for a database. It also determines whether you need to manage the transaction log yourself or can leave that task to SQL Server. Let’s look at the various recovery models and how they work.

The three database recovery models for SQL Server are:

  • Full
  • Bulk-Logged
  • Simple

Full Recovery

Every operation is logged and written to the transaction log. This is required for very low or no data loss requirements and means that regular log backups are required. If they are not taken, the transaction log will grow indefinitely (or until it fills up the disk or hits the maximum size specified). If this occurs, a 9002 error is generated and no further data modifications can take place until the log is backed up or truncated. The full recovery model is usually used in production, especially for OLTP workloads.

Until a full backup is taken, a database in the full recovery model is treated as if it were in the simple recovery model: the log is automatically truncated. So it is imperative to take a full backup of your database when you first use full recovery. (A differential backup would also suffice if there has previously been a full backup that forms part of your backup chain.)
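
Once a database is in full recovery, regular log backups keep the log in check. A minimal sketch with dbatools follows; the instance, database and path are placeholders, and in practice you’d schedule this via an Agent job or a maintenance solution.

# Placeholder instance, database and path.
Backup-DbaDatabase -SqlInstance 'SQL01' -Database 'MyAppDb' `
    -Type Log -Path '\\backupserver\sql\log'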

Choose the full recovery model if you want or need to do any of the following things:

  • Have point-in-time recovery
  • Minimise data loss, including potentially no data loss
  • Recover to a marked transaction
  • Restore individual pages
  • Use piecemeal restores on very large databases (VLDBs) with multiple filegroups
  • Use availability groups, database mirroring or log shipping

Bulk-Logged Recovery

For minimally logged operations, only the modified extents are recorded in the transaction log, which means recovery also relies on the database’s data files. Certain types of transaction are classed as minimally logged, including:

  • BCP
  • BULK INSERT
  • INSERT INTO … SELECT
  • SELECT INTO
  • CREATE INDEX
  • ALTER INDEX
  • DROP INDEX

You are also required to take log backups when using the bulk-logged recovery model. The log backups refer to the Bulk Changed Map (BCM) to identify the modified extents that need to be backed up. Read up more on the BCM: Further Reading

Choose the bulk-logged recovery model if you want or need to do any of the following things:

  • Improve performance of bulk-logged operations
  • Minimise the size of the transaction log

Simple Recovery

The transaction log is automatically managed by SQL Server and cannot be backed up. SQL Server reclaims the space by automatically truncating the log. This recovery model is mostly used for non-prod environments. Although SQL Server manages the transaction log in the simple recovery model, there is still a chance your t-log could fill up. This would occur if open transactions are left in place (not committed or rolled back), or because of long-running transactions.
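
If a log is growing when you don’t expect it to, sys.databases will tell you what is preventing log reuse. A minimal sketch, with a placeholder instance name:

# Placeholder instance; ACTIVE_TRANSACTION points at open or long-running transactions.
Invoke-DbaQuery -SqlInstance 'SQL01' -Query @"
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
ORDER BY name;
"@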

Choose the simple recovery model in the following scenarios:

  • You can afford data loss since the latest full backup (or latest differential)
  • It’s a non-prod database and can easily be restored or refreshed
  • Read-only databases
  • Rarely updated databases, including data warehouses where the data is updated only once per day
  • If you are utilising a third-party solution such as SAN snapshots
    • Important Note: If you are a DBA or have responsibility for backups/restores, make sure that you fully understand any backup technology in place and that you know how to recover using it.

Switching recovery model

The default recovery model is determined by the recovery model of the model database, but it can be changed at any time using an ALTER DATABASE command (a dbatools sketch follows the list below). If you do change the recovery model, you’ll need to understand the impact on the backup chain.

  • Switching from FULL or BULK_LOGGED to SIMPLE will break the log backup chain. It’s worthwhile taking one last log backup before making this switch; this will allow you point-in-time recovery up to that point.
  • Switching from SIMPLE to FULL or BULK_LOGGED requires a new full backup (or a differential backup, if a full already exists) in order to initialise a new log chain. So, take one of these backups as soon as possible so that you can start backing up your logs.
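
Putting the list above into practice, a minimal sketch with dbatools; the instance, database and paths are placeholders.

# Placeholder instance/database names.
# FULL -> SIMPLE: take one last log backup first to keep point-in-time recovery up to now.
Backup-DbaDatabase -SqlInstance 'SQL01' -Database 'MyAppDb' -Type Log -Path '\\backupserver\sql\log'
Set-DbaDbRecoveryModel -SqlInstance 'SQL01' -Database 'MyAppDb' -RecoveryModel Simple -Confirm:$false

# SIMPLE -> FULL: switch, then take a full backup to initialise the new log chain.
Set-DbaDbRecoveryModel -SqlInstance 'SQL01' -Database 'MyAppDb' -RecoveryModel Full -Confirm:$false
Backup-DbaDatabase -SqlInstance 'SQL01' -Database 'MyAppDb' -Type Full -Path '\\backupserver\sql\full'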

Verifying Recovery Model

There’s a handy dbatools PowerShell command, Test-DbaDbRecoveryModel, for checking that the databases you think are in the full recovery model are actually in the full recovery model.

Test-DbaDbRecoveryModel -SqlInstance localhost | Select InstanceName, Database, ConfiguredRecoveryModel, ActualRecoveryModel | Out-GridView

Note: In the output below, you can see that the NebulaDataSolutions database is configured for FULL recovery but is actually in SIMPLE. This is because there is no valid full backup for NebulaDataSolutions.

Test-DbaDbRecoveryModel output

Filed Under: front-page, SQL Server Recovery Models Tagged With: 70-764, BULK-LOGGED RECOVERY, database recovery models, FULL RECOVERY, SIMPLE RECOVERY, SQL Server recovery models
