John McCormack DBA

SQL Server Databases and Cloud


PowerShell Splatting – What was wrong with backticks?

8th December 2020 By John McCormack

T-SQL Tuesday


This month's T-SQL Tuesday is hosted by Lisa Griffin Bohm (b|t). Lisa asks: “This month, I’d like those of you who have presented, or written a presentation, to share something technical THAT DID NOT RELATE to the topic of the presentation, that you’ve learned in writing or giving the presentation.” I'm going to write about how I came across PowerShell splatting, and how it made me better at PowerShell, despite presenting on a cloud topic.

PowerShell splatting

At DataScotland 2019, I did a presentation on AWS RDS for SQL Server. The technical content was about how RDS works, what you can do with it, how to provision it, and so on. As part of my demos, I decided to use AWS PowerShell commands. When I had given this presentation at previous events, I had used the AWS CLI, so I had to update my code examples. I'm OK with PowerShell, but I'm not an expert. I just wanted to show that there were multiple ways to interface with AWS.

My code was full of backticks. You could say I was daft about backticks. I loved them and thought they made my PowerShell code readable because they stopped the lines from running off the monitor. Then someone asked me why I didn't use PowerShell splatting. "Whatting?" I asked. I had never heard of splatting.

At the break, I spoke to a couple of people who were more experienced in PowerShell than me. They advised that PowerShell splatting was the way to go for large commands because it makes the code more readable. More readable was definitely something I was interested in, so I decided to go away, learn about splatting, and update my code for future events.

So what is PowerShell Splatting?

Rather than passing a long list of parameters into a command, you can create a variable in advance to hold those values. The variable is an array or a hash table and includes as many parameters as you need. If you need to pass in parameter names and values, use a hash table; if you just need to pass in a list of parameter values, use an array. Then, when you run the command, you reference the variable with an @ instead of a $ (e.g. @HashArguments), and PowerShell expands it into the individual parameters for you.

Example command before splatting

New-RDSDBInstance -dbinstanceidentifier "datascotland-posh" -region "eu-central-1" -VpcSecurityGroupId "sg-00a1234g567c3d4ab" `
    -allocatedstorage 20 -dbinstanceclass "db.t2.micro" -engine "sqlserver-ex" `
    -masterusername "rds_name" -masteruserpassword "secure_pw_here" -availabilityzone "eu-central-1a" `
    -port 50000 -engineversion "14.00.3049.1.v1"

Example command after splatting

$HashArguments = @{
    dbinstanceidentifier = "datascotland-posh"
    region               = "eu-central-1"
    VpcSecurityGroupId   = "sg-00a1234g567c3d4ab"
    allocatedstorage     = 20
    dbinstanceclass      = "db.t2.micro"
    engine               = "sqlserver-ex"
    masterusername       = "rds_name"
    masteruserpassword   = "secure_pw_here"
    availabilityzone     = "eu-central-1a"
    port                 = 50000
    engineversion        = "14.00.3049.1.v1"
}
New-RDSDBInstance @HashArguments
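
Splatting also works with an array when a command takes positional parameters. This is just a minimal illustrative sketch using Get-ChildItem, not part of my RDS demo:

$ArrayArguments = "C:\Temp", "*.log"   # positional values for -Path and -Filter
Get-ChildItem @ArrayArguments          # same as: Get-ChildItem -Path "C:\Temp" -Filter "*.log"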

At a glance

As you can see, the second example, which uses splatting, is easier to read: you can pick out the value of each parameter at a glance. It was worth learning, both to make my code more readable and to improve my overall PowerShell knowledge.

“Writing this post has just made me realise that I should update my RDS course, as the examples in it don’t use splatting. 🤦‍♂️”

SQL Server on Amazon RDS (Free Course)

Filed Under: front-page, PowerShell, T-SQL Tuesday, Uncategorised Tagged With: powershell, splatting, t-sql tuesday

Your SQL Server indexes can’t always be perfect

27th November 2020 By John McCormack

Perfection is not the aim

Your SQL Server indexes can't always be perfect. What I mean by that is that you can't index for every query on a busy SQL Server. In this post, I'm going to describe just one table from a server I have been tuning recently. It had 26 indexes, and that was not unusual for this database. This most likely happened over time as developers created an index per query that used the table, as well as followed some perceived best practices (with the best of intentions).

Did you say 26 indexes?

Yes, one of the heavily used tables on this database had 26 indexes. It had one clustered index on an ID column, which is fairly standard. There were 25 further non clustered indexes, including an index per foreign key. A shared code base runs across several customers. Some customers use all the functionality, some only use parts of it, but they all get the same database schema. And all of those indexes.

Current usage and how to fix

Remove unused indexes

9 of the non clustered indexes on this table were read less than once per day, yet written to up to 18,000 times per day. The servers had been online for 180 days at the time of checking, and some of the index read counts were in single or double figures. These needed to be dropped as they were not bringing anything to the party.

I got this info by running sp_BlitzIndex and comparing the index usage stats against the server uptime. (I took care to run it on all production servers for all customers, including read-only secondaries.)
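
If you want to run the same check, a minimal sketch looks something like this, assuming dbatools and the First Responder Kit are installed (the instance and database names are placeholders):

$BlitzArgs = @{
    SqlInstance = "PRODSQL01"       # placeholder instance name
    Database    = "CustomerDB"      # placeholder database name
    Query       = "EXEC dbo.sp_BlitzIndex @DatabaseName = N'CustomerDB', @Mode = 2;"
}
# Mode 2 returns the per-index usage detail to compare against uptime
Invoke-DbaQuery @BlitzArgs
# Confirm how long the usage stats have been accumulating
Get-DbaUptime -SqlInstance "PRODSQL01"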

Combine similar indexes

This left 16 remaining non clustered indexes to review.

  • 9 of them had been used several million times, with a maximum of 32 million seeks.
    • 5 of those 9 appeared to overlap other indexes heavily.
      • I needed to see if these could be combined into 1 or 2 indexes.
    • 4 of those 9 were fairly unique.
      • Those 4 would be retained.
  • 7 had been used hundreds of thousands of times and didn't appear to have much overlap.

This leaves us with 4 non clustered indexes being retained, 9 being dropped, and 12 to be reviewed in more detail. The review is the most difficult stage of the process, as I need to decide which indexes can be merged and which can be dropped. It's always best to be cautious with this part, because removing the wrong index could slow down an important process.

Of the 12 indexes that needed a more detailed look, I decided to drop 8 and create 3 new ones: a net reduction of 5 indexes for this stage. Multiple groups of indexes had the same keys but different include columns. Some of the include column ranges were very wide and included large data types such as large NVARCHARs. The replacement indexes used the same keys, in the same order, but had a more targeted range of include columns.
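
To illustrate the pattern (the table, index and column names below are invented, not the client's schema), a consolidation looks something like this:

# Illustrative sketch only - object names are made up
$ConsolidateSql = @"
/* Two overlapping indexes: same key, different wide INCLUDE lists */
DROP INDEX IX_Orders_CustomerId_WideIncludes1 ON dbo.Orders;
DROP INDEX IX_Orders_CustomerId_WideIncludes2 ON dbo.Orders;

/* One replacement: same key, same order, targeted INCLUDE list */
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, StatusCode);
"@
Invoke-DbaQuery -SqlInstance "PRODSQL01" -Database "CustomerDB" -Query $ConsolidateSql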

Perfection? No. Better? Yes.

In total, we have a net reduction of 14 non clustered indexes, which still leaves us with 11. Your SQL Server indexes can't always be perfect, but 11 is better than 25. I'd really have loved to get that count down to 5, but it is a lot better than it was. This process should be repeated after the new index structure has been in place for a month, with the aim of further consolidation if possible. The new indexes won't be perfect, and some queries may not be served as well by the new consolidated indexes or by having a rarely used index removed, but I was trying to improve the overall health of the SQL Server. Reducing the number of writes that were hammering the storage is a step in the right direction.

How did we get so many indexes?

Index creation was all developer-led, so when new functionality was rolled out, indexes were created for each new process. This resulted in a lot of overlapping indexes. I don't blame the developers; there should just have been a process for reviewing index requests and comparing them to the indexes we already had.

How to avoid in future

I see two key approaches that would help avoid a repeat of this death by indexing.

  1. Don't hoard the analysis. Sharing the results with the development team will help increase awareness of this death by indexing scenario.
  2. Implement an approval process for pull requests, with a DBA in the approvers list whenever a change includes a new index.

Verify improvements

Index changes can affect multiple queries using the same table, so testing just one query or stored procedure is not enough to release with confidence. Where possible, use a load testing environment which mirrors production, and replay a repeatable workload. By baselining a typical workload, then making the changes and taking a new benchmark, you can measure the improvements overall and for specific database calls.

With this client, there was a load testing environment with synthetic API calls to the application that attempted to mirror production. Not everyone has this luxury when testing, but capturing and replaying a workload, and measuring the differences, is vital for this type of performance tuning. It can help you catch any serious regressions as well as give you the confidence that your performance will improve.

If you don't already have a suitable load testing environment, the tool I recommend to get you going is an open source project called WorkloadTools by Gianluca Sartori.

More handy links

A successful performance tuning project
How DBATools can help with performance tuning

Filed Under: front-page, Guides, Performance Tuning Tagged With: indexes, Performance tuning, SQL Performance, SQL server

What is an availability group listener – An analogy

13th October 2020 By John McCormack

This post is part of the October 2020 edition of #tsql2sday. Thanks to Rob Volk (blog|twitter) for hosting and for coming up with a great topic: “Data Analogies, or: Explain Databases Like I’m Five!” I genuinely can't wait to read some of the responses. I think it's going to be a very educational series of posts. I've chosen to explain how an availability group listener works by using an analogy.

What is an availability group listener – An Analogy

Let’s say you run a business and have multiple phone numbers. You may have an office number, a mobile number and a fancy 0800 number to give your business the impression of being much bigger, like your national competitors. You put the 0800 number on your van, website, all of your advertising and your business cards. You’re not actually a national scale business though and you don’t have a receptionist to handle the calls into your 0800 number. So what happens to those calls?

They are routed through to your preferred number, usually your office number, but you can change it to your mobile number when you are out of the office. You could even set a preference for it to try your office first, then your mobile phone if the office is unavailable. Customers who have your mobile or office number can still call you directly on those, but they will not be rerouted if either of those phones is unavailable. If you change your mobile number, you can just update the routing to use your new number and the customer is unaware of the change.

What is an availability group listener – A slightly (but not much) deeper dive

What’s in a name

Like the analogy above, the AG listener uses an address that can route SQL Server connections to a primary or secondary replica within an availability group. It consists of a DNS name which is unique within the domain, one or more IP addresses, and a listener port designation.

Connections to SQL Server where availability groups are in use should use the listener name instead of the server name. This means that if any replica in the AG is unavailable, connections will just be routed to the available replicas, meaning no loss of service to the client. You can utilise your SQL Server resources more evenly if using a listener, because there is an option to direct certain connections to a readable secondary replica. You can also offload backups to secondary replicas. All of this spreads the load more evenly across the available replicas. If you route everything through the primary, it can be overworked whilst your secondary replicas are doing next to nothing. The secondary replicas still need to be fully licensed, so it is a shame, and a waste of money, for them to sit idly by.

Port

If you're looking for simplicity, designate the listener port as 1433, as no specific port declaration is then required in the connection string. If you need to, you can designate a different port, but you must include it in your connection string.
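
For example (the listener name and database below are placeholders), the two connection strings might look like this:

# Default port 1433 - no port needed in the connection string
$connDefault = "Server=tcp:MyAgListener;Database=MyDatabase;Integrated Security=True;MultiSubnetFailover=True"
# Non-default port 50000 - the port must be included after the listener name
$connCustom = "Server=tcp:MyAgListener,50000;Database=MyDatabase;Integrated Security=True;MultiSubnetFailover=True"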

Seamless

If your primary replica becomes unavailable and you have automatic failover enabled, your old secondary replica can switch seamlessly to become the new primary. New connections won't notice a thing.

Spread the load

If you have more than one secondary, you can spread the load across a set of readable secondaries, meaning that all replicas take a share of the workload. Prior to SQL Server 2016, only one preferred replica would receive all of the read-intent traffic.
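
As a rough sketch of the SQL Server 2016+ syntax (the availability group and replica names are invented), the nested parentheses are what make the routing list load-balanced rather than strictly ordered:

# Run against the current primary; names below are placeholders
$RoutingSql = @"
ALTER AVAILABILITY GROUP [MyAg]
MODIFY REPLICA ON N'SQL01' WITH
(
    PRIMARY_ROLE
    (
        /* SQL02 and SQL03 share read-intent traffic; SQL01 is the fallback */
        READ_ONLY_ROUTING_LIST = (('SQL02','SQL03'), 'SQL01')
    )
);
"@
Invoke-DbaQuery -SqlInstance "SQL01" -Query $RoutingSql

Remember that read-intent routing only kicks in for connections that specify ApplicationIntent=ReadOnly.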

Thanks again to Rob for coming up with an innovative topic.

John

If you liked this post, why not read:

Test read intent connections to an AG Listener

Filed Under: front-page, T-SQL Tuesday Tagged With: ag, AG Listener, always on availability group, availability group, listener, sql, SQL server, t-sql tuesday

How Multi-AZ Works For SQL Server on RDS

4th August 2020 By John McCormack

How Multi-AZ Works For SQL Server on RDS

This is a short video about how Multi-AZ works for SQL Server on RDS. I didn't feel a demo was needed this time, just for the sake of it, as enabling Multi-AZ is a click of a button or one line of PowerShell or CLI.
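
For reference, that one line of PowerShell looks something like this, assuming the AWS Tools for PowerShell module is installed (the instance identifier is a placeholder):

# Enable Multi-AZ on an existing instance; identifier is a placeholder
Edit-RDSDBInstance -DBInstanceIdentifier "datascotland-posh" -MultiAZ $true -ApplyImmediately $true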

Important: Please read this post for further information. I've highlighted some key points in my presentation, but there is no point in me simply listing out everything or reading the post verbatim. There is some really great info in that post, so it is well worth reading.

Download slides

Course Homepage | Next video

Filed Under: AWS, AWS RDS, AWS SQL Server, SQL Server, Training

Backup and restore for AWS RDS SQL Server

30th July 2020 By John McCormack

Backup and restore for AWS RDS SQL Server

In this video, we talk about backup and restore for AWS RDS SQL Server. One of the great benefits of Platform as a Service (PaaS) is that your cloud hosting company will handle the backups for you. And they’re really good at it. In this case, AWS will allow you to restore to any point in time within the last 35 days, and all with no administration effort on your part.

This video dives a bit deeper into the non-default options, such as backing up and restoring using native backups, and how you can keep those backups well beyond the 35 days.
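
To give a flavour, starting a native backup to S3 is done with an RDS stored procedure, something like this (the endpoint, database name and bucket ARN are placeholders, and the native backup/restore option group must be configured):

$BackupSql = @"
EXEC msdb.dbo.rds_backup_database
    @source_db_name = N'MyDatabase',
    @s3_arn_to_backup_to = N'arn:aws:s3:::my-bucket/MyDatabase.bak';
"@
# Endpoint is a placeholder; supply credentials as needed (e.g. -SqlCredential)
Invoke-DbaQuery -SqlInstance "myinstance.example.eu-west-1.rds.amazonaws.com" -Query $BackupSql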

Download Slides | View T-SQL Script

Course Homepage | Next video

Filed Under: AWS RDS, AWS SQL Server, SQL Server, Training
