John McCormack DBA

SQL Server Databases and Cloud


PowerShell Splatting – What was wrong with backticks?

8th December 2020 By John McCormack

T-SQL Tuesday


This month’s T-SQL Tuesday is hosted by Lisa Griffin Bohm (b|t). Lisa asks: “This month, I’d like those of you who have presented, or written a presentation, to share something technical THAT DID NOT RELATE to the topic of the presentation, that you’ve learned in writing or giving the presentation.” I’m going to write about how I came across PowerShell Splatting and how it made me better at PowerShell, despite presenting on a cloud topic.

PowerShell splatting

At DataScotland 2019, I did a presentation on AWS RDS for SQL Server. The technical content was about how RDS works, what you can do with it, how to provision it, etc. As part of my demos, I decided to use AWS PowerShell commands. When I had given this presentation at previous events, I had used the AWS CLI, so I had to update my code examples. I’m ok with PowerShell, but I’m not an expert. I just wanted to show that there were multiple ways to interface with AWS.

My code was full of backticks. You could say I was daft about backticks. I loved them and thought they made my PowerShell code readable because they stopped the lines from running off the monitor. Then someone asked me why I didn’t use PowerShell splatting. “Whatting?” I asked. I had never heard of splatting.

At the break, I spoke to a couple of people who were more experienced in PowerShell than me. They advised that PowerShell splatting was the way to go for large commands because it makes the code more readable. More readable code was definitely something I was interested in, so I decided to go away, learn about splatting, and update my code for future events.

So what is PowerShell Splatting?

Rather than passing a long list of parameters into commands, you can create a variable in advance to hold these values. The variable is an array or a hash table and includes as many parameters as you need. If you need to pass in parameter names and values, use a hash table. If you just need to pass in a list of parameter values, use an array. Then, when you run the command, you simply pass in the splatted variable (prefixed with @ instead of $) in place of all the individual parameters.

Example command before splatting

New-RDSDBInstance -dbinstanceidentifier "datascotland-posh" -region "eu-central-1" -VpcSecurityGroupId "sg-00a1234g567c3d4ab" `
    -allocatedstorage 20 -dbinstanceclass "db.t2.micro" -engine "sqlserver-ex" `
    -masterusername "rds_name" -masteruserpassword "secure_pw_here" -availabilityzone "eu-central-1a" `
    -port 50000 -engineversion "14.00.3049.1.v1"

Example command after splatting

$HashArguments = @{
    dbinstanceidentifier = "datascotland-posh"
    region               = "eu-central-1"
    VpcSecurityGroupId   = "sg-00a1234g567c3d4ab"
    allocatedstorage     = 20
    dbinstanceclass      = "db.t2.micro"
    engine               = "sqlserver-ex"
    masterusername       = "rds_name"
    masteruserpassword   = "secure_pw_here"
    availabilityzone     = "eu-central-1a"
    port                 = 50000
    engineversion        = "14.00.3049.1.v1"
}
New-RDSDBInstance @HashArguments

At a glance

As you can see, the second example, which uses splatting, is easier to read: you can pick out the value of each parameter at a glance. It was worth learning, both to make my code more readable and to improve my overall PowerShell knowledge.
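For completeness, splatting also works with a plain array when you only want to pass values positionally. This is my own minimal sketch rather than something from the talk, using Copy-Item purely as a generic example:

$ArgList = "C:\temp\report.csv", "C:\backup\report.csv"
Copy-Item @ArgList -WhatIf

The two values are bound to -Path and -Destination by position, and -WhatIf keeps the demo harmless.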

“Writing this post has just made me realise that I should update my RDS course as the examples in it don’t use splatting. 🤦‍♂️”

SQL Server on Amazon RDS (Free Course)


What is an availability group listener – An analogy

13th October 2020 By John McCormack

This post is part of the October 2020 edition of #tsql2sday. Thanks to Rob Volk for hosting (blog|twitter) and for coming up with a great topic: “Data Analogies, or: Explain Databases Like I’m Five!” I genuinely can’t wait to read some of the responses. I think it’s going to be a very educational series of posts. I’ve chosen to explain how an availability group listener works by using an analogy.

What is an availability group listener – An Analogy

Let’s say you run a business and have multiple phone numbers. You may have an office number, a mobile number and a fancy 0800 number to give your business the impression of being much bigger, like your national competitors. You put the 0800 number on your van, website, all of your advertising and your business cards. You’re not actually a national scale business though and you don’t have a receptionist to handle the calls into your 0800 number. So what happens to those calls?

They are routed through to your preferred number, usually your office number, but you can change it to your mobile number when you are out of the office. You could even set a preference for it to try your office first, then try your mobile phone if the office is unavailable. Customers who have your mobile number or office number can still call you directly on those, but they will not be rerouted if either of those phones is unavailable. If you change your mobile number, you can just update the routing to use your new number and the customer is unaware of the change.

What is an availability group listener – A slightly (but not much) deeper dive

What’s in a name

Like the analogy above, the AG listener uses an address that can route SQL Server connections to a primary or secondary replica within an availability group. It includes a DNS name which is unique within the domain, one or more IP addresses, and a listener port designation.
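For illustration, creating a listener is typically a one-off piece of T-SQL along these lines. The availability group, listener name, IP address and subnet mask below are all made up:

ALTER AVAILABILITY GROUP [FinanceAG]
ADD LISTENER N'FinanceAGListener' (
    WITH IP ((N'10.0.0.50', N'255.255.255.0')),
    PORT = 1433
);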

Connections to SQL Server where availability groups are in use should use the listener name instead of the server name. This means that if any replica in the AG is unavailable, connections will just be routed to the available replicas, meaning no loss of service to the client. You can also utilise your SQL Server resources more evenly with a listener, because there is an option to direct certain connections to a readable secondary replica, and you can offload backups to secondary replicas. All of this spreads the load more evenly across the available replicas. If you just route everything through the primary, it can be overworked whilst your secondary replicas are doing next to nothing. The secondary replicas still need to be fully licensed, so it is a shame, and a waste of money, for them to sit idle.
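As a rough example, a reporting application could declare its read intent in the connection string so that read-only routing can send it to a readable secondary. The server, database and options here are illustrative:

Server=tcp:FinanceAGListener,1433;Database=FinanceDB;Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True;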

Port

If you’re looking for simplicity, you should designate the listener port as 1433, as no specific port declaration is required in the connection string. However, if you need to, you can designate a different port, but must include it in your connection string.

Seamless

If your primary replica becomes unavailable and you have automatic failover enabled, your old secondary replica can switch seamlessly to become the new primary. New connections won’t notice a thing.
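Automatic failover depends on the replica being configured for it. A hedged sketch, with a made-up AG and replica name, of the two settings involved:

ALTER AVAILABILITY GROUP [FinanceAG]
MODIFY REPLICA ON N'SQLNODE2'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC);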

Spread the load

If you have more than one secondary, you can spread the load across a set of readable secondaries. This means that all replicas take a share of the workload. Prior to SQL Server 2016, only one preferred replica would receive all of the read-intent traffic.
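From SQL Server 2016 onwards, the nested brackets in the read-only routing list define a load-balanced set of secondaries rather than a strict preference order. A sketch with hypothetical replica names:

ALTER AVAILABILITY GROUP [FinanceAG]
MODIFY REPLICA ON N'SQLNODE1'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ((N'SQLNODE2', N'SQLNODE3'), N'SQLNODE1')));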

Thanks again to Rob for coming up with an innovative topic.

John

If you liked this post, why not read:

Test read intent connections to an AG Listener


Optimising a slow stored procedure

14th July 2020 By John McCormack


In this blog post, I describe optimising a slow stored procedure. It’s part of a series of posts by members of the SQL Server community in relation to T-SQL Tuesday. For this month’s T-SQL Tuesday, Kerry Tyler asks:

Tell me (us all, obviously) about something recently that broke or went wrong, and what it took to fix it. Of course, the intent here would be for this to be SQL Server-related, but it doesn’t have to be. We can all learn from something going wrong in infrastructure-land, or how there was a loophole in some business process that turned around and bit somebody’s arm. It doesn’t even have to be all that recent–maybe you’ve got a really good story about modem banks catching on fire and that’s how you found out the fire suppression system hadn’t been inspected in years. Just spitballin’ here. If you’ve got an incident whose resolution can help someone else avoid the same problem in the future or improve a policy as a preventative measure, let us hear about it.

The situation

I received a call out of hours that a key web page in the application was suffering from timeouts. It was a page for managing important finance data, and our team in the US was unable to work. I needed to work out what had changed and how to fix it.

Identifying the problem

All HTTP requests are logged, so I took a quick look at the logs to see if there were any problems. It stood out that one stored procedure in particular was timing out. I quickly ran a trace (sorry, XE fans) and found that one particular proc was running for 30 seconds every time (this is the application timeout value). I took the proc name and parameters, ran it from SSMS, and found that the procedure took 4 minutes to complete.
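If you would rather not run a trace, a quick alternative (not what I used on the night) is to check sys.dm_exec_procedure_stats for the procedures with the most accumulated elapsed time since the last restart:

SELECT TOP (10)
    OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
    ps.execution_count,
    ps.total_elapsed_time / ps.execution_count / 1000 AS avg_elapsed_ms,
    ps.total_logical_reads / ps.execution_count AS avg_logical_reads
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.total_elapsed_time DESC;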

How to fix

So why had this procedure suddenly gone bad? Well, the fact is it was poorly performing anyway, and finance users were frustrated with it on the whole. My first thought was to recompile it quickly and see what happened. Like flicking a switch, it was back to around 20 seconds and the finance page would load again, albeit slowly. So the issue here was that a bad execution plan had been cached and was being reused for each subsequent execution.
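I haven’t shown the exact command, but marking a single procedure so that it gets a fresh plan on its next execution is as simple as the line below; the procedure name is hypothetical:

EXEC sp_recompile N'dbo.usp_GetFinanceSummary';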

This is, however, a very unsatisfactory fix. First of all, it doesn’t stop the problem from recurring. Secondly, it only improves the situation from broken to slow. Hardly something for the CV.

The next morning, I took a look at the stored procedure code and could see it needed to be optimised. I ran the proc on a test system and collected the key metrics, such as logical reads and CPU time, by running SET STATISTICS TIME, IO ON. To simplify the output, I always copy this into Richie Rump’s Statistics Parser.
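A sketch of that measurement step, with a made-up procedure name and parameters:

SET STATISTICS TIME, IO ON;
EXEC dbo.usp_GetFinanceSummary @StartDate = '2020-07-01', @EndDate = '2020-07-14';
SET STATISTICS TIME, IO OFF;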

This output showed me logical reads into the millions for multiple tables. Starting with the tables with the highest reads, I worked through them, looking at which columns were being queried and which predicates were being used. I used this information to design and test alternative indexes, each time remeasuring the output from SET STATISTICS TIME, IO ON. Once I had the correct indexes in place, I was able to submit a PR with the changes. When it went to production, it resulted in sub-second page loads, which made the finance team a lot happier.
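The real tables aren’t shown here, but each candidate index followed this general shape, keying on the predicate columns and including the selected columns. The table and column names are hypothetical:

CREATE NONCLUSTERED INDEX IX_Invoice_CustomerId_InvoiceDate
    ON dbo.Invoice (CustomerId, InvoiceDate)
    INCLUDE (Amount, StatusCode);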

One other thing worth mentioning is that I used SentryOne Plan Explorer (it’s free). The query plan was one of those intimidating ones, with hundreds of nodes. When I look at these in SSMS, it’s sometimes difficult to know where to start. However, in Plan Explorer there is an option to ‘Show Cumulative Costs’, which helps you see which branches of the plan can be set aside because they have little cumulative impact, rather than just showing the impact of each node within the plan. This makes reading the rest of the plan much easier because it gets a lot smaller.


Whilst you’re here, you may find the other posts interesting.

How DBATools can help with performance tuning

A successful performance tuning project


