John McCormack DBA

SQL Server Databases and Cloud


PowerShell Splatting – What was wrong with backticks?

8th December 2020 By John McCormack

T-SQL Tuesday


This month's T-SQL Tuesday is hosted by Lisa Griffin Bohm (b|t). Lisa asks: "This month, I'd like those of you who have presented, or written a presentation, to share something technical THAT DID NOT RELATE to the topic of the presentation, that you've learned in writing or giving the presentation." I'm going to write about how I came across PowerShell splatting and how it made me better at PowerShell, despite presenting on a cloud topic.

PowerShell splatting

At DataScotland 2019, I did a presentation on AWS RDS for SQL Server. The technical content was about how RDS works, what you can do with it, how to provision it, etc. As part of my demos, I decided to use AWS PowerShell commands. When I had given this presentation at previous events, I had used the AWS CLI, so I had to update my code examples. I'm ok with PowerShell, but I'm not an expert. I just wanted to show that there were multiple ways to interface with AWS.

My code was full of backticks. You could say I was daft about backticks. I loved them and thought they made my PowerShell code readable because they stopped the lines from running off the monitor. Someone asked me why I didn't use PowerShell splatting. "Whatting?" I asked. I had never heard of splatting.

At the break, I spoke to a couple of people who were more experienced in PowerShell than me. They advised that PowerShell splatting was the way to go for large commands because it makes the code more readable. More readable was definitely something I was interested in, so I decided to go away, learn about splatting, and update my code for future events.

So what is PowerShell Splatting?

Rather than passing a long list of parameters into a command, you can create a variable in advance to hold those values. The variable is an array or a hash table and can include as many parameters as you need. If you need to pass in parameter names and values, use a hash table. If you just need to pass in a list of parameter values, use an array. Then, when you run the command, you simply pass in that variable (prefixed with @ rather than $) in place of all the individual parameters.

Example command before splatting

New-RDSDBInstance -dbinstanceidentifier "datascotland-posh" -region "eu-central-1" -VpcSecurityGroupId "sg-00a1234g567c3d4ab" `
    -allocatedstorage 20 -dbinstanceclass "db.t2.micro" -engine "sqlserver-ex" `
    -masterusername "rds_name" -masteruserpassword "secure_pw_here" -availabilityzone "eu-central-1a" `
    -port 50000 -engineversion "14.00.3049.1.v1"

Example command after splatting

$HashArguments = @{
    dbinstanceidentifier = "datascotland-posh"
    region               = "eu-central-1"
    VpcSecurityGroupId   = "sg-00a1234g567c3d4ab"
    allocatedstorage     = 20
    dbinstanceclass      = "db.t2.micro"
    engine               = "sqlserver-ex"
    masterusername       = "rds_name"
    masteruserpassword   = "secure_pw_here"
    availabilityzone     = "eu-central-1a"
    port                 = 50000
    engineversion        = "14.00.3049.1.v1"
}
New-RDSDBInstance @HashArguments

At a glance

As you can see, the second example, which uses splatting, is easier to read, and you can pick out the value of each parameter at a quick glance. It was worth learning to make my code more readable and improve my overall PowerShell knowledge.
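
Splatting also works with arrays when you only want to supply values in positional order. Here is a minimal sketch (not from my original demo) using the built-in Copy-Item cmdlet:

# Positional values (Path, then Destination) held in an array
$ArrayArguments = "C:\temp\test.txt", "C:\temp\test-copy.txt"

# -WhatIf only reports what would happen, so this is safe to try
Copy-Item @ArrayArguments -WhatIf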

“Writing this post has just made me realise that I should update my RDS course ↙ as the examples in it don’t use splatting. 🤦‍♂️”

SQL Server on Amazon RDS (Free Course)

Filed Under: front-page, PowerShell, T-SQL Tuesday, Uncategorised Tagged With: powershell, splatting, t-sql tuesday

Your SQL Server indexes can’t always be perfect

27th November 2020 By John McCormack

Perfection is not the aim

Your SQL Server indexes can't always be perfect. What I mean by that is you can't index for every query on a busy SQL Server. In this post, I'm going to describe just one table from a server I have been tuning recently. It had 26 indexes, and that was not unusual for this database. This most likely happened over time as a result of developers creating an index per query that used the table, as well as following some perceived best practices (with the best of intentions).

Did you say 26 indexes?

Yes, one of the heavily used tables in this database had 26 indexes. It had one clustered index on an ID column, which is fairly standard. There were 25 further non clustered indexes, including an index per foreign key. A shared code base runs across several customers. Some customers use all the functionality, some only use parts of it, but they all get the same database schema. And all of those indexes.
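
If you want to check whether any of your own tables are hoarding indexes like this, a quick count per table is a decent starting point. A minimal sketch, run in the user database:

-- Non clustered index count per table, biggest hoarders first
SELECT t.name AS table_name,
       COUNT(*) AS nonclustered_index_count
FROM sys.indexes i
JOIN sys.tables t ON t.object_id = i.object_id
WHERE i.type = 2 -- non clustered only
GROUP BY t.name
ORDER BY nonclustered_index_count DESC;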

Current usage and how to fix

Remove unused indexes

9 of the non clustered indexes on this table were read from less than once per day but were written to up to 18,000 times per day. The servers had been online for 180 days at the time of checking and some of the index uses were in single or double figures. These needed to be dropped as they were not bringing anything to the party.

I got this info by running sp_BlitzIndex and comparing the index usage stats against the server uptime. (I took care to run it on all production servers for all customers, including read-only secondaries.)
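
If you don't have sp_BlitzIndex to hand, the underlying numbers come from sys.dm_db_index_usage_stats. A rough sketch of the sort of query I mean (run it in the database you are tuning, and remember the counters reset when the instance restarts, which is why uptime matters):

-- Non clustered indexes that are written far more than they are read
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS total_reads,
       s.user_updates AS total_writes
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
    ON i.object_id = s.object_id
    AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.type = 2 -- non clustered
ORDER BY s.user_updates DESC;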

Combine similar indexes

This left 16 remaining non clustered indexes to review.

  • 9 of which had been used several million times with a maximum of 32 Million seeks.
    • 5 out of those 9 appeared to be heavily overlapping other indexes.
      • I needed to see if these can be combined into 1 or 2 indexes.
    • 4 out of those 9 are fairly unique.
      • Those 4 will be retained.
  • 7 have been used hundreds of thousands of times and don’t appear to have much overlap.

This leaves us with 4 non clustered indexes being retained, 9 being dropped and 12 to be reviewed in more detail. This is the most difficult stage of the process, as I need to decide which indexes can be merged and which can be dropped. It's always best to be cautious with this part, because removing the wrong index could slow down an important process.

Of the 12 indexes that needed a more detailed look, I decided to drop 8, and create 3 new ones. A net reduction of 5 indexes for this stage. Multiple groups of indexes had the same keys but different include columns. Some of the include column ranges were very wide and included large data types such as large NVARCHARs. Replacement indexes used the same keys, in the same order, but had a more targeted range of include columns.
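
To illustrate the kind of consolidation I mean (the table and column names below are made up, not from the client system), two indexes with the same key but different includes can often be replaced by one index with a more targeted include list:

-- Before: two overlapping indexes with the same key
-- CREATE INDEX IX_Orders_CustomerID_A ON dbo.Orders (CustomerID) INCLUDE (OrderDate);
-- CREATE INDEX IX_Orders_CustomerID_B ON dbo.Orders (CustomerID) INCLUDE (OrderTotal, Notes);

-- After: one index with the same key and only the include columns that earn their keep
-- (the wide NVARCHAR Notes column is deliberately left out)
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, OrderTotal);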

Perfection? No. Better? Yes.

In total, we have a net reduction of 14 non clustered indexes, which still leaves us with 11. Your SQL Server indexes can't always be perfect, but 11 is better than 25. I'd really have loved to get that count down to 5, but it is a lot better than it was. This process should be repeated after the new index structure has been in place for a month, with the aim of further consolidation if possible. The new indexes won't be perfect; some queries may not be served as well by the consolidated indexes, or by having a rarely used index removed, but I was trying to improve the overall health of the SQL Server. Reducing the number of writes that were hammering the storage is a step in the right direction.

How did we get so many indexes?

Index creation was all developer-led, so when new functionality was rolled out, indexes were created for each new process. This resulted in a lot of overlapping indexes. I don't blame the developers; there should just have been a process for reviewing index requests and comparing them to what we already had.

How to avoid in future

I see two key approaches to helping avoid a repeat of this death by indexing.

  1. Don't hoard the analysis. Sharing the results with the development team will help increase awareness of this death-by-indexing scenario.
  2. Implement an approval process for pull requests, with a DBA in the approvers list whenever a change includes a new index.

Verify improvements

Index changes can affect multiple queries that all use the same table, so just testing one query or stored procedure is not enough to release with confidence. Where possible, use a load testing environment which mirrors production, and replay a repeatable workload. By baselining a typical workload, then making the changes and taking a new benchmark, you can measure the improvements overall and for specific database calls.

With this client, there was a load testing environment with synthetic API calls to the application that attempted to mirror production. Not everyone has this luxury, but capturing and replaying a workload and measuring the differences is vital for this type of performance tuning. It can help you catch any serious regressions, as well as giving you the confidence that your performance will improve.

If you don't already have a suitable load testing environment, the tool I recommend to get you going is an open source project called WorkloadTools by Gianluca Sartori.

More handy links

A successful performance tuning project
How DBATools can help with performance tuning

Filed Under: front-page, Guides, Performance Tuning Tagged With: indexes, Performance tuning, SQL Performance, SQL server

What is an availability group listener – An analogy

13th October 2020 By John McCormack

This post is part of the October 2020 edition of #tsql2sday. Thanks to Rob Volk (blog|twitter) for hosting and for coming up with a great topic: "Data Analogies, or: Explain Databases Like I'm Five!" I genuinely can't wait to read some of the responses. I think it's going to be a very educational series of posts. I've chosen to explain how an availability group listener works by using an analogy.

What is an availability group listener – An Analogy

Let’s say you run a business and have multiple phone numbers. You may have an office number, a mobile number and a fancy 0800 number to give your business the impression of being much bigger, like your national competitors. You put the 0800 number on your van, website, all of your advertising and your business cards. You’re not actually a national scale business though and you don’t have a receptionist to handle the calls into your 0800 number. So what happens to those calls?

They are routed through to your preferred number, usually your office number, but you can change it to your mobile number when you are out of the office. You could even set a preference for it to try your office first, then try your mobile phone if the office is unavailable. Customers who have your mobile or office number can still call you directly on those, but they will not be rerouted if either of those phones is unavailable. If you change your mobile number, you can just update the routing to use the new number and the customer is unaware of the change.

What is an availability group listener – A slightly (but not much) deeper dive

What’s in a name

Like the analogy above, the AG listener provides an address that can route SQL Server connections to a primary or secondary replica within an availability group. It consists of a DNS name which is unique within the domain, one or more IP addresses, and a listener port designation.

Connections to SQL Server where availability groups are in use should use the listener name instead of the server name. This means that if any replica in the AG is unavailable, connections will just be routed to the available replicas, meaning no loss of service to the client. You can utilise your SQL Server resources more evenly by using a listener, because there is an option to direct certain connections to a readable secondary replica. You can also offload backups to secondary replicas. All of this spreads the load more evenly across the available replicas. If you just route everything through the primary, it can be overworked whilst your secondary replicas are doing next to nothing. The secondary replicas still need to be fully licensed, so it is a shame, and a waste of money, for them to sit by idly.

Port

If you’re looking for simplicity, you should designate the listener port as 1433, as no specific port declaration is required in the connection string. However, if you need to, you can designate a different port, but must include it in your connection string.
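
For example, a connection string pointing at a listener on a non-default port might look something like this (the listener name, database and port are made up for illustration):

Server=tcp:ag1-listener.contoso.local,50123;Database=SalesDB;Integrated Security=SSPI;MultiSubnetFailover=True;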

Seamless

If your primary replica becomes unavailable and you have automatic failover enabled, your old secondary replica can switch seamlessly to become the new primary. New connections won't notice a thing.

Spread the load

If you have more than one secondary, you can spread the load across a set of readable secondaries. This means that all replicas are taking a share of the workload. Prior to SQL Server 2016, only one preferred replica would receive all of the read-intent traffic.
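
Read-only routing is configured per replica, and read-intent clients add ApplicationIntent=ReadOnly to their connection string. A rough sketch with placeholder replica names (from SQL Server 2016 onwards, the nested brackets in the routing list give you round-robin load balancing between the listed secondaries):

-- Each readable secondary needs a routing URL (repeat for SQLNODE3)
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'tcp://sqlnode2.contoso.local:1433'));

-- The primary decides where read-intent connections are sent
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE1'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (('SQLNODE2','SQLNODE3'))));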

Thanks again to Rob for coming up with an innovative topic.

John

If you liked this post, why not read:

Test read intent connections to an AG Listener

Filed Under: front-page, T-SQL Tuesday Tagged With: ag, AG Listener, always on availability group, availability group, listener, sql, SQL server, t-sql tuesday

How do I set up database mail for Azure SQL DB Managed Instance

5th October 2020 By John McCormack


It's not too difficult to set up database mail for Azure SQL DB Managed Instance in comparison to SQL Server (on-prem or IaaS); however, there are a few extra things to consider. This post will describe how to set up database mail for Azure SQL DB Managed Instance. I will use SendGrid as the mail provider, but you can follow the same steps for any other mail provider or your company's SMTP server.

Before I go on, my personal opinion is that including database mail is a massive feature for Managed Instances. The lack of DB Mail on Azure SQL DB Single Database or Amazon RDS is a major blocker to PaaS adoption. Now with Managed Instance, we can have PaaS and database mail.

Create an Azure SQL DB Managed Instance

Whilst I'm planning a separate post on how to provision an Azure SQL DB Managed Instance, I'm going to assume for the purposes of this post that you already have an instance up and running. If you don't, have a look at this how-to article before going any further.

Change outbound settings on your Network Security Group (NSG)

All managed instances are created with an NSG. The NSG allows you to create inbound and outbound rules for traffic coming in and out of your database. You can configure ports, protocols, IP ranges and so on.

To allow secure SMTP for SendGrid, you must open port 587 for outbound traffic. Otherwise, any attempt to send mail will result in an error. You can do this in the Azure Portal by navigating to the NSG, or you can do it in PowerShell by modifying and running the code below.

# This code was originally taken from https://docs.microsoft.com/en-us/azure/service-fabric/scripts/service-fabric-powershell-add-nsg-rule
# and amended to be used to set up an outbound rule for Database Mail on Port 587

Login-AzAccount
Get-AzSubscription
Set-AzContext -SubscriptionId "yourSubscriptionID"

$RGname="RG-JOHN-DEMO"
$port=587
$rulename="DBMailOutbound"
$nsgname="nsg-johnmccormack-demo"

# Get the NSG resource
$nsg = Get-AzNetworkSecurityGroup -Name $nsgname -ResourceGroupName $RGname

# Add the outbound security rule for port 587
$nsg | Add-AzNetworkSecurityRuleConfig -Name $rulename -Description "Allow DBMail" -Access Allow `
-Protocol * -Direction Outbound -Priority 500 -SourceAddressPrefix "*" -SourcePortRange * `
-DestinationAddressPrefix * -DestinationPortRange $port

# Update the NSG
$nsg | Set-AzNetworkSecurityGroup

Set up a SendGrid account

The simplest way, if you haven't done it before, is to use the Azure Portal.

  1. Start typing SendGrid in the search bar.
  2. Click SendGrid Accounts under Services.
  3. Click + Add.
  4. Complete the form to create your new account. (See image at the bottom of this post.)
  5. Click Manage SendGrid to get the credentials you will need for Database Mail. (See image at the bottom of this post.) Once you are logged in:
    1. Click Settings.
    2. Click API Keys.
    3. Click Create API Key.
    4. Create and verify a sender identity. (If you don't do this, no SendGrid emails will work.)

Enable database mail for your Managed Instance

You must run this T-SQL code to enable database mail on your Managed Instance:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'Database Mail XPs', 1
RECONFIGURE

Create a suitable database mail account and profile

Your database mail profile must be called AzureManagedInstance_dbmail_profile. If you try to call it anything else, your database mail won’t work.

-- Create a database mail account
IF NOT EXISTS (SELECT 1 FROM msdb.dbo.sysmail_account WHERE [name] = 'DBA Sendgrid Account')
BEGIN
    DECLARE @servername NVARCHAR(255) = @@servername;
    SET @servername = SUBSTRING(@servername, 1, CHARINDEX('.', @servername) - 1);
    EXECUTE msdb.dbo.sysmail_add_account_sp
        @account_name = 'DBA Sendgrid Account',
        @description = 'DB account for DBAs and SQL Agent',
        @email_address = 'john.mccormack@*****.***',
        @display_name = @servername,
        @mailserver_name = 'smtp.sendgrid.net',
        @mailserver_type = 'SMTP',
        @username = 'apikey',
        @password = '******************',
        @port = 587,
        @enable_ssl = 1;
END
ELSE
    PRINT 'sysmail_account already configured';

-- Create a database mail profile (Profile must be called AzureManagedInstance_dbmail_profile)
IF NOT EXISTS (SELECT 1 FROM msdb.dbo.sysmail_profile WHERE [name] = 'AzureManagedInstance_dbmail_profile')
BEGIN
    EXECUTE msdb.dbo.sysmail_add_profile_sp
        @profile_name = 'AzureManagedInstance_dbmail_profile',
        @description = 'Main profile for sending database mail';

    -- Associate the account with the profile
    EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
        @profile_name = 'AzureManagedInstance_dbmail_profile',
        @account_name = 'DBA Sendgrid Account',
        @sequence_number = 1;
END
ELSE
    PRINT 'DBMail profile already configured';

Send a test email

-- Test Email
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'AzureManagedInstance_dbmail_profile',
    @recipients = 'john.mccormack@*****.***',
    @body = 'Email successfully sent from managed instance.',
    @subject = 'OK, this works now. That''s great!';
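
If the test message doesn't arrive, the sysmail tables in msdb are the first place to look. A quick sketch:

-- Check the status of recently queued mail and any logged errors
SELECT TOP (10) mailitem_id, recipients, subject, sent_status, sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC;

SELECT TOP (10) log_id, event_type, log_date, description
FROM msdb.dbo.sysmail_event_log
ORDER BY log_date DESC;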

So that’s all there is to it. Now you know how to set up database mail for Azure SQL DB Managed Instance.

If you have any comments or suggestions, please tweet me or leave them in the comments section.

John

More handy Azure links

How do I find the agent service account for Azure SQL Database Managed Instance

Change a user’s resource class within Azure Synapse Analytics using Azure Automation runbooks

Images to help you with the SendGrid configuration for database mail

Create SendGrid account in the Azure Portal

Manage SendGrid button in the Azure Portal

Filed Under: Azure SQL DB Managed Instance, front-page Tagged With: azure, azure sql db, azure sqldb managed instance, database mail, managed instance, sendgrid

Zero Budget DBA – SQLBITS 2020

2nd October 2020 By John McCormack


At SQLBits 2020, I presented the session The Zero Budget DBA | SQL Community has got your back. The session aimed to highlight some free-of-charge tools which make life easier for DBAs and other data professionals.

SQLBits usually makes past sessions available online. When that happens, I'll link to the recording here so you can rewatch it. In the meantime, the session slides and scripts are available to download from my GitHub profile.

The data source I used for the Paisley Police Department demo was just a simple CSV of Detroit 911 calls from Data.World.

I hope you enjoyed the session. If you have any questions, please feel free to tweet me.

John

Filed Under: front-page, Training Tagged With: DBATools, DLM Dashboard, First Responder Kit, maintenance solution, sp_whoisactive, sqlbits, statistics parser

