John McCormack DBA

SQL Server Databases and Cloud


Search Results for: aws

Data Scotland 2019 presentation

13th September 2019 By John McCormack 2 Comments

My Data Scotland 2019 presentation: You can run SQL Server on AWS

https://www.slideshare.net/JohnMcCormack33/data-scotland-2019-you-can-run-sql-server-on-aws

Useful links:

https://aws.amazon.com/rds/sqlserver/

https://docs.aws.amazon.com/en_pv/AmazonRDS/latest/UserGuide/USER_ConnectToMicrosoftSQLServerInstance.html

https://aws.amazon.com/blogs/developer/handling-credentials-with-aws-tools-for-windows-powershell/

https://johnmccormack.it/?s=aws

 

Filed Under: AWS RDS, AWS SQL Server, front-page Tagged With: #datascotland, datascotland2019, ec2, rds, SQL server

Create your first RDS instance using the AWS console

26th July 2020 By John McCormack 1 Comment

In this video, we walk through how to create your first SQL Server RDS instance using the AWS console. No scripting skills are required at this stage.

Course homepage | Next video

Filed Under: AWS RDS, AWS SQL Server, Training

SQL Server on Amazon RDS (Free Course)

23rd July 2020 By John McCormack Leave a Comment

SQL Server on Amazon RDS Course

I’ve put together a short video series on running SQL Server on Amazon RDS. It covers the basics and can be completed in around an hour, via several very short videos. It’s intended for SQL professionals who are just getting started with AWS, but it might also be enjoyed by AWS users looking to get started with a relational database management system.

It’s my first attempt at putting together a training plan, so any feedback will be gratefully received, either in the blog comments or the YouTube video comments.

Contents

1. Introduction to SQL Server on Amazon RDS
2. Benefits and limitations of AWS RDS
3. Create your first RDS instance using the AWS console
4. Connecting to your AWS RDS instance
5. Advanced Configurations using Parameter Groups
6. Securing your SQL Server AWS RDS Instance
7. Automating Deployments using PowerShell (see the sketch after this list)
8. Backup and restore for AWS RDS
9. Providing High-Availability through Multiple Availability Zones
10. Monitoring Your Instances using CloudWatch
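
As a taster for video 7, here is a minimal sketch of what an automated deployment can look like using the AWS Tools for PowerShell. The instance identifier, class, storage size and password below are illustrative placeholders, not settings from the course.

# Minimal sketch: create a SQL Server Express instance on RDS with AWS Tools for PowerShell.
# Assumes the AWSPowerShell module is installed and credentials are configured.
# All names and sizes below are placeholders.
Import-Module AWSPowerShell

New-RDSDBInstance `
    -DBInstanceIdentifier 'my-first-rds-sql' `
    -Engine 'sqlserver-ex' `
    -DBInstanceClass 'db.t3.small' `
    -AllocatedStorage 20 `
    -MasterUsername 'sqladmin' `
    -MasterUserPassword 'ChangeMe-Strong-Password-1' `
    -LicenseModel 'license-included'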

Introduction to SQL Server on Amazon RDS

Download slides

Next Video

Filed Under: AWS RDS, AWS SQL Server, front-page, Training

Why I certified to become more than a SQL DBA

11th May 2021 By John McCormack 2 Comments

For this month’s T-SQL Tuesday, Andy Leonard (b|t) asks How Do You Respond When Technology Changes Under You?

It’s a great topic and it inspired me to write a T-SQL Tuesday post this month. I’ve missed the last couple, due to a combination of just not feeling like it and the previous topics not being well suited to me. Hopefully this can inspire me to get writing again.

Hey, I thought I was the SQL DBA!

When I got my start with SQL Server, I was just that: a SQL DBA. I worked as a junior member of a big team with a large SQL Server estate, and I just wanted to become as experienced as the senior folk. I was learning, being mentored and enjoying making progress. Then I heard we were going to start using “The Cloud”. Not only that, but I was going to be responsible for the operation of our Kafka clusters, EMR and Elasticsearch.

A lot of these technologies were developer led: the developers knew why they wanted to use them, and from an operations perspective it was about making sure things didn’t fail and errors were handled. I had no background in them and didn’t really understand them well. Simply learning to use PuTTY to log into a Linux VM was one of my early successes.

AWS Certifications FTW

The move to the cloud, in this case AWS, was going ahead with or without me, so I had to embrace it. Of course, I learned and collaborated with colleagues and got to an acceptable standard, but it was only when I decided to do AWS certifications that things really took off. I found that by studying for the AWS Certified Solutions Architect certification, I gained an appreciation for all cloud services, not just the data ones like RDS. Studying for the AWS certifications improved my networking skills; I learned about object vs block storage, and of course I learned about all of the different data products (often managed services, but not always).

I approached the certification process from a genuine perspective of learning the technology well, rather than just passing the exam. Simply passing the exam does not do much for you. Even if it gets you a job, you will likely be found out pretty quickly.

A further benefit of learning one cloud technology well is that when we started using Azure, I found the concepts were the same. I understood the networking, the security groups and PaaS vs IaaS. It made the transition much easier, and it meant that when I moved to a job that was predominantly based in Azure, I didn’t have the huge knowledge gap I would have had without my AWS certifications.

So for me, a structured learning approach is best. It’s not for everyone, but I am certainly glad that my job is no longer just “SQL DBA”.

Filed Under: Certification, front-page

EC2 SQL Server Backups to Amazon S3

22nd December 2017 By John McCormack 8 Comments

How to write your EC2 SQL Server Backups to Amazon S3

This post specifically discusses how to write your EC2 SQL Server backups to Amazon S3. It should not be confused with running SQL Server on RDS, which is Amazon’s managed database service. To back up to S3, you will need an AWS account and an S3 bucket that you want to write to. You’ll also need a way of authenticating to S3, such as an access key or an IAM role.

Background

Recently, I have been investigating running SQL Server on Amazon EC2. One issue to resolve is where to store database backups. Amazon S3 is far cheaper and has much higher redundancy than Amazon EBS (Elastic Block Store), so storing the backups there is an attractive idea. Furthermore, we can use rules in S3 to automatically delete old backups or move them to Glacier, which is cheaper than S3 Standard. However, there is no obvious or built-in way to write your EC2 SQL Server backups to Amazon S3 directly.

Example costs as of December 2017 (us-east-1 region) for storing each terabyte of backup. These work out as annual figures (for example, gp2 EBS at $0.10 per GB-month × 1,024 GB × 12 months = $1,228.80):

EBS (General Purpose SSD): $1,228.80
Amazon S3: $282.62
Amazon Glacier: $49.15

Why is this a problem? (Block storage vs object storage)

SQL Server expects to write backups to a file location (block storage), but S3 is object storage, which makes S3 unsuitable for direct backups. Newer versions of SQL Server (2016 onwards) also offer a BACKUP DATABASE TO URL option, but this is designed for Microsoft Azure Blob Storage, so an S3 bucket cannot be used as the URL.
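
For illustration, here is a minimal sketch of that Azure-only syntax, run via Invoke-Sqlcmd from the SqlServer PowerShell module; the storage account, container and credential names are placeholders. Substituting an S3 address for the URL simply fails, because the engine expects an Azure Blob Storage endpoint.

# Illustration only: BACKUP TO URL targets Azure Blob Storage, not S3.
# The storage account, container and credential names below are placeholders.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
BACKUP DATABASE [Test]
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/Test.bak'
WITH CREDENTIAL = 'MyAzureCredential', STATS = 10;
"@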

As such, we need a method to get the file into S3, either directly or indirectly, because the storage cost is so much cheaper than EBS. Amazon recommend using File Gateway, but this involves running another EC2 machine. That isn’t cost effective or easy to implement, so I’ve discarded this method for now. The two options which I have tried and tested are below; they both have their pros and cons. I’ve also listed three other methods which may help you write your EC2 SQL Server backups to Amazon S3 directly, but these are untested by me as I rejected the approach.

A note on Ola

The Ola Hallengren backup solution (link) works well in EC2 when writing to EBS storage, as this is just treated like a local drive. It also works well when writing to Azure Blob Storage. Whilst it works with 3rd party backup tools such as LiteSpeed, it doesn’t cater for the latest LiteSpeed functionality that allows writes directly to S3. I have written to Ola to ask if this is something that could be considered for a future release, but until then I am unable to use Ola’s solution.

Option 1 – Write to EBS and Sync with S3

Take a database backup using T-SQL. Better still, schedule it via a SQL Agent job to run at a set interval; the backup is step 1 of the job.

BACKUP DATABASE [Test]
TO DISK = N'D:\MSSQL\BACKUP\Test.bak' WITH NOFORMAT,
NOINIT,
NAME = N'Test-Full Database Backup',
SKIP,
NOREWIND,
NOUNLOAD,
STATS = 10

Using the AWS command line, copy the backup from your EBS volume to your S3 bucket. This should be step 2 of your agent job.

aws s3 sync D:\MSSQL\BACKUP s3://sqlpoc-backups/prod-backups

Delete the backup on EBS so the space will be available for the next backup. I have used PowerShell to do this but you can choose any method you prefer. This should be step 3 of your agent job.

Remove-Item D:\MSSQL\BACKUP\* -Recurse

Note: When sizing your EBS volume, it needs to be at least the size of your biggest database backup (with some room for growth). If this is the size you choose, bear in mind you can only back up one database at a time. If you size the drive bigger, you can back up more databases in the same job before syncing with S3 however this will be more expensive. With EBS, you pay for the size of the volume allocated, not just what is used. This is different to S3 where you only pay for data stored.
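
For reference, steps 2 and 3 can be combined into a single PowerShell job step that only deletes the local copy once the sync has succeeded. This is a minimal sketch, assuming the AWS CLI is installed and can authenticate (for example via an instance IAM role), and reusing the paths and bucket from the examples above.

# Sketch: sync local backups to S3, then clear the EBS folder only if the sync succeeded.
# Assumes the AWS CLI is installed and can authenticate (e.g. via an instance IAM role).
$localPath = 'D:\MSSQL\BACKUP'
$bucketUri = 's3://sqlpoc-backups/prod-backups'

aws s3 sync $localPath $bucketUri

if ($LASTEXITCODE -eq 0) {
    # Free the EBS space for the next backup run.
    Remove-Item -Path (Join-Path $localPath '*') -Recurse
}
else {
    throw "S3 sync failed; keeping local backups in $localPath"
}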

Option 2 – Write directly using LiteSpeed

Quest LiteSpeed is a 3rd party product for backing up SQL databases. One very useful feature of the enterprise edition is that it allows you to back up directly to Amazon S3. This is huge, because it takes away the need for any intermediate storage, VMs or other services. It pushes your backup directly to the desired bucket location in S3. To do this, you’ll need to add a few new parameters to your backup calls, but overall it saves a load of hassle.

Note: This isn’t an ad for LiteSpeed; my decision to use it is down to my company already having a license agreement with Quest. If I didn’t have this in place, I would more than likely go with option 1; ultimately, cost would be the deciding factor unless the difference was small.

Before writing the T-SQL for your backup job, you will need to add your cloud account details to the LiteSpeed desktop app. Here you will need to pass in your cloud vendor, display name, AWS access key, AWS secret key, region, storage class and bucket name.

exec master.dbo.xp_backup_database
@database = N'Test',
@backupname = N'Test - Full Database Backup',
@desc = N'Full Backup of Test on %Y-%m-%d %I:%M:%S %p',
@compressionlevel = 7,
@filename = N'test-backups/Test-Full.bkp', -- folder name within bucket defined here
@CloudVendor = N'AmazonS3',
@CloudBucketName = N'sqlpoc-backups',
@CloudAccessKeyEnc = N'xxxxxxxxxxxx', -- an encrypted version of your access key (generated by LiteSpeed app)
@CloudSecretKeyEnc = N'xxxxxxxxxxxx', -- an encrypted version of your secret key (generated by LiteSpeed app)
@CloudRegionName = N'us-east-1',
@UseSSL = 1,
@OLRMAP = 1,
@init = 1,
@with = N'STATS = 10',
@verify = 1

As Amazon S3 is object storage, it does not support appending to an existing file, so if you use the same name each time for your backups, the previous backup is deleted and replaced. If you turn on versioning in your S3 bucket, you can keep old versions of the same filename; alternatively, if each backup has a unique name (due to a timestamp being appended), the backups are saved side by side. Do remember to set Object Lifecycle Rules for all or specific items in your bucket, either to delete old backups or to send them to the much cheaper Glacier storage tier. If you don’t, your bucket will grow very quickly and costs will escalate.
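
As an illustration, a lifecycle rule along these lines can be applied with the AWS CLI. This is a minimal sketch; the rule ID, prefix and day counts are placeholders you would adapt to your own retention policy.

# Sketch: transition backups to Glacier after 30 days and delete them after 365.
# The rule ID, prefix and day counts are illustrative placeholders.
$lifecycle = @'
{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Filter": { "Prefix": "prod-backups/" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}
'@
Set-Content -Path 'lifecycle.json' -Value $lifecycle

aws s3api put-bucket-lifecycle-configuration --bucket sqlpoc-backups --lifecycle-configuration file://lifecycle.json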

Option 3 – Map s3 bucket as a drive

If only this were easy. Doing this natively is not possible, as far as I can tell. It comes down to a difference in storage types: S3 is object storage, and for a drive we need block storage. That said, there are a number of 3rd party solutions, which I have not tested, that seem to offer this service. A quick Google search suggests CloudBerry and TntDrive as the most popular providers.

Option 4 – File Gateway/Storage Gateway

When I was investigating methods for saving my backups to S3, AWS support recommended using File Gateway. This would certainly overcome the technical challenge of getting the files into object storage (S3), as it allows you to mount an NFS volume, but it adds complexity and cost to the process. File Gateway is very useful as a method of connecting an on-premises appliance to cloud-based storage; however, as the instances were already in the cloud (EC2), I didn’t feel this was a very satisfactory solution.

Option 5 – Other 3rd Party tools

Other 3rd party tools, such as CloudBerry, exist which allow you to write backups directly to S3, but I have not looked into them in detail. Prices start around $150 per server, but they do offer volume discounts.

Useful references

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url

https://support.quest.com/technical-documents/litespeed-for-sql-server/8.6/user-guide/8

How do you write your EC2 SQL Server Backups to Amazon S3?

If you use another method not listed here, I’d be delighted to hear about it in the comments.


Filed Under: AWS, AWS EC2, AWS SQL Server, front-page, Guides Tagged With: aws s3, backup, backup to aws s3, backup to s3, litespeed, sql backup
