Running SQL Server on RDS
I presented on Running SQL Server on RDS at SQLBits. It was a lightning talk, so only 5 minutes.
You can download my slides from here:
How do I run SQL Server on AWS?
Running SQL Server on AWS can be done in two ways: install and manage it yourself on an EC2 instance, or use the managed Amazon RDS for SQL Server service.
Azure may be the right choice for you. The purpose of this post is to show that you do have options and that Azure is not the only choice. You should weigh up what you need from your cloud provider and make a choice, bearing in mind that managing a multi-cloud environment can be challenging. If you are working for a company already heavily invested in AWS, it may make more sense to stay with AWS. Even if not, AWS's SQL Server offering is mature and RDS's features are improving all the time, so you should consider what you can achieve with AWS.
| Feature | RDS | EC2 |
|---|---|---|
| Use database as source or target for SSIS | ✔ | ✔ |
| Full control over the instance | | ✔ |
| Always On Availability Groups | ✔ | ✔ |
| AWS-managed Multi-AZ deployment | ✔ | |
A full list of included and excluded features can be viewed at https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html
With EC2, you are responsible for all of your own backups. Backups can be pushed to S3 to reduce costs; I have a full post dedicated to SQL Server backups on EC2. You could also store backups on EBS volumes attached to your instance, however this is much more expensive than storing them on S3.
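As a minimal sketch of that pattern (the database name, local path and bucket name below are hypothetical, so substitute your own), you back up to local or EBS storage with T-SQL, then push the file to S3 with the AWS CLI:

-- Back up to a local/EBS path first
BACKUP DATABASE AdventureWorks2012
TO DISK = 'D:\Backups\AdventureWorks2012.bak'
WITH COMPRESSION, CHECKSUM;

-- Then push the backup file to S3 from the command line:
-- aws s3 cp D:\Backups\AdventureWorks2012.bak s3://my-backup-bucket/AdventureWorks2012.bak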
RDS will handle backups for you. You can restore to a point in time in RDS within the backup retention period you configure. Transaction log backups are taken every 5 minutes, so you may experience up to 5 minutes of data loss. All databases are restored to within 1 second of each other. The downside to this is that transactions spanning multiple databases may not be recovered consistently.
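If you also want your own copies in S3, RDS for SQL Server supports native backup and restore to S3 through stored procedures, provided the SQLSERVER_BACKUP_RESTORE option group is attached to your instance. A minimal sketch, with a hypothetical bucket ARN:

-- Kick off a native backup of a database straight to S3
exec msdb.dbo.rds_backup_database
    @source_db_name = 'AdventureWorks2012',
    @s3_arn_to_backup_to = 'arn:aws:s3:::my-backup-bucket/AdventureWorks2012.bak';

-- Check the progress of the backup task
exec msdb.dbo.rds_task_status @db_name = 'AdventureWorks2012';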
Features are being added to RDS all the time, however if you need features on the excluded list, you would need to go with an EC2 installation. If your business size doesn't allow for DBAs or database specialists, RDS may be the best option, as AWS takes care of all the installation and maintenance for you.
AWS Athena Presentation at the Glasgow Super Meetup
The Glasgow Super Meetup was a joint event between Glasgow Azure User Group, Glasgow SQL User Group and Scottish PowerShell & DevOps User Group. I gave an AWS Athena presentation to the group.
Speaking about AWS Athena at the Glasgow Super Meetup might seem like an odd choice, since most attendees use Azure heavily or are more interested in SQL Server, but I was pleasantly surprised by the interest people took in the subject matter. It was only a lightning talk, so there wasn't time to answer questions during the session, however a number of attendees asked me questions during the break.
I showed how tables can be archived out of the database and into S3 at a fraction of the price, yet the data can still be queried if needed using Athena. I stressed that Athena isn't intended as a replacement for an RDBMS, so queries will be slower than SQL Server. However, it is much cheaper to store large amounts of data in flat files in object storage (such as S3) than in the expensive block storage used with databases. So if the use case fits, such as infrequently accessed archive data, it is something to consider. I've uploaded my slides and also linked to a recording of the event. If you want to try the code, you'll find it below.
As a proof of concept, I wanted to export the data from the Sales.SalesOrderHeader table in AdventureWorks2012 to flat files using BCP. The data is partitioned into unique days using the OrderDate column, exported to the local file system and then uploaded to Amazon S3. The next steps are creating a table in Athena, querying it to review the data and validating that the correct data has been uploaded.
-- 1. Query the tables and generate PoSH and BCP commands
SELECT DISTINCT OrderDate,
    'New-Item -ItemType directory -Path C:\Users\jmccorma\Documents\SQL_to_S3_Demo\Output_Files\year='
        + CONVERT(varchar(4), OrderDate, 102) + '\month=' + CONVERT(varchar(2), OrderDate, 101)
        + '\day=' + CONVERT(varchar(2), OrderDate, 103)
        + ' -ErrorAction SilentlyContinue' AS PoSH_command,
    'bcp "SELECT SalesOrderID, RevisionNumber, OrderDate, DueDate, ShipDate, Status, OnlineOrderFlag, SalesOrderNumber, PurchaseOrderNumber, AccountNumber, CustomerID, SalesPersonID, TerritoryID, BillToAddressID, ShipToAddressID, ShipMethodID, CreditCardID, CreditCardApprovalCode, CurrencyRateID, SubTotal, TaxAmt, Freight, TotalDue, Comment, rowguid, ModifiedDate FROM [AdventureWorks2012].[Sales].[SalesOrderHeader] WHERE OrderDate = ''' + CONVERT(varchar, OrderDate, 23) + '''" queryout "c:\users\jmccorma\Documents\SQL_to_S3_Demo\Output_Files\year='
        + CONVERT(varchar(4), OrderDate, 102) + '\month=' + CONVERT(varchar(2), OrderDate, 101)
        + '\day=' + CONVERT(varchar(2), OrderDate, 103)
        + '\SalesOrderHeader.tsv" -c -t\t -r\n -T -S localhost\SQLEXPRESS' AS bcp_command
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader]
aws s3 sync C:\SQL_to_S3_Demo\Output_Files s3://athena-demo-usergroup/
(Change to your local file location and your S3 bucket.)
CREATE DATABASE adventureworks2012;
-- Athena table created by John McCormack for Glasgow User Group
CREATE EXTERNAL TABLE `SalesOrderHeader`(
  `SalesOrderID` INT,
  `RevisionNumber` TINYINT,
  `OrderDate` TIMESTAMP,
  `DueDate` TIMESTAMP,
  `ShipDate` TIMESTAMP,
  `Status` TINYINT,
  `OnlineOrderFlag` BOOLEAN,
  `SalesOrderNumber` STRING,
  `PurchaseOrderNumber` STRING,
  `AccountNumber` STRING,
  `CustomerID` INT,
  `SalesPersonID` INT,
  `TerritoryID` INT,
  `BillToAddressID` INT,
  `ShipToAddressID` INT,
  `ShipMethodID` INT,
  `CreditCardID` INT,
  `CreditCardApprovalCode` STRING,
  `CurrencyRateID` INT,
  `SubTotal` DECIMAL(12,4),
  `TaxAmt` DECIMAL(12,4),
  `Freight` DECIMAL(12,4),
  `TotalDue` DECIMAL(12,4),
  `Comment` STRING,
  `rowguid` STRING,
  `ModifiedDate` TIMESTAMP
)
PARTITIONED BY (
  `year` string,
  `month` string,
  `day` string)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  ESCAPED BY '\\'
  LINES TERMINATED BY '\n'
LOCATION 's3://athena-demo-usergroup/'
TBLPROPERTIES (
  'has_encrypted_data'='false',
  'skip.header.line.count'='0')
MSCK REPAIR TABLE salesorderheader;
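MSCK REPAIR TABLE scans the table's S3 location and registers any partition folders it finds in the metastore. If you only need to register a single new partition, you can add it explicitly instead, which avoids scanning the whole bucket; a minimal sketch using a hypothetical date from the demo data:

-- Register one partition explicitly rather than repairing the whole table
ALTER TABLE salesorderheader ADD IF NOT EXISTS
PARTITION (year='2014', month='01', day='23')
LOCATION 's3://athena-demo-usergroup/year=2014/month=01/day=23/';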
-- Validate Athena data is correct
-- Athena
SELECT COUNT(*) AS row_count, SUM(territoryid) AS column_sum
FROM "adventureworks2012"."salesorderheader"
WHERE year='2014' AND month = '01' AND day = '23';

-- SQL Server
SELECT COUNT(*) AS row_count, SUM(territoryid) AS column_sum
FROM adventureworks2012.sales.salesorderheader
WHERE OrderDate = '2014-01-23 00:00:00.000'
-- Not using partition (12 seconds - scanned 7.53MB)
SELECT * FROM "adventureworks2012"."salesorderheader"
WHERE OrderDate = CAST('2014-01-23 00:00:00.000' AS TIMESTAMP);

-- Using partition (1.8 seconds - scanned 15.55KB - 1/6 of the duration and 1/495 of the cost)
SELECT * FROM "adventureworks2012"."salesorderheader"
WHERE year='2014' AND month = '01' AND day = '23';
How to create a table in AWS Athena
Before you learn how to create a table in AWS Athena, make sure you read this post first for more background info on AWS Athena.
When you create a table in Athena, you are really creating a table schema. The underlying data, which consists of S3 files, does not change. You are simply telling Athena where the data is and how to interpret it, so tables are just a logical description of the data. Just like in a traditional relational database, tables belong to databases; databases are also logical objects, which exist to group a collection of tables together. Databases and tables do not need to be created before the data is placed into AWS S3. Similarly, if a table or database is dropped, the data will remain in S3.
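To illustrate that point, a quick sketch using the salesorderheader table from the earlier demo: dropping and recreating the table only touches the metadata, never the files.

-- Dropping the table removes only the schema definition
DROP TABLE IF EXISTS adventureworks2012.salesorderheader;
-- The files are still in S3, so recreating the table (or pointing a new
-- table at the same LOCATION) makes the same data queryable again.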
All DDL statements in Athena use HiveQL DDL. Thankfully, you don't need to be an expert in HiveQL DDL to create tables; you can learn as you go along. You can even use a wizard in the AWS console to create tables. You can also script out the DDL from existing tables using the Athena console, which gives you a guide for future tables.
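For example, to script out the DDL of the salesorderheader table used earlier, you can run this in the Athena query editor:

SHOW CREATE TABLE salesorderheader;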
The data used in the demo is a free download from data.gov.uk. They also have loads of data in various formats which you can use for testing.
Some syntax in HiveQL DDL is similar to ANSI SQL, however there are a few key differences.
CREATE TABLE should include the keyword EXTERNAL:
CREATE EXTERNAL TABLE
ROW FORMAT SERDE – this describes which SerDe you should use (more about that in the SerDe section below).
SERDEPROPERTIES – a set of rules which is applied to each row that is read, in order to split the file up into different columns. If you are not sure about this, read up more on SerDes.
LOCATION – the S3 bucket and folder where the data resides. No filename is required, just the location, e.g. s3://testathenabucket/traffic.
SerDes are libraries which tell Hive how to interpret your data. SerDe is short for Serializer/Deserializer. There are a few supported by Athena to choose from, and you cannot currently add your own.
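Pulling these elements together, here is a minimal sketch of a CSV-backed table using the OpenCSVSerDe, one of the SerDes Athena supports. The table name and columns are hypothetical, and the location reuses the example above:

CREATE EXTERNAL TABLE demo_csv (
  `id` STRING,   -- the OpenCSVSerDe reads values as strings; cast at query time
  `name` STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'separatorChar' = ',',
  'quoteChar' = '"'
)
LOCATION 's3://testathenabucket/traffic/';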
You can use the create table wizard within the Athena console to create your tables. Just populate the options as you click through and point it at a location within S3. You must have access to the underlying data in S3 to be able to read from it. This method is slightly laborious as a result of all the screens and dropdowns, however it is reasonable enough when you only need to create a small number of tables.
First of all, select from an existing database or create a new one. Give your table a name and point to the S3 location.
Various data formats are acceptable. Parquet and ORC are compressed columnar formats, which make for cheaper storage and query costs and quicker query results. Other formats such as JSON and CSV can also be used; these can be compressed to save on storage and query costs, however you would still select the original data format, e.g. for .csv.gz you would choose CSV.
Column names and data types are selected by you, so you need to know the structure of your data (or open the file to check).
Above all, data should be partitioned where appropriate, such as by day or by customer ID, wherever it makes sense, as this reduces the amount of data scanned by Athena, which lowers cost and improves query performance even more than compression alone.
An AWS Glue crawler will automatically scan your data and create the table based on its contents; you just need to point the crawler at your data source. Once created, you can run the crawler on demand or on a schedule. Scheduling is highly effective for loading in new data and updating data where underlying files have changed.
Give your crawler a name and description.
Point the crawler to your data store.
Select or create an IAM role. The crawler runs under an IAM role which must have the correct permission to create tables and read the data from S3.
Choose a schedule for your Glue Crawler.
Declare the output location for your data.
Finally, query your data in Athena. You can type SQL into the new query window, or if you just want a sample of data you can click the ellipses next to the table name and click on preview table.
This method is useful when you need to script out table creation. As well as the AWS Athena console, you can also use programs such as SQL Workbench/J, which rely on a JDBC connection.
CREATE EXTERNAL TABLE `demo_traffic`(
  `region name (go)` string,
  `ons lacode` string,
  `ons la name` string,
  `cp` bigint,
  `s ref e` bigint,
  `s ref n` bigint,
  `s ref latitude` double,
  `s ref longitude` double,
  `road` string,
  `a-junction` string,
  `a ref e` bigint,
  `a ref n` bigint,
  `b-junction` string,
  `b ref e` bigint,
  `b ref n` bigint,
  `rcat` string,
  `idir` string,
  `year` bigint,
  `dcount` string,
  `hour` bigint,
  `pc` bigint,
  `2wmv` bigint,
  `car` bigint,
  `bus` bigint,
  `lgv` bigint,
  `hgvr2` bigint,
  `hgvr3` bigint,
  `hgvr4` bigint,
  `hgva3` bigint,
  `hgva5` bigint,
  `hgva6` bigint,
  `hgv` bigint,
  `amv` bigint)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://johnbox-athena/Traffic/'
TBLPROPERTIES (
  'compressionType'='none',
  'delimiter'=',',
  'objectCount'='1',
  'skip.header.line.count'='1',
  'typeOfData'='file')
Introduction to AWS Athena
This introduction to AWS Athena gives a brief overview of what AWS Athena is and some potential use cases. I discuss in simple terms how to optimize your AWS Athena configuration for cost effectiveness and performance efficiency, both of which are pillars of the AWS Well-Architected Framework. My Slides | AWS White Paper
AWS's own documentation is the best place for full details on the Athena offering; this page aims to provide further explanation and to act as an anchor to some more detailed information. As it is a managed service, Athena requires no administration, maintenance or patching. Performance may not be on par with a relational database management system (RDBMS), but costs can be much lower as you are running no servers and there are no licencing implications.
Athena is a service provided by AWS which allows users to query static files, such as CSVs stored in AWS S3, using SQL syntax. Queries are written in ANSI SQL, so many existing users of database technologies such as MySQL or SQL Server can adapt quickly, and new users can learn the commands easily.
"Object-based storage" like Amazon S3 is a lot cheaper than "block-based storage" such as EBS. This means you can store large data sets as CSV files on Amazon S3 at a fraction of the price it would cost to store the data using EBS. You are then charged for each query (currently $5 per TB of data scanned). Clever use of compression and partitioning can reduce the amount of data scanned, meaning queries will be cheaper. AWS Athena is described as serverless, which means the end user doesn't need to manage or administer any servers; this is all done by AWS.
As noted in the previous paragraph, the query cost is $5 per TB scanned, so the pricing is quite straightforward. Athena charges per megabyte scanned, with a 10MB minimum per query. You can save by compressing, partitioning and/or converting data to a columnar format: for example, a query that scans a 1TB data set in full costs $5, but if compression shrinks that data to 250GB, the same query costs around $1.25. The less data that needs to be scanned, the cheaper the query.