AWS Service of the Week – RDS
Hey there! I’m back again for another instalment of the AWS Service of the Week blog.
I thought this week I would discuss a service that AWS highlighted quite dramatically in the past few days.
This was due in part to a viral video of AWS shutting down and decommissioning their final Oracle database, moving instead to their own in-house database technologies. Whilst there are many to choose from, I am going to focus on RDS, the Relational Database Service.
What is RDS?
RDS provides a scalable relational database that is simple to set up, operate, and maintain, reducing administrative activities such as patching and backups.
There are several relational database engines available, including PostgreSQL, MySQL, MariaDB, SQL Server, Oracle, and Amazon’s own Aurora, which between them cover the lion’s share of the engines in common use today. With just a few clicks in the console, or a handful of CLI commands, I can have a fully functional database in an engine of my choice, able to utilise the key features of hyper-scale cloud.
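To give a flavour of how little is involved, here is a minimal sketch using boto3 (the AWS SDK for Python). Every identifier, credential, and size below is a hypothetical placeholder; a real deployment would also specify networking, encryption, and backup options.

```python
import boto3

rds = boto3.client("rds")

# Launch a small PostgreSQL instance; every name and value here is a placeholder
rds.create_db_instance(
    DBInstanceIdentifier="my-postgres-db",
    Engine="postgres",
    DBInstanceClass="db.t3.micro",          # instance type: memory and vCPUs
    AllocatedStorage=20,                    # storage, in GiB
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",  # keep real credentials in Secrets Manager
)
```

A few minutes later you have an endpoint to connect to, with patching and backups handled for you.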
Securing my Data
This is the data layer, the most important asset of any organisation, so keeping it secure is of the highest priority.
An RDS instance in AWS communicates inside a VPC (Virtual Private Cloud), and it is highly recommended to create a private subnet where the RDS instances will live. In fact, in all the research and projects I have done, I don’t believe I have yet come across a use case for a production RDS instance in a public subnet!
Alongside this you can make use of SG (Security Group) rules to lock down access to specific IP addresses and ports.
A common approach in a three-tier web application is to only allow the application SG to communicate with the database SG, again limiting the scope of who can touch the RDS data.
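A minimal sketch of that SG-to-SG rule follows, again in boto3. The group IDs are hypothetical, and the port assumes a PostgreSQL engine:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound PostgreSQL (port 5432) to the database SG
# only when the traffic originates from the application SG
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbbb22223",  # hypothetical database SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-0ccc3333dddd44445"}],  # hypothetical application SG
    }],
)
```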
Finally, encryption is available on the databases, and AWS can support here with KMS (Key Management Service) to manage the keys that encrypt all the data at rest. To make use of encryption in transit, be sure to use an SSL/TLS (Secure Sockets Layer/Transport Layer Security) connection when connecting to the database.
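As a sketch of the in-transit side, a PostgreSQL client can be made to verify the server against the RDS certificate bundle. This assumes the psycopg2 library and a locally downloaded copy of the RDS CA bundle; the endpoint and credentials are placeholders:

```python
import psycopg2

# Connect over TLS and verify the server certificate against the RDS CA bundle
conn = psycopg2.connect(
    host="my-postgres-db.abc123xyz.eu-west-1.rds.amazonaws.com",  # hypothetical endpoint
    port=5432,
    dbname="appdb",
    user="dbadmin",
    password="change-me-please",
    sslmode="verify-full",                     # refuse unencrypted or unverified connections
    sslrootcert="rds-combined-ca-bundle.pem",  # CA bundle downloaded from AWS beforehand
)
```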
Backing Up Data
AWS will automatically take backups of the entire database and store them as snapshots.
You can configure the retention period for how long they should be stored, or, if you want more granular backups (say every 30 minutes), you can employ a scheduled CloudWatch Events rule to trigger a manual backup.
These backups store the database itself and the transaction logs that have occurred on the database.
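A sketch of the manual-snapshot half of that: a small Lambda handler, triggered on a schedule by CloudWatch Events, that snapshots a (hypothetical) instance with a timestamped name:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

def handler(event, context):
    # Build a timestamped snapshot name, e.g. my-postgres-db-20191105-0930
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    rds.create_db_snapshot(
        DBInstanceIdentifier="my-postgres-db",           # hypothetical instance
        DBSnapshotIdentifier=f"my-postgres-db-{stamp}",
    )
```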
Scaling with Read Replicas
Let’s say your database is very popular, and when you look at the statistics in CloudWatch Metrics you notice a very high number of database reads, along with a corresponding increase in CPU utilisation.
This in turn is slowing down the overall performance of the application, as the RDS instance has become a bottleneck.
A solution to this is to create one or more read replicas of the master instance. These read replicas can take the load of the read traffic (if you configure your application logic to route read requests to the replica); they are updated asynchronously from the master and can even be created in a different AWS region.
We’ve made use of this read replica feature at Wirehive when performing region migration projects: we create the replica in the new region, allow it to sync with the master (so no data is lost), and then promote the read replica to a primary instance in its own right.
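As a sketch of that migration flow (with hypothetical identifiers, regions, and account number), the replica is created in the destination region by referencing the source instance’s ARN, and promoted once replication has caught up:

```python
import boto3

# Work in the destination region (hypothetical: eu-west-2)
rds = boto3.client("rds", region_name="eu-west-2")

# Cross-region replicas reference the source instance by its ARN (hypothetical below)
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="my-postgres-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:eu-west-1:123456789012:db:my-postgres-db",
)

# Later, once replication has fully caught up, break the link and promote
rds.promote_read_replica(DBInstanceIdentifier="my-postgres-db-replica")
```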
Customise your Database
AWS give such a vast array of options for your RDS instances that it can seem a little overwhelming, so I’ll break down the key aspects you should consider.
Firstly, the instance type. This is where you define how much memory (RAM) and how many vCPUs you will need for the database to operate effectively.
The second is storage: how much will be required, and what type? AWS give you the option of General Purpose storage or their higher tier of Provisioned IOPS storage. As you would expect, the latter is more expensive, as you are paying for more performant storage with higher IOPS (Input/Output Operations per Second).
The third is whether to have a Multi-AZ setup, which means that if the master instance fails, the standby can take over and keep the service running. AWS will continually monitor, and if it finds the master in an unhealthy state it will automatically perform tasks behind the scenes to point traffic to the standby, so that’s one less thing for you to monitor and worry about.
All of this customisation can be changed in the future; this is the cloud we are dealing with, so if you want to change instance type or storage volume, it can all be achieved quickly and easily.
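For instance, here is a sketch of resizing an existing (hypothetical) instance and switching on Multi-AZ in a single call; without ApplyImmediately the change would wait for the next maintenance window:

```python
import boto3

rds = boto3.client("rds")

# Scale up the instance class, grow the storage, and switch on Multi-AZ
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-db",  # hypothetical instance
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,   # GiB; RDS storage can be grown but not shrunk
    MultiAZ=True,
    ApplyImmediately=True,  # otherwise changes wait for the next maintenance window
)
```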
Thanks for stopping by again, I hope you have enjoyed the read, and I look forward to seeing you again.