Wirehive’s Journey to the AWS Well-Architected Framework
Recently my colleague Jon and I were given the amazing opportunity to represent Wirehive and visit AWS at their head office to attend a workshop.
This workshop was all about the AWS Well-Architected Framework and the Partner Program that sits alongside it. I thought I would share my experience of the day, and let you know that it is Wirehive's ambition to become a member of this program over the coming weeks, so watch this space for an upcoming announcement!
What is the AWS Well-Architected Framework?
This framework has been developed over the years from the experience of AWS Solutions Architects and the thousands of AWS cloud environments they have helped deploy, distilling a best-practice approach to "putting something on the AWS cloud".
The AWS Well-Architected Framework consists of five pillars:
- Operational Excellence
- Security
- Reliability
- Performance Efficiency
- Cost Optimisation
Each pillar has a set of criteria and values which are used for critical analysis when reviewing an architecture. This helps ensure that what you deploy to the AWS cloud is deployed successfully, with potential risks mitigated.
Who should be involved in an AWS Well-Architected Review?
It's very important to say that this piece of work is not an audit, but a way for various stakeholders in the business (developers, the C-suite, operations and so on) to come together with a Partner to explore their strategy for the cloud, their pain points, and what is important to them.
The benefits of having multiple stakeholders in an AWS Well-Architected Review
Take the hypothetical scenario of a streaming service losing subscribers month after month. The CFO may only be interested in the bottom line and wants to decrease costs to accommodate this, whereas the Operations team leader is concerned about the level of service uptime. On the surface these two viewpoints are completely different, but dig deeper and they share the same goal. A performant solution improves reliability; improved reliability raises customer satisfaction, which in turn will decrease subscriber cancellations.
Whilst this scenario is hypothetical, you can see how much more can be achieved by bringing a variety of business and technical viewpoints into one room to discuss the five pillars. In isolation these views may be misinterpreted, and a CFO cost-cutting measure such as decreasing instance t-shirt sizes does not get to the heart of the problem.
Operational Excellence
This pillar of the AWS Well-Architected Framework analyses how you run and monitor your systems so that they continue to deliver the business value you expect of them. As a business's needs will evolve over time, you should codify your environment with Infrastructure as Code solutions, which enables small, rapid changes that can be implemented quickly or easily backed out if something does not go to plan.
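As a minimal sketch of what "codifying" infrastructure looks like, the function below renders a CloudFormation-style template from Python. The resource name and AMI ID are placeholders for illustration; the point is that a change such as a new instance type becomes a small, reviewable diff that can be rolled back by redeploying the previous version.

```python
import json


def render_template(instance_type: str = "t3.micro") -> str:
    """Render a minimal CloudFormation template for a single EC2 instance.

    Keeping the template in version-controlled code means each change is a
    small, reviewable diff that can easily be backed out.
    """
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebServer": {  # logical resource name, illustrative only
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    # Placeholder AMI ID -- look up a real one per region.
                    "ImageId": "ami-00000000000000000",
                },
            }
        },
    }
    return json.dumps(template, indent=2)
```

Deploying a change then becomes a matter of passing the rendered JSON to CloudFormation, rather than hand-editing resources in the console.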
This pillar also emphasises taking a proactive approach to things going wrong, and having a solid set of tested procedures that can be acted upon. It is inevitable that things can and will go wrong, but this proactiveness alleviates the impact on your customers.
Security
Most, if not all, organisations will have business-critical data that they would not want anyone else getting hold of. In this pillar we consider the security of accounts, access, and systems, and whether they follow a least-privilege approach or are too permissive.
A common item to review is Security Groups and their rules. The number of times during an infrastructure audit that we find a security group rule with a port open to the world, or a public S3 bucket, is really quite astonishing!
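A simple check like this can be scripted. The sketch below scans security group descriptions for ingress rules open to the whole internet (0.0.0.0/0); the dictionaries are shaped like the ones boto3's `describe_security_groups` returns, though here we run it over sample data rather than a live account.

```python
def find_open_rules(security_groups):
    """Flag ingress rules that are open to the entire internet.

    Expects a list of security group dicts with the same shape as the
    "SecurityGroups" field of boto3's describe_security_groups response.
    Returns (group id, port) pairs for every rule allowing 0.0.0.0/0.
    """
    findings = []
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((sg["GroupId"], rule.get("FromPort")))
    return findings
```

Run regularly, a script like this turns an astonishing one-off audit finding into a routine alert.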
Reliability
We all accept that failures in systems do occur, but what happens when they do? How long are your systems offline? Will your customers see the impact? When will your data be restored? These are just some of the many questions covered in this pillar of the AWS Well-Architected Framework.
A great thing to consider is automatic recovery from failure, such as auto-scaling groups with launch configurations attached. If my EC2 instance goes offline, AWS can automatically create a new one without any manual intervention, and if it sits behind a load balancer, users should not be impacted (assuming session data is not held on the instance).
Performance Efficiency
A great question to ask of your architecture is: are you using the right technologies for the job in hand? Did you know that in 2018 AWS released over 1,900 new services and features? This is an absolutely staggering amount, and every year they beat the previous year's record!
It stands to reason that the technology you employed on AWS years ago could now be obsolete, and that there is a better way of working today. One such example is embracing serverless technologies and breaking down the tasks you currently perform on legacy virtual machine setups in EC2. I have seen examples of EC2 instances running cron jobs where a small serverless Lambda function could complete the task in hand and, combined with the pillar below, save a huge amount of cost.
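To make that concrete, here is a hedged sketch of what such a cron replacement might look like: a minimal Python Lambda handler, which you would schedule with an EventBridge (CloudWatch Events) cron rule instead of keeping an EC2 instance idle between runs. The clean-up task itself is a placeholder.

```python
def do_nightly_cleanup():
    """Placeholder for whatever the old cron job actually did,
    e.g. pruning old files or rotating reports."""
    return 0  # number of items processed


def handler(event, context):
    """Minimal AWS Lambda entry point replacing an EC2 cron job.

    Triggered on a schedule, e.g. an EventBridge rule with
    cron(0 2 * * ? *) for 02:00 UTC daily; you pay only for the
    seconds it runs rather than for a 24/7 instance.
    """
    processed = do_nightly_cleanup()
    return {"status": "ok", "items_processed": processed}
```

Billing per invocation rather than per instance-hour is exactly where the Cost Optimisation pillar below comes in.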
Cost Optimisation
Simply put, you should be paying the right amount for the service you provide to your customers. It is often said that "cloud is more expensive", but you must consider the Total Cost of Ownership of your existing environment before jumping to that conclusion.
For example, a database may cost a little more in AWS when run 24/7, but it is a fully managed service, so how much are you saving by not having a DBA supporting that database? That DBA can now be reskilled and utilised for other business needs. A significant number of your resources may not need 24/7 availability anyway, such as Dev or UAT environments, so scale them down to nothing during non-working hours and save yourself some more unnecessary cost.
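The scheduling decision behind that off-hours saving is easy to sketch. The function below is illustrative only: the environment names and the 08:00-19:00 weekday window are assumptions, and in practice the result would drive stop/start calls against tagged instances.

```python
from datetime import datetime

# Assumed working hours for this sketch: 08:00-18:59, Monday-Friday.
WORKING_HOURS = range(8, 19)


def should_be_running(environment: str, now: datetime) -> bool:
    """Decide whether an environment should be up at a given time.

    Production stays available 24/7; non-production environments
    (Dev, UAT, ...) only run during weekday working hours.
    """
    if environment.lower() == "production":
        return True
    return now.weekday() < 5 and now.hour in WORKING_HOURS
```

A scheduled job evaluating this for each tagged environment, then stopping or starting the matching instances, captures most of the saving with very little code.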
Thanks for reading this blog, and I hope to report good news about the AWS Well-Architected Partner Program in the near future!