Hosting architecture set-up is becoming ever more specialised. Not long ago, a hosting company would simply set up an account on a shared or dedicated server. Now, with the move to hyperscale cloud technology, things are a little different, and hosting is a much more complex business.
One of the first things you must decide is whether it's possible to make your web application stateless. Here's why.
Stateful applications: How do they work?
In a traditional stateful application, things work in a fairly simple way. If you visit a website and submit a login form correctly, your status is recorded on the server as “logged in”. That means you can access pages and services that your account is permitted to see.
It’s essentially the same as logging into your own computer to access your files. Your computer knows it's you because you logged in. It stores that knowledge (the "logged-in state") in such a way that, until you log out or the machine restarts, you have access to your data.
The only difference is that this takes place on the internet and the computer you're logged in to isn't your own PC. It's a server that lives in a rack in a data centre somewhere, running the application you want to use.
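In code, a stateful login can be sketched like this. It's a minimal, hypothetical Python example - the user table and function names are invented, and a real application would hash passwords and use a proper session framework:

```python
import secrets

# In-memory session store: this dict is what makes the server "stateful".
sessions = {}  # session_id -> username

# Hypothetical credential store; real systems never keep plain-text passwords.
USERS = {"alice": "correct-horse"}

def log_in(username, password):
    """Check credentials and record the logged-in state on the server."""
    if USERS.get(username) == password:
        session_id = secrets.token_hex(16)
        sessions[session_id] = username
        return session_id
    return None

def view_profile(session_id):
    """Only works while this server still holds your session."""
    user = sessions.get(session_id)
    return f"Profile of {user}" if user else "Please log in"
```

The important detail is that `sessions` lives in the server's own memory - which is exactly what causes trouble once there's more than one server.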
Stateful applications are simple, reliable, fast, and efficient. But this approach has its drawbacks.
The problem with stateful applications
Let’s say the application you want to access is the ecommerce site of a well-known global brand. The site owners realise the dangers of running a large, business-critical website on a single server in a single location. They're likely to be using a multi-tenant variation on public cloud hosting instead.
Chances are that their site actually lives on a number of different servers. And the traffic to those servers is distributed via a load balancer.
It doesn’t affect the way you access the site itself. All you have to do is enter the URL - the work of splitting the traffic takes place behind the scenes. You won’t notice it at all, unless the application is stateful.
Negative user experiences
It turns out the ecommerce site mentioned in our previous example lives on two different servers, located in two different places. We’ll call them server 1 and server 2.
When you visit the site, you're actually accessing the load balancer. That sends you to one or other of the servers depending on the rules it was set up to follow.
For the sake of argument, let’s say the load balancer directs you to server 1. Your computer is now connected directly to server 1, and when you submit your login details, the server checks your username and password against the database. Once it authenticates you are who you say you are, your logged-in state is stored on server 1.
This is all completely fine as long as you stay on server 1. But that's unlikely.
The next request you send is to look at your profile. Just like before, your request goes to the load balancer, and this time it sends you to server 2. And here's where we encounter a problem. Your logged-in state is stored on server 1, and you are now on server 2. Server 2 doesn't recognise you and so forces you to log in again. This is hardly conducive to a positive user experience.
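To see the failure in miniature, here's a hypothetical Python sketch: two servers with private session stores, behind a naive round-robin balancer. All names are invented for illustration:

```python
import itertools
import secrets

class StatefulServer:
    """Each server keeps its own private session store."""
    def __init__(self, name):
        self.name = name
        self.sessions = {}  # session_id -> username, local to this server

    def log_in(self, username):
        session_id = secrets.token_hex(16)
        self.sessions[session_id] = username
        return session_id

    def view_profile(self, session_id):
        user = self.sessions.get(session_id)
        return f"{user}'s profile" if user else "please log in again"

servers = [StatefulServer("server 1"), StatefulServer("server 2")]
balancer = itertools.cycle(servers)  # naive round-robin load balancer

# First request: log in (the balancer routes it to server 1).
sid = next(balancer).log_in("alice")

# Second request: view profile (routed to server 2, which has no session).
print(next(balancer).view_profile(sid))  # prints "please log in again"
```

The session is real and valid - it's just sitting in the wrong server's memory.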
A different approach
Let's look at the problem from a different perspective. You have home contents insurance and you've just bought a huge, all-singing, all-dancing 4K TV. You'll probably want to call your insurance company first to check that your policy covers your new purchase.
You get through to someone in the call centre and they ask you a series of security questions to verify your identity. It's a little annoying, but you understand why they need to do it. After all, they don’t want to be handing out confidential information to someone who isn’t authorised to have it.
Congratulations, you've cleared security and you can now start discussing your policy. You explain the situation, and they let you know that your new TV exceeds the terms of your current contract. You’ll need to talk to someone about increasing your cover. That's not something the agent can do then and there, so they put you through to their sales department.
You're back on hold and, after another short wait, someone from the sales team answers. Unfortunately, the security process you went through before is no longer valid and you'll have to answer the same questions again if you want to access your account.
Not only is this inefficient, it's extremely annoying.
Getting around the problem with stateless applications
Thankfully, there's an alternative to this approach: stateless applications.
This approach stores proof of your login on your computer, rather than on the server. Instead of checking your details against the database and simply agreeing they're correct, the server generates a unique token. Typically, this is a long sequence of seemingly random characters. It then sends you the token and the length of time it'll remain valid.
The token is stored in a separate database, not on the server. So, when you decide to view your profile, you send your token alongside the request. And, because the application is stateless, it doesn't matter where the site lives. Even if the load balancer sends you to a different server, that server will simply take the token, check it against the shared database to see if it's still valid, and use that as authentication.
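Here's the same two-server scenario reworked statelessly, as a hypothetical Python sketch. The shared `token_store` dict stands in for the remote database; in production this would be something like Redis or a database table:

```python
import secrets
import time

# Shared token store, reachable by every server (a stand-in for a real
# remote database; all names here are illustrative).
token_store = {}  # token -> (username, expiry timestamp)

def issue_token(username, ttl_seconds=3600):
    """Generate a random token and record it with an expiry time."""
    token = secrets.token_urlsafe(32)
    token_store[token] = (username, time.time() + ttl_seconds)
    return token

class StatelessServer:
    """Servers keep no login state of their own."""
    def view_profile(self, token):
        record = token_store.get(token)
        if record and record[1] > time.time():
            return f"{record[0]}'s profile"
        return "please log in again"

# Whichever server handles the request, the same token works.
server_1, server_2 = StatelessServer(), StatelessServer()
token = issue_token("alice")
print(server_1.view_profile(token))  # alice's profile
print(server_2.view_profile(token))  # alice's profile
```

Because no server holds anything the others don't, servers become interchangeable - which is what makes load balancing and autoscaling painless.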
The website may use two servers or a hundred. All you know is that the site remembers your credentials whenever you log in.
To revisit our call centre metaphor, imagine if the agent you spoke to first was able to give you a one-time PIN that's valid for a short period. When you speak to the sales rep, they need only ask you for the PIN and you can continue without having to go through the initial security process.
As you might imagine, this gives you all sorts of advantages.
It lets you secure the uptime of your site by duplicating it across more than one server. We always recommend using a minimum of two machines to host your site. That way, if one goes down, you can fall back on the other. In an ideal world, you’d have different servers in different data centres. That way, should the unthinkable happen and an entire data centre go down, your site will remain online.
It also increases efficiency. Your load balancer splits the traffic nicely between the servers, preventing overloads.
The Next Step
Stateless applications also allow you to access one of the most impressive features of hyperscale cloud services: autoscaling.
You can set up your hosting architecture in such a way that, if your servers reach a certain level of load - be that bandwidth, number of connections, or whatever metric you decide to focus on - another server is spun up and the load balancer adds it to the equation. That levels out the demand on your other servers. If demand drops again, the extra server can be killed off without any negative effect on your site's performance.
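The scaling rule itself can be as simple as a threshold check. This toy Python function is purely illustrative - the connection metric, the per-server capacity, and the minimum of two servers are assumptions, not any cloud provider's actual autoscaling API:

```python
def desired_server_count(connections, max_per_server=100, min_servers=2):
    """Scale out when load exceeds capacity; never drop below the minimum."""
    needed = -(-connections // max_per_server)  # ceiling division
    return max(needed, min_servers)

print(desired_server_count(450))  # 5: scale out under load
print(desired_server_count(50))   # 2: scale back in, but keep the minimum
```

Real autoscalers add damping (cooldown periods, gradual scale-in) so servers aren't constantly created and destroyed, but the core decision looks much like this.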
If you'd like to know more about cloud infrastructure migration and set-up, get in touch with our consultants today.