Martin Austwick

Marketing Manager


Understanding Cloud Services - Stateful vs Stateless Applications

Setting up hosting architecture is becoming more and more specialised all the time. Not too long ago a hosting company simply set you up with an account on a shared or dedicated server in a single data centre somewhere. Now, with the move to hyperscale cloud technology, things are a little different, and hosting is a much more complex business.

One of the first things you should be looking at is whether it is possible to make your web application stateless.


Stateful Applications

In a traditional stateful application things work in a fairly simple way. If you visit a website and submit a login form correctly, your status is recorded on the server as “logged in”. That means you can then access pages and services that your account has permission to see.

It’s essentially the same as logging in to your own computer to get at the files you store there. Your computer knows it is you because you logged in, and it has stored that logged-in state in such a way that until you log out again or the machine restarts, you will remain able to access them.
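In code, that server-side arrangement can be sketched very simply. This is an illustrative toy, not a real framework: the usernames, function names, and the plain-text password are all invented for the example, and the important detail is that the logged-in state lives in the server's own memory.

```python
import uuid

# A minimal sketch of stateful, server-side session handling.
# The browser only ever holds a session ID; the record of who is
# logged in lives in this server's memory.

USERS = {"alice": "s3cret"}   # username -> password (plain text only for the sketch)
SESSIONS = {}                 # session_id -> username, stored on THIS server

def log_in(username, password):
    """Check credentials and record the logged-in state on the server."""
    if USERS.get(username) != password:
        return None
    session_id = str(uuid.uuid4())
    SESSIONS[session_id] = username   # the "state" lives here, server-side
    return session_id

def view_profile(session_id):
    """Only works while this server still holds the session."""
    username = SESSIONS.get(session_id)
    if username is None:
        return "please log in"
    return f"profile of {username}"
```

A successful `log_in("alice", "s3cret")` returns a session ID, and presenting that ID to `view_profile` returns the profile; an unknown ID is turned away.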

The only difference is that this takes place over the internet, and the computer you have logged on to is not your own PC but a server that lives in a rack in a data centre somewhere in the world and contains the application you want to use.

However, while this is simple, reliable, fast, and efficient, there are a number of problems with it.

Let’s say that the application you are accessing is an e-commerce site: the online storefront of a large brand that sells its products or services over the web.

Running a large, business-critical website, they are unlikely to want the entire thing sitting on a single server in a single location. They are more likely to be using some multi-tenant variation on public cloud hosting.

The chances are that their site actually lives on a number of different servers, and the traffic to those servers is distributed via a load balancer.

It doesn’t make any difference to how you, the user, access the site. You simply type in the URL just like always; the work of splitting the traffic takes place behind the scenes, and you won’t ever notice.

At least you won’t notice unless the application that is running is stateful.

I’m sure you can see where this is going.


The Problem with Stateful Applications

Let’s say the e-commerce site you are using lives on two different servers in different places. We’ll call them server 1 and server 2.

When you visit the site, you are actually accessing the load balancer.  That sends you to one or other of the servers depending on the rules it was set up with.
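The simplest rule a load balancer can follow is round robin: each request goes to the next server in the rotation. Real load balancers offer far richer policies, but this hypothetical sketch captures the idea; the class and method names are invented for the example.

```python
import itertools

# A toy round-robin load balancer: each incoming request is handed
# to the next server in the rotation, regardless of its content.

class RoundRobinBalancer:
    def __init__(self, servers):
        self._rotation = itertools.cycle(servers)

    def route(self, request):
        """Return the server that should handle this request."""
        return next(self._rotation)

balancer = RoundRobinBalancer(["server 1", "server 2"])
```

With two servers in the pool, the first request goes to server 1, the second to server 2, the third back to server 1, and so on.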

For the sake of the example, let’s say you visit the site and the load balancer directs you to server 1. Your computer at home is now connected directly to server 1, and when you submit your login details, the server checks your username and password against the database. Once it confirms you are indeed who you say you are, your logged-in state is stored on server 1.

This is all completely fine as long as you stay on server 1, but that isn’t likely.

The next request you send is to look at your profile. Just like before, your request goes to the load balancer, and this time it sends you to server 2.

And here is where we encounter a problem.

Your logged-in state is stored on server 1, and you are now on server 2. Server 2 doesn’t recognize you and so forces you to log in again.
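The failure is easy to demonstrate in miniature. In this sketch (all names invented for the example), each server keeps its own in-memory session store; logging in on server 1 does nothing for server 2.

```python
# Two servers that each keep their own, private session store -
# exactly the stateful arrangement described above.

class Server:
    def __init__(self, name):
        self.name = name
        self.sessions = set()   # logged-in state lives on THIS server only

    def log_in(self):
        self.sessions.add("you")
        return f"{self.name}: logged in"

    def view_profile(self):
        if "you" in self.sessions:
            return f"{self.name}: here is your profile"
        return f"{self.name}: who are you? please log in again"

server1, server2 = Server("server 1"), Server("server 2")

# The balancer sends your first request to server 1...
server1.log_in()
# ...and your next request to server 2, which has never heard of you.
result = server2.view_profile()
```

`result` is server 2 demanding that you log in again, even though server 1 would happily have shown you your profile.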

A Different Way Of Looking at It

We love a metaphor here at Wirehive, so to stick with tradition we’re not going to let an opportunity like this pass us by.

Have you ever rung a call centre?

Let’s say you have home contents insurance and you have just bought a massive TV. You might ring your insurance company to check that your existing cover is still appropriate.

You get through to someone in the call centre and they ask you a series of questions to make sure that you are indeed the person you are claiming to be.  It can be a little annoying, but it’s a perfectly understandable thing for them to have to do.  After all they don’t want to be handing out any confidential information to someone who isn’t authorized to have it.

Congratulations, you have now cleared security and the person you are talking to is authorized to talk to you about your insurance cover.

You explain the situation, and they let you know that your new TV is simply too massive and you’ll need to talk to someone about increasing your cover. Sadly, however, they are not able to do that for you; they need to put you through to their sales department.

You are back on hold, and after another short wait someone on the sales team answers.

However, the first security clearance you went through isn’t valid any more.  This person isn’t able to access your account to upgrade you without you going through the whole process all over again.

It is not only inefficient but also extremely annoying.

There must be a better way.

And thankfully there is indeed another way.

Instead of storing the fact that you have logged in to the website on the server you are accessing, it is possible to store it on your own computer.

When you log in to a stateless application a slightly different process takes place.

When you submit your details, instead of just checking them against the database and agreeing that they are correct, the server also generates a unique token, usually a long sequence of seemingly random characters, and sends this token to you along with the length of time it will remain valid.
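Token generation itself is straightforward. This is a minimal sketch, assuming a one-hour lifetime chosen purely for the example; Python's `secrets` module produces the long, unguessable string.

```python
import secrets
import time

TOKEN_LIFETIME = 3600  # seconds the token stays valid (arbitrary for the sketch)

def issue_token():
    """Generate a fresh token and the timestamp at which it expires."""
    token = secrets.token_urlsafe(32)          # 43-ish URL-safe random characters
    expires_at = time.time() + TOKEN_LIFETIME  # when it stops being valid
    return token, expires_at
```

Both the token and its expiry time would then be sent back to you, the client.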

You are still logged in, but instead of that state being stored on the server you happen to be connected to, the token is recorded as valid in a shared database. Then, when you decide to view your profile, instead of simply requesting the correct URL, you also send your token along with the request.

Then if the load balancer sends you to a different server, it doesn’t matter at all. Every server is designed to look for a token, check it against the shared database to see if it is valid, and use that as authentication.
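Putting the two halves together gives the stateless arrangement. In this sketch the shared store is just a dict, standing in for something like Redis or a database that every server can reach; all names are illustrative.

```python
import secrets
import time

# token -> expiry timestamp, visible to EVERY server (a stand-in for
# a shared store such as Redis or a database).
SHARED_TOKEN_STORE = {}

def log_in_on_any_server():
    """Issue a token and record its validity in the shared store."""
    token = secrets.token_urlsafe(32)
    SHARED_TOKEN_STORE[token] = time.time() + 3600
    return token

def handle_request_on_any_server(token):
    """Every server runs the same check against the shared store."""
    expiry = SHARED_TOKEN_STORE.get(token)
    if expiry is None or expiry < time.time():
        return "please log in"
    return "here is your profile"
```

Because the check consults the shared store rather than local memory, it gives the same answer no matter which server the load balancer picks.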

You can have two servers, or you can have a hundred. It really doesn’t matter at all from the point of view of the experience of the user.  All you know is that you log in, and then the site remembers that you are logged in. It doesn't matter where the site lives, or in other words it is "stateless".

To revisit our metaphor, imagine how simple it would be for the first person who took you through security clearance to give you a one-time PIN that was only valid for a short period of time. Then, when you spoke to the second person, they would simply ask whether you had a PIN. You give them the PIN, they check it hasn’t expired, and there is no need to go through security again. It wouldn’t matter how many different people you had to speak to to get your TV insured; they’d all use the same PIN to confirm you are who you say you are, and everything would be a lot more efficient.

As you might imagine this gives you all sorts of advantages.

It lets you protect the uptime of your site by duplicating it across more than one server. We always recommend that, as a minimum, you have two different machines with different power supplies hosting your site. That way, if one goes down, you still have another one running. In an ideal world you’d have servers in different data centres; then, if the unthinkable happens and an entire data centre goes down, your site will still be just fine.

It also helps keep things as fast as possible. Your load balancer splits the traffic between the servers, making sure none of them becomes overloaded and starts to slow down.

The Next Step

As if this wasn’t enough, it also gives you access to one of the most impressive features of hyperscale cloud services: autoscaling.

You can set up your hosting architecture so that if your servers reach a certain level of load, be that bandwidth, number of connections, or whatever metric you decide, another server is spun up and the load balancer adds it to the pool, levelling out the demand on the others. If demand drops again, that server can be killed off without any negative effect on the performance of your site.
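The scaling decision itself is just a rule over whatever metric you choose. This sketch uses invented thresholds (average load as a fraction between 0 and 1) and keeps the two-server minimum recommended above; real autoscalers add cooldowns and smoothing on top of this basic logic.

```python
# A hypothetical autoscaling rule: scale out when average load per
# server crosses a threshold, scale back in when it drops. All the
# numbers here are made up for the example.

SCALE_OUT_ABOVE = 0.75   # average load (0..1) that triggers a new server
SCALE_IN_BELOW = 0.25    # average load that lets a server be removed
MIN_SERVERS = 2          # never drop below the two-server minimum

def desired_server_count(current_servers, average_load):
    """Decide how many servers we want given the current average load."""
    if average_load > SCALE_OUT_ABOVE:
        return current_servers + 1
    if average_load < SCALE_IN_BELOW and current_servers > MIN_SERVERS:
        return current_servers - 1
    return current_servers
```

Note that this only works because the servers are stateless: a freshly spun-up server can answer any request straight away, and a server can be removed without throwing anyone's session away.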

Of course, that is not quite so simple to set up as a basic single stateful server.

If you'd like to know more about how to set up your cloud infrastructure then register for our FREE webinar.