In 2022, the buzzwords and hot topics of yesteryear are fast becoming standard practice. So, if you’re still in the dark about serverless, now’s the time to swot up.
This article will provide you with everything you need to get a firm grasp on serverless computing, including:
- What serverless actually is
- The technology trends that drove the development of serverless
- The pros and cons
- What solutions are available today
- And finally, what makes a good serverless use case
So let’s get to it!
What is serverless?
Serverless computing is – first and foremost – a misleading term. In solutions like peer-to-peer, it’s true there’s not a server in sight, but in serverless solutions there are in fact servers galore.
So why the name?
Serverless actually means the developer takes on less responsibility for provisioning.
Instead the application’s business logic – the code that deals with core functionality – is fed to a serverless solution like AWS Lambda or Azure Functions, and then provisioning is handled by the provider.
This is a somewhat difficult concept to grasp, but you could say that serverless is to computing what Uber is to the traditional setup of owning a car.
The user pays on a per-journey basis, with a vehicle provided – and charged for – per trip. This of course offsets the costs of insuring, maintaining and even learning to drive one’s own vehicle.
Serverless is much the same. There are servers. But they’re someone else’s problem.
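To make that concrete, here’s a minimal sketch of the kind of function you’d hand to a serverless platform. The handler signature matches AWS Lambda’s Python runtime; the event shape and greeting logic are illustrative assumptions, not a real API.

```python
import json

def lambda_handler(event, context=None):
    """Business logic only -- no servers, scaling or OS patching in sight."""
    # The 'name' field is an assumed event attribute for illustration.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You upload the function; the provider decides when and where to run it, and on what hardware.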
How did we get here?
Serverless represents the latest iteration in a technology trend of an increasing focus on application code and business logic and decreasing focus on provisioning infrastructure.
Or, to put it another way, an increase in automated infrastructure provision, on the provider’s side.
What’s business logic again?
Business logic – in software development – is the part of an application which encodes the real-world business rules.
This is really what the application does.
So, a very simplified example might be something like ‘if a user hits play, begin the movie they’re trying to watch.’
Another way of thinking about it is that business logic is the mediator between the database and the user interface.
It decides how the user accesses data, which is another way of saying it encodes what the app actually is.
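The ‘hit play’ rule above can be sketched as a toy function. The subscription and availability checks are invented for illustration – real entitlement logic would live in your own domain model.

```python
def handle_play_request(user_has_subscription: bool, movie_available: bool) -> str:
    """Encode the real-world business rule: what happens when a user hits play."""
    if not user_has_subscription:
        return "prompt_signup"          # mediate between user and data access
    if not movie_available:
        return "show_unavailable_message"
    return "start_playback"
```

Notice there’s nothing here about servers, scaling or environments – that’s the point.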
So what’s this trend toward business logic and why is serverless the endpoint?
Essentially, it’s a trend which says, ‘we want to write code, and we don’t want to maintain the environments it runs in.’
And it’s been gathering pace throughout each new approach to provisioning.
1. BMs (bare metal servers)
Here, the provisioning was literal. A warts-and-all bare metal server. You probably know all the ways in which this is inferior to a VM set-up. But for the purposes of understanding this trend, what’s important is that you maintain it, patch it, pay for it at rest, and scale it with extreme difficulty.
2. VMs (virtual machines)
The elementary particles of the cloud computing universe, virtual machines were a significant improvement on bare metal.
By partitioning off sections of physical servers to create virtual machines, public cloud providers were able to supply a rapidly scalable, flexible and cheaper service. To add more machines, all they needed to do was allocate more (virtual) space.
However, the user was still responsible for maintaining and patching.
3. Containers
Containers are a bit of a funny one, but in tracing the move away from environment, all we need to understand is that a container is like a virtual machine with one less layer of virtualization between it and the provider’s server.
This is grossly oversimplified – for a deep dive, read our previous blog here.
But essentially, getting rid of this layer made containers lighter and easier to work with, though users still had to maintain them, etc.
The endpoint of this trend is currently serverless – although, just to be clear, that doesn’t mean serverless is ‘better’. What it is – however – is the thing furthest along the trend of an increased focus on code and provider-side infrastructure provision and management.
So, what is serverless?
After that recap, we can now say serverless is a computing model wherein infrastructure provisioning/orchestration are carried out by the provider in response to application code written by the user.
We could have said that at the start, but it might not have made much sense!
An example of serverless architecture using AWS Lambda
Here we can see static elements of the app are hosted in S3. That isn’t serverless.
However, for requests requiring app data, API keys or identities, Lambda will spin up instances of the pictured services per request – that’s serverless.
The user in this example doesn’t have to manage or provision Cognito, STS or DynamoDB.
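A hedged sketch of the data-access part of such an architecture is below. In a real deployment the handler would call DynamoDB via boto3 (e.g. `boto3.resource("dynamodb").Table("movies")`); here the table object is injected so the business logic stands alone, and the table and key names are assumptions.

```python
def get_movie(event, table):
    """Fetch one record; provisioning the table itself is the provider's job.

    `table` is anything with a DynamoDB-style get_item(Key=...) method --
    in production, a boto3 Table; in tests, a stub.
    """
    movie_id = event["pathParameters"]["id"]
    item = table.get_item(Key={"id": movie_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": "not found"}
    return {"statusCode": 200, "body": item["title"]}
```

Injecting the table like this also sidesteps one of the testing pains of serverless: you can exercise the logic without spinning up any cloud resources.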
Serverless vs FaaS
For most intents and purposes, at least at beginner level, the terms FaaS and serverless are interchangeable. And you’ll find many articles that treat them as one and the same. But they’re not. At least not quite.
FaaS specifically refers to apps written as a series of functions. Serverless is a broader term, and accounts for other forms of provider-side provisioning.
In each case, servers are managed for you, but FaaS is a little more specific.
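‘Written as a series of functions’ can be pictured like this: two independent handlers, each of which would be deployed and scaled separately behind its own trigger. The handlers, triggers and event shapes are illustrative assumptions.

```python
def resize_image_handler(event, context=None):
    # e.g. triggered when an image lands in object storage
    width, height = event["width"], event["height"]
    return {"width": width // 2, "height": height // 2}

def notify_user_handler(event, context=None):
    # e.g. triggered by a message arriving on a queue
    return f"Notified {event['user']}"
```

In a FaaS model neither function knows about the other; the provider wires them to their triggers and scales each one independently.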
What are the pros and cons of serverless?
The pros
1. Low total cost of ownership
Serverless is cheap, consumes no resources at rest, and because the provider takes such a huge chunk of what you’d normally be doing off your hands, the total cost of ownership goes down enormously.
2. Frees up developer creativity
Concentrating only on business logic opens doors for more creative development, with teams unfettered by concerns about environment and provisioning.
3. Decreased time to market
A sub-benefit of the above, but such a substantial one it warrants its own entry. The time saved not provisioning means developers can finish up quickly, getting the application to market faster.
4. Improved geolocation
Because servers are spun up on-demand, it’s possible for the provider to run the function in an environment close to the user’s request, cutting down latency.
5. Pay-as-you-go scaling
Even VMs are often paid for upfront on an expected-compute basis – this generally being cheaper than pay-as-you-go. With serverless, there are no such constraints, since every function can be handled and scaled independently.
6. Microservices ready
Microservices applications can be implemented as functions. To find out more about microservices, read our thoughts here.
The cons
1. Multi-tenancy security issues
Because serverless apps often perform function requests for multiple clients within the same server – this is what’s meant by multi-tenancy, i.e. sharing a space – serverless solutions can be more vulnerable to certain kinds of attack.
2. Vendor lock-in
It can be difficult migrating a serverless application between providers, and if your provider decides to raise prices, you might be stuck paying them.
3. Cold starts
The first time your function calls up an environment, there can be increased latency. This is known as a ‘cold start,’ but it can be mitigated by having your application make routine calls to ‘keep your environment warm.’
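One common shape for that mitigation: schedule a periodic ping (for instance via an EventBridge rule) and have the handler return early when it sees one, keeping an execution environment warm. The `"warmer"` marker below is an assumption – use whatever field your scheduler actually sends.

```python
def lambda_handler(event, context=None):
    # Scheduled keep-warm pings short-circuit here: cheap, no real work done.
    if event.get("source") == "warmer":
        return {"warmed": True}
    # ...normal business logic runs for real requests...
    return {"result": do_real_work(event)}

def do_real_work(event):
    # Stand-in for the application's actual business logic.
    return event.get("payload", "").upper()
```

The trade-off is that you’re now paying for the pings, which chips away at the pay-nothing-at-rest benefit.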
4. Not the best approach for long-running tasks
Certain procedures – for example large-scale data processing – are better suited to dedicated servers. These are tasks that need to tick away in the background.
5. Lack of operational oversight
Because serverless solutions spin up a new environment every time a function’s called, debugging, monitoring and gathering relevant metrics can be difficult – and you’re more often than not reliant on the provider.
6. Architectural complexity
Implementing a serverless architecture can be difficult, and a great deal of thought needs to go into how the application is split into different functions – although, of course, you do avoid the complexity of provisioning.
So what serverless solutions are out there? Serverless on AWS, Azure or Google?
Serverless and FaaS solutions are offered by all of the major cloud providers. There’s Lambda from AWS, Azure Functions and Google Cloud Functions, as well as a few other serverless, or serverless-ish offerings, such as AWS Fargate.
However, with serverless now a mature technology, the real differences between these services are slight.
All are highly available with the capability to support huge workloads. You could say Google supports the fewest languages – and language support is cited as a meaningful distinction by some – but really, most serverless concepts are implementable in most of the languages offered.
It’s often just a case of developer preference.
Interface-wise, some point to AWS Lambda’s more hold-you-by-the-hand, pop-up-box approach as a positive, making it less likely unseasoned devs will drop the ball, while others prefer the more stripped-back interface of Azure.
The bottom line is that these technologies are roughly equal for most purposes, which again speaks to the fact that they’ve been developing for some years.
How to know whether to go serverless?
Serverless has many potential use cases. It can be great for:
- Start-ups or smaller companies who want to focus on their application only
- Static websites, where the speed and scalability of serverless outstrips VMs
- As part of a DevOps pipeline, where serverless solutions manage the environments needed for testing
- Any workload that doesn’t need to be ‘always on,’ to take maximum advantage of the fact you’re paying nothing while idle
However, there’s more to it than that.
Ultimately, whether or not serverless is the right architecture for you is a very complex question, and though we’d be happy to weigh in, it’s a topic for another day.
It depends on your application, the overall business, and a whole lot more.
The key takeaways
- Serverless isn’t serverless. There are servers, but they’re managed by the provider
- Serverless is part of a trend in which provisioning is backgrounded and businesses focus on code
- There are pros and cons to serverless, including, in the plus column, low costs and availability, and in the minus, vendor lock-in and loss of fine control
- In 2022, there’s not that much difference between the three major providers, AWS, Azure, and Google Cloud
- Whether or not to go serverless is a complex question – it’s great in some circumstances, but not all
How we can help
As a next-gen MSP working with MACH technologies for leading brands, Just After Midnight is perfectly placed to help you take advantage of cloud-native tech. We work with serverless and non-serverless cloud environments, providing support, cost optimisation and brand-new builds.
So, if you think serverless might be the way to go, or you just want pointing in the right direction, get in touch.