Servers, virtual machines, serverless, functions, containers. Compute options have evolved significantly over the last few years, especially with the innovation in cloud computing. But what compute options should you be looking at – and why? Here we look at the main options.
We used to build our own infrastructure in the form of physical servers. These required long procurement lead times and high overheads to install, configure, and maintain the physical and software components and their dependencies.
Public cloud providers still offer physical servers in the form of dedicated Bare Metal services. Whilst these reduce procurement lead times, because they can be deployed on demand, many of the traditional operational and economic overheads remain.
These days Bare Metal servers are typically only used when an application has obscure licensing, compliance, or resource needs. Yet for many organisations they still play an important role in how services are delivered – constraining their ability to digitise and respond with agility.
Virtualisation allowed us to run many “virtual servers” on a physical server or cluster, significantly reducing provisioning time and improving resource utilisation. The reason? You could spin up multiple virtual machines (VMs) and run multiple environments on a single physical server.
The advent of cloud computing revolved around Infrastructure as a Service (IaaS) virtual machines such as Azure Virtual Machines, AWS EC2, and Google Compute Engine.
The use of VMs is still a typical model for organisations that choose to lift and shift their on-premises workloads into the cloud – effectively replicating what they had in their on-premises environment.
In this model you are leasing a server on demand with certain characteristics (and many options).
Whilst there are obvious cost and agility economies with VMs, especially in running monolithic and commercial off the shelf (COTS) application architectures, there are still overheads when comparing to more modern compute options, including:
- Unnecessary charges for keeping the server running even when some or all of its resources are sitting idle
- The need to configure and maintain the operating system, environment, and patches
- Complexity or a requirement for additional services in order to scale effectively
Containers are similar to virtualisation in that they also provide a way to carve up and utilise physical compute resources.
The main difference is that every containerised application shares the same underlying operating system (OS) – the single host OS – whereas every VM gets its own unique OS. Because they avoid running multiple operating systems, containers generally have much lower overhead than VMs, making them easier to scale and move.
Cloud providers take care of managing and running these underlying components, so the real value of containers is that they provide a standard way to package your application’s code, configuration, and dependencies into a single object. This allows the application and its runtime to be abstracted from the environment in which they actually run.
The major benefit of this approach is being able to deploy, scale, and port an application consistently and quickly across virtually any environment – whether the target environment is a private data centre, a public cloud, or even a developer’s desktop.
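As a concrete sketch of that packaging idea, a minimal Dockerfile might look like the following. Everything here – the base image, file names, and port – is a hypothetical assumption for illustration, not taken from any specific application:

```dockerfile
# Hypothetical example: an application's code, configuration, and
# dependencies packaged into a single, portable container image.
FROM python:3.12-slim

WORKDIR /app

# Declare and install dependencies alongside the code
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code and configuration
COPY . .

# Hypothetical service port
EXPOSE 8080

# The same image runs identically on a laptop, in a private
# data centre, or on any public cloud
CMD ["python", "app.py"]
```

Because the image carries its own dependencies, the only thing the target environment needs is a container runtime – the application no longer cares what is installed on the host.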
Additional benefits include the ability to manage version control for deployment and to scale very quickly due to a container’s lightweight nature, allowing for quick start and stop cycles. Containers can also help reduce operational complexity by providing a consistent isolated application operating environment.
Because of their characteristics, containers are great for new software design models such as microservices where an application’s design is broken up, developed, deployed and supported as separate functional components. Each component is put in a fast, highly scalable container that can also be independently scaled and migrated.
Containers reduce traditional server operational overheads but can be complex to orchestrate. However, cloud-based container services like Google GKE, EKS and Fargate from AWS, and Azure AKS increasingly help reduce these orchestration complexities.
Serverless or Functions?
In a serverless model (also known as ‘Function as a Service’), an application invokes cloud compute resources as needed to execute a function. A function is typically invoked by an event; the platform grabs resources when they’re needed and releases them when they aren’t. You don’t manage the underlying servers, which means no operating environment overheads.
By executing a piece of code and dynamically allocating the resources, there’s no underutilisation or cost while resources sit idle – you are only charged while the function is running, and executions often complete in well under five minutes.
This suits sporadically active and stateless applications or microservices. A good example of this is an event-triggered payment, which can take minutes to execute, versus a full payroll process that might be processing for hours or days.
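To make the event-triggered payment example concrete, here is a minimal sketch of a serverless function in the style of an AWS Lambda handler (`def handler(event, context)`). The event fields `amount` and `currency`, and the payment logic itself, are hypothetical illustrations:

```python
def handler(event, context=None):
    """Process a single payment event and return a result.

    In a serverless model, compute resources exist only while this
    function runs: the platform allocates them on invocation and
    releases them as soon as the function returns.
    """
    # Hypothetical event fields, not part of any real payment API
    amount = event.get("amount", 0)
    currency = event.get("currency", "USD")

    if amount <= 0:
        return {"status": "rejected", "reason": "invalid amount"}

    # A real function would call out to a payment service here
    return {"status": "processed", "amount": amount, "currency": currency}
```

Invoking it with a sample event – `handler({"amount": 42.5, "currency": "AUD"})` – simulates the platform delivering a single event, which is exactly the sporadic, stateless pattern serverless pricing rewards.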
Whilst there is often an argument about which is best, the reality is serverless and containers can work hand in hand – especially with microservice architectures. In fact, the AWS Fargate container service uses its own serverless framework to launch, scale, and manage the container clusters that underpin the service. AWS Lambda, Google Cloud Functions, and Azure Functions are the common function services available, each with slightly different supported languages and supplementary services that need to be considered.
Whether you are migrating to the cloud, looking for operational and deployment efficiencies, or developing a new application you should carefully consider the best compute options for each workload.
The main factors to consider when choosing compute models are application architecture, workload scalability and need for portability, and longevity of run time.
If you are developing new applications then containers and serverless have to be front and centre – they simply provide better economies of scale and greater agility in cost and operation.
If you are deploying COTS or legacy applications – which are often unable to handle changing infrastructure variables (given their need to maintain state) – then containers are a good bet. VMs will also do the trick for these application workloads, but may come with a higher operational overhead.
Which compute options are right for you – now and in the future?