Why the Benefits of Serverless Computing May Never Be Realized
The promised benefits of serverless computing boil down to a way to make application deployment faster, more flexible, more cost-efficient and more scalable.
All over the Web, you can find declarations about the revolutionary nature of serverless computing. Some folks have even declared that we're in the midst of a "serverless revolution." The fact is, however, that that revolution has not come to pass, and it probably never will. In many ways, the reality of serverless has failed to live up to the hype surrounding its benefits. Here's why.
The Promise of the Benefits of Serverless Computing
Serverless computing refers to an architecture wherein applications (or parts of applications) run on-demand within specialized execution environments. They are usually hosted in the cloud, although serverless can be done on-premises, as well.
The major potential benefits of serverless computing include:
With serverless computing, users don't need to maintain full operating system environments in order to run their code (hence the term "serverless"). They simply upload the code they want to run, configure which conditions should trigger it, and then sit back and let the serverless framework do its magic.
In cloud services-based serverless environments, you pay only for when your code is actually running. This beats the conventional cloud-based virtual machine pricing model, where you have to pay constantly to keep your virtual servers running, even if they are sitting idle some of the time.
Code can be executed very quickly, and more instances can be spun up seamlessly to meet scalability needs. This makes serverless great for situations where high performance or scalability are priorities.
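To make the model concrete, here is a minimal sketch of what "upload code and configure a trigger" looks like in practice. It uses the AWS Lambda-style Python handler signature as an illustrative assumption; other platforms use slightly different entry points, but the shape is the same: a single function the platform invokes per event, with no server process for you to manage.

```python
import json


def handler(event, context):
    """Minimal serverless function (AWS Lambda-style signature, shown
    here as an illustration). The platform invokes this once per
    triggering event -- an HTTP request, a queue message, a timer --
    and you are billed only for the time it spends executing."""
    name = event.get("name", "world")  # 'name' is a hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

There is no web server, process supervisor, or OS image in the picture: the trigger configuration (an API gateway route, a cron schedule, a storage event) lives in the platform, and the function itself is all the user deploys.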
In general, then, the promised benefits of serverless computing boil down to a way to make application deployment faster, more flexible, more cost-efficient and more scalable. Those are the core promises of the serverless revolution.
The Limitations of Serverless
Unfortunately, serverless hasn’t fully delivered on these promises.
It's true that serverless execution models can make code more scalable while reducing the management burden on users. In some cases, they can also deliver cost efficiencies.
However, these benefits of serverless are outweighed by broader limitations that constrain the value that serverless computing can deliver. Those limitations include the following.
1. Serverless supports only certain programming languages.
Most serverless frameworks allow you to deploy functions that are written only in certain programming languages.
Admittedly, most frameworks support most of the mainstream languages. Some of them, like AWS Lambda and Azure Functions, also provide wrapper facilities that make it possible to run functions written in unsupported languages, although doing so requires more configuration and can degrade performance.
Still, the lack of universal, direct support for serverless functions written in any language places significant limitations on the use cases for serverless computing. Your ability to leverage serverless’s benefits hinges on whether the code you need to deploy happens to be written in a supported language.
To make matters more complicated, the supported languages are not the same on all serverless platforms. Some support languages that are not directly supported on others, making it difficult to migrate between platforms in certain cases.
2. Serverless frameworks are vendor-specific.
On that note, serverless platforms are proprietary and vendor-specific in most respects. Efforts to develop standards for the way functions are written, deployed and managed have been fleeting, especially when compared to other cloud-native technologies, like containers (which have benefitted from the development and widespread adoption of strong, community-based standards).
What this means is that the specific ways in which you build, deploy, update and monitor serverless functions vary from one platform to the next. This limitation adds another layer of complexity to serverless migration and increases the learning curve required to use serverless functions. These things are the opposite of the agility and flexibility that serverless computing theoretically enables.
3. “Cold” serverless functions undercut performance.
A fundamental limitation of serverless architectures is that starting up a function takes extra time if it is "cold," meaning it has not run recently and no initialized instance exists in the serverless environment. The cold start delay may be only a few seconds, but that is an eternity in high-performance contexts where you need millisecond-level responses.
There's not really a good way around this issue. One strategy is to trigger your functions periodically so that they remain warm, but that means paying for invocations you don't actually need, which defeats part of the purpose of serverless (paying only for the time your code genuinely has to run). You can also try to optimize function startup time, of course, but that will only get you so far.
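The keep-warm strategy described above usually takes the form of a scheduled rule (for example, an EventBridge timer firing every few minutes, an assumption here) that sends the function a synthetic ping. A rough sketch of the function-side half, with a hypothetical "warmup" marker field, might look like this:

```python
import json


def handler(event, context):
    # A scheduled rule (e.g., a timer firing every few minutes -- an
    # assumed setup) sends a synthetic event carrying a "warmup" marker.
    # Returning immediately keeps each keep-warm invocation as cheap as
    # possible while still preventing the instance from going cold.
    if event.get("source") == "warmup":  # hypothetical marker field
        return {"statusCode": 204, "body": ""}

    # Normal request handling happens only for real events.
    return {"statusCode": 200, "body": json.dumps({"message": "handled"})}
```

Note the tradeoff this makes explicit: every warm-up invocation is billed time spent doing nothing useful, which is exactly the idle cost that serverless pricing was supposed to eliminate.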
In short, issues related to cold starts mean that the high performance that serverless computing theoretically promises is only achievable under certain conditions or with certain tradeoffs.
4. You can’t (usually) make an entire app serverless.
A final limitation of serverless is that, in most cases, it’s not a way to deploy an entire application. It wouldn’t make sense, or be cost-effective, to try to run everything inside a serverless environment; instead, serverless offers a way to run certain parts of an application that require high performance and substantial compute resources on a non-continuous basis.
As such, serverless is not like containers or virtual machines, which offer holistic ways to deploy an application across a consistent platform. Serverless is more like an add-on that runs alongside containers or virtual machines.
To put this another way, serverless is a way to augment other types of application architectures. It’s not an all-purpose architecture in itself. This also limits the use cases that it can support.
Serverless Is Not New
I’d point out, too, that serverless is actually a relatively old idea. The concept of letting users pay only for the time that code actually runs has been around since it was introduced as part of the Zimki PaaS in 2006. Google App Engine offered the same type of solution starting a couple of years later.
It wasn’t until the debut of AWS Lambda in 2014, however, that folks started talking about serverless as being a game-changer. That obscures the fact that serverless has deeper roots; indeed, it’s much older than many other technologies that we call cloud-native, like containers and Kubernetes.
If you think serverless is newer than it actually is, it’s easy to fall into the trap of overstating its impact on the current IT ecosystem.
Conclusion
I don’t mean to sound overly dismissive of serverless computing. It offers clear benefits, and is a really valuable solution for certain types of situations.
Nonetheless, I think serverless as a whole has been over-hyped. It may be part of a broader cloud-native computing revolution, but serverless on its own falls short of being revolutionary. It’s a handy technology, but it’s not exactly a pathbreaking or singularly disruptive one.