Cloud Computing Outlook

Serverless Architecture - Too Good to Be True?

By Ariel Maislos, CEO, Stratoscale

Advancements in cloud computing, containers, API and automation technologies, and the growing sophistication of backend-as-a-service offerings have created the opportunity for cloud providers to offer Serverless Architecture as a cloud service. This doesn’t mean that servers are no longer involved; it just means that developers no longer need to worry about the infrastructure, because everything is taken care of by the cloud provider. Using this approach, developers simply deploy the appropriate code and everything else is managed automatically by the cloud provider. Does this sound too good to be true?

"Serverless Architecture is an excellent option if you can break up your application into microservices"

How Serverless Architecture Works

In traditional web application architecture, you must manage your own infrastructure and ensure it meets your scalability and security needs. For example, when starting out, you have the client on one side and the server on the other. The client sends a “request” and the server replies with a “response.” But if the application gains a bit of traction, you will soon have to scale the server side.

Now, this can be done in a number of ways. One way is to scale up your server, adding capacity by moving to a bigger, more powerful machine.

Another way is to scale out, adding more servers to handle the load. In this case you would also have to deploy a load balancer that “decides” how to distribute the load between two or more servers. This means you will have to administer this setup, taking precautions for the event in which one of the servers, or the load balancer itself, fails.
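To make the scale-out idea concrete, here is a minimal Python sketch of round-robin load balancing; the backend addresses and the simple rotation are illustrative assumptions, not a production load balancer:

    import itertools

    # Hypothetical pool of identical application servers sitting behind the load balancer.
    BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

    # Round-robin rotation: each incoming request is handed to the next server in turn.
    _rotation = itertools.cycle(BACKENDS)

    def pick_backend() -> str:
        """Return the backend server that should handle the next request."""
        return next(_rotation)

    # Example: ten requests are spread evenly across the three servers.
    if __name__ == "__main__":
        for i in range(10):
            print(f"request {i} -> {pick_backend()}")

Everything this sketch glosses over, such as health checks, failover and session handling, is exactly the administration burden described above.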

In terms of cost, you will have to pay for the allocation of all of these components, including the virtual machines, the load balancer, storage and so on, even if they aren’t fully utilized. This requires investment in proper planning and management of these resources. Although some cloud providers offer “pay-as-you-grow” models and “elastic pricing,” you still generally have to decide how to implement your architecture, and for web-application developers the choice is usually to scale out.

Serverless models provide a radically different approach. Unlike traditional architectures, Serverless code runs in stateless compute containers that are event-triggered, ephemeral (they may last for only one invocation), and fully managed by a third party. The service works like a “black box”: you simply upload your code, and the provider takes care of everything else automatically in real time. When a request comes in, the provider spins up a container that runs your function, such as an AWS Lambda function.
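As a concrete illustration, a function in this model is nothing more than a stateless handler that receives an event and returns a response. The sketch below follows the AWS Lambda Python handler convention (an event dictionary and a context object); the greeting logic and the HTTP-style response shape are illustrative assumptions:

    import json

    def handler(event, context):
        """Stateless, event-triggered function: the provider spins up a container,
        invokes this handler with the request event, and may discard the container
        right afterwards. No provisioning or scaling code appears anywhere."""
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

Deploying this handler is all the developer does; where and how it runs is entirely the provider’s concern.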

In terms of cost, with the Serverless model you usually pay only for the requests served and the compute time required to run your code. Billing is metered in increments of 100 milliseconds, which makes it cost-effective, and the platform scales automatically from a few requests per day to thousands per second.
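To see how pay-per-use billing works out in practice, here is a back-of-the-envelope estimate; the per-request and per-100-millisecond rates below are placeholder assumptions, not any provider’s actual price list:

    # Placeholder rates: substitute your provider's actual pricing.
    PRICE_PER_REQUEST = 0.0000002     # assumed cost per invocation
    PRICE_PER_100MS = 0.000000208     # assumed cost per 100 ms of compute time

    def monthly_cost(requests_per_month: int, avg_duration_ms: float) -> float:
        """Estimate monthly cost: a per-request fee plus billed 100 ms increments."""
        increments = -(-avg_duration_ms // 100)  # duration rounded up to the next 100 ms
        return requests_per_month * (PRICE_PER_REQUEST + increments * PRICE_PER_100MS)

    # Example: 3 million requests a month, each running about 120 ms (billed as 200 ms).
    print(f"${monthly_cost(3_000_000, 120):.2f} per month")

With no traffic there is nothing to pay for, which is the crucial difference from paying for idle, pre-allocated servers.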

Advantages of using a Serverless Architecture

• Reduced operational costs – If you think about it, Serverless is essentially an outsourcing solution; the infrastructure doesn’t go away. However, compared to regular cloud services, the fact that you only pay for the compute you actually need means that, depending on your traffic’s scale and shape, this can translate into huge savings in operational costs, especially for early-stage and dynamic applications with varying load requirements.

• Infinite Scalability – Extreme scalability is not new in the world of cloud services, but Serverless takes it to a whole new level. The scaling behavior of Serverless not only reduces compute cost, it also reduces operational management because the scaling is automatic. Instead of explicitly adding and removing instances from an array of servers, with Serverless you can happily forget about that and let your vendor scale your application for you. Since scaling is performed by the cloud provider on every request, you don’t even need to think about how many concurrent requests you can handle before running out of memory.

• Separation of concerns – Serverless all but forces you to adopt the separation-of-concerns model, in which you split the application into distinct sections so that each section addresses a separate concern (a minimal sketch follows this list).

• Isolated processes – In Serverless environments, each Lambda function enjoys complete isolation. If one function goes down, it does not affect the other functions and will not crash a shared server.
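A minimal sketch of what this separation and isolation look like in practice: each concern lives in its own independently deployed and scaled function, so a failure in one does not take the others down. The function names and event shapes below are illustrative assumptions:

    # Each concern is a separate stateless function, deployed and scaled on its own.
    # If, say, the thumbnail function fails, orders and e-mails are unaffected.

    def create_order(event, context):
        """Handles only order creation (e.g. triggered by an HTTP request)."""
        order = event["order"]
        # ... validate and persist the order ...
        return {"statusCode": 201, "body": f"order {order['id']} created"}

    def send_confirmation_email(event, context):
        """Handles only e-mail notification (e.g. triggered by an 'order created' event)."""
        # ... render and send the e-mail ...
        return {"statusCode": 200}

    def generate_thumbnail(event, context):
        """Handles only image processing (e.g. triggered by an object-storage upload)."""
        # ... resize the uploaded image ...
        return {"statusCode": 200}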

Drawbacks of using a Serverless Architecture

• Lack of control – With any outsourcing strategy, you are giving up control over part of your system to a third-party vendor. That lack of control may manifest as system downtime, unexpected limits, cost changes, loss of functionality, forced API upgrades and more. Furthermore, if you need a specialized server for a specialized process, you will have to run that server on your own; a Serverless framework, in most cases, offers commoditized infrastructure that runs your processes in a generalized manner.

• High costs for long-running processes – If your processes run for long durations, you may be better off running your own server. Of course, this relates not only to cost but also to the skill set you have and the attention you want to devote to running your own server; consider all of these aspects as you evaluate the solutions.

• Vendor lock-in – By completely outsourcing your infrastructure management to a Serverless provider, you are undoubtedly locking yourself in to that vendor. Each vendor has its own standards and programming framework, which are not easily portable. In almost every case, whatever Serverless features you use from one vendor will be implemented differently by another. If you want to switch vendors, you will almost certainly need to update your operational tools (deployment, monitoring and so on), and you will probably need to change your code as well.

Serverless Architecture is an excellent option if you can break up your application into microservices. It is less suitable for long-running applications that run specialized processes. Although Serverless is relatively new, significant innovations and new features are expected from all the players in this market, as more developers adopt it and bring it to the mainstream.
