Serverless computing is no longer new, and it offers a world of benefits to developers and users alike. It is an architecture and execution model that gives developers the freedom and flexibility to build applications without worrying about server infrastructure requirements. While developers focus on writing application business logic, operations engineers look after the other nitty-gritty such as upgrades, scaling and server provisioning. What began as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) is rapidly evolving into Function-as-a-Service (FaaS), where users can explore and experiment without the limitations of traditional servers. They get to focus on hard-core product development and pay only for the actual compute time and resources used instead of the total uptime.
How it works
Serverless computing is typically event-driven: a particular event leads to the execution of a function. The function runs briefly (platforms typically cap execution time at a few minutes) before being discarded, and everything it needs to run is provided by the public cloud infrastructure. Although this sounds simple, the fact that a function can, in turn, trigger more functions makes the overall process more complex. From a traditional developer's perspective this could be perceived as having zero control over the server, but modern developers are more than happy with serverless computing, or FaaS. Today, all large cloud vendors offer serverless products spanning Backend-as-a-Service (BaaS) and Function-as-a-Service (FaaS). As a result, the organization or individual that owns the system does not have to buy, rent or provision servers for the back-end code to run on.
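In code, the event-to-function flow described above looks roughly like this. The sketch below is a minimal handler in the AWS Lambda style; the event shape and field names are illustrative assumptions, not any provider's real schema:

```python
import json

def handler(event, context=None):
    """Entry point the platform invokes whenever a matching event fires.

    `event` carries the trigger payload (here, hypothetical file-upload
    records); `context` holds runtime info and is unused in this sketch.
    """
    records = event.get("Records", [])
    results = []
    for record in records:
        # Each record describes one triggering event, e.g. an object upload.
        key = record.get("object", {}).get("key", "unknown")
        results.append(f"processed {key}")
    # A function can in turn emit events that trigger further functions;
    # here we simply return a summary the platform would log.
    return {"statusCode": 200, "body": json.dumps(results)}

# Local simulation of one event delivery:
event = {"Records": [{"object": {"key": "invoice-42.csv"}}]}
print(handler(event))
```

Once the return value is produced, the execution environment can be discarded; no server sits waiting between events.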
Fig: Serverless Architecture (source: G2crowd)
Green computing – A recent survey indicates that more than 75% of users will be on a serverless architecture within the next 18 months. According to Forbes, “typical servers in business and enterprise data centers deliver between 5 and 15 percent of their maximum computing output on average over the course of the year.” At the same time, the growing demand for larger data centers implies more physical resources and the energy to power them. In typical ‘server’ settings, machines remain powered up even when they sit idle for long stretches. Needless to say, this has a huge impact on the environment.
No doubt cloud infrastructure helped reduce this impact to a great extent by providing servers on demand. Yet the problem persists: servers are left running without proper capacity management. In trying to provision enough capacity to keep applications sustainable over the long haul, enterprises end up being over-cautious and over-provisioning.
Luckily, serverless architecture addresses this issue effectively: vendors provision just enough capacity to meet customer demand in real time, enabling better use and management of resources across data centers.
To cite an instance, Parkar was responsible for evaluating and optimizing ETL jobs for one of the largest telecom companies in the US, with the goal of reducing operational and maintenance costs and making the pipeline more efficient. The company was using a third-party ETL product and burning expensive compute cycles on a server running 24×7 to perform these processes. By leveraging serverless architecture, the jobs were transformed into a utility-based model, with cost paid only for the actual function ‘run instances’.
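A hedged sketch of what such a utility-based ETL function might look like. The record fields and helper names below are hypothetical; the point is that extract, transform and load run only for the duration of each scheduled invocation, rather than on an always-on server:

```python
def extract(source_rows):
    # Pull only the rows not yet handled (a delta load), so each
    # invocation does the minimum work necessary.
    return [r for r in source_rows if not r.get("processed")]

def transform(rows):
    # Normalize a hypothetical telecom usage record.
    return [{"subscriber": r["subscriber"].strip().lower(),
             "minutes": round(float(r["minutes"]), 1)} for r in rows]

def load(rows, sink):
    sink.extend(rows)
    return len(rows)

def etl_handler(event, context=None):
    """Invoked on a schedule (e.g. hourly) instead of running 24x7;
    compute is billed only for this invocation's duration."""
    sink = event.setdefault("sink", [])
    loaded = load(transform(extract(event.get("source", []))), sink)
    return {"rows_loaded": loaded}

source = [{"subscriber": " Alice ", "minutes": "12.34"},
          {"subscriber": "Bob", "minutes": "5", "processed": True}]
print(etl_handler({"source": source}))  # only the unprocessed row is loaded
```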
The outcome? Humongous savings, a happy customer and astounding results:
Lower costs – Going serverless is the easiest and perhaps the most effective way of offloading IT overhead. There is no investment in server maintenance, and you pay only when your code runs. Operational costs drop too, and you save big on cloud administration and the cost of the teams that manage it.
Fast deployment – Developer productivity rises sharply because teams can build, test and release in a highly agile environment, without waiting for infrastructure to be ready or for other elements tied to it to be rolled out. Cloud service providers are also working towards a development environment that is standard for all; a classic case in point was the announcement of AWS Lambda supporting C# in 2016. Although standards can impede vendor innovation, they also signal a healthy inclination to embrace serverless architecture.
Reduced time to market – When faced with tight deadlines, serverless computing gives developers the advantage of running multiple versions of code simultaneously, helping turn ideas into reality quickly. For instance, building functionality that lets mobile users check their credit score within a mobile banking app could take several days to develop, test and deliver on traditional cloud IaaS models like AWS EC2. With event-driven serverless computing on AWS Lambda, the same functionality could be built in just a few hours, yielding a result that is tested, flexible and scalable.
Built-in scaling – Built-in scalability is a huge advantage: enterprises never have to worry about over- or under-provisioning, or about tuning scaling policies. You pay for actual usage, and the serverless infrastructure expands or shrinks as required.
Disaster recovery – In a pay-per-use model, failover infrastructure comes as part of the cloud service provider’s portfolio. Setting it up in paired regions of the geography in question is straightforward and costs a fraction of traditional server-based computing, facilitating a seamless switchover with a recovery time that is virtually zero.
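The pay-for-usage point running through the list above can be made concrete with a back-of-the-envelope comparison. All prices below are assumed placeholders for illustration, not real vendor quotes:

```python
# Illustrative, assumed prices (not any provider's actual rates).
ALWAYS_ON_HOURLY = 0.10       # always-on server, $/hour
PER_INVOCATION = 0.0000002    # per-request charge, $
PER_GBSECOND = 0.0000166667   # per GB-second of compute, $

def monthly_server_cost(hours=730):
    # An always-on server bills for every hour, busy or idle.
    return ALWAYS_ON_HOURLY * hours

def monthly_serverless_cost(invocations, avg_seconds, memory_gb):
    # Serverless bills only for requests and the compute they consume.
    compute = invocations * avg_seconds * memory_gb * PER_GBSECOND
    requests = invocations * PER_INVOCATION
    return compute + requests

# One million invocations a month, 200 ms each at 512 MB:
server = monthly_server_cost()
faas = monthly_serverless_cost(1_000_000, 0.2, 0.5)
print(f"always-on: ${server:.2f}/mo, serverless: ${faas:.2f}/mo")
```

For bursty or intermittent workloads the serverless figure stays far below the always-on one; for sustained heavy traffic the comparison can flip, which is why capacity analysis still matters.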
Experts continue to deliberate on the pros and cons of going serverless. As mentioned earlier, loss of control over infrastructure has been a major concern, since earlier settings allowed developers to customize or optimize it as needed. Some have also voiced security concerns, given that multiple customers share the same serverless infrastructure. Vendors are addressing this by providing serverless offerings inside virtual private networks. Cloud portability is also on offer, enabling a smooth transition from one vendor’s serverless offering to another’s. Cloud service providers are additionally running vulnerability scans and penetration tests on their infrastructure to help iron out compliance issues.
According to the Cloud Native Computing Foundation (CNCF), “there is a need for quality documentation, best practices, and more importantly, tools and utilities. Mostly, there is a need to bring different players together under the same roof to drive innovation through collaboration.”
Serverless adoption advances towards a point of maturity
As with the containers market, the current proliferation of open-source serverless frameworks should decrease and converge in the coming months. That consolidation is a major indication of how serverless adoption is maturing with time. Everyone was in awe of Amazon Web Services’ Lambda, and before we knew it, all the big players had jumped onto the serverless bandwagon. Last year, Google, along with around 50 other companies including Pivotal, IBM, Red Hat and SAP, introduced Knative, an open-source platform for building, deploying and managing serverless workloads on Kubernetes. It was soon touted as developer-friendly software: through its essential yet reusable set of components, it came as a breath of fresh air for those struggling with the difficult aspects of building, deploying and managing applications. An all-in-one orchestration platform was just what integrated teams needed to operate together. In times to come, this orchestration maturity will reach a whole new level, unleashing new possibilities for larger and more complex applications.
The emergence of sophisticated testing tools
When it comes to serverless applications, testing gets even more complex. Serverless architecture brings together separate, distributed services that must be tested both independently and in combination to check the effects of their interactions. It also depends on cloud services and event-driven workflows that cannot easily be imitated for local testing. To address these challenges, which differ sharply from those of testing a monolith, integration testing has come to the fore. This is just the beginning of an era of more sophisticated frameworks and tools that will change the game in a radical way.
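As an illustration, an integration-style test feeds a synthetic platform event into a function and asserts on its observable output. The handler, event shape and names below are hypothetical; in practice the events would be recorded from, or replayed against, the actual cloud platform or a local emulator:

```python
def thumbnail_handler(event, context=None):
    # Function under test: reacts to a hypothetical "object uploaded" event.
    key = event["object"]["key"]
    if not key.endswith((".png", ".jpg")):
        return {"status": "skipped", "key": key}
    return {"status": "created", "key": key.rsplit(".", 1)[0] + "_thumb.jpg"}

def test_creates_thumbnail_for_image():
    # Synthetic event mimicking the platform's trigger payload.
    result = thumbnail_handler({"object": {"key": "cat.png"}})
    assert result == {"status": "created", "key": "cat_thumb.jpg"}

def test_skips_non_image():
    result = thumbnail_handler({"object": {"key": "notes.txt"}})
    assert result["status"] == "skipped"

test_creates_thumbnail_for_image()
test_skips_non_image()
print("all tests passed")
```

The hard part, which the emerging tools aim to solve, is generating faithful events and wiring several such functions together in a test environment.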
From the climate perspective
By some estimates, data centers will account for 4.5% of global electricity consumption by 2025, reiterating the need to look at ways to curtail consumption. When we think of big corporations like Microsoft, we can only imagine the scale at which they run their data centers.
They are focusing on the following areas to minimize environmental impact:
Operational efficiency – Cloud providers leverage the power of multi-tenancy to increase hosting density (the number of applications per server), saving on servers, related hardware, cooling and more. As per a study by Microsoft, increasing the load on a server from 10% to 40% increases its energy requirements by only a factor of 1.7. Increased server utilization in the cloud thus lowers the energy consumed per unit of computing.
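Working through the study’s figures shows why utilization matters: quadrupling the load costs only 1.7× the energy, so the energy spent per unit of work falls sharply:

```python
# Figures from the cited study: 10% load at baseline energy,
# 40% load at 1.7x baseline energy.
base_load, base_energy = 0.10, 1.0
high_load, high_energy = 0.40, 1.7

# Energy per unit of useful work at each utilization level.
energy_per_work_low = base_energy / base_load    # 10.0
energy_per_work_high = high_energy / high_load   # 4.25

improvement = energy_per_work_low / energy_per_work_high
print(f"energy per unit of work drops by a factor of {improvement:.2f}")
```

Roughly a 2.35× improvement, which is exactly the efficiency a densely packed multi-tenant cloud captures and an idle on-premises server wastes.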
Hardware efficiency – In a bid to reduce power consumption, Microsoft invests heavily in redesigning its server hardware, and many of its innovations are released in the form of open-source designs.
Infrastructure efficiency – Data centers critically evaluate their Power Usage Effectiveness, or PUE. A value of 1 would mean the entire energy supply is used only for computing and none for lighting, cooling and other overhead, so large cloud providers keep working to improve their PUE. Microsoft, for instance, achieves an average of 1.25 across all its new Azure data centers.
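PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment. A quick sketch, with assumed illustrative figures:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    1.0 would mean every watt goes to computing; overhead such as cooling
    and lighting pushes the value higher.
    """
    return total_facility_kwh / it_equipment_kwh

# Illustrative, assumed figures (not measured data):
print(pue(1250, 1000))  # 1.25: 25% overhead on top of the IT load
print(pue(2000, 1000))  # 2.0: as much energy on overhead as on computing
```

At Microsoft’s reported 1.25 average, only a quarter of the IT load is spent again on overhead, versus ratios of 2.0 or more that older enterprise facilities commonly report.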
Utilizing renewable energies
While Google is already operating with 100% renewable energy, Microsoft is inching its way to achieving its target of 60% renewables by 2019 and 70% by 2023.
The learning curve is steep when it comes to the FaaS journey. As Winston Churchill rightly said, “The farther back you can look, the farther forward you are likely to see.” The underlying ecosystem still has a long way to go before it matures into a solution devoid of setbacks and complications. Until then, we should focus on the myriad benefits of the operational environment it creates and how it is changing the tide for developers worldwide, ensuring a smoother sail towards their goals. Going serverless is certainly not going to be a cakewalk, especially while traditional server settings dominate our systems. Still, it is an approach that developers, team leads, project managers and everyone concerned should embrace with open arms, fully aware of what going serverless entails. What modern enterprises need is a well-planned architecture, or the initiative can quickly descend into chaos.
To make your serverless initiatives work, call us today. Parkar Consulting is committed to helping you thrive on a serverless network and ensuring minimal impact on the environment through all its endeavors. We can help your development teams perform optimally keeping energy consumption to a bare minimum while enjoying increased efficiency. Call us today and we will tell you how.
Innovative Director of Software Engineering. Entrepreneurial, methodical senior software development executive with extensive software product management and development experience within highly competitive markets. An analytical professional skilled in successfully navigating corporations large and small through periods of accelerated growth.