As we progress toward digital landscapes where computing is becoming omnipresent, end users and developers have less time to understand the intricacies of what lies beneath versus what is ultimately being delivered. “Serverless” has been a buzzword in this landscape for the past few years and has picked up momentum lately, thanks to upstream, open-source initiatives along with downstream alternatives provided by the major hyperscalers.
The USP of serverless architecture has been the ability to create and run applications without the need for infrastructure management, leaving the developer to concentrate solely on writing high-quality code. Today, the industry offers a wide array of serverless technologies built on a slew of technology stacks, delivered either by the hyperscalers or as open source. Some of the prominent ones in the market are Knative, OpenFaaS, Google Cloud Run, Google Cloud Functions, AWS Lambda, Azure Functions, and IBM Cloud Functions.
We also need to keep in mind that the compute platform alone doesn’t constitute a serverless platform; it is an amalgamation with other services and components, such as managed API gateways, databases and storage, messaging, and logging and monitoring, that forms a serverless application. One or more of these components can be packaged as a serverless piece, and multiple such pieces can be plugged together to create a larger component. Major independent software vendors (ISVs), along with the cloud service providers (CSPs), work on bringing these components as services to the end user/developer.
Customers choose a serverless platform of their choice based on a number of factors including:
Leverage out-of-the-box solution from the hyperscaler of choice
The native managed services/platforms provided by the adopted hyperscaler become a key driving factor toward adoption of a certain serverless platform. In some cases, this may lead to an eventual vendor lock-in that is tough to break out of.
Portability of the serverless platform across hyperscalers
Enterprises looking primarily at hybrid/multi-cloud or edge-based solutions with no vendor lock-in can opt for this. Here, the open-source community, in conjunction with the vendors, has come up with innovative solutions along the lines of the Open Application Model (OAM), bringing a modular, extensible, and portable design for modeling application deployment with a higher-level yet consistent API.
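OAM's central idea is a separation of concerns: developers describe components, while operators attach operational traits (such as scaling) without touching the component itself. The sketch below loosely illustrates that split in plain Python; the field names follow the spirit of OAM but are simplified assumptions, not the exact specification.

```python
# Loose, simplified illustration of OAM-style separation of concerns.
# Field names are assumptions for illustration, not the real OAM schema.

component = {
    "name": "web-frontend",
    "workloadType": "containerized",
    "image": "example/frontend:1.0",
}

# Operators attach traits (scaling, routing, etc.) without editing the component.
traits = [{"type": "autoscaler", "minReplicas": 1, "maxReplicas": 5}]

def render_application(component, traits):
    # An OAM runtime merges component and traits into a deployable spec.
    return {
        "component": component["name"],
        "workload": component["workloadType"],
        "traits": [t["type"] for t in traits],
    }

app = render_application(component, traits)
```

Because the component never references its traits, the same component definition stays portable across runtimes while operations teams vary the traits per environment.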
Open-source adoption in the enterprise
The open-source-based licensing model brings with it the advantages of reduced CAPEX and OPEX. Serverless technology solutions like Knative, OpenFaaS, and KubeVela augment this vision of open-source adoption: being vendor-neutral, they are portable and easy to use.
MOVING APPLICATIONS ONTO SERVERLESS
From its early incarnation as Google App Engine in 2008, through AWS’s release of Lambda in late 2014, to the present day, serverless has come a long way and become a go-to solution for multiple use cases. Some of the core benefits of moving applications to a serverless architecture include:
- Reduced operations cost with pay-per-use model
- Abstraction of infrastructure layer for serverless and containers
- Simpler and faster deployment with reduced time-to-market
- Better observability of individual components through the use of native monitoring and logging solutions
- Reallocation of priorities to the functional areas that need primary attention
To move an existing monolith onto serverless, two approaches can primarily be followed, namely:
Rehost – Here, we essentially take the application and make only the necessary configuration-level changes, without tinkering too much with the code base. This approach suits smaller monolithic applications aligned to single tasks like scheduled jobs, minimal data processing tasks, etc.
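For such single-task workloads, the rehost often amounts to wrapping the existing entry point in a platform-invokable function handler. The sketch below shows this in a hedged, platform-generic way; the handler signature and names are illustrative assumptions, not any specific provider's API.

```python
# Minimal rehost sketch: the existing batch logic is reused as-is, and a
# thin handler adapts it to a FaaS-style invocation. Names are illustrative.

def run_nightly_cleanup(records):
    """Existing monolith logic, unchanged: keep only active records."""
    return [r for r in records if r.get("active")]

def handler(event, context=None):
    # The platform invokes this on a schedule; the event carries the input.
    records = event.get("records", [])
    kept = run_nightly_cleanup(records)
    return {"processed": len(records), "kept": len(kept)}

result = handler({"records": [{"active": True}, {"active": False}]})
```

The business logic stays untouched; only the invocation surface and deployment configuration change.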
Refactor – Here, the application is refactored to make it native to the serverless architecture. This approach holds good for applications that are large and complex in nature.
Analyzing the existing application is key in order to derive treatment types for the future state.
The key assessment parameters to look out for would be:
Application Type – An application aligned to a service-oriented architecture model is an easier candidate for rehosting and reconfiguration, as opposed to a large monolithic one, which would undergo major refactoring following well-known conversion methods such as the strangler pattern.
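The essence of the strangler pattern is a facade that routes each request either to the legacy monolith or to an already-extracted component, so migration proceeds path by path. The sketch below is a hedged illustration; all names and paths are hypothetical.

```python
# Strangler pattern sketch: a facade routes migrated paths to the new
# serverless component and everything else to the monolith. Hypothetical names.

MIGRATED_PATHS = {"/orders"}  # grows as more services are carved out

def legacy_monolith(path, payload):
    return f"monolith handled {path}"

def serverless_orders(path, payload):
    return f"function handled {path}"

def facade(path, payload=None):
    # Incrementally "strangle" the monolith: new paths go to new components.
    if path in MIGRATED_PATHS:
        return serverless_orders(path, payload)
    return legacy_monolith(path, payload)
```

Each newly extracted service simply adds its path to the routing set, leaving the rest of the monolith undisturbed until its turn comes.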
Caching Data – Hyperscaler-provided caching solutions can be leveraged for storing session data, user preferences, and other similar needs from a webpage perspective. Solutions like Redis/Hazelcast are ideal for this kind of data.
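The usual integration shape for such data is the cache-aside pattern: read from the cache, fall back to the backing store on a miss, then populate the cache. The sketch below uses a plain dict as a stand-in for a managed Redis/Hazelcast instance; the function and key names are assumptions for illustration.

```python
# Cache-aside sketch. A dict stands in for a managed cache; in practice a
# client library (e.g. for Redis) would replace it. Names are illustrative.

cache = {}      # stand-in for the managed cache service
db_calls = []   # tracks how often the backing store is actually hit

def load_preferences_from_db(user_id):
    db_calls.append(user_id)          # simulate an expensive database read
    return {"theme": "dark"}

def get_preferences(user_id):
    key = f"prefs:{user_id}"
    if key in cache:                  # cache hit: skip the database entirely
        return cache[key]
    value = load_preferences_from_db(user_id)
    cache[key] = value                # populate the cache on a miss
    return value
```

Repeated reads for the same user then hit only the cache, which is precisely what makes externalized session/preference storage cheap for stateless serverless functions.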
Database Services – Database-as-a-Service (DBaaS) offerings for both relational and non-relational databases provided by the hyperscaler can be leveraged.
Code Analysis for Refactoring – A detailed code-level analysis is required to map components of the existing monolith onto the serverless architecture. A domain-driven design for the services architecture using a strangler-based approach, where the application is broken down gradually, is key to aligning each component to its cloud runtime. Services with fewer dependencies, constraints, and third-party integrations should be considered first, before moving on to the others. The individual components can be broken up into minimum viable products (MVPs) and sequenced by priority, followed by appropriate unit testing, functional testing, system integration testing, and performance testing. Optimizing the number of services created is essential to avoid additional overhead during the actual application run.
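The "fewest dependencies first" sequencing rule can be expressed as a simple sort over a service inventory. The inventory below is hypothetical and exists only to illustrate the prioritization.

```python
# Hedged sketch of MVP sequencing: services with fewer internal dependencies
# and third-party integrations are migrated first. Inventory is hypothetical.

services = [
    {"name": "billing", "dependencies": 5, "third_party": 2},
    {"name": "catalog", "dependencies": 1, "third_party": 0},
    {"name": "orders",  "dependencies": 3, "third_party": 1},
]

def migration_order(inventory):
    # Lower combined coupling score => earlier migration candidate.
    return sorted(inventory,
                  key=lambda s: s["dependencies"] + s["third_party"])

order = [s["name"] for s in migration_order(services)]
```

In a real assessment the scores would come from static analysis and architecture reviews rather than a hand-written list, but the ordering principle is the same.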
Mainframe to Serverless – Moving away from monoliths on mainframes has been a trend, to leverage the best of the cloud-native landscape using the 12-factor design principles. Automated or manual approaches can be considered for converting mainframe applications to serverless. A detailed code-level analysis is required, followed by an incremental, strangler-pattern-based modernization onto a cloud-native-friendly programming framework (Java, .NET Core, etc.). The modernized application components need to be analyzed for hosting: short-lived loads can go to a serverless platform, while long-lived ones suit a container platform. HCLTech’s own homegrown tool, the Automated Technology Modernization Accelerator (ATMA), or any of HCLTech’s partner tools, can enable the journey.
Another approach is to leverage x86-native transcompilers for the legacy COBOL code, making the existing application serverless-compatible without significant changes to its business logic or architecture. This essentially extends the life of existing legacy applications while benefitting from the modern environment.
KEY SERVERLESS ADOPTION STRATEGY
Table 1: Key Elements of Serverless Adoption
BEST PRACTICES FOR A SEAMLESS SERVERLESS DEVELOPMENT AND DEPLOYMENT
While applications can be developed in one or more languages, including Java, Python, Node.js, and C#, the following are some of the best practices that can be followed in the serverless development and deployment paradigm:
- Make the process adhere to the 12-factor design principles
- Codify the configurations using standard vendor-neutral tools (like Terraform, Helm charts, etc.)
- Keep credentials and keys segregated through the use of secrets/vaults
- Provision access incrementally, starting from a deny-all mode
- Avoid configuration drift across environments using appropriate tools
- Maintain well-defined environment segregation between Dev, Test, UAT, Stage, and Prod, with a separate security setup for each
- Rotate service account keys regularly, complying with auditing standards
- Control the flow of traffic using a canary-based approach, rolling out to a subset of users first and gradually releasing to all based on initial feedback
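A common way to implement the canary split is to bucket users deterministically, so the same user always lands on the same revision while only a fixed percentage sees the new one. The sketch below is a hedged, platform-agnostic illustration; managed platforms typically offer this as built-in traffic splitting.

```python
# Deterministic canary routing sketch: hash the user id into 100 buckets
# and send a fixed percentage to the new revision. Names are illustrative.
import hashlib

CANARY_PERCENT = 10  # roll out to 10% of users first

def revision_for(user_id, canary_percent=CANARY_PERCENT):
    # Same user id => same bucket => a stable experience during rollout.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"
```

Raising `canary_percent` step by step, based on the feedback and error rates observed, completes the gradual release described above.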
SERVERLESS REFERENCE ARCHITECTURE
Figure 1: Serverless Reference Architecture
SERVERLESS CONSIDERATIONS FOR THE WAY FORWARD
While serverless architecture has its benefits, long-running workloads may still favor virtual machines or dedicated servers for deployment in order to save costs. Depending on enterprise preference, either a vendor-independent or a vendor-dependent strategy can be taken. With the former, you retain control but must handle maintenance and configuration of the platform yourself; with the latter, vendor-managed services take care of these needs, but without you retaining full control.
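The VM-versus-serverless cost question reduces to a break-even calculation between a flat monthly server cost and a pay-per-use charge that grows with invocation volume. The prices in the sketch below are illustrative placeholders, not any provider's actual rates.

```python
# Back-of-the-envelope break-even between pay-per-use functions and a
# dedicated VM for a sustained workload. All prices are hypothetical.

VM_MONTHLY_COST = 50.0            # assumed reserved VM cost per month
COST_PER_INVOCATION = 0.0000002   # assumed per-request charge
COST_PER_GB_SECOND = 0.0000166    # assumed compute charge per GB-second

def serverless_monthly_cost(invocations, avg_seconds, memory_gb):
    compute = invocations * avg_seconds * memory_gb * COST_PER_GB_SECOND
    requests = invocations * COST_PER_INVOCATION
    return compute + requests

def cheaper_option(invocations, avg_seconds=0.2, memory_gb=0.5):
    s = serverless_monthly_cost(invocations, avg_seconds, memory_gb)
    return "serverless" if s < VM_MONTHLY_COST else "vm"
```

At low, bursty volumes the pay-per-use model wins; past the break-even volume, the always-on VM becomes cheaper, which is exactly why long-running workloads may stay off serverless.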
Overall, it’s predominantly an enterprise-architect-led strategy to move onto serverless landscapes built on top of scalable container environments. The pros and cons need to be weighed carefully before embarking on such a journey.