Environmental, social, and governance (ESG) considerations have emerged as key focus areas for enterprises worldwide. Transitioning towards a low-carbon future requires leveraging digital technologies to execute ESG strategies. Adopting digital technologies such as cloud computing has helped organizations reduce their carbon footprint relative to their existing landscape, giving rise to the green computing paradigm. One of the critical components of green computing is energy efficiency. Efficiency is directly related to sustainable cloud computing, which in turn depends on cloud adoption; the type of cloud service chosen is a subset of cloud adoption. The following shows the relationship between them.
- Efficiency = f (Sustainable computing)
- Sustainable computing = f (Cloud adoption)
- Cloud adoption = f (Cloud services)
Since the dawn of the IT revolution, there has been a huge proliferation of monolithic applications, which typically run on dedicated servers with their own hardware. Over time, the hardware has evolved and become more efficient. But these applications still require dedicated on-prem data centers, which need uninterrupted power to run the servers and to provide cooling and lighting, and consequently emit more carbon.
Cloud computing has the potential to further reduce energy consumption. Organizations can migrate workloads to the cloud, adopting either a hybrid or a full cloud model. An immediate counter-argument is that the cloud hyperscalers also have data centers (and quite large ones), so how is anything gained? The answer is simple: shared resources. Let's explain with an example. In India, the three hyperscalers (Azure, AWS, and GCP) together operate fewer than fifteen data centers. However, more than a hundred organizations based in India have migrated fully to the cloud or run a significant workload there. Imagine: instead of more than a hundred data centers, we have fewer than fifteen. With co-location and shared underlying hardware, virtualization, and composable microservices-based architectures, energy consumption is drastically reduced, with a positive impact on the carbon footprint.
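The consolidation effect can be sketched with a back-of-the-envelope calculation. The counts below are the rough figures cited above; they are illustrative orders of magnitude, not measured values.

```python
# Back-of-the-envelope sketch of data-center consolidation through
# shared cloud resources, using the rough figures cited above.
org_count = 100        # organizations that would otherwise each run a DC
hyperscaler_dcs = 15   # shared hyperscaler data centers in the region

consolidation_ratio = org_count / hyperscaler_dcs
print(f"Each shared data center serves roughly "
      f"{consolidation_ratio:.1f} organizations")
```

Even before accounting for higher utilization and more efficient cooling, each shared facility stands in for several dedicated ones.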
Why do we say that cloud adoption has a long-term positive impact on environmental sustainability?
As per IDC and the World Economic Forum, the cloud adoption trend between 2021 and 2024 is projected to prevent approximately 630 MMT of CO2 from entering the atmosphere. This projection is based on current adoption trends; if the adoption rate is higher, even more CO2 emissions can be avoided.
- Microsoft has stated that Azure is between 22% and 93% more energy efficient than traditional enterprise data centers, depending on the specific comparison being made
- AWS has claimed that running business applications on AWS, rather than in on-premises enterprise data centers in Europe, could reduce associated energy usage by nearly 80%
- Google has stated that Google data centers are 2x more efficient than a typical enterprise data center
So, does this mean that just by moving to the cloud, the sustainability goals can be met? The answer is a big NO.
Understand that even if workloads are moved to the cloud, the underlying issue of servers operating continuously still exists, resulting in only a marginally reduced carbon footprint. This is where the choice of cloud service model plays a role and directly impacts sustainability. Cloud adoption can broadly be divided into:
- Hybrid
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
Hybrid – As the name suggests, the organization still has a sizeable on-prem footprint but has taken the first positive step towards reducing its carbon footprint by moving part of its workload to the cloud. However, the impact is marginal.
IaaS – Although a significant step in the right direction for emission reduction, the impact will be moderate at best, since the servers (primary and backup) still need to be up 24x7. However, with shared resources and co-location, powering down the on-prem data center is the first step in a positive direction. Users can also select options like Auto Scaling or Spot Instances (for example, on EC2) to optimize resource usage even while the servers are up.
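Why auto-scaling helps can be seen with a toy model comparing a statically provisioned fleet (sized for peak load and always on) with one that tracks demand hour by hour. The hourly demand profile below is invented for illustration, and real scaling is never this perfectly elastic.

```python
# Toy model: server-hours consumed by a static fleet vs. an auto-scaled
# fleet over one day. The hourly demand profile below is hypothetical.
demand = [2, 1, 1, 1, 1, 2, 4, 8, 10, 10, 9, 8,
          8, 9, 10, 10, 9, 7, 5, 4, 3, 3, 2, 2]  # servers needed each hour

static_hours = max(demand) * len(demand)  # peak-sized fleet, on 24x7
autoscaled_hours = sum(demand)            # fleet scales to match demand

print(f"Static fleet:      {static_hours} server-hours/day")
print(f"Auto-scaled fleet: {autoscaled_hours} server-hours/day "
      f"(~{autoscaled_hours / static_hours:.0%} of static)")
```

With this profile, the auto-scaled fleet consumes roughly half the server-hours of the peak-sized one, and server-hours translate fairly directly into energy.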
PaaS – This is the best option for green computing. Even though the underlying hardware would be there, there are ways to minimize server utilization by choosing the right component for the job. PaaS databases like Synapse (dedicated SQL pools), Snowflake, and Google BigQuery have the option to scale resources (manually/automatically), which optimizes server utilization thereby reducing energy consumption as well.
Going serverless within PaaS is even better. With serverless computing, the server is up only for the time needed to run the application, which has a significant positive impact on energy consumption: use capacity only when necessary, and otherwise scale down or halt. In a hyperscaler data center, shared hardware resources further reduce power usage, including cooling needs.
The following example illustrates how different cloud services running the same workload can differ in energy consumption. For data warehousing, we'll compare several SQL Server deployment options.
- For On-Prem SQL Server, the duration for server uptime is 24 hours, and the baseline energy consumption is X.
- For SQL Server on a VM, the server remains online 24x7, but the overall energy consumption (including lighting, cooling, etc.) is reduced by about 20% to 0.8X thanks to shared resources.
- By changing the service to Synapse (dedicated SQL pools), the server can be paused and resumed around each batch. Assuming 24 hourly batches of roughly 8 minutes each plus 2 minutes for pausing and resuming, the server is operational for only 240 minutes (4 hours) a day, a direct decrease in server power usage to roughly 0.17X.
- Moving to Synapse serverless pools, the system is active only during processing, estimated at 8 minutes per batch (192 minutes, or 3.2 hours, per day), reducing power consumption to roughly 0.13X.
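The arithmetic behind these figures can be checked with a short sketch. Energy is assumed to be proportional to server uptime, which is a simplification of real data-center power behaviour; the batch counts and durations are those assumed above.

```python
# Verify the relative energy figures for the SQL warehousing example,
# treating energy as proportional to server uptime (a simplification).
BATCHES_PER_DAY = 24    # hourly batches
PROCESS_MIN = 8         # minutes of processing per batch
PAUSE_RESUME_MIN = 2    # extra minutes per batch (dedicated pool only)
DAY_MIN = 24 * 60

dedicated_uptime = BATCHES_PER_DAY * (PROCESS_MIN + PAUSE_RESUME_MIN)
serverless_uptime = BATCHES_PER_DAY * PROCESS_MIN

dedicated_energy = dedicated_uptime / DAY_MIN    # fraction of baseline X
serverless_energy = serverless_uptime / DAY_MIN  # fraction of baseline X

print(f"Dedicated pool:  {dedicated_uptime} min/day -> {dedicated_energy:.2f}X")
print(f"Serverless pool: {serverless_uptime} min/day -> {serverless_energy:.2f}X")
```

The outputs round to the 0.17X and 0.13X figures quoted above.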
Plotting the data below indicates how cloud adoption can lead to a reduction in carbon footprint. The workload is hourly batch processing of data with the same volume and simple transformation complexity; the average on-prem processing time is 8-10 minutes per batch.

| Service Type | On-prem (SQL Server) | IaaS (SQL Server on VM) | PaaS (Azure Synapse - dedicated SQL pool) | PaaS + Serverless (Azure Synapse - serverless pool) |
|---|---|---|---|---|
| Daily server uptime | 24 hours | 24 hours | 4 hours | 3.2 hours |
| Relative energy consumption | X | 0.8X | 0.17X | 0.13X |
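The comparison can be plotted quickly; as a dependency-free stand-in for a charting library, the snippet below renders the relative energy figures from the example as a text bar chart.

```python
# Text bar chart of the relative energy consumption figures from the
# SQL warehousing example (on-prem baseline = X).
figures = {
    "On-prem (SQL Server)":      1.00,
    "IaaS (SQL Server on VM)":   0.80,
    "PaaS (dedicated SQL pool)": 0.17,
    "PaaS + serverless pool":    0.13,
}

for service, rel in figures.items():
    bar = "#" * round(rel * 40)  # scale: 40 characters == X
    print(f"{service:28s} {bar} {rel:.2f}X")
```

The shrinking bars make the point visually: each step from on-prem towards serverless PaaS cuts the energy consumed for the same workload.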
As more and more organizations adopt cloud computing, a significant net reduction in carbon footprint can be achieved. At HCLTech, we make conscious decisions to adopt environment-friendly, sustainable architectures when advising solutions for our customers. While factoring in business requirements and technical feasibility, our approach leverages the components with the highest sustainability quotient within the stated technical requirements.