Cloud computing has grown in popularity despite early doubts about its efficacy, and it continues to do so after more than a decade of implementation and rising efficiencies. From ‘what’ to ‘why’ and ‘how,’ enterprises have embraced cloud-triggered transformation and realized the cost, agility, scalability, and other benefits. The next wave is ‘how many’ – where organizations seek to utilize multiple service providers and deployment models.
As CIOs and users look to leverage multiple (public) cloud service providers and service models for their complex and changing set of workloads, several complications have cropped up. Security and costs are the primary concerns.
Cost reduction may have driven organizations to adopt cloud computing, but the advent of a multi-cloud strategy could force them to reconsider their cloud investments. Spiraling cloud costs result in wasted resources. This can be due to too many buyers or influencers in the organization, hasty decisions, poorly done assessments, or underestimation of the workload.
Can it be tamed or optimized?
Yes, but only if organizations evaluate their workloads well. They should not be too ‘attached’ to their legacy systems, should balance long-term against short-term planning, and should be ready to try new solutions that can help manage silos under a single pane. A ‘cost-optimized’ multi-cloud will be the new ‘normal’ that organizations want as they embrace digital and the Internet of Things in the future.
What does it take to work with the changing workload scenarios (for instance, more compute required at the edge for certain IoT applications to work) or developers constantly talking about getting more productive time?
Cloud computing will have to evolve into areas such as edge computing and serverless computing. Edge computing, on the one hand, is expected to complement the current version of cloud with more processing at the edge for real-time processing of data; serverless computing, on the other hand, will allow developers and software teams to run functions directly instead of managing virtual machines (VMs) in the cloud. The operating model would change: in serverless computing the cloud provider takes care of scalability and the deployment of resources, while in edge computing the edge itself acts as another cloud (or feeds into the cloud directly). It would really signal our maturity to consume cloud services in a new avatar.
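To make the serverless idea concrete, here is a minimal sketch of a function-as-a-service handler in the style common to platforms such as AWS Lambda. The `handler` signature, the `event` payload shape, and the IoT-readings scenario are illustrative assumptions, not a specific provider's API; the point is that the developer writes only the function, and the provider handles scaling and resource deployment.

```python
import json

def handler(event, context=None):
    """Hypothetical serverless-style handler: process one batch of IoT
    sensor readings and return a summary.

    In a serverless model, the cloud platform invokes this function on
    demand and manages all scaling and provisioning; the team never
    operates a VM.  Payload shape here is an illustrative assumption.
    """
    readings = event.get("readings", [])
    if not readings:
        # Platforms typically expect a structured response even on error.
        return {"statusCode": 400, "body": json.dumps({"error": "no readings"})}
    average = sum(readings) / len(readings)
    return {"statusCode": 200, "body": json.dumps({"average": average})}

# Local invocation for testing -- in production the platform calls this.
result = handler({"readings": [20.0, 22.0, 24.0]})
print(result["statusCode"])  # 200
```

The same function body could equally be deployed at the edge, closer to the sensors, which is where the two models the article describes begin to overlap.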
The next important change would be the advent of industry clouds, which are still at a nascent stage of development. Though there has been a lot of talk about them in the past few years, it is only recently that the focus has shifted, driven by certain big acquisitions and specific industry needs. An industry cloud concentrates on customizations for a vertical industry (such as FSI and manufacturing) that take into account its specific legal, regulatory, business, and security requirements.
Industry clouds are important for two reasons: first, the industries that were traditionally neglected (for innovative solutions) — for example, real estate — will start to get some renewed focus. Second, as data starts to grow in each vertical due to IoT and digital aspects, tailored insights backed by more stringent regulations will be the need of the hour.
Will the traditional flavor of cloud computing be able to support this shift toward future of cloud computing?
Not necessarily, as companies look for new ways to improve efficiency and automation customized to their industry and organization rather than a ‘one size fits all’ solution. Industry clouds backed by machine learning and artificial intelligence will gain traction as companies shift how they define ‘value.’
Further, ‘cloud brokerage’ will assume more importance as system integrators look to reinvent themselves and user organizations struggle with the monitoring and management of their multiple virtual worlds. This will be further aggravated by two scenarios. First, specific regulations such as the General Data Protection Regulation (GDPR) make the processor accountable for the first time, forcing organizations to reconsider placement, governance, and security in this new paradigm.
Second, the scarcity of ‘fit for purpose’ skills will prompt organizations to look for ‘service brokers’ who are able to manage these multi-cloud or other scenarios for them. But would all these changes also force organizations to become more stringent in their evaluations and requirements? Certainly, and that’s why the last bit is even more important.
Last, will this multi-vendor theory confine itself to a few large public cloud providers? Apparently not: rising complexity, changing workload requirements, regulations, low entry barriers (in some cases), specific requirements, and security concerns will give rise to more regional or niche players. Hybrid scenarios will not always entail a large cloud provider but, at times, a regional or niche one as well to cater to specific flexibilities. This uptake will ultimately allow regional players to fan out and complement large service providers in the future, and to innovate on concepts provided by other cloud service providers as well.
Finally, these developments will not only change the way we consume services but will also help us innovate faster. Those who embrace these changes, both service providers and consumers, will continue to grow and stay relevant. Others will suffer as cloud computing gains momentum and starts to touch our everyday lives (directly or indirectly) in a more customized fashion. The need of the hour is to stay focused in this cloudy world.