  • Features
  • Updated: March 20, 2023

Diminishing Cloud Patronage: Big Data Returning To On-Premises Compute 


As hinted in a previous article in this publication, the primary feature that earned cloud technology its prominence is virtualization.

With virtualization, virtual representations of servers, storage, networks, and other physical machines are hosted in a third party's data centre, so that on-premises spending on physical IT infrastructure can be controlled, reduced, or eliminated entirely.

During the early days of virtualization, companies could avoid huge capital expenditure on procuring, commissioning, and maintaining high-cost on-premises IT hardware, freeing those funds for other business activities.

The result was a shift away from on-premises physical IT infrastructure as a cost centre, which radically drove down costs for businesses.

All of these selling points seem to have evaporated now as big data firms appear set to return to on-premises options. 

According to a recent survey from ESG, more than half of companies say that their spending on public cloud apps will increase in 2023, while 56% expect their public cloud infrastructure services spending to go up this year.

A separate report from Gartner forecasts that worldwide spending on public cloud services will grow to a total of $591.8 billion in 2023, up from $490.3 billion in 2022.

The cost burden is such that many companies end up exceeding their budgets for the cloud. 

Veritas, a cloud data management vendor, found in a 2022 poll that upwards of 94% of organizations incur higher costs than anticipated when using a public cloud service provider and overspend by an average of 43%. 
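As a back-of-the-envelope illustration of what a 43% average overrun means in practice, consider this short Python sketch (the dollar figure is purely hypothetical):

```python
def cloud_overrun(planned_budget: float, overspend_rate: float = 0.43):
    """Project actual spend and overrun for a planned cloud budget.

    The 0.43 default mirrors the average overspend Veritas reported;
    the budget figure used below is illustrative only.
    """
    actual = planned_budget * (1 + overspend_rate)
    return actual, actual - planned_budget

actual, overrun = cloud_overrun(1_000_000)
# A $1M planned budget at the reported average becomes roughly
# $1.43M of actual spend, i.e. about $430K over budget.
```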

Cloud-first strategies may be hitting the limits of their efficacy, and in many cases, ROIs are diminishing, triggering a major cloud backlash.

Thomas Robinson, COO of Domino Data Lab, is responsible for revenue and go-to-market, leading sales, marketing, professional services, customer support, and partnerships.

According to him, the great cloud migration revolutionized IT. After a decade of cloud transformations, however, the most sophisticated enterprises are taking the next generational leap: developing true hybrid strategies to support increasingly business-critical data science initiatives and repatriating workloads from the cloud back to on-premises systems.

Enterprises that haven’t begun this process are already behind.

Periscoping The Events Leading to The Great Cloud Migration

Ten years ago, the cloud was mostly used by small startups that didn’t have the resources to build and operate a physical infrastructure and for businesses that wanted to move their collaboration services to a managed infrastructure.

Public cloud services (and cheap capital in a low interest-rate economy) meant such customers could serve a growing number of users relatively inexpensively.

This environment enabled cloud-native startups such as Uber and Airbnb to scale and thrive.

Over the next decade, companies flocked en masse to the cloud because it lowered costs and expedited innovation.

This was truly a paradigm shift: company after company announced “cloud-first” strategies and moved infrastructure wholesale to cloud service providers.


The Unabating Backlash

However, the euphoria of cloud-first strategies may be fading as they hit the limits of their efficacy and, in many cases, diminishing ROIs trigger a major backlash.

Ubiquitous cloud adoption has given rise to new challenges, namely out-of-control costs, deepening complexity, and restrictive vendor lock-in. We call this cloud sprawl.

The sheer quantity of workloads in the cloud is causing cloud expenses to skyrocket.

Enterprises are now running core compute workloads and massive storage volumes in the cloud — not to mention ML, AI, and deep learning programs that require dozens or even hundreds of GPUs and terabytes or even petabytes of data.

The costs keep climbing with no end in sight. In fact, some companies are now spending up to twice as much on cloud services as they were before they migrated their workloads from on-prem systems.

Nvidia estimates that moving large, specialized AI and ML workloads back on premises can yield a 30% savings.
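Whether repatriation actually pays off for a given workload comes down to simple break-even arithmetic. A hedged sketch, with entirely hypothetical prices, might look like this:

```python
def months_to_break_even(cloud_monthly: float,
                         onprem_capex: float,
                         onprem_monthly_opex: float) -> float:
    """Months until cumulative on-prem cost undercuts staying in the cloud.

    Assumes steady utilization -- the usage profile where repatriation
    tends to pay off. All inputs here are hypothetical; use your own quotes.
    """
    monthly_saving = cloud_monthly - onprem_monthly_opex
    if monthly_saving <= 0:
        return float("inf")  # bursty or low utilization: cloud stays cheaper
    return onprem_capex / monthly_saving

# e.g. a $60K/month cloud GPU bill versus $500K of hardware plus
# $20K/month to power and staff it breaks even in 12.5 months
print(months_to_break_even(60_000, 500_000, 20_000))
```

The steady-utilization assumption is the crux: the same hardware that breaks even in a year for an always-on training cluster may never break even for a spiky batch workload.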

Furthermore, new regulations are complicating cloud environments.

US and European data sovereignty laws require enterprises to manage and isolate data in multiple regions according to varying compliance regulations, with compute attached to each one.

This makes a single-region, single-cloud design no longer feasible for sophisticated global enterprises, further adding to the cost and complexity of infrastructure.

Lastly, cloud service providers have continued to move up the stack, providing not just infrastructure as a service (IaaS) but also platform as a service (PaaS) and software as a service (SaaS) in one convenient, integrated cloud deployment.

These PaaS and SaaS offerings are a double-edged sword; while they provide ease of use and expedite time-to-value, they also have higher prices/margins and lead to vendor lock-in.

Unlike S3, AWS's storage layer whose API has become a de facto standard across clouds and on-prem storage providers, higher-level services like GCP Vertex are unique offerings with no cross-cloud compatibility.

The net effect of building these higher-in-the-stack services into your IT architecture is being locked into a specific cloud provider.
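One practical way to keep that option open is to code against the S3 API through a configurable endpoint rather than hard-wiring AWS. A minimal Python sketch, where the profile names and the MinIO endpoint are illustrative placeholders rather than real deployments:

```python
import os

def object_store_config(target: str) -> dict:
    """Return S3-compatible connection settings for a deployment target.

    Because the S3 API is also implemented by on-prem stores such as
    MinIO and Ceph, the application code stays identical across targets;
    only the endpoint and credentials change. The endpoint below is an
    illustrative placeholder.
    """
    endpoints = {
        "aws": None,  # an S3 SDK's default resolves to real AWS S3
        "on_prem": "https://minio.internal.example:9000",
    }
    return {
        "endpoint_url": endpoints[target],
        # Credentials come from the environment so one image runs anywhere.
        "access_key": os.environ.get("OBJ_ACCESS_KEY", ""),
        "secret_key": os.environ.get("OBJ_SECRET_KEY", ""),
    }
```

Passing the `endpoint_url` through to an S3 client (boto3, for instance, accepts an `endpoint_url` argument) is what lets the same code path target AWS or an on-prem cluster.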

The Humbling and Numbing Return to On-premises (On-prem) Options

Thankfully, many of today’s most sophisticated companies realize that this new cloud paradigm is untenable and are developing a hybrid multi-cloud approach consisting of more than one public cloud provider and on-prem systems.

Andreessen Horowitz studied 50 top publicly traded software companies’ disclosed cloud costs and found that “for every dollar of gross profit saved, market caps rise on average 24-25X the net cost savings from cloud repatriation.

This means an additional $4B of gross profit can be estimated to yield an additional $100B of market capitalization among these 50 companies alone.

Extending this analysis to the broader universe of scale public companies that stand to benefit from related savings, we estimate that the total impact is potentially greater than $500 billion.”

Walmart, for example, recently disclosed a years-long project to diversify its infrastructure to include edge compute at its store locations to augment what was previously provided by its Azure and GCP cloud infrastructure.

Walmart’s new multi-cloud structure enables it to “switch seamlessly” between Google’s and Microsoft’s web-based services and its proprietary servers.

Walmart said: “The system has saved as much as 18% annually on overall cloud expenditures and mitigates the potential for outages.”

If Fortune 50 organizations repatriating workloads back to on-prem aren't convincing, take a look at the cloud service providers themselves.

All of the major cloud providers have invested in new products to add on-prem resources to their cloud stacks.

AWS has debuted Outposts with Kubernetes support, Google is developing Anthos, and Microsoft is touting Azure Arc.

If Amazon, Google, and Microsoft are pouring vast sums into product development for on-prem capabilities, clearly the writing is on the wall for future customer demand.

Even cloud stalwarts like Snowflake are making pivots into the hybrid/on-prem space.

Clearly, multi-cloud is the new cloud, and multi-cloud now includes on-prem.

The Low-Hanging Fruit In Front Of Companies

The good news is that companies now have more flexibility than ever in how they develop their infrastructures.

One of the great second-order benefits of the great cloud migration has been the development of a whole new category of technologies incubated by cloud service providers.

Better DevOps automation tools and cloud-native application design with technologies like containerization and Kubernetes have proliferated from the cloud world and become accessible to any organization (e.g., Google had the wisdom to allow Craig McLuckie and Joe Beda to develop Kubernetes and open-source it to the world).

These new tools and technologies have reduced the cost and operational overhead for companies to manage their own infrastructure.

The cloud-native approach provides more flexibility for organizations to move workloads between different underlying IaaS stacks.

And while not all companies are ready for wholesale repatriation, there are a few things that can be done to ensure future flexibility in where workloads run.

Don’t lock yourself in. Avoid cloud services that are higher up the stack and designed to create lock-in.

IaaS services have given way to PaaS and SaaS offerings.

Each of these is only available from a single cloud service provider and serves to create a lock-in to that particular cloud.

If you spend time migrating your data science and ML workloads to AWS SageMaker, for example, then migrating them back to on-prem or to another cloud won't be straightforward.

SageMaker's PaaS stack is completely different from Google Vertex's, and there is no easy path to migrate those workloads.

Instead, opt for a vendor-agnostic stack that has portability to other clouds.

Make portability a priority for your architectural review committee.

When developing applications, establish an architecture review process to determine whether the software has hybrid-compatible architectural underpinnings, and ensure that applications embrace the architectural principles of the cloud so they deliver cost savings, security, and efficiency whether in the cloud or on-prem.

Start investing in hybrid multi-cloud and decide for yourself. This can take the form of beachhead projects — e.g., picking one large AI training workload to run on-prem — or of vendor exploration with providers who are betting on hybrid, such as VMware, Red Hat, and NetApp.

Your Best Bet Is to Choose Wisely

While cloud computing was once a panacea for savings and innovation, returns are now diminishing as AI and ML workloads drive both data volumes and the need for accelerated compute upwards, impacting bottom lines. However, the remarkable success of the cloud has given us powerful new ways of building and managing IT.

Today, the flexible architectural underpinnings that enabled cloud-first giants have permeated enterprise stacks both on and off premises, making enterprises just as nimble in addressing their architectural needs.

The choice between data centres and clouds — even which cloud to use — is yours.

Make it wisely.
