Quantifying Data Gravity

Dave McCrory, VP Growth Platform Planning & Solutions
September 21, 2023

IT leaders face three perpetual challenges: reducing risk, lowering costs, and innovating faster. As they evaluate their IT and data strategies, forecasting IT capacity brings all three to a head.

Data Gravity, the growing intensity of data usage as more data is created and exchanged, compounds these challenges by undermining the efficient global exchange of data. Because rapid data creation and escalating needs for storage and processing capacity put tremendous strain on legacy servers and applications, Data Gravity has significant implications for enterprises.

As you calibrate your capacity plans to accommodate the surge of data, the Data Gravity Index™ 2.0 can empower your decision-making. In this article, we’ll answer the following questions about how we quantify Data Gravity in the report and why it matters to IT leaders:

  • What is our methodology for measuring Data Gravity?
  • How do we predict incremental storage and processing needs?
  • What’s the difference between cloud data and non-cloud data, and why does it matter?
  • How does the Data Gravity Index™ 2.0 help with capacity planning?

What is our methodology for measuring Data Gravity?

Using a macroeconomic lens to aggregate global enterprise data, the Data Gravity Index™ 2.0 calculates all enterprise data created and utilized across cloud and non-cloud data centers at the global, regional, and metro levels.

We then correlate the measurement of Data Gravity with the growth of Gross Domestic Product (GDP) and its impact on companies globally and regionally—a new layer of insight we’ve added since releasing the Data Gravity Index™ 1.5 in 2020.

Our calculation considers attributes from 11,000 large multinational companies with more than $1 billion in revenue, including firmographic data, industry benchmarks, and technographic data. The data is sourced from the International Data Corporation (IDC), the International Monetary Fund (IMF), the Organization for Economic Cooperation and Development (OECD), the US Bureau of Economic Analysis, and economic reports provided by individual country governments.
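
To make the correlation step concrete, here is a minimal sketch in Python of correlating aggregated per-metro Data Gravity scores with GDP growth. The metro names and every figure are hypothetical placeholders, not data or code from the Data Gravity Index™ 2.0 methodology.

```python
# Illustrative sketch: correlate hypothetical per-metro Data Gravity scores
# with hypothetical GDP growth. Not the report's actual data or model.
from statistics import correlation  # Python 3.10+

data_gravity_score = {"Metro A": 120.0, "Metro B": 85.0, "Metro C": 310.0, "Metro D": 45.0}
gdp_growth_pct = {"Metro A": 2.1, "Metro B": 1.4, "Metro C": 3.8, "Metro D": 0.9}

metros = sorted(data_gravity_score)
scores = [data_gravity_score[m] for m in metros]
growth = [gdp_growth_pct[m] for m in metros]

# Pearson correlation between Data Gravity intensity and GDP growth
print(f"correlation: {correlation(scores, growth):.2f}")
```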

How do we predict incremental storage and processing needs?

Enterprise storage and processing needs, measured in numbers of storage devices and servers, are calculated by combining projections of enterprise data mass and activity with the acceleration of enterprise storage and processing capacity.

Globally, we expect exponential growth in data created, processed, and utilized across public cloud and private data centers, totaling approximately 1.2 million exabytes by 2025.
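
As a simplified illustration of this kind of compound-growth projection, the sketch below rolls a hypothetical baseline of enterprise data forward a few years. The baseline and growth rate are placeholders chosen only so the result lands near the roughly 1.2 million exabyte figure above; they are not the report's inputs.

```python
# Hypothetical compound-growth projection of enterprise data mass.
baseline_exabytes = 400_000.0   # hypothetical data mass in the base year (EB)
annual_growth = 0.45            # hypothetical compound annual growth rate
years = 3

projected = baseline_exabytes * (1 + annual_growth) ** years
incremental = projected - baseline_exabytes
print(f"projected data mass:  {projected:,.0f} EB")   # ~1.2 million EB
print(f"incremental capacity: {incremental:,.0f} EB")
```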

Enterprise storage

Enterprise storage measures the predicted number of storage devices needed to support forecasted enterprise data. Globally, we predict a significant incremental number of active storage devices—243 million by 2025—will be required to address enterprise Data Gravity. The predictions reflect a blend of public cloud and non-cloud storage, revealing a need for cloud-adjacent placement of storage for many enterprises. Factors driving the growth include the presence of data-intensive industries and region- and country-specific data regulations.
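
A back-of-the-envelope sketch of how a device count can follow from a data forecast: divide the incremental data to be stored by an assumed average effective capacity per active device. Both inputs below are hypothetical, back-solved only to land near the 243 million figure; they are not the report's assumptions.

```python
# Hypothetical device-count estimate: incremental stored data / capacity per device.
incremental_stored_eb = 24_300.0   # hypothetical incremental stored data (EB)
avg_device_capacity_tb = 100.0     # hypothetical effective capacity per device (TB)

eb_to_tb = 1_000_000               # 1 EB = 1,000,000 TB (decimal units)
devices_needed = incremental_stored_eb * eb_to_tb / avg_device_capacity_tb
print(f"incremental storage devices: {devices_needed:,.0f}")  # ~243 million
```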

Enterprise compute

Enterprise compute looks at the predicted number of compute servers needed to support data processing on a global scale. Our predictions indicate 15.3 million compute servers (defined as multiprocessor servers with approximately 50 cores on average) will be required by 2025 to support global data processing needs. This is a substantial number of incremental servers, with a large percentage required outside of, yet adjacent to, the public cloud. Several factors drive the growth of enterprise compute, including the presence of data-intensive industries and the growing use of AI, analytics, and big data.
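
The same style of estimate works for compute: an assumed total core requirement divided by the roughly 50-core average server described above. The core total is a hypothetical placeholder back-solved from the 15.3 million figure, not a number from the report.

```python
# Hypothetical server-count estimate: required cores / cores per server.
required_cores = 765_000_000   # hypothetical total processor cores needed
cores_per_server = 50          # ~50-core multiprocessor server, per the article

servers_needed = required_cores / cores_per_server
print(f"incremental compute servers: {servers_needed:,.0f}")  # ~15.3 million
```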

What’s the difference between cloud data and non-cloud data, and why does it matter?

Data stored or generated in the cloud does not require storage and processing power at the network edge. While many businesses have undertaken cloud migration for their data systems, new points of data generation—such as the rapid growth of enterprise AI adoption—along with security and compliance requirements have intensified data collection and processing at the edge.

Notably, 93% of new data over the next few years will be created outside the public cloud, indicating an increasing need for local storage and processing capacity. IT leaders who recognize this shift can invest in the data systems their enterprises will need to accommodate escalating needs at the edge.
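
As a quick worked example of what that split implies for placement, the sketch below divides a hypothetical volume of new enterprise data between the public cloud and everything outside it (edge, on-premises, colocation). Only the 93% share comes from the report; the volume is a placeholder.

```python
# Hypothetical placement split for new enterprise data.
new_data_eb = 100_000.0            # hypothetical new enterprise data (EB)
share_outside_public_cloud = 0.93  # per the Data Gravity Index findings

outside_cloud_eb = new_data_eb * share_outside_public_cloud
in_cloud_eb = new_data_eb - outside_cloud_eb
print(f"outside public cloud: {outside_cloud_eb:,.0f} EB")
print(f"in public cloud:      {in_cloud_eb:,.0f} EB")
```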

How does the Data Gravity Index™ 2.0 help with capacity planning?

The report gives IT leaders targeted insights as they adopt data-centric approaches and strive to stay ahead in an interconnected world where data is at the core of every decision.

It also gives you insights to inform capacity planning—forecasting additional IT architecture needs based on geographic locations. Using the report’s global, regional, and metro forecasts, you can anticipate how much more cloud and cloud-adjacent storage and processing capacity your enterprise will require in the near future.
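
As a rough sketch of how a regional forecast might feed a capacity plan, the example below scales a current footprint by a metro-level growth multiple of the kind the report provides. The footprint and the multiple are hypothetical placeholders, not figures from the report.

```python
# Hypothetical capacity-planning estimate from a regional growth multiple.
current_storage_pb = 12.0        # hypothetical current enterprise storage (PB)
current_servers = 800            # hypothetical current compute servers
regional_growth_multiple = 2.4   # hypothetical metro-level growth factor through 2025

extra_storage_pb = current_storage_pb * (regional_growth_multiple - 1)
extra_servers = round(current_servers * (regional_growth_multiple - 1))
print(f"additional storage to plan: {extra_storage_pb:.1f} PB")
print(f"additional servers to plan: {extra_servers}")
```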

To dive deeper into Data Gravity insights and forecasts, download the Data Gravity Index™ 2.0 today.
