Decentralized Compute Is The Foundation Of The Metaverse

The metaverse is still a thing in the making – an experience, a service, an envisioned 3D world fueled in large part by artificial intelligence and immersive graphics that, many hope, will be a place where consumers can play games and interact with others and where companies can do business in ways that can’t be done today.

The metaverse has a host of disparate descriptions depending on who is doing the talking, and it is generating a wide range of opinions from industry observers, from those who see it as a game-changing technology to those who dismiss it as little more than a second coming of the flamed-out Second Life, with the promise of a similar outcome.

That said, billions of dollars are being spent on creating the underpinnings for the metaverse, with Meta Platforms, formerly known as Facebook, leading the hype, with co-founder and chief executive officer Mark Zuckerberg and others in the company keeping up a strong drumbeat. Microsoft last month announced it was spending $69 billion buying game maker Activision Blizzard and its immersive experience technologies. Apple and Google also are taking steps to create immersive computing experiences; Nvidia is leveraging its GPUs, DPUs, and soon CPUs to build out its 3D Omniverse simulation platform, its own rendering of the metaverse.

It is going to take years to see how the metaverse plays out, but a certainty is that it will accelerate the decentralization of computing, the latest in the accordion-like back-and-forth that already is underway thanks to such trends as IoT and edge computing. At the Ignite conference a year ago, Microsoft chief executive officer Satya Nadella noted as much, saying the industry had reached “peak centralization.” The type of immersive experiences proponents of the metaverse are talking about will need high bandwidth and low latency, and that will require compute to be closer to the end user.

“The idea is that if you’re going to have an immersive experience, even like Meta is showing, you’re going to have to execute a lot of that locally,” Matt Baker, senior vice president of corporate strategy at Dell Technologies, tells The Next Platform. “If you look at the current things that we would say are akin to the metaverse, there’s a reason why it is largely totems and avatars. It’s because it’s easy to render these almost comic-like animations of me as an avatar that sort of looks blocky. Why do you do that? It’s because you can’t render something really sophisticated and three dimensional over distance.”


As laid out in a recent blog post by Baker, compute and data will be highly distributed, which will drive the need for a lot of processing capacity in datacenters and at the edge, as well as for more powerful PCs and other clients that come with accelerators, more memory, and higher core counts. There also will be a need for standards and open interfaces.

The enterprise demand for highly realistic, real-time immersive experiences, Baker says, is “ultimately going to draw computing power out of hyper-centralized datacenters and into the world around us. It could be sites for telcos or, more likely, cell tower sites. That’s happening right now. There’s a whole host of Tier 2 and Tier 3 co-location providers that are trying to scratch out a niche from the big guys like Equinix to create small datacenters in smaller markets because they want to be able to deliver that.”

Trying to create this 3D world from centralized, far-away datacenters would defy the laws of nature, Baker says. He points to the generally accepted limit for real-time operations of 5 milliseconds to 9 milliseconds of latency. The two Amazon Web Services regions that are relatively close to each other – in Northern Virginia and Ohio – are still 300 or more miles apart, with a latency between them of no lower than 28 milliseconds.
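To put rough numbers behind that argument, here is a back-of-envelope sketch in Python. The roughly 300-mile separation, the 28-millisecond figure, and the 5-to-9-millisecond budget come from the examples above; the assumption that light moves through fiber at about two-thirds the speed of light, with everything beyond propagation lumped together as overhead, is ours.

```python
# A back-of-envelope check of the distance argument, not a measurement.
# Assumptions: light travels through fiber at roughly two-thirds of c
# (~200,000 km/s); routing, queuing, and protocol handshakes are treated
# simply as overhead on top of propagation.

C_FIBER_KM_PER_MS = 200.0    # ~200,000 km/s expressed per millisecond
MILES_TO_KM = 1.60934

def round_trip_propagation_ms(distance_miles: float) -> float:
    """Best-case round-trip propagation delay over fiber for a given distance."""
    distance_km = distance_miles * MILES_TO_KM
    return 2 * distance_km / C_FIBER_KM_PER_MS

# Northern Virginia <-> Ohio, roughly 300 miles apart (figure from the article).
best_case = round_trip_propagation_ms(300)
print(f"best-case fiber round trip: {best_case:.1f} ms")   # ~4.8 ms
print("observed inter-region latency cited above: ~28 ms")
print("real-time budget cited above: 5 ms to 9 ms")
```

Even the physics-only best case eats half of a 5-millisecond budget before any routing or rendering happens, which is the core of the case for pushing compute closer to the user.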

This will mean smaller datacenters – or datacenter-like environments – in more places, such as the wireless towers carriers operate, complete with compute, storage and networking. It also will mean more compute in such places as retail stores and other sites, Baker says, and it will create a world that hyperscalers like AWS, Microsoft Azure and Google Cloud will have to adapt to.

“You’re not going to see this zero-sum emptying of hyperscale datacenters,” he says. “As stuff goes out, it’s going to be more that these are all interrelated with one another, which is why this idea of multicloud that we keep talking about has been viewed as a war between on- and off-prem. The real action is potentially happening in the ‘third premises.’ Everything that’s not these two.”

It could include creating smaller datacenters and leveraging co-location facilities. It also likely will mean the cloud providers continuing to extend their services outside of their own datacenters, Baker says. AWS has done it with Outposts – hardware placed on premises to give enterprises access to cloud services from their datacenters – and more recently with its EKS Anywhere and ECS Anywhere programs to offer AWS services on mainstream servers. Microsoft is doing the same with Azure Arc, and Google Cloud with Anthos.
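As a small illustration of what consuming one of those extensions can look like, the sketch below uses Python’s boto3 library to enumerate the Outposts capacity registered to an account. It is a minimal sketch, assuming the boto3 “outposts” client and working AWS credentials; the region and the printed fields are illustrative rather than prescriptive.

```python
# A minimal sketch, assuming the boto3 "outposts" client and working AWS
# credentials; the chosen region and printed fields are illustrative.
import boto3

def list_on_prem_outposts(region: str = "us-east-1"):
    """Enumerate the Outposts racks/servers registered to the account in one region."""
    client = boto3.client("outposts", region_name=region)
    outposts = client.list_outposts().get("Outposts", [])
    for outpost in outposts:
        print(outpost.get("OutpostId"), outpost.get("Name"), outpost.get("AvailabilityZone"))
    return outposts

if __name__ == "__main__":
    list_on_prem_outposts()
```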

Conversely, as we’ve talked about before, traditional infrastructure companies like Dell, Hewlett Packard Enterprise, Lenovo and Cisco are bringing their environments into the cloud. Dell has been partnering with VMware to cloud-enable their technologies and is offering increasingly more of its products as a service via its Apex initiative. Others, such as IBM with Red Hat, as well as HPE with GreenLake, Lenovo with TruScale, and Cisco Systems with Cisco+, are doing the same.

As the distributed nature of IT accelerates with the metaverse, the edge and similar forces, hardware and software architectures will adapt. There is increasing innovation around silicon, with the rise of chips optimized for AI, analytics and other workloads, which can fuel the move to more composable systems. In addition, enterprises continue to embrace software-driven architectures like hyperconverged infrastructure – in Dell’s case, VxRail – to reduce complexity and improve flexibility.

“We will start to see systems with more fine-grained composability, where you can mix and match architectures in new and interesting ways to achieve new types of outcomes, and also being able to dynamically bring more memory into this application,” he says. “The way that I like to think about it is that future architectures will basically blow the sheet metal off of servers and storage. Those boundaries will go away and they will be connected over really high-speed, low-latency fabrics. There are glimpses of this in what we’re doing with NVMe-over-Fabrics. Those are the forces driving [changes in architecture]. It’s the insatiable thirst for computing power that’s driving architectural changes.”
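The fabric managers that would deliver this do not yet exist in the form Baker describes, so the toy Python model below is only meant to make the idea concrete: logical systems are carved on demand out of shared pools of cores, memory and NVMe-over-Fabrics storage, and memory can be attached later without touching any sheet metal. Every class, method and number in it is hypothetical.

```python
# A toy model of fine-grained composability, not a real fabric manager API.
# All names and figures here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class FabricPool:
    cores: int
    memory_gb: int
    nvme_of_tb: int

    def compose(self, cores: int, memory_gb: int, storage_tb: int) -> dict:
        """Carve a logical system out of the pooled resources, if they fit."""
        if cores > self.cores or memory_gb > self.memory_gb or storage_tb > self.nvme_of_tb:
            raise ValueError("pool cannot satisfy this composition")
        self.cores -= cores
        self.memory_gb -= memory_gb
        self.nvme_of_tb -= storage_tb
        return {"cores": cores, "memory_gb": memory_gb, "storage_tb": storage_tb}

    def expand_memory(self, system: dict, extra_gb: int) -> None:
        """Dynamically attach more pooled memory to an already-composed system."""
        if extra_gb > self.memory_gb:
            raise ValueError("not enough pooled memory left")
        self.memory_gb -= extra_gb
        system["memory_gb"] += extra_gb

# Compose a render node, then grow its memory as the workload demands.
pool = FabricPool(cores=512, memory_gb=8192, nvme_of_tb=200)
render_node = pool.compose(cores=32, memory_gb=256, storage_tb=4)
pool.expand_memory(render_node, extra_gb=256)
print(render_node)   # {'cores': 32, 'memory_gb': 512, 'storage_tb': 4}
```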

Containers, microservices, cloud-native software, and the rise of the twelve-factor methodology for building software-as-a-service (SaaS) applications also will help drive the decentralization of compute that will be needed as the metaverse comes into focus.

“They were built to facilitate parallelism in a datacenter,” Baker says. “But the same approach could be used for segmenting the execution of applications – two different services running in different localities to achieve an outcome. So containerization, microservices, all of this stuff is a tool that was not necessarily designed to facilitate decentralization, but it certainly helps with decentralization as we now have freed our minds from thinking about monolithic applications to segmented applications.”
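A hypothetical sketch of that segmentation might look like the Python below: each service declares a latency budget, and a placement step picks the most centralized locality that still meets it. The localities, round-trip figures and service names are invented for illustration, not drawn from any real deployment.

```python
# An illustrative sketch of "segmented applications": services declare a
# latency budget and a (hypothetical) placement step decides where each runs.
# All localities, latencies, and service names below are made up.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    latency_budget_ms: float

LOCALITIES = {
    "edge-cell-site": 5.0,       # assumed round-trip latency to nearby users
    "regional-colo": 15.0,
    "hyperscale-region": 30.0,
}

def place(service: Service) -> str:
    """Pick the most centralized locality that still meets the latency budget."""
    for locality, rtt_ms in sorted(LOCALITIES.items(), key=lambda kv: -kv[1]):
        if rtt_ms <= service.latency_budget_ms:
            return locality
    return "edge-cell-site"  # fall back to the closest locality available

app = [Service("scene-rendering", 8.0), Service("asset-catalog", 100.0)]
for svc in app:
    print(svc.name, "->", place(svc))
# scene-rendering -> edge-cell-site
# asset-catalog   -> hyperscale-region
```

The point of the sketch is simply that the same microservice boundaries built for parallelism inside one datacenter can become the seams along which an application is split across localities.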

The pendulum swing back to a more distributed IT architecture will benefit vendors like Dell, HPE and Lenovo, which at the dawn of the cloud era were seen as dinosaurs that would be unable to compete with the Googles and Amazons and their massive datacenters. Hyperscalers got adept at managing massive amounts of compute in a small number of locations. Dell and others historically have been “incredibly good at deploying and managing systems in handfuls and rackfuls and datacenterfuls from millions of locations,” he says.


5 Comments

  1. I’m not so sure about this. When I put on my Meta Quest 2 and I connect with Airlink to my PC, just a couple feet away, I, like many other users, average around 60 milliseconds of motion-to-photon latency, and that works pretty well! I struggle with less than 60 hertz, but a high latency is actually pretty tolerable. In this case, I think a majority of that latency is because WiFi 5 wasn’t built for low latency. If we wanted to support on-the-go XR experiences, I think the cellular radio latency is gonna be the biggest issue. Hopefully 5G solves this? If our motion-to-photon latency tolerance is 60 ms, we could very reasonably allocate 20 of those milliseconds to ferrying data from the cell tower to the hyperscale datacenter and back. At the speed of light, my math says that’s around two thousand miles each way!

    The other argument made was that bringing the servers closer makes more bandwidth available, but I don’t buy that. I think that for a cellular carrier, building and operating the radio towers is the expensive part, and passing data around the country to the hyperscaler’s most convenient datacenter is relatively cheap and congestion free. I could be wrong, of course!
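(For what it is worth, the back-of-envelope figure in this comment holds up under its stated assumption of vacuum light speed; light in fiber travels roughly a third slower, which shrinks the radius accordingly.)

```python
# A quick check of the comment's numbers, assuming light in a vacuum;
# light in fiber travels roughly a third slower, shrinking the radius.
SPEED_OF_LIGHT_MILES_PER_MS = 186.282   # ~186,282 miles per second
one_way_budget_ms = 20 / 2              # 20 ms of the 60 ms budget, split per direction
print(f"{one_way_budget_ms * SPEED_OF_LIGHT_MILES_PER_MS:.0f} miles each way")  # ~1863
```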
