At some point, when the cloud bills get very high and AI usage keeps climbing, the enterprise is going to want to stop paying a cloud provider, buy its own iron, and co-locate it in a rentable datacenter. The reason why is simple: Most of the time, enterprises will not be able to shoehorn AI systems into their webscale and back office processing datacenters. They simply will not have the power and cooling to do this, even if they start from pretrained models, because inference, which tends to run closer to the users and the data, is surely going to be the bigger problem.
Equinix has been around since the Dot Com boom. It was founded by two datacenter managers at Digital Equipment Corp, Al Avram and Jay Adelson, who correctly understood that creating a kind of Switzerland where networks could be interlinked, and where systems could also be housed, would allow companies all over the world to connect their applications to each other more directly.
Now, fast forward nearly three decades, and Equinix has more than 10,000 customers and operates 270 datacenters in 75 major metropolitan areas around Earth. That vast network of interlinked facilities, which are cross-connected to all of the major clouds, neoclouds, and hyperscalers, drove $8.75 billion in revenues and $815 million in net income last year. More than a few of its facilities are rented out by tech titans who need a presence in a market but who do not want the grief of creating it themselves.
But at its heart, Equinix is a network that interlinks companies, and it has recently announced a network overlay called Fabric Intelligence that allows companies to orchestrate AI processing across their own facilities, facilities rented from Equinix, and the big clouds and hyperscalers where they run applications and store data.
We sat down with DD Dasgupta, vice president of product marketing at Equinix, to talk about the homegrown Equinix network and this new AI-infused Fabric Intelligence layer. Hit the play button above to check it out.