More Power – And Cooling – To You

Just below the massive hyperscalers and cloud builders there is another set of dozens of datacenter operators who provide cloud and co-location services on a multinational basis to enterprises, governments, and academic institutions. Cyxtera Technologies, a spinout of the former CenturyLink telco and Internet service provider (now known as Lumen Technologies), is one of them.

Cyxtera operates over 60 datacenters with around 250 megawatts of aggregate power capacity, which is how you start measuring infrastructure at scale because keeping track of the boxes is too difficult at that point. Being descended from a telco, Cyxtera also operates one of the largest networks on Earth and is cross-connected with everything any company might need to link to, be it other clouds and co-los or their own facilities scattered around the globe.

Like other datacenter operators, Cyxtera is wrestling with the compute and power density of modern AI training systems, and we have been on a mission to see how some of the bigger datacenter operators are handling these hot and heavy beasts as they swing their tails around the glass house.

We sat down with Holland Barry, field chief technology officer at Cyxtera, to find out what is going on with datacenter expansion and whether customers are thinking about putting their AI systems in co-los from the start rather than trying to shoehorn them into homebuilt datacenters that probably cannot handle the power and heat these AI systems require.

This is especially true as many of these systems are starting to require liquid cooling, which many datacenters can’t handle gracefully. There is also interest in immersion cooling, which we think is neat but which many enterprises will not go for, because dunking servers and storage in a vat of something cooked up by 3M voids the warranty. That is the kind of behavior you expect from cryptocurrency miners, perhaps, but not from enterprises that have to make their machines last a long time and keep proper tech support and warranty coverage on them, or it will force a “resume generating event” for the CIOs and CTOs.

Take a break and have a listen.



  1. …Immersion cooling: Have a set of clothing dedicated to maintenance chores in the vats. I worked with “swimming” servers from 2018 – 2021. Very different. Modest yuck factor (mostly from convincing yourself that you can do this one quick vat-based chore without getting some of the “3M salad dressing” on your clothes)…Working with 2U/4-node servers hanging face down in a vat: Lifting a node out to install a 2nd M.2 SSD (as I did 400 times), wearing disposable gloves while lifting and holding the node out of the vat with your arm extended horizontally to let the majority of the viscous fluid drain off (it’s NOT water!) WITHOUT it slipping out of your hand is somewhat challenging…But yeah, relative to all the complex per-server plumbing involved with other forms of liquid cooling at any level of scale, immersion would seem to be the future of server cooling…Oh, and don’t forget about coax cable wicking (sigh!)

  2. Cool interview! If I understand correctly, then, the quite impressive 2,500 Gb/s link to clouds is something like 25 x 100 Gb/s ports (rather than 3 x 800 Gb/s or 6 x 400 Gb/s, both of which come just a tad short of 2,500)?
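The port arithmetic in the comment above is easy to verify. A quick sketch, assuming the port combinations are just the commenter's hypotheticals (Cyxtera has not confirmed its actual configuration):

```python
# Aggregate bandwidth (Gb/s) for a few hypothetical port combinations
# that might make up a 2,500 Gb/s cloud link. The port counts here are
# illustrative guesses, not Cyxtera's actual layout.
TARGET = 2500  # Gb/s

combos = {
    "25 x 100 Gb/s": 25 * 100,  # exactly 2,500
    "3 x 800 Gb/s": 3 * 800,    # 2,400 -- falls 100 Gb/s short
    "6 x 400 Gb/s": 6 * 400,    # 2,400 -- also falls 100 Gb/s short
}

for name, total in combos.items():
    verdict = "meets" if total >= TARGET else "falls short of"
    print(f"{name}: {total} Gb/s ({verdict} {TARGET})")
```

Only the 25 x 100 Gb/s arrangement lands exactly on 2,500; the 800 Gb/s and 400 Gb/s combinations both top out at 2,400, which matches the commenter's observation.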
