
HPE Gets The Lead Out On Juniper-Aruba Networking Integration

In June, after a year and a half of poking and prodding, the US Department of Justice finally gave the OK for the $14 billion acquisition of Juniper Networks by Hewlett Packard Enterprise. And attention naturally turned to the question of how the IT giant would fit Juniper into its newly created HPE Networking business and manage the integration with its Aruba networking unit and the rest of its datacenter portfolio.

As we mentioned at the time, Rami Rahim, the former chief executive officer of Juniper who now is leading HPE Networking, had some decisions to make when looking at HPE, Aruba, and Juniper hardware and software assets. At HPE Discover Barcelona 2025 this week, Rahim is showing the direction the company is going in, and it revolves around the “cross-pollination” of the Aruba Central and Juniper Mist AIOps platforms to help drive HPE deeper into AI datacenters. Simply having Juniper on the books helped boost HPE’s Q3 financial numbers. Now come the products.

“Mist was written largely for cloud deployments, and Aruba Central has much more of a diverse deployment model capability,” Rahim, now executive vice president, president, and general manager of HPE Networking, told journalists in a briefing before the show’s kickoff. “Over time, our goal is to unify the experience of these two platforms by leveraging the microservices architecture of both and cross-pollinating capabilities from one onto another. There are massive advantages for our customers, starting with flexibility. A dual-platform design lets our customers choose their preferred control point, whether it’s Mist or Aruba Central, and switch seamlessly between the two platforms as their needs evolve, with no new hardware involved in that decision.”

The plan is to use the combined power of Juniper Mist and Aruba Central to put HPE’s stamp on all parts of the AI datacenter.

Along with this, Juniper will play a key role in what Rahim called the continued encroachment of Ethernet into InfiniBand’s dominance as the networking technology of choice in scale-out datacenter environments, and the potential for Ethernet to play a larger role in scale-up networking – connecting accelerators within a single rack – where NVLink-style connectivity rules.

“You get into scale out, and here you have got front-end and back-end scale out,” Rahim said. “The front end connects clusters to users, to applications, and microservices. On the back end, we’re connecting GPUs across racks. The trend that’s happening in this layer of the network is that, gradually, InfiniBand is migrating to Ethernet. This has become a great new opportunity for those offering Ethernet-based products like HPE networking. Obviously, InfiniBand is going to be around for a while. I said Ethernet would gradually replace InfiniBand. That is already happening. The pace of adoption of Ethernet for scale-out – both front-end and back-end scale out – networking is very fast.”

(As a side note, all of this comes as a group of state attorneys general intervenes in the court case about the merger settlement between the DOJ and HPE and Juniper. Soon after President Trump returned to the White House, the DOJ sued to stop the merger, saying it would be anticompetitive and leave HPE and Cisco controlling too much of the global networking market. Among the agreements HPE made with the DOJ to let the deal go through was to sell its Instant On wireless networking business to a competitor.)

The scale-up piece will come next year, when HPE adopts AMD’s Helios AI server rack (below), which is aimed at cloud service providers looking for an alternative to Nvidia – and its Vera Rubin platform, also coming out in 2026 – for AI training and inferencing workloads. It will include 72 of the chip maker’s Instinct MI455X GPUs.

HPE’s double-wide Helios rack, built according to the Open Compute Project’s Open Rack Wide specification, also will include an Ethernet scale-up switch (below) and accompanying software, powered by Broadcom’s Tomahawk 6 networking chip and based on the Ultra Accelerator Link over Ethernet (UALoE) standard. The switch will be able to support the traffic that comes with trillion-parameter model training, high inference throughput, and huge AI model sizes.
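
To put that in rough perspective, here is a quick back-of-envelope sketch of what a single 102.4 Tb/s Tomahawk 6-class ASIC works out to across a 72-GPU rack. The 800G port speed and the even split of capacity across accelerators are our own illustrative assumptions, not disclosed Helios design points; HPE and AMD have not detailed the scale-up topology.

```python
# Back-of-envelope only: the actual Helios scale-up topology, links per GPU,
# and port speeds have not been disclosed, so these are illustrative assumptions.

ASIC_BANDWIDTH_TBPS = 102.4    # Broadcom's published Tomahawk 6 switching capacity
ACCELERATORS_PER_RACK = 72     # Instinct MI455X GPUs per Helios rack
PORT_SPEED_GBPS = 800          # assumed Ethernet port speed for the scale-up links

ports_per_asic = int(ASIC_BANDWIDTH_TBPS * 1000 / PORT_SPEED_GBPS)
per_gpu_share_tbps = ASIC_BANDWIDTH_TBPS / ACCELERATORS_PER_RACK

print(f"800G ports per ASIC:                 {ports_per_asic}")
print(f"ASIC capacity per GPU (even split):  {per_gpu_share_tbps:.2f} Tb/s")
```

Run as-is, the arithmetic works out to 128 ports of 800G per ASIC and roughly 1.4 Tb/s of switch capacity for each of the 72 GPUs if a single ASIC were shared evenly.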

In the scale-out arena, HPE in the first quarter of next year will roll out the HPE Juniper Networking QFX5250 switch, an Ultra Ethernet Transport device designed to connect GPUs within datacenters that delivers 102.4 Tb/s of bandwidth and, like the scale-up switch, is based on Broadcom’s Tomahawk 6. It also will pair HPE’s liquid-cooling technology with Juniper’s Junos operating system. Rahim said the switch will “deliver a truly high performance, power-efficient, and simplified operations for next-generation AI inference, specifically in the scale-out opportunity.”
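
For a sense of what that bandwidth buys in scale-out terms, the short sketch below runs the standard Clos fabric arithmetic for a switch of that class. The 800 GbE port speed and the non-blocking two-tier leaf-spine layout are generic illustrative assumptions, not an HPE reference design.

```python
# Generic Clos fabric arithmetic for a 102.4 Tb/s switch; the 800 GbE port speed
# and non-blocking two-tier layout are illustrative, not an HPE reference design.

SWITCH_BANDWIDTH_GBPS = 102_400   # QFX5250 / Tomahawk 6 class capacity
PORT_SPEED_GBPS = 800             # assumed port speed

radix = SWITCH_BANDWIDTH_GBPS // PORT_SPEED_GBPS   # ports per switch
single_switch_gpus = radix                         # one NIC per GPU, one switch
two_tier_gpus = (radix // 2) * radix               # non-blocking leaf-spine: half of each
                                                   # leaf's ports face GPUs, half face spines

print(f"Ports per switch:                {radix}")
print(f"GPUs on a single switch:         {single_switch_gpus}")
print(f"GPUs in a two-tier leaf-spine:   {two_tier_gpus}")
```

Under those assumptions a single switch connects 128 GPUs and a non-blocking two-tier fabric reaches roughly 8,200, which is why each jump in switch ASIC bandwidth matters so much for cluster scale.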

The vendor is rounding out its “HPE Juniper everywhere” approach with the MX301, a 1RU edge router that addresses the need to bring AI capabilities out to where much of the data is being created and housed. Coming this month, it will offer 1.6 Tb/s of throughput and 400G connectivity. It also includes Juniper’s Trio chipset for routers and switches.

The rapid merging of HPE and Juniper technology is showing up in other places as well. The company is moving two AIOps microservices from Juniper – the Large Experience Model and Marvis Actions – over to Aruba Central, while two Aruba Central microservices – AI-based client profiling and Organizational Insight – will, in turn, be offered in Mist, furthering the cross-pollination.

“Together, [the Juniper microservices] deliver industry-leading AIOps that get us closer to a true self-driving network with the fewest trouble tickets and fastest time to deployments,” Rahim said. “These two features and capabilities are moving to Aruba Central, and I’m not talking years out but in the first quarter of next year. The Aruba Central microservices bring the power of AIOps all the way to the client with a sleek user interface, and these two capabilities are moving from Aruba Central onto Mist.”

HPE also is introducing a new Wi-Fi access point, on the roadmap for the third quarter of next year, that will include both Juniper Mist and Aruba networking technology. In addition, the company is expanding connectivity for its AI factories to include an edge onramp and a long-haul datacenter interconnect (DCI) that uses Juniper’s MX and PTX routers to connect users, systems, and AI agents to AI factories and to link clusters deployed at long distances or across multiple clouds.

The DCI “provides the high-speed routing and coherent optical connections between datacenters,” Rahim said. “The requirements here can vary depending on distances covered and the complexity of the topology, but for long distance, deep buffers, routed protocols, and integrated coherent optics are all a requirement.”
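The deep-buffer requirement follows from the bandwidth-delay product: a long-haul port has to absorb roughly link speed times round-trip time of in-flight data before congestion feedback can take effect. The sketch below works through one example; the 400G wavelength and 2,000 km span are our own illustrative numbers, not figures quoted by HPE or Juniper.

```python
# Bandwidth-delay product sizing for a long-haul DCI port. The 400G wavelength
# and 2,000 km span are example values, not figures quoted by HPE or Juniper.

LINK_SPEED_GBPS = 400            # one coherent 400G wavelength
DISTANCE_KM = 2_000              # one-way fiber distance between datacenters
FIBER_KM_PER_MS = 200            # light travels ~200 km per millisecond in glass

rtt_ms = 2 * DISTANCE_KM / FIBER_KM_PER_MS
buffer_gbits = LINK_SPEED_GBPS * rtt_ms / 1000   # bandwidth x round-trip time
buffer_gbytes = buffer_gbits / 8

print(f"Round-trip time:        {rtt_ms:.0f} ms")
print(f"Buffer per 400G port:   {buffer_gbits:.0f} Gbit (~{buffer_gbytes:.0f} GB)")
```

At those example numbers, a single 400G port needs on the order of a gigabyte of buffer to ride out a 20 millisecond round trip, which a typical shallow-buffer datacenter switch does not carry.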

HPE also is integrating Juniper’s Apstra Data Center Director and Data Center Assurance software with its own OpsRamp operations management offering in GreenLake.

Rahim said the industry will continue to see HPE rapidly integrating Juniper and Aruba networking technologies, with the overarching branding being HPE Networking. Aruba and Juniper will initially refer to specific products and capabilities, but he added that “if you want a sort of a glimpse into the future, just look at what we announced today in terms of the cross-pollination of both software and hardware.”

“Over time, the distinction between the two platforms – Aruba Central and Juniper Mist – is going to diminish,” Rahim said. “It’s going to disappear, because the experience to the end customer is going to be identical irrespective of which platform that you’re on. The only difference is going to become the deployment model. One is going to be a cloud-based solution, the other one’s going to have virtual private cloud, on-prem, and so forth. So at that point, yes, I think the brands will probably be simplified. But for the foreseeable future, you can sort of think of them as more of a depiction of the specific products that the customer is using.”
