Cisco navigated a rocky road in its first quarter, as evidenced by the dip in the networking giant’s share price this morning. However, like its public tech peers, Cisco is pointing to AI as the savior of all systems spending, claiming $1 billion in AI-centric orders is on the horizon.
Back in the real world, the quarter painted a picture of a tech giant grappling with the aftereffects of a strained global supply chain, shifting demand dynamics, and the slow implementation of products by customers.
Cisco reported a 20 percent year-over-year decline in product orders, with a notable dip in the enterprise and service provider sectors. The Americas, EMEA, and APJC regions all saw a downturn in product orders, with APJC experiencing the most significant drop at 38 percent. This trend was particularly evident among larger enterprises and service providers, which are focusing on installing and implementing previously delivered products, leading to a slowdown in new orders.
While that all might sound rather grim, Cisco did deliver the strongest first quarter in its history for revenue and profitability. The company’s total revenue stood at $14.7 billion, an 8 percent increase from the previous year, driven by growth in networking, security, collaboration, and observability. The firm also reported a record non-GAAP operating margin and its highest non-GAAP gross margin in over 17 years. Cisco cited disciplined expense management as the key to its financial stability during this period.
The quarter also highlighted Cisco’s ongoing business transformation toward more software and recurring revenue streams. This shift indicates a strategic alignment with contemporary market demands and future growth opportunities, particularly in generative AI, cloud, and full-stack observability.
Cisco’s security revenue, long considered the path to future growth, was up a modest 4 percent, driven by zero-trust and threat intelligence offerings. Collaboration increased 3 percent, mainly thanks to advances in calling and contact center technologies, although this was partially offset by a decline in meetings tech. Observability stood out with a remarkable 21 percent growth across its portfolio.
The other upside for Cisco’s future is how it has altered its revenue streams. According to CFO Scott Herren, this is reflected in a 5 percent increase in its Annual Recurring Revenue (ARR) to $24.5 billion, with a 10 percent growth in product ARR. Software revenues rose by 13 percent to $4.4 billion, with an equal increase in software subscription revenues.
Herren said subscription-based revenues now constitute 85 percent of Cisco’s total software revenue, but the right strategy at the wrong moment does little. Despite these gains, the company faced a challenging environment: the 20 percent overall decline in product orders was most pronounced in the APJC region, down 38 percent, and in the service provider and enterprise sectors, which dropped 38 and 26 percent, respectively.
The CFO added that, given the market’s seasonality, he sees better times ahead, noting Cisco expects “the impact to be greatest in Q2 and in Q3. But we do see a return to order growth in the second half of the year, both sequentially and year-on-year as we get there.”
Where the real promise lies is just where you’d expect – in building out AI systems. When asked about a claim that Cisco expects to see $1 billion in AI-centric orders, Herren revealed that the company has “taken orders for over $500 million for infrastructure to support AI networks, AI GPUs inside the cloud players.”
He added that Cisco has “line of sight to $1 billion-plus of orders that our teams feel pretty good that we’re going to get and/or we’ve been designed in already … We now have our Ethernet fabric deployed underneath GPUs in three of the four hyperscalers – major hyperscalers in the United States. We also are working very closely with AMD, Intel, and Nvidia to create solutions, including Ethernet technologies, GPU-enabled infrastructure jointly tested and validated reference architectures.”
And there you have it, yet again. AI to the rescue – if not of all humanity, at least share price bumps. ®
Great and very timely! The MS Azure Eagle placing number 3 on the Top500 strongly suggests (to me at least) that the prospect of cloud HPC, and cloud AI, is becoming much more realistic (unlike RISC-V). Taking that diagonally over to the excellent SC23 Gordon Bell Prize-candidate paper on the E3SM Frontier Summit SCREAM (Taylor, Guba, Sarat Sreepathi, … https://dl.acm.org/doi/10.1145/3581784.3627044 ), and particularly the line with blue circles and the one with red squares in their Figure 3, one finds the following (Section 7.1):
“Running at the strong scaling limit, the performance is dominated by communication costs, and GPU speedups are diminished”
The CPU-only system actually overtakes the GPU-accelerated one at this limit (40 nodes, 110km coarse config.) because networking can’t keep up (this is for Perlmutter; Frontier’s networking is just fine as shown later in Fig. 4)! All this to say that the user-available performance of the gargantuan machines being fielded these days (for HPC and AI) relies in no small way on equivalently performant networking (not to be forgotten as we copiously salivate over the specs of mammoth CPUs and elephantine accelerators!)!
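The communication-bound regime described above can be sketched with a toy performance model. The numbers below are purely illustrative assumptions, not the paper’s measurements: a GPU node is assumed to accelerate only the compute term, while its per-step communication cost is assumed higher, so once communication dominates at the strong-scaling limit, the GPU speedup collapses and can go below 1x.

```python
# Toy strong-scaling model (illustrative numbers only, not the paper's data).
# Time per step = compute work shared across nodes + a fixed per-step
# communication cost. A GPU node accelerates only the compute term, so
# once communication dominates, the GPU advantage shrinks and can vanish.
def time_per_step(work, nodes, node_speed, comm_cost):
    return work / (nodes * node_speed) + comm_cost

WORK = 1000.0                    # arbitrary compute units (assumption)
CPU_COMM, GPU_COMM = 0.5, 2.5    # assume pricier comms on the GPU system

for nodes in (8, 40, 1000):
    cpu = time_per_step(WORK, nodes, 1.0, CPU_COMM)
    gpu = time_per_step(WORK, nodes, 4.0, GPU_COMM)  # 4x faster compute
    print(f"{nodes:5d} nodes: GPU speedup = {cpu / gpu:.2f}x")
```

With these made-up constants the GPU speedup is healthy at 8 nodes but drops below 1x at the largest node count, which is the same qualitative crossover the SCREAM paper reports for Perlmutter at the strong-scaling limit.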
Right on! Lambos don’t get great performance on dirt roads … In the SCREAM paper, the coarse problem’s (110km) strong scaling looks best at around 8 nodes (GPU speedup approx 3.5), which gives a compute density near 700 spectral elements per node (Table 1); that density also obtains at the upper end of the Frontier scaling runs for the 3.25km case (6.3 million spectral elements over 8,192 nodes). Neat! Can’t wait for the 10 EF/s box that will run the 1km-grid version of this (with great comps & comms)!
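The per-node density figures quoted above can be sanity-checked with a quick back-of-envelope calculation, assuming the usual cubed-sphere convention for spectral-element grids (an "ne" grid has 6 × ne² elements, with ne30 ≈ 110 km and ne1024 ≈ 3.25 km — these grid identifications are my assumption, not stated in the comment):

```python
# Back-of-envelope check of the "~700 spectral elements per node" density.
# Assumes the cubed-sphere convention: an "ne" grid has 6 * ne**2 elements.
def elements(ne: int) -> int:
    return 6 * ne * ne

coarse = elements(30)    # ne30   (~110 km grid)  -> 5,400 elements
fine = elements(1024)    # ne1024 (~3.25 km grid) -> 6,291,456 (~6.3 M)

print(coarse / 8)        # density at 8 nodes
print(fine / 8192)       # density at 8,192 nodes
```

Both divisions land within about 10 percent of 700 elements per node (675 and 768), which is consistent with the comment’s observation that the sweet-spot compute density recurs at the upper end of the Frontier scaling runs.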