The Next Platform

NeMo

Code

Nvidia NeMo Microservices For AI Agents Hits The Market

April 23, 2025 Jeffrey Burt 0

Last year, amid all the talk of the “Blackwell” datacenter GPUs launched at the GPU Technology Conference, Nvidia also introduced the idea of Nvidia Inference Microservices, or NIMs, which are prepackaged, enterprise-grade generative AI software stacks that companies can use as virtual copilots to add custom AI software to their own applications. …

AI

Using NIM Guardrails To Keep Agentic AI From Jumping To Wrong Conclusions

January 16, 2025 Jeffrey Burt 1

AI agents are the latest evolution in the relatively short life span of generative AI, and while some organizations are still trying to figure out how the emerging technology fits in their operations, others are making strides into agentic AI. …

HPC

Nvidia’s “Grace” Arm CPU Holds Its Own Against X86 For HPC

February 6, 2024 Timothy Prickett Morgan 14

In many ways, the “Grace” CG100 server processor created by Nvidia – its first true server CPU and a very useful adjunct for extending the memory space of its “Hopper” GH100 GPU accelerators – was designed perfectly for HPC simulation and modeling workloads. …

AI

Finding NeMo Features for Fresh LLM Building Boost

December 5, 2023 Nicole Hemsoth Prickett 2

This week Nvidia shared details about upcoming updates to its platform for building, tuning, and deploying generative AI models. …

AI

Keeping Large Language Models From Running Off The Rails

April 26, 2023 Jeffrey Burt 0

The heady, exciting days of ChatGPT and other generative AI and large language models (LLMs) are beginning to give way to the understanding that enterprises will need to get a tight grasp on how these models are being used in their operations or they will risk privacy, security, legal, and other problems down the road. …



All Content Copyright The Next Platform