Nvidia Says OpenClaw Is To Agentic AI What GPT Was To Chattybots

The adoption of OpenClaw has been unlike anything seen before in the software industry. The agentic AI personal assistant started life in late 2025, first under the name “Clawdbot” and then “Moltbot.”

However, after taking the OpenClaw name in January and picking up some momentum, it struck like a thunderbolt, taking fewer than four months to surpass 250,000 stars on GitHub and moving past React to become the most-starred non-aggregator software project. There were times when it drew more than 2 million views in a single week.

Look at this growth. The curve climbs so steeply it could pass for a border on the chart.

It also scared the hell out of security professionals. It’s a self-hosted AI agent, integrated with apps such as WhatsApp, Telegram, and Discord, that people let loose on their systems for tasks such as summarizing conversations, scheduling meetings, executing code, and booking flights. Letting AI agents with access to sensitive information and external data take actions on their own, with little if any human oversight, can be a recipe for disaster.

Gartner analysts said OpenClaw’s design was “insecure by default” and called its security risks “unacceptable.” Security analysts with Cisco Systems said it is a “security nightmare,” and myriad security vendors wrote about the rush of threat actors looking to cash in on its various security vulnerabilities.

However, those steeped in the AI space saw something else. OpenAI co-founder and chief executive officer Sam Altman was so taken by OpenClaw – despite the security worries – that he hired the AI agent’s creator, Peter Steinberger, calling him “a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our product offerings.” OpenClaw will move to a foundation to keep it an open project, Steinberger wrote.

Single Most Important Release

For his part, Jensen Huang, Nvidia’s founder and chief executive officer, earlier this month said OpenClaw was “probably the single most important release of software, you know, probably ever,” noting it only took weeks to reach a level of adoption that Linux didn’t hit for three decades. Nvidia has OpenClaw running throughout the company, from developing tools to writing code, he said.

Huang was just as effusive about the AI agent in his keynote this week at Nvidia’s GTC 2026 conference in San Jose, California, saying that OpenClaw will be as important a tool as Linux, Kubernetes, and HTML.

“Claude Code and OpenClaw have sparked the agent inflection point, extending AI beyond generation and reasoning into action,” he said, adding that OpenClaw has “opened the next frontier of AI to everyone. Every company now needs to have an OpenClaw strategy.”

Huang said he understood the security concerns, but also saw the promise that OpenClaw offers.

“Systems in the corporate network can have access to sensitive information, it can execute code, and it can communicate externally,” the CEO said. “Just say that out loud. Think about it: Access sensitive information, execute code, communicate externally. You can, of course, access employee information, access the supply chain, access finance information, sensitive information, and send it out, communicate externally. Obviously, this can't possibly be allowed.”

NemoClaw And OpenShell

To make that possible, Nvidia is wrapping a range of security and privacy controls around OpenClaw in a stack it calls NemoClaw, which can install the vendor’s Nemotron agentic AI models and its OpenShell, a sandboxed runtime designed to make autonomous agents – what are becoming known as “claws” – safer to deploy and more scalable by enforcing security, network, and privacy guardrails.

Huang called NemoClaw with OpenShell a reference architecture.

“You could download it, play with it, connect to it the policy engine of all of the SaaS companies in the world, and your policy engines are super important, super valuable,” he said. “The policy engines could be connected [and] NemoClaw or OpenClaw with OpenShell would be able to execute that policy engine. It has a network guardrail, it has a privacy router, and, as a result, we could protect and keep the claws from executing inside [your] company and do it safely.”
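The guardrail model Huang describes – a policy engine deciding what an agent may access, execute, or send out – can be sketched in a few lines. Everything below is illustrative: Nvidia has not published an OpenShell API, so the `Policy` structure and `evaluate` function are hypothetical names standing in for the network guardrail, sandbox gate, and privacy router he mentions.

```python
# Hypothetical sketch of a policy-engine check a runtime like OpenShell
# might run before allowing an agent action. All names are illustrative,
# not an actual Nvidia API.
from dataclasses import dataclass, field


@dataclass
class Policy:
    allowed_hosts: set = field(default_factory=set)   # network guardrail
    allow_code_execution: bool = False                # sandbox gate
    redact_fields: set = field(default_factory=set)   # privacy router


def evaluate(policy: Policy, action: dict) -> tuple[bool, dict]:
    """Return (allowed, sanitized_action) for a proposed agent action."""
    kind = action.get("kind")
    if kind == "network":
        # Outbound communication only to an explicit allowlist.
        return action.get("host", "") in policy.allowed_hosts, action
    if kind == "exec":
        # Code execution is off unless the policy opts in.
        return policy.allow_code_execution, action
    if kind == "read":
        # Privacy guardrail: strip sensitive fields rather than block outright.
        sanitized = {k: v for k, v in action.items()
                     if k not in policy.redact_fields}
        return True, sanitized
    return False, action  # default-deny anything unrecognized
```

The point of the sketch is the default-deny posture: an agent’s proposed action is checked against an externally owned policy before it runs, which is what would let a company’s existing policy engines plug in.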

Nvidia also is partnering with a number of cybersecurity firms, such as CrowdStrike, as well as cloud and infrastructure companies Cisco, Google, and Microsoft Security, to ensure their security tools are compatible with OpenShell.

NemoClaw is an open tool that can use any coding agent and open models – including Nemotron – to run either locally on a user’s system or with frontier models in the cloud via a privacy router. On the hardware side, NemoClaw can run on any dedicated platform, including Nvidia’s GeForce RTX PCs and laptops, RTX PRO-powered workstations, and DGX Station and DGX Spark AI supercomputers.

OpenShell is a new capability in Nvidia’s Agent Toolkit, the latest iteration of what had been called the NeMo Agent Toolkit. The toolkit includes a range of open agentic offerings, from Nemotron and other models to AI-Q, an architecture that includes frontier models for orchestration and that developers can use as a blueprint for building custom AI agents for enterprise deployments.

King Of The Agents

NemoClaw and OpenShell are only the latest step in Nvidia’s push to dominate the burgeoning agentic AI market, a central theme of this year’s GTC event.

Huang spoke about milestone platform shifts in AI, with the first being OpenAI’s introduction of ChatGPT in late 2022 and its rapid adoption throughout the following year. It started the generative AI era, creating a computing environment in which the system could not only understand and perceive, but also translate and generate unique content.

In 2024 came OpenAI’s o1 and then o3, ushering in AI reasoning.

“Reasoning allowed it to reflect, allowed it to think to itself, allowed it to plan, break down problems, and decompose a problem it couldn’t understand into steps or parts that it could understand,” he said. “It could ground itself on research.”

Then came Claude Code last year, bringing the industry into the agentic AI era. The model could read files and generate, compile, test, and evaluate code, then go back and rework it. That was followed by OpenAI’s Codex coding tool and Cursor’s tool, all of which are being used throughout Nvidia.

“For the first time, you don't ask an AI, what, where, when, how,” Huang said. “You ask it, create. You ask it to use tools, take your context, read files. It's able to agentically break down a problem, reason about it, reflect on it. It's able to solve problems and actually perform tasks. An AI that was able to perceive became an AI that could generate. An AI that could generate became an AI that could reason and an AI that can reason now became an AI that can actually do work. Very productive work.”

This helped fuel already high demand for Nvidia’s GPUs and shift the focus of AI from training to inference. The more inferencing is done, the more tokens are consumed and the more money Anthropic, OpenAI, and other AI companies will make. Huang said Nvidia saw $500 billion last year in demand for its Blackwell and Rubin GPUs, and he’s projecting that demand will continue to skyrocket, estimating that demand for compute could grow to $1 trillion or more next year, given the capacity needed for agentic AI.

This is where Nvidia can prove value to organizations, the CEO said. Nvidia’s GPUs can be extremely expensive, but the company’s infrastructure saturates essentially every key market, which should give buyers confidence in the equipment they’re paying so much money for. In addition, the useful life of the hardware is long and the components are backward-compatible, so the investment can scale out as long as needed, making the cost over the course of the system’s lifetime relatively low.