MythWorx Mashes Up Neuromorphic And GenAI To Take On Model Giants

There is constant chatter surrounding the promise of generative AI, agentic AI, and – eventually – artificial general intelligence, but the need for massive, expensive, high-density compute, the datacenters to wrap around it, and the juice to feed it all is enormous, particularly with models such as OpenAI’s GPT-4, Alibaba’s Qwen-3, Moonshot AI’s Kimi K2, xAI’s Grok 4, and Amazon’s Olympus all cresting the trillion-parameter mark and aiming higher.

Some expect the accelerating adoption of AI to drive the global power demanded by datacenters up 50 percent by 2027 over 2023 figures, and by as much as 165 percent by the end of the decade. Ongoing improvements in AI and datacenter processing efficiency could hold energy consumption by datacenters to about 1,000 terawatt-hours (TWh) by 2030. If those improvements don’t happen, that number could be more than 1,300 TWh.

Worries about model size, cost, and power have led some organizations to develop and adopt other approaches like small language models (SLMs), which have fewer parameters and simpler architectures and are trained on less – and often domain-specific – data.

Jumping into the fray is MythWorx, a three-year-old startup that came out of stealth this month with two AI frameworks designed to operate differently than traditional LLMs and large reasoning models (LRMs), in a way that allows them to be smaller, faster, and significantly more energy-efficient. The company is building AI platforms that mimic how the human brain operates, embracing biomimetic learning methods to “think” and adapt rather than the transformers and static parameter tuning that come with LLMs.

The company says its technology represents the “first initial stage of artificial general intelligence” – that point when AI machines will be able to think, reason, adapt, and solve complex problems as humans do, only significantly faster.

Essentially, MythWorx is applying neuromorphic computing principles – something we have written about before – to its two AI platforms, according to Jason Williamson, MythWorx’s newly named chief executive officer.

“You can kind of think about that in the context of mirroring how the brain works rather than the traditional LLM,” Williamson tells The Next Platform. “We’re not trying to say one is replacing the other. It’s just a different kind of way of doing things. It’s not totally dependent on transformers, which is what a traditional LLM is trying to do. An LLM is essentially just a really good prediction model. You’re trying to predict the best next word given the context of what came before. You need transformers to understand context, all that kind of stuff.”

“For us, we do things a little bit differently. Yes, there are some transformers. However, the way that the learning works is more aligned to the way that the brain works.”
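To make that distinction concrete, here is a minimal sketch of the next-word prediction loop Williamson is describing. The vocabulary and scoring function are hypothetical stand-ins – not MythWorx code or any particular LLM – and a real model would compute the logits with stacked transformer layers attending over the full context:

```python
# Toy next-token prediction loop. VOCAB and next_token_logits() are
# hypothetical stand-ins; a real LLM computes logits with transformer
# layers attending over the whole context.
import numpy as np

VOCAB = ["the", "brain", "works", "like", "a", "network"]

def next_token_logits(context: list[str]) -> np.ndarray:
    # Stand-in scorer: pseudo-random logits derived from the context.
    rng = np.random.default_rng(abs(hash(tuple(context))) % 2**32)
    return rng.normal(size=len(VOCAB))

def generate(context: list[str], steps: int = 3) -> list[str]:
    for _ in range(steps):
        logits = next_token_logits(context)
        # Greedy decoding: append the single "best next word."
        context.append(VOCAB[int(np.argmax(logits))])
    return context

print(generate(["the", "brain"]))
```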

Biomimicry And AI

The role biomimicry can play in the development of AI is being explored as a way of ensuring “sustainable” and “ethical” AI. A study published in June – Towards beneficial AI: biomimicry framework to design intelligence cooperating with biological entities – argued that current LLMs emphasize speed and scale rather than sustainability, and that building systems that emulate “natural computation” could lead to equally capable but more energy-conscious AI.

MythWorx is working on two platforms. Its Echo framework – the latest version being Echo Ego v2 – mimics biological neural networks, with attention, memory consolidation, and hierarchical abstraction. Echo also is designed to align with human ethical values, such as fairness, responsibility, and empathy.

It emphasizes adaptive learning, contextual reasoning, and self-optimization, and comes with such capabilities as advanced reasoning, ethical alignment, and multi-modal understanding spanning text, code, and audio. It can adjust its architecture depending on external feedback and new data, which the company claims reduces the requirements for memory and storage.

Still in development is Kronos, which is based on the neuroplasticity of the human brain, mimicking its ability to evolve and rewire itself as new inputs arrive – an important capability when continuous adaptation is needed, as with ever-changing cybersecurity threats or business strategies.

“Our model will update itself in real time,” says Williamson, who comes to MythWorx with experience from Oracle, Amazon Web Services, and teaching positions with such institutions as the University of Virginia and North Carolina Central University. “We don’t use retrieval-augmented generation on a static model. It’s continuously rewiring itself. As it merges and learns, the logic layers dynamically change. Neuroplasticity allows it to do that.”
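MythWorx has not published Kronos’s actual update rule, but the pattern Williamson describes – weights that change the moment new data arrives, rather than retrieval bolted onto a frozen model – looks, in its simplest form, something like this toy online learner:

```python
# Toy continual ("online") learner: the model adapts on every observation
# instead of being retrained offline. Purely illustrative; this is not
# MythWorx's update rule.
import numpy as np

class OnlineLearner:
    def __init__(self, n_features: int, lr: float = 0.05):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x: np.ndarray) -> float:
        return float(self.w @ x)

    def observe(self, x: np.ndarray, y: float) -> None:
        # "Rewire" immediately: one gradient step per incoming example,
        # with no separate retraining pass.
        error = self.predict(x) - y
        self.w -= self.lr * error * x

model = OnlineLearner(n_features=4)
for x, y in [(np.array([1.0, 0.0, 1.0, 0.0]), 1.0),
             (np.array([0.0, 1.0, 0.0, 1.0]), -1.0)]:
    model.observe(x, y)  # the model updates itself in real time
```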

Pruning And Deduplication

With these architectures, MythWorx scientists created features that ape processes in the human brain. That includes selective pruning, which addresses a challenge that LLMs face: As more data comes in, it drives the need for more infrastructure capacity and datacenter space, he says.

They “have to get bigger datacenters, more storage, more capacity,” Williamson says. “So, one of the techniques we use to keep things lean and mean is the theory of pruning. The AI decides what it thinks is not important, and prunes that off of its thinking. Pruning is a reason why our models are a lot smaller and why we take a lot less energy to do things. It’s like what you do when you sleep. Your brain strategically cuts out information it thinks it doesn’t care about.”
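MythWorx has not disclosed its pruning criterion, but magnitude-based pruning is one common way to implement the “cut out what it doesn’t care about” idea. The threshold rule below is an illustrative stand-in, not the company’s method:

```python
# Magnitude-based pruning sketch: keep only the largest-magnitude weights
# and zero out the rest, shrinking the effective model.
import numpy as np

def prune_weights(weights: np.ndarray, keep_fraction: float = 0.25) -> np.ndarray:
    """Zero out all but the top keep_fraction of weights by magnitude."""
    k = max(1, int(weights.size * keep_fraction))
    threshold = np.sort(np.abs(weights).ravel())[-k]  # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

w = np.random.default_rng(0).normal(size=(4, 4))
w_pruned = prune_weights(w)
print(f"nonzero before: {np.count_nonzero(w)}, after: {np.count_nonzero(w_pruned)}")
```

Pruned weights can then be stored sparsely, which is where the memory and energy savings would come from.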

Similarly, there’s deduplication, which lets the model shed duplicate information it no longer needs rather than keeping it around.

“The natural processes – dedupe this stuff, prune this stuff – that’s another example of how that works,” he said. “The neurons that are learning are training themselves – strategically – on what they think they need before and what they need next. That’s how we’re able to achieve some of these, essentially, big, fat claims that we’re making.”
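As with pruning, the company hasn’t detailed its deduplication mechanism, but exact dedup can be as simple as hashing each stored item and keeping only the first occurrence; a semantic variant would compare embeddings instead. A rough sketch:

```python
# Hash-based exact deduplication: drop records the system has already seen
# rather than storing redundant copies. Illustrative only.
import hashlib

def deduplicate(records: list[str]) -> list[str]:
    seen: set[str] = set()
    unique: list[str] = []
    for record in records:
        digest = hashlib.sha256(record.encode()).hexdigest()
        if digest not in seen:  # keep only the first occurrence
            seen.add(digest)
            unique.append(record)
    return unique

print(deduplicate(["threat A", "threat B", "threat A"]))  # ['threat A', 'threat B']
```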

Not for nothing, but this is precisely why the US Defense Advanced Research Projects Agency stopped investing in the development of supercomputers more than a decade ago, which definitely hurt the finances of Cray, IBM, and others. The Defense Department needs a petaflops supercomputer that it can put on a soldier’s back in battle, not an exaflops machine sitting in some ruggedized site far behind the front lines or buried under a mountain far away. And not surprisingly, MythWorx has caught the attention of the Defense Department and was ranked number one in a bake-off by Special Operations Command involving a hundred different AI models. MythWorx says Echo can do “real-time threat analysis, autonomous drone coordination, and tactical decision making in unpredictable as well as adversarial denied, intermittent, degraded and limited environments,” while Kronos “enhances adaptive warfare systems, such as AI-driven cyber-physical defense networks.”

The current performance claims being made by MythWorx come from running the Echo Ego v2 platform against the MMLU-Pro benchmark, hosted on Hugging Face, which evaluates language understanding models using more than 12,000 questions from academic exams and textbooks in domains ranging from engineering to economics to history. According to MythWorx, the benchmark showed that its architecture used about one-tenth the power of a typical LLM for the same workload, hit 71.24 percent accuracy with no pretraining or chain-of-thought prompting, and answered questions in about 1.2 seconds per query, better than the five to 10 seconds typical of LLMs.

It also used only 14 billion parameters, compared with the more than 600 billion in DeepSeek’s models and similar LLMs.

On its website, MythWorx boasts that on the ARC-AGI-1 benchmark for testing progress toward general intelligence, its platform scored 100 percent accuracy using 208 watts of compute power and taking four hours. By contrast, OpenAI scored 87.5 percent with pretraining, using an estimated 9.5 million watts of compute and taking 23 hours.

The company doesn’t pretrain the models with massive amounts of data, Williamson said. Instead, they are trained on “primers,” giving each model base knowledge in science, math, and other subjects, and then letting it teach itself.
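That “primer, then teach itself” pattern resembles classic self-training. As a hedged sketch – the classifier, seed data, and confidence threshold below are all assumptions for illustration, not MythWorx’s method – it looks like this:

```python
# Self-training sketch: fit on a small labeled "primer" set, then pseudo-
# label unlabeled data the model is confident about and fold it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_primer = rng.normal(size=(20, 2))            # small seed set ("primer")
y_primer = (X_primer[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(200, 2))        # data it will teach itself on

model = LogisticRegression().fit(X_primer, y_primer)  # base knowledge
for _ in range(3):
    probs = model.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) > 0.9        # only trust confident guesses
    if not confident.any():
        break
    X_new = np.vstack([X_primer, X_unlabeled[confident]])
    y_new = np.concatenate([y_primer, probs[confident].argmax(axis=1)])
    model = LogisticRegression().fit(X_new, y_new)
```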

In all, the combination of biomimetic and neuroplastic design, adaptive learning capabilities, and a priority on resource efficiency and moral responsibility allows MythWorx to reach 1,000X greater efficiency – with at least 10X less infrastructure and power – than traditional LLMs, the company claims.

Given its size and efficiency, Echo Ego v2 also can get much of its work done on standard CPUs, rather than relying heavily on powerful and power-hungry GPUs from Nvidia. According to Williamson, MythWorx is “going to be technology-agnostic when it comes to the chip architecture. We run GPUs on our internal on-prem infrastructure that we have. It’s great, because then we’re able to take advantage of those accelerations. But if we don’t have them, that’s still okay. If you’re running an Intel, or if you’re running a Cerebras – which is much more oriented to the edge – or AMD, that’s wonderful for us.”
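In practice, that kind of chip-agnostic posture often comes down to a device-selection pattern like the one below, shown with PyTorch purely as an illustration; MythWorx’s actual runtime has not been made public:

```python
# Device-agnostic inference sketch: use a GPU when one is present, fall
# back to the CPU otherwise. The Linear layer is a placeholder model.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(512, 512).to(device)
x = torch.randn(1, 512, device=device)

with torch.no_grad():
    y = model(x)  # same code path whether the device is CPU or GPU

print(f"ran on {device}")
```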

And the edge is where the company sees a lot of opportunity for a platform that has a small footprint, is energy-efficient, and is adaptable.

It “allows us to get things done at low power. That’s what we’re so interested in,” he said. “Traditional AI is power-hungry, space-hungry, heat-hungry. Being at the edge is excellent, because if we can bring a similar model, a similar amount of capability, to the edge, then you open up a lot for manufacturing, for factories, for telcos, for all those kinds of things. The models that we’re doing are not bloated. They’re tight, and so something that might take several billion parameters, we’re doing at several hundred million parameters because of the way these models are generated.”