Add to the list of AI chip startups a new one with forthcoming silicon based on the MiLDreD platform.
The architecture, the “Memory Intensive Direct Reduced Enhancement Device,” features massive on-chip memory capacity, although early critics note that interconnect performance hinders what the chip might otherwise be capable of.
Memory capacity and performance are two keys to AI chip success. A device can have all the capacity in the world, but without high throughput and the ability to crunch through massive training sets, it will have a difficult time competing against GPUs and other accelerators.
We are also hearing that the rack design required to accommodate the chips in a custom appliance has raised some questions about structural integrity, although the cooling/airflow with patent-pending OxygenBurst™ technology looks innovative, with plenty of room for cabling.
Few analysts have been able to examine the architecture in detail, but one, who refuses to go on record, tells us that the chip is not without weaknesses, based on early benchmarks and testing. It is not clear whether we can expect to see it listed alongside other architectures in the next MLPerf iteration; if we do, we might need to start rethinking how AI chip startups get funded and pushed to market.
“It’s almost like it takes in the metadata and just dumps it before training completes. Not all of it, mind you, but just enough to make it frustrating. Honestly, it drops bits worse than Ethernet,” our off-record analyst explains.
“The chip itself is quite large and seems like it should have optimal memory capacity for AI training workloads. Results of some early tests show that in large-scale training exercises the chip could speed through a million-item dataset, but when shown a picture of a girl with metadata signifying the name ‘Nikki,’ there would be minor errors that invalidated the results, labeling it ‘Micki’ (see Figure A for an example).”
“It actually makes no sense that we would see these kinds of errors following training. The name was repeated millions of times but it’s like the algorithm just decides it’s a different name completely. The algorithm is part of the problem, but I think the hardware is the real issue,” says Dr. Mab Chamet of the AI Performance Institute.
To be fair, the results were slightly better with other image-based datasets, including a million-plus-item set featuring various pills and medications. Further results based on text and GANs also showed remarkable efficiency and performance when building a full-text story from only the beginning sample phrase, “Back then, we used to…”.
Favorable results were also found in unsupervised training featuring various tagged pureed fruits and vegetables. “The applesauce identification shows what this chip is capable of; we threw in some custard, mashed potatoes, and grits just to see how well the training could capture minute differences, and the accuracy was over 98% (see Figure B).”
When reached for comment at her home in Ames, Iowa this morning, CEO and co-founder, Clara Peller, said her team sees immense synergies in AI. “We are truly excited to announce this revolution in AI hardware. I have five years, tops, so by the time artificial intelligence is truly dangerous I’ll be resting peacefully in His arms.”
Peller says the architecture of the 800nm device was inspired by a brown and orange rug she bought in the eighties.
The funding round, led by deep-pocketed small investment firm Wilford & Brimley, is expected to be made public later today following final legal review and afternoon bingo.
Update 4/2/19: To all the folks emailing about how we must be mistaken, that there’s no way they used an 800nm chip, please note this was our April Fools article.