What Happens When LLMs Design AI Accelerators?

Although the industry's appetite for a vast range of AI accelerators appears to be waning, or at least consolidating around a few options, there may be methods on the horizon that let accelerator designers explore new concepts in an interesting way.

Such accelerators are generally human-designed, but major efforts are underway at EDA and chip companies alike to add AI to the design pipeline.

Cadence has its Cerebrus Intelligent Chip Explorer, which automates the optimization of power, performance, and area (PPA) in chip design flows and lets engineers work on multiple blocks concurrently, which the company says is a significant advantage for complex system-on-chip (SoC) designs.

Similarly, Synopsys recently launched Synopsys.ai, a comprehensive AI-powered electronic design automation (EDA) toolset aimed at accelerating chip development at several points along the design route. Google’s DeepMind has also been making strides in this domain, exploring methods to improve its TPU designs and claiming its AI algorithms can bring chips to the field faster and more cost-effectively, fueling competition against specialized chipmakers like Nvidia and AMD.

Nvidia itself has also shown research on how AI can determine the optimal placement of transistors on a silicon wafer, thereby affecting a chip’s cost, speed, and power consumption. By employing AI in these innovative ways, these companies are significantly speeding up the traditionally time-consuming and complex process of chip design.

What these efforts share is a focus on highly tuned speedups of specific pipeline flows, but we haven’t seen much to date about how well large language models, a slightly different approach to the AI/EDA problem, actually fare.

We got some insight via researchers at Georgia Tech, who have started exploring the use of Large Language Models (LLMs) including GPT-4 to automate the design process for AI accelerators in particular. The goal of their recent work was to see if AI can design its own “brains,” so to speak, in a way that’s more efficient and effective than the meatbag approach.

The Georgia Tech team has introduced a framework called GPT4AIGChip to investigate how LLMs stack up in accelerator design. The framework pairs GPT-4 with a High-Level Synthesis (HLS) language and scores the results with a performance metric called “Pass@k,” which measures how often the AI’s generated designs compile successfully.
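
For readers unfamiliar with the metric: Pass@k comes from the code-generation literature, where it estimates the probability that at least one of k sampled outputs passes a check (here, successful compilation). Assuming the paper follows the standard unbiased estimator used in that literature, it looks like this:

```latex
\text{pass@}k \;=\; \mathbb{E}_{\text{tasks}}\!\left[\, 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \,\right]
```

where n is the number of designs sampled per task, c is the number of those samples that compile, and k is the attempt budget.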

Training the model to take on this specialized task involved a two-stage process. First, it was fine-tuned on 7,000 publicly available HLS code snippets to build up its base hardware knowledge. It was then further trained on custom HLS templates to equip it with the specifics of AI accelerator design.
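
The paper’s exact training setup isn’t spelled out here, but as a rough sketch of what a first-stage fine-tune over an HLS corpus might look like, with an open causal language model standing in, consider the following. The model name, data file, and hyperparameters are illustrative assumptions, not the team’s actual choices:

```python
# Hypothetical sketch: stage-one fine-tuning on HLS code snippets.
# Model name, data file, and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "codellama/CodeLlama-7b-hf"  # stand-in open model, not GPT-4
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# ~7,000 publicly collected HLS snippets, one JSON record per snippet.
snippets = load_dataset("json", data_files="hls_snippets.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["code"], truncation=True, max_length=2048)

tokenized = snippets.map(tokenize, batched=True,
                         remove_columns=snippets.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hls-stage1",
                           num_train_epochs=3,
                           per_device_train_batch_size=4,
                           learning_rate=2e-5),
    train_dataset=tokenized,
    # Causal LM objective: predict the next token, no masking.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The second stage would then repeat the same loop over the custom, accelerator-specific HLS templates.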

The team found that LLMs like GPT-4 could indeed design AI accelerators, but there were (surprise) some critical limitations. These models struggled with complex tasks and couldn’t quite match the performance of specialized, closed-source models. However, they did find that these limitations could be mitigated by using a modular approach to design and by giving the AI “demonstration” code snippets for context.

One of the noteworthy aspects of the experiment is the introduction of a modular and decoupled hardware template, which the team had to come up with because of the challenges above. This template let them work around the limitations of LLMs by breaking the complex design space down into smaller, more manageable modules, so the model could focus on each part individually, simplifying the overall complexity.
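
The paper’s actual template is written in HLS, but the decoupling idea can be sketched in a few lines of Python; the module names, fields, and the llm() callable below are hypothetical stand-ins for whatever the real template exposes:

```python
# Conceptual sketch of a decoupled accelerator design space.
# Module names, fields, and the llm() callable are hypothetical.
from dataclasses import dataclass

@dataclass
class BufferModule:
    depth: int             # on-chip buffer depth, in words
    width: int             # word width, in bits
    double_buffered: bool  # overlap compute with data movement

@dataclass
class ComputeModule:
    pe_rows: int           # processing-element array rows
    pe_cols: int           # processing-element array columns
    dataflow: str          # e.g. "weight_stationary"

# Each module has its own small parameter space, so the LLM never
# has to reason about the whole accelerator at once.
MODULES = ("input_buffer", "weight_buffer", "compute_engine")

def generate_design(llm, spec: str) -> dict:
    """Ask the LLM for one module at a time, rather than a whole
    accelerator in a single shot, so each prompt stays small enough
    for the model to handle reliably."""
    parts = {}
    for name in MODULES:
        prompt = f"Write the HLS code for the {name} module of an accelerator with spec: {spec}"
        parts[name] = llm(prompt)
    return parts
```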

Another notable aspect is the “Demo-Augmented Prompt Generator,” which guides the model in creating new designs by selecting relevant demonstrations from a curated library of past designs. In the full paper, the team explains why this is particularly useful given the limits on how much context the model can consider at one time.
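
The paper presumably uses a more tailored retrieval scheme, but the core idea, pick the few most relevant past designs and prepend them to the prompt, can be sketched simply; the library entries and similarity measure here are stand-ins:

```python
# Minimal sketch of demonstration selection for prompt augmentation.
# The demo library contents and similarity scoring are hypothetical.
from difflib import SequenceMatcher

DEMO_LIBRARY = [
    {"task": "systolic-array matrix multiply", "hls_code": "..."},
    {"task": "line-buffer 2D convolution",     "hls_code": "..."},
    # ... more curated past designs ...
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def build_prompt(task: str, k: int = 2) -> str:
    """Prepend the k most relevant past designs to the new request,
    keeping the total prompt within the model's context window."""
    ranked = sorted(DEMO_LIBRARY,
                    key=lambda d: similarity(d["task"], task),
                    reverse=True)
    demos = "\n\n".join(d["hls_code"] for d in ranked[:k])
    return f"{demos}\n\n// New task: {task}\n"
```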

Despite the challenges, the Georgia Tech crew argues that the AI-designed accelerators were competitive with those created by human experts, and the paper gives some examples of how they outperformed accelerators generated by existing automated design tools.

They argue that this kind of work shows that AI can be a viable tool for designing complex hardware, potentially speeding up the development cycle as well as lowering the barriers to entry. In short, they think the need for specialized human expertise is reduced, making it easier for more people to engage in the development of AI accelerators.

Time will tell.


4 Comments

  1. Well, you know what I’m going to say here … but I’ll say it anyways, unanimously (^q8). Like an ugly child, AI won’t live up to my expectations of it (esp. Super-Intelligence) until it can autonomously (zero-shot) design, prototype, verify, and mass produce a superluminal Alcubierre metric tensor warp drive, relying on Casimir vacuum dark energy, that disproves the Chronology Protection Conjecture (CPC), and so-challenges the related 3-letter censorship Agency (CPA; from S. Hawking!)!

    When this happens then, yes, I’ll agree that we haven’t lost our collective m&m marbles over ML, and that AI actually has more smarties in its bag than we peoples have fig Newtons! “Time will tell” as concluded in this TNP article, particularly because, as future AI figured out time-travel for us, I’ve been able to come back here with great precision and edit this comment to accurately reflect the related consequences (or not? ahem!). Note also that time-travel is a known contributing factor for dyslexia … or orthograph-ic wave dispersion (8p^) … exircese cuati!on

    I might be the complete cucurbita party-pooping curmudgeon of the gourd family at this temporal juncture, but my opinion will surely change, in the past, with future advances in AI-mediated time travel! (or not?) ^8p 8d^ 8^p

    P.S. GIT study looks interesting, except maybe the unpronounceable 11-letter klingon mixed-acronym mouthful: GPT4AIGChip.

  2. Thanks for bringing this Georgia Tech paper to the TNP readership! I’m not 100% on why one would want non-expert folks who are not well-versed in hardware to design AI accelerators using imprecise human languages parsed stochastically by LLMs. Would we want this in other demanding fields such as dentistry, neurosurgery, or rocket science? Or would we prefer for highly-trained experts to be in charge there, assisted as needed by the most precise and accurate tech?

    I find it unfortunate that the GTech authors chose to refer to LLMs with locutions that are more typical of sales pitches than of scientific and engineering investigative discourse. This gives the reader an impression that the study has been carried out in a biased, naive, or uncritical manner, overshadowing the potential seriousness and adequacy of the methods, and diluting the potential impacts of the results. In this reviewer’s opinion, expressions such as “remarkable capabilities”, “intricate nature”, “amazing capacity”, “tantalizing prospect”, “astonishing potential”, and others (e.g. “vital insights”), should be appropriately sedated to better match the target audience of intelligent professionals.

    The paper should also clearly state the conclusion that results from its investigation, namely that LLMs cannot be used by non-experts to design AI accelerators, but, in the future, they may possibly be used by experts to assist in various aspects of accelerator design. Expertise is needed to “expertly” split the design into consistent functional subsystems that an LLM may then perform additional work on. Expertise is also needed to prompt-engineer the LLM to help it perform its targeted support role. Expertise is further needed to develop and provide the HLS examples required for model training. At present, the authors’ proposed pipeline for LLM-assisted EDA effectively inverts the assistance relationship by requiring experts to assist the LLM, in the eventual hope that LLMs would later assist non-experts in doing a job that is likely not suited to them.

    • Point taken! But do think of the astonishing potential for savings in healthcare costs if this remarkably capable AI tech could enable your non-expert nosy neighbors to successfully prompt-engineer an only slightly knobby 3-D printed neurosurgery of intricate nature for your invasive in-laws … an opportunity not to be missed! A most tantalizing prospect! 8^b
