Human intelligence and our collective wisdom are already becoming limiting factors in the rise of AI. Indeed, the only smart move at this point seems to be letting AIs design their own future hardware, right down to the microchip level.
AI has been unlocking advances in challenging fields – from accelerating renewable energy research by a matter of years to detecting cancer more accurately. So when I attended the WCIT conference in Armenia last month, a talk on how AI assists chip design caught my attention.
Meeting the world's ever-increasing demand for computing capabilities is quite a task. The processors in your phone, your laptop, and your car are already astonishingly small, and quick enough to execute billions of instructions per second with precision. And yet, we want the latest gadgets to do more and run faster than last year's models, every year.
What does that look like on a processor? Consider that we've gone from packing 21 million transistors on the Nintendo GameCube's Gekko processor in 2001, to fitting 50 billion transistors on a chip the size of a fingernail in 2021. Although we're certainly standing on the shoulders of giants, today's chips are way more complex to design and manufacture than those from previous decades.
We also need specialized chips now. For example, laptops and cloud servers have Neural Processing Units (NPUs) designed to efficiently run machine learning tasks. There are also 3D chips, which are essentially chiplets stacked together for increased performance.
All these complex components require more precise designs than we can conjure up with conventional algorithms. And that's where AI comes in.
AI speeds up and optimizes the gargantuan task of designing chips in several ways. For processor makers, AI-assisted design means they can create significantly better chips with far fewer engineers and much faster turnaround than before.
At the WCIT Conference, I spoke with Dr. Yervant Zorian, chief architect at Synopsys and president of Synopsys Armenia. Our conversation sparked my curiosity, sending me down a fascinating rabbit hole into the extraordinary technology that'll power future generations of AI.
I've tried to distill what I came across in numerous papers, talks, podcasts, and articles on the subject.
But first, let's talk about EDA
Chip design is made possible using a category of software, hardware, and services collectively known as Electronic Design Automation (EDA). These enable engineers to define the specifications of a chip and its functions, design it, plan its assembly, verify it’ll work correctly when manufactured, and take it to production.
EDA suites include powerful simulation capabilities, letting designers try out ideas virtually while optimizing for high processor performance, low power consumption, and effective heat dissipation.
They also help once a chip has been prototyped, verifying performance and reliability so that the complex and expensive manufacturing process delivers as high a yield of functioning chips as possible.
There are a bunch of companies that make EDA tools, including Autodesk, Keysight Technologies, Cadence Design Systems, and Synopsys. The last two are arguably the largest players in this space. For context, Synopsys’ tools are used by the likes of Tesla, Arm, AMD, Microsoft, Intel, Samsung, and TSMC.
How complex can chip design get?
You’ve probably heard of Moore’s Law. It’s more of an observation than a law, made by Intel co-founder Dr. Gordon Moore back in 1965. He stated that semiconductor companies could double the number of discrete components on a square inch of silicon every 12 months.
These days, we play pretty fast and loose with this so-called law. It's commonly restated as: processors double in computing power every 18-24 months.
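For fun, here's that claim as back-of-the-envelope arithmetic in Python, using the GameCube-to-2021 transistor counts from earlier (a rough illustration, not a rigorous test of the law):

```python
import math

# Transistor counts cited earlier: the Gekko (2001) vs. a 2021 flagship chip
t_2001, t_2021 = 21e6, 50e9
years = 2021 - 2001

doublings = math.log2(t_2021 / t_2001)        # ~11.2 doublings
months_per_doubling = years * 12 / doublings  # ~21 months

print(f"{doublings:.1f} doublings in {years} years")
print(f"one doubling every {months_per_doubling:.0f} months")
```

That works out to one doubling roughly every 21 months – comfortably inside the modern 18-24 month reading.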
It was perhaps more straightforward to double the number of transistors in the 60s and 70s than it is today. That’s because, as you’ll recall, we currently fit several billion nanoscale transistors on a single tiny chip. These transistors are now so small that they’re starting to run into physical limits; electrons stop behaving themselves in these nanoscale designs, and start “tunneling” through barriers due to quantum effects, impacting the accuracy and efficiency of computations. So making them smaller is getting very, very difficult.
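To get a feel for the scale of the problem, here's a toy tunneling estimate using the textbook rectangular-barrier approximation. The specific numbers – a 1 eV barrier and a bare electron mass – are illustrative assumptions, not a model of any real transistor:

```python
import math

HBAR = 1.055e-34  # reduced Planck constant, J*s
M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # one electronvolt, J

def tunneling_probability(barrier_nm, barrier_ev=1.0):
    """Rough rectangular-barrier estimate: T ~ exp(-2 * kappa * d)."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * barrier_nm * 1e-9)

for d in (3.0, 2.0, 1.0):  # barrier thickness in nanometers
    print(f"{d:.0f} nm barrier -> T ~ {tunneling_probability(d):.1e}")
```

Thinning the barrier from 3 nm to 1 nm raises the tunneling probability by roughly nine orders of magnitude – which is why leakage becomes such a headache as feature sizes shrink.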
What's so hard about chip design?
As the demand for computing power increases, chips are getting larger and more complex to meet those needs. Shrinking feature sizes – as in upcoming 2 nm processors – and vertically stacked 3D integrated circuits make the design process more challenging and time-consuming, while larger chips also draw more power.
With advances in chip manufacturing technology come new design rules for producing chips – geometric constraints that ensure a chip can actually be manufactured, accounting for the limitations of fabrication processes.
This means that chip designers need to constantly retrain and update their knowledge to stay ahead of the curve.
The design process is long and labor-intensive. Designing a modern chip can take more than three years, and involve hundreds or even thousands of people. As you'd expect, that can end up being awfully expensive, and there’s no room for error.
3 ways AI assists in chip design
1) AI tackles a whole lot of grunt work
The chips you see today could not possibly be designed without AI somewhere in the mix.
AI has been assisting in various parts of the chip design process for years now – since at least 2016, according to Synopsys. That's because of how complex our computing needs have gotten, along with our demands for efficiency and small form factors.
All that comes down to fitting many more transistors onto chips than in previous years, ensuring they don't get too hot, and that they function reliably when manufactured according to that design. AI has proven effective at solving such challenges quickly enough for manufacturers to bring their products to market on time.
Synopsys offers chip manufacturers a suite of AI-driven EDA tools. These products cover virtually every facet of the chip design process, including complex tasks like defining system architecture, design implementation, verification, and manufacturing. They can also take on repetitive and time-consuming tasks like simulation, and perform them faster and with high accuracy.
In 2021, Samsung gave us the first commercial chip designed with AI. And last year, Stelios Diamantidis, distinguished architect and head of Synopsys' Generative AI Center of Excellence, said, "Over 300 commercial chips designed with Synopsys' AI technology are now in production."
2) AI designs specialized chips and optimizes complex layouts
In chip design, reinforcement learning is the foundation of the AI that assists designers in their work.
Here, an AI model uses a trial-and-error learning process to make decisions that achieve optimal results in a given scenario.
Currently, AI is especially good at helping with:
- Design space optimization (balancing performance and clock frequency, efficient use of space on the chip, and dynamic power consumption – see the toy sketch after this list).
- Data analytics for better yield (meaning greater manufacturing efficiency).
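To make "design space optimization" a little more concrete, here's a deliberately tiny sketch. Every knob, number, and the cost model are invented for illustration – real EDA tools explore spaces with billions of configurations, using far smarter strategies than random sampling:

```python
import random

# Hypothetical design "knobs" - real design spaces have thousands of them
KNOBS = {
    "clock_ghz": [2.0, 2.5, 3.0, 3.5],
    "cache_mb":  [4, 8, 16],
    "voltage_v": [0.7, 0.8, 0.9],
}

def score(design):
    """Toy PPA (performance, power, area) trade-off, invented for
    illustration: reward speed and cache, penalize power and die area."""
    perf  = design["clock_ghz"] * (1 + 0.05 * design["cache_mb"])
    power = design["clock_ghz"] * design["voltage_v"] ** 2
    area  = 0.02 * design["cache_mb"]
    return perf - 2.0 * power - area

def random_search(trials=1000):
    best, best_score = None, float("-inf")
    for _ in range(trials):
        design = {k: random.choice(v) for k, v in KNOBS.items()}
        s = score(design)
        if s > best_score:
            best, best_score = design, s
    return best, best_score

print(random_search())
```

Swap the toy scoring function for hours-long physical simulations, and the handful of knobs for millions of placement and routing decisions, and you can see why smarter search methods matter.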
Let's take a prominent example to understand this better. In 2020, researchers at Google presented a paper outlining how a reinforcement learning method – later open-sourced and named AlphaChip – could tackle the challenging task of chip placement. By learning from previous designs, it slashed the time needed to create chip layouts from the weeks a human team would take to just a few hours.
Broadly speaking, AI models are “pre-trained” on basic chip design tasks like placing circuit components on a layout. Next, they’re made to connect these components together and understand the relationships between them.
In the case of AlphaChip, it first trains on a variety of different chip blocks from previous generations before it can take on more complex chip layouts. It gets better and faster each time it completes a design.
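AlphaChip's real machinery (graph neural networks trained with policy gradients) is well beyond a blog snippet, but here's a minimal toy in the same trial-and-error spirit: place a few blocks on a grid, get rewarded for short wiring, and gradually learn which positions work. Every name and number here is invented for illustration:

```python
import random
from collections import defaultdict

# Toy problem: place 4 connected blocks on a 3x3 grid so that the total
# wire length between connected blocks is as short as possible.
BLOCKS = ["cpu", "cache", "npu", "io"]
NETS = [("cpu", "cache"), ("cpu", "npu"), ("npu", "io")]
CELLS = [(x, y) for x in range(3) for y in range(3)]

def wirelength(placement):
    """Manhattan distance summed over all connected block pairs."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

value = defaultdict(float)  # learned value of each (block, cell) choice
counts = defaultdict(int)
best, best_len = None, float("inf")

for _ in range(2000):
    placement, free = {}, set(CELLS)
    for block in BLOCKS:            # build a placement one block at a time
        if random.random() < 0.2:   # explore a random free cell...
            cell = random.choice(sorted(free))
        else:                       # ...or exploit what's worked so far
            cell = max(sorted(free), key=lambda c: value[(block, c)])
        placement[block] = cell
        free.remove(cell)
    reward = -wirelength(placement)  # shorter wires -> higher reward
    for block, cell in placement.items():  # update running value estimates
        counts[(block, cell)] += 1
        value[(block, cell)] += (reward - value[(block, cell)]) / counts[(block, cell)]
    if -reward < best_len:
        best, best_len = placement, -reward

print(best_len, best)
```

The real thing replaces this bandit-style bookkeeping with a neural network that generalizes across chips – which is what lets AlphaChip improve with every design it completes.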
In an addendum to that paper, published in September 2024, the team noted that AlphaChip has generated "superhuman" chip layouts used in every generation of Google's Tensor Processing Units (TPUs) since 2020. These are the chips that power many popular AI models from Google – including its ChatGPT rival, Gemini.
3) There's even a ChatGPT for chip design
You're probably familiar with generative AI (genAI) – artificial intelligence systems like ChatGPT and Midjourney that create new content (text, images, code, or music) by learning patterns from existing data. This tech has also begun playing a role in chip design.
It’s currently in its early stages in the semiconductor industry. Synopsys’ genAI tech, for example, acts as a knowledge query system. It answers designers’ questions about EDA tools and their current project, and provides insights based on Synopsys’ library of resources.
GenAI is particularly useful in chip design because it’s such a complex field. In a recent episode of Sama’s How AI Happens podcast, Synopsys’ VP of AI and ML Thomas Andersen described how a chatbot can quickly look through a lengthy list of specs spanning some 100 pages, and summarize it so it’s easy for a designer to approach. It can also extract important bits like verification constraints, and assist with code optimization.
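Synopsys hasn't published how its assistant works under the hood, but conceptually, a knowledge query system like this pairs a retrieval step with a language model. Here's a toy sketch of just the retrieval half – the snippets and keyword scoring are made up, and a production system would use embeddings plus an LLM to draft the actual answer:

```python
# Toy keyword-overlap retrieval - a stand-in for the embedding search a
# real genAI assistant would run before handing context to a language model.
DOCS = {
    "timing": "Set up clock constraints before running static timing analysis.",
    "power":  "Dynamic power scales with clock frequency and voltage squared.",
    "drc":    "Design rule checks verify spacing and width against the process node.",
}

def retrieve(question, k=1):
    """Return the k snippets sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(DOCS.values(),
                    key=lambda text: -len(q_words & set(text.lower().split())))
    return ranked[:k]

print(retrieve("how do I reduce dynamic power at high clock frequency"))
```

Scale those three snippets up to decades of documentation and a live project, and you get the kind of assistant Andersen describes.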
Over time, these tools will learn from user workflows and provide more prescriptive guidance and recommendations.
Describing some early results from demoing Synopsys' genAI-based copilot with its clients, Andersen said:
"We're seeing something like 30% to 50% productivity improvement. In fact, there's a quote that talks about junior engineers are now operating at the level of expert engineers. And that's exactly what we want. You put the power into everybody's hand. And responses are much faster, of course, than having to ask a person or looking up in some documentation and find an answer there."
The next step is to develop and fine-tune agentic systems that can go beyond generating content and autonomously execute tasks. So, for example, let’s say a designer asks an EDA chatbot how to debug a certain issue. A genAI system would just describe a potential solution. An agentic system, on the other hand, would go off and get the job done; it’d actually run a test or simulation, identify the problem, correct the design to fix it, and run its own comprehensive test program before presenting it back to the designer.
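No vendor has published an agent like this in detail, so here's a sketch of just the control flow described above. Every helper below is a toy stand-in for a real EDA tool call – none of these are actual Synopsys APIs:

```python
def run_simulation(design):
    """Toy 'simulator': the design passes once its bug list is empty."""
    return {"passed": not design["bugs"], "failures": list(design["bugs"])}

def diagnose(results):
    """Toy triage: pick the first reported failure to work on."""
    return results["failures"][0]

def propose_fix(design, diagnosis):
    """Toy 'fix': remove the diagnosed bug from the design."""
    design["bugs"].remove(diagnosis)
    return design

def agentic_debug(design, max_iterations=5):
    """Run, diagnose, patch, and re-verify - then hand back to the human."""
    for _ in range(max_iterations):
        results = run_simulation(design)
        if results["passed"]:
            return design  # verified, ready for the designer's review
        design = propose_fix(design, diagnose(results))
    raise RuntimeError("No fix found; escalating to a human designer.")

print(agentic_debug({"bugs": ["setup-time violation", "floating net"]}))
```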
Where does that leave human chip designers?
Dr. Zorian explains how fundamentally the experience of starting out as a chip designer has changed in recent years.
“There's so much knowledge and assistance that is provided to you,” he tells me, “that there are a lot of things you don't need to do yourself.
“So, you have to know how to delegate. You have to know what to delegate to genAI, and what to do yourself. So, what we do today is train our people to know how to delegate, and what to expect from genAI.”
Zorian also emphasizes that when AI is available to take on repetitive tasks, creativity, problem-solving, and lateral, big-picture thinking become the highest-value elements that humans can bring to the table.
“As we're moving forward, we need new architectures,” he says, “or architecture-level innovation. We need new partitioning. Think about the processors in a car. How do you partition your chips in a car? We used to have 200 chips in a single vehicle several years ago. Then, we advanced to zonal architecture. Now, we're doing it all with just one chip.
"These are big architectural decisions that genAI will not suggest. Big decisions like these are still going to be made by human experts.”
Where do we go from here?
I couldn't help but wonder aloud why we couldn't just ask an AI system to design 'better' chips for us – conjure up designs that are more efficient, stay cooler, and deliver more performance than ever before.
Dr. Zorian explains that the main limitation here is the availability of data. Each company that uses Synopsys’ generative AI tools only trains them on its own chip design and proprietary intellectual property. Intel can’t ask its AI bot how AMD solved a particular design problem, and vice versa.
The tools available to each firm can reference open-source material – but given the complexity of the problems in this field, that's currently not enough to help a company design chips that are light-years ahead of the competition.
So how will AI impact chip design in the near future?
Synopsys believes AI will continue to enhance engineering productivity, and help companies deal with the dual challenges of rising demand for powerful chips and a shortage of qualified engineers.
AI will also help engineers explore design options more quickly and make better decisions, resulting in more efficient and effective chips.
It can also make chip design more accessible to a wider range of engineers, democratizing this highly technical field by automating many of the more complex tasks. This could also have an effect at the company level, potentially allowing smaller firms to enter the market and design custom chips for specialized applications.
The emergence of genAI in chip design is still in its early stages, primarily functioning as a design assistant.
Going further, we're already using AI to design the chips that power it. These models are learning to improve upon previous processors, identifying ways to increase manufacturability and yield, and helping people make better decisions about how to refine their design processes. Given the pace at which these models are evolving, it might not be long before end-to-end AI design is possible. All we humans would need to do is set the target parameters, tell the AIs exactly what the manufacturing tools are capable of, and boom! New chips that outperform the last generation.
Continuing down this path, these processes could form a key reinforcement loop in the runaway acceleration that shoots us toward that wild idea of the singularity – where AI builds machines that are smarter than humans, and each new generation is able to improve itself faster than the last. It's equal parts frightening and fascinating to think about.
One day, the boundary between the designer and the toolkit may become gloriously indistinct, and we'll craft computational landscapes we can barely conceive. The processor of the future won't just be a tool – it will be a co-creator in humanity's most ambitious technological dreams.