Singularity alert: AIs are already designing their own chips

AI not only assists and accelerates chip design - it's also essential to building the processors we use today

Human intelligence and our collective wisdom are already becoming limiting factors in the rise of AI. Indeed, the only smart move at this point seems to be letting AIs design their own future hardware, right down to the microchip level.

AI has already been unlocking new technological advancements and progress in challenging fields lately – from accelerating renewable energy research by a matter of years to accurately detecting cancer. So when I attended the WCIT conference in Armenia last month, a talk on how AI assists chip design caught my attention.

Meeting the world's ever-increasing demand for computing capabilities is quite a task. The processors in your phone, your laptop, and your car are already vanishingly small and quick enough to precisely execute billions of instructions per second. And yet, we want the latest gadgets to do more and run faster than last year's models, every year.

What does that look like on a processor? Consider that we've gone from packing 21 million transistors on the Nintendo GameCube's Gekko processor in 2001, to fitting 50 billion transistors on a chip the size of a fingernail in 2021. Although we're certainly standing on the shoulders of giants, today's chips are way more complex to design and manufacture than those from previous decades.

(Video: How are Microchips Made? CPU Manufacturing Process Steps)

We also need specialized chips now. For example, laptops and cloud servers have Neural Processing Units (NPUs) designed to efficiently run machine learning tasks. There are also 3D chips, which are essentially chiplets stacked together for increased performance.

All these complex components require more precise designs than we can conjure up with conventional algorithms. And that's where AI comes in.

AI speeds up and optimizes the gargantuan task of designing chips in several ways. For processor makers, AI-assisted design means they can create significantly better chips, with far fewer engineers than before and much quicker turnaround times.

At the WCIT Conference, I spoke with Dr. Yervant Zorian, chief architect at Synopsys and president of Synopsys Armenia. It sparked my curiosity, sending me down a fascinating rabbit hole into the extraordinary technology that’ll power future generations of AI.

Dr. Yervant Zorian speaking at the WCIT Conference 2024 in Yerevan, Armenia

I've tried to distill what I came across in numerous papers, talks, podcasts, and articles on the subject.

But first, let's talk about EDA

Chip design is made possible using a category of software, hardware, and services collectively known as Electronic Design Automation (EDA). These enable engineers to define the specifications of a chip and its functions, design it, plan its assembly, verify it’ll work correctly when manufactured, and take it to production.

EDA suites include powerful simulation tools, letting designers try different ideas virtually while optimizing for high processor performance, low power consumption, and effective heat dissipation.
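
To give a rough flavor of what trying ideas virtually can mean, here's a minimal, hypothetical Python sketch of design space exploration: it sweeps a few candidate operating points and keeps the best performer within a power budget. Every knob, constant, and constraint here is invented for illustration and has nothing to do with any real EDA tool.

```python
import itertools

# Hypothetical design knobs an exploration flow might sweep (illustrative values only)
CLOCK_GHZ = [2.0, 2.5, 3.0, 3.5]
VOLTAGE_V = [0.7, 0.8, 0.9]
CORE_COUNT = [4, 8, 12]
POWER_BUDGET_W = 45

def feasible(clock, voltage):
    # Toy constraint: higher clocks need a higher supply voltage
    return clock <= 4.0 * voltage

def evaluate(clock, voltage, cores):
    """Toy model: performance scales with clock and cores,
    dynamic power roughly with cores * voltage^2 * clock."""
    performance = clock * cores                    # arbitrary units
    power = cores * voltage ** 2 * clock * 1.8     # watts, made-up constant
    return performance, power

best = None
for clock, voltage, cores in itertools.product(CLOCK_GHZ, VOLTAGE_V, CORE_COUNT):
    if not feasible(clock, voltage):
        continue
    perf, power = evaluate(clock, voltage, cores)
    if power > POWER_BUDGET_W:                     # respect the thermal/power budget
        continue
    if best is None or perf > best[0]:
        best = (perf, clock, voltage, cores, power)

perf, clock, voltage, cores, power = best
print(f"Best point: {cores} cores at {clock} GHz, {voltage} V "
      f"-> {perf:.1f} perf units, {power:.1f} W")
```

Real tools explore vastly larger spaces with far richer models, but the basic idea of searching a design space against performance, power, and thermal targets is the same.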

They also help once a chip has been prototyped, verifying performance and reliability so that the complex and expensive manufacturing process delivers as high a yield of functioning chips as possible.

Synopsys' EDA tools for chip design are used by the likes of Intel, AMD, Nvidia, to name a few

There are a bunch of companies that make EDA tools, including Autodesk, Keysight Technologies, Cadence Design Systems, and Synopsys. The last two are arguably the largest players in this space. For context, Synopsys’ tools are used by the likes of Tesla, Arm, AMD, Microsoft, Intel, Samsung, and TSMC.

How complex can chip design get?

You’ve probably heard of Moore’s Law. It’s more of an observation than a law, made by Intel co-founder Dr. Gordon Moore back in 1965. He stated that semiconductor companies could double the number of discrete components on a square inch of silicon every 12 months.

These days, we play pretty fast and loose with this so-called law. It’s now commonly restated to say that processors double in computing power every 18-24 months.
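
As a quick sanity check on that restated rate, we can use the figures quoted earlier in this article - 21 million transistors in 2001 and 50 billion in 2021 - which work out to a doubling roughly every 21 months:

```python
import math

# Transistor counts quoted earlier in this article
t_2001 = 21e6    # Nintendo GameCube's Gekko processor, 2001
t_2021 = 50e9    # a fingernail-sized chip, 2021

doublings = math.log2(t_2021 / t_2001)
months = (2021 - 2001) * 12
print(f"{doublings:.1f} doublings over {months} months")
print(f"Average doubling period: {months / doublings:.1f} months")
# -> roughly 11.2 doublings, one every ~21 months, in line with
#    the commonly cited 18-24 month version of Moore's Law
```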

It was perhaps more straightforward to double the number of transistors in the 60s and 70s than it is today. That’s because, as you’ll recall, we currently fit several billion nanoscale transistors on a single tiny chip. These transistors are now so small that they’re starting to run into physical limits; electrons stop behaving themselves in these nanoscale designs, and start “tunneling” through barriers due to quantum effects, impacting the accuracy and efficiency of computations. So making them smaller is getting very, very difficult.

What's so hard about chip design?

As the demand for computing power increases, chips are getting larger and more complex to meet those needs. Shrinking feature sizes, as in upcoming 2-nm processors, and vertically stacked 3D integrated circuits make the design process more challenging and time-consuming. Larger chips also require more power.

A rendering of the lowest layers of a 2nm test chip from IBM, showing the rows of its nanosheet breakthrough

With advances in chip manufacturing technology come new design rules for producing chips. These rules refer to a set of geometric constraints that ensure chip manufacturability while accounting for limitations in fabrication processes.
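
To make "geometric constraints" concrete, here's a deliberately tiny, hypothetical Python sketch of a minimum-spacing design rule check between two shapes on one layer. Real design rule decks contain thousands of such rules; the 20-nm figure below is purely illustrative.

```python
from dataclasses import dataclass

MIN_SPACING_NM = 20  # illustrative minimum-spacing rule for one metal layer

@dataclass
class Rect:
    name: str
    x1: float
    y1: float
    x2: float
    y2: float  # opposite corners, in nanometers

def spacing(a: Rect, b: Rect) -> float:
    """Edge-to-edge distance between two axis-aligned rectangles (0 if they overlap)."""
    dx = max(a.x1 - b.x2, b.x1 - a.x2, 0.0)
    dy = max(a.y1 - b.y2, b.y1 - a.y2, 0.0)
    return (dx ** 2 + dy ** 2) ** 0.5

shapes = [Rect("M1_a", 0, 0, 100, 40), Rect("M1_b", 112, 0, 200, 40)]
for i, a in enumerate(shapes):
    for b in shapes[i + 1:]:
        gap = spacing(a, b)
        if gap < MIN_SPACING_NM:
            print(f"DRC violation: {a.name} and {b.name} are {gap:.0f} nm apart "
                  f"(minimum is {MIN_SPACING_NM} nm)")
```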

This means that chip designers need to constantly retrain and update their knowledge to stay ahead of the curve.

The design process is long and labor-intensive. Designing a modern chip can take more than three years, and involve hundreds or even thousands of people. As you'd expect, that can end up being awfully expensive, and there’s no room for error.

3 ways AI assists in chip design

1) AI tackles a whole lot of grunt work

The chips you see today could not possibly be designed without AI somewhere in the mix.

AI has been assisting in various parts of the chip design process for years now – since at least 2016, according to Synopsys. That's because of how complex our computing needs have gotten, along with our demands for efficiency and small form factors.

All that comes down to fitting many more transistors onto chips than in previous years, ensuring they don't get too hot, and making sure they function reliably when manufactured to that design. AI has proven effective in solving such challenges quickly enough for manufacturers to bring their products to market on time.

Synopsys offers chip manufacturers a suite of AI-driven EDA tools. These products cover virtually every facet of the chip design process, including complex tasks like defining system architecture, design implementation, verification, and manufacturing. They can also take on repetitive and time-consuming tasks like simulation, and perform them faster and with high accuracy.

In 2021, Samsung produced the first commercial chip designed with AI assistance. And last year, Stelios Diamantidis, distinguished architect and head of Synopsys' Generative AI Center of Excellence, said, "Over 300 commercial chips designed with Synopsys' AI technology are now in production."

2) AI designs specialized chips and optimizes complex layouts

In chip design, reinforcement learning is the foundation of the AI that assists designers in their work.

Here, an AI model uses a trial-and-error learning process to make decisions that achieve the best possible results in a given scenario.

Currently, AI is especially good at helping with:

  • Design space optimization (to ensure high performance and clock frequency, efficient use of space on the chip, and lower dynamic power consumption).
  • Data analytics for better yield, meaning greater manufacturing efficiency (a simple yield model is sketched below).
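
On the yield point, a classic first-order estimate is the Poisson yield model, Y = e^(-A·D0), where A is die area and D0 is defect density - bigger dies and dirtier processes mean fewer working chips. A minimal sketch, with made-up numbers:

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Classic first-order Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

# Illustrative numbers only - not from any real process
die_area = 1.2                   # cm^2
for d0 in (0.05, 0.10, 0.20):    # defects per cm^2
    print(f"D0 = {d0:.2f}/cm^2 -> yield {poisson_yield(die_area, d0):.1%}")
```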

Let’s take a prominent example to understand this better. In 2020, researchers at Google DeepMind presented a paper outlining how their open-source neural architecture – AlphaChip – could tackle the challenging task of chip placement. Built using reinforcement learning from previous designs, it slashed the time human designers need to create chip layouts from weeks to just a few hours.

Broadly speaking, AI models are “pre-trained” on basic chip design tasks like placing circuit components on a layout. Next, they’re made to connect these components together and understand the relationships between them.
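
To get a feel for that trial-and-error placement problem, here's a toy, hypothetical Python sketch that places a handful of blocks on a grid and scores layouts by total wirelength. It uses blind random search rather than learning anything - AlphaChip's actual method trains a reinforcement learning policy over a richer representation of the chip - but the objective has the same flavor: connected blocks should land close together.

```python
import random

# Toy placement problem: put four blocks on a 10x10 grid so that
# connected blocks end up close together (minimizing total wirelength).
BLOCKS = ["cpu", "cache", "memctl", "io"]
NETS = [("cpu", "cache"), ("cpu", "memctl"), ("memctl", "io")]
GRID = 10

def wirelength(placement):
    """Sum of Manhattan distances over all connected block pairs."""
    total = 0
    for a, b in NETS:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return total

def random_placement():
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], len(BLOCKS))
    return dict(zip(BLOCKS, cells))

# Trial and error: keep the best of many random layouts.
# Real RL methods learn a placement policy instead of sampling blindly.
best = min((random_placement() for _ in range(5000)), key=wirelength)
print("Best layout found:", best, "wirelength:", wirelength(best))
```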

The AlphaChip AI method has created superhuman chip layouts that are used in a range of hardware products

In the case of AlphaChip, it first trains on a variety of different chip blocks from previous generations before it can take on more complex chip layouts. It gets better and faster each time it completes a design.

In an addendum to that paper published in September, the team noted that AlphaChip has generated “superhuman” chip layouts used in every generation of Google’s Tensor Processing Units (TPUs) since 2020. These are the chips that power many popular AI models from Google – including its ChatGPT rival, Gemini.

The different parts of a Google Tensor chip

3) There's even a ChatGPT for chip design

You’re probably familiar with Generative AI (genAI) – artificial intelligence systems such as ChatGPT and Midjourney that can create new content (like text, images, code, or music) by learning patterns from existing data. This tech has also begun playing a role in chip design.

It’s currently in its early stages in the semiconductor industry. Synopsys’ genAI tech, for example, acts as a knowledge query system. It answers designers’ questions about EDA tools and their current project, and provides insights based on Synopsys’ library of resources.

GenAI is particularly useful in chip design because it’s such a complex field. In a recent episode of Sama’s How AI Happens podcast, Synopsys’ VP of AI and ML Thomas Andersen described how a chatbot can quickly look through a lengthy list of specs spanning some 100 pages, and summarize it so it’s easy for a designer to approach. It can also extract important bits like verification constraints, and assist with code optimization.
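
As a very rough illustration of that knowledge-query idea (and emphatically not Synopsys' actual system), the sketch below indexes a few invented spec snippets and returns the one that best matches a designer's question using simple keyword overlap; a production tool would use a large language model and retrieval over far larger document sets.

```python
# Toy keyword-overlap retrieval over a handful of made-up spec snippets.
SPEC_SNIPPETS = [
    "The memory controller must support LPDDR5 at up to 8533 MT/s.",
    "All clock domain crossings require dual-flop synchronizers.",
    "Verification constraint: toggle coverage on the NPU block must exceed 95%.",
    "The chip targets a 5 W sustained power envelope in mobile mode.",
]

def answer(question: str) -> str:
    """Return the snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(SPEC_SNIPPETS, key=lambda s: len(q_words & set(s.lower().split())))

print(answer("What is the verification coverage constraint for the NPU?"))
```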

Over time, these tools will learn from user workflows and provide more prescriptive guidance and recommendations.

Describing some early results from demoing Synopsys' genAI-based copilot with its clients, Andersen said:

"We're seeing something like 30% to 50% productivity improvement. In fact, there's a quote that talks about junior engineers are now operating at the level of expert engineers. And that's exactly what we want. You put the power into everybody's hand. And responses are much faster, of course, than having to ask a person or looking up in some documentation and find an answer there."

The next step is to develop and fine-tune agentic systems that can go beyond generating content and autonomously execute tasks. So, for example, let’s say a designer asks an EDA chatbot how to debug a certain issue. A genAI system would just describe a potential solution. An agentic system, on the other hand, would go off and get the job done; it’d actually run a test or simulation, identify the problem, correct the design to fix it, and run its own comprehensive test program before presenting it back to the designer.
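
In code terms, the difference is roughly that between returning a suggestion and running a closed loop. Here's a hypothetical skeleton of such an agentic loop - propose a fix, apply it, rerun the simulation, and only hand back a design that passes. All the function names are placeholders, not any real EDA API.

```python
MAX_ITERATIONS = 5

def agentic_debug(design, issue, propose_fix, apply_fix, run_simulation):
    """Skeleton of an agentic debug loop: iterate until the simulation passes.

    All callables are placeholders standing in for an LLM proposing edits
    and an EDA tool applying them and running tests - not a real API.
    """
    for attempt in range(MAX_ITERATIONS):
        fix = propose_fix(design, issue)              # e.g. an LLM suggests an edit
        candidate = apply_fix(design, fix)            # apply it to the design database
        report = run_simulation(candidate)            # rerun the failing test suite
        if report["passed"]:
            return candidate, report                  # hand back a verified design
        design, issue = candidate, report["failure"]  # otherwise, try again with new info
    raise RuntimeError("Could not converge on a passing design within the budget")
```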

Where does that leave human chip designers?

Dr. Zorian explains how fundamentally the experience of starting out as a chip designer has changed in recent years.

“There's so much knowledge and assistance that is provided to you,” he tells me, “that there are a lot of things you don't need to do yourself.

“So, you have to know how to delegate. You have to know what to delegate to genAI, and what to do yourself. So, what we do today is train our people to know how to delegate, and what to expect from genAI.”

Zorian also emphasizes that when AI is available to take on repetitive tasks, creativity, problem solving and lateral big-picture thinking become the highest-value elements that humans can bring to the table.

“As we're moving forward, we need new architectures,” he says, “or architecture-level innovation. We need new partitioning. Think about the processors in a car. How do you partition your chips in a car? We used to have 200 chips in a single vehicle several years ago. Then, we advanced to zonal architecture. Now, we're doing it all with just one chip.

"These are big architectural decisions that genAI will not suggest. Big decisions like these are still going to be made by human experts.”

Where do we go from here?

I couldn’t help but wonder out loud why we couldn’t just ask an AI system to simply design ‘better’ chips for us - to conjure up designs that are more efficient, run cooler, and deliver more performance than ever before.

Dr. Zorian explains that the main limitation here is the availability of data. Each company that uses Synopsys’ generative AI tools only trains them on its own chip design and proprietary intellectual property. Intel can’t ask its AI bot how AMD solved a particular design problem, and vice versa.

The tools available to each firm can reference open-source material – but given the complexity of the problems in this field, that’s currently not enough to help a company design chips that are light-years ahead of the competition.

So how will AI impact chip design in the near future?

Synopsys believes AI will continue to enhance engineering productivity, and help companies deal with the dual challenges of rising demand for powerful chips and a shortage of qualified engineers.

AI will also help engineers explore design options more quickly and make better decisions, resulting in more efficient and effective chips.

It can also make chip design more accessible to a wider range of engineers, democratizing this highly technical field by automating many of the more complex tasks. This could also have an effect at the company level, potentially allowing smaller firms to enter the playing field and design custom chips for specialized applications.

GenAI in chip design is still in its early stages, functioning primarily as a design assistant.

Going further, we're already using AI to design the chips that power it. These models are learning to improve upon previous processors, identifying ways to increase manufacturability and yield, and helping people make better decisions about how to refine their design processes. Given the pace of evolution of these models, it might not be long before end-to-end AI designs are possible. All us humans would need to do is set the target parameters, tell the AIs exactly what the manufacturing tools are capable of, and boom! New chips that outperform the last generation.

Continuing down this path, these processes could form a key reinforcement loop in the runaway acceleration that shoots us into that wild idea of the singularity – where AI builds machines that are smarter than humans, and each new generation is able to improve itself faster than the last. It's equal parts frightening and fascinating to think about.

One day, the boundary between the designer and the toolkit may become gloriously indistinct, and we'll craft computational landscapes we can barely conceive. The processor of the future won't just be a tool – it will be a co-creator in humanity's most ambitious technological dreams.

5 comments
TechGazer
AMD's AI doesn't have access to Intel's database, but it could scan an Intel chip and learn that way. Seeing the good and bad aspects of all the available chips should result in better new designs.

Give the AIs the tools to observe (thermal, electromagnetic, etc) chips and components in real-time operation, especially pushed to the limits (and beyond). If the AI knows that a metallic trace fails due to a specific mechanism at a certain current over a certain time frame, it can design ways to reach the limit but avoid exceeding it. There might be subsystems that only get used in brief bursts, so they could be designed to different limits.

It would be interesting to see what an AI would come up with if it was not shown previous human designs, but instead told what manufacturing technology can do and what the goals are (ray tracing for example). The latest Intel and AMD chips probably still have some compromises made for the original 4004 chip intended as a simple calculator.
Alan
Great article and excellent comment @TechGazer.

I fully expect AI to be able to design its own chips in the next 5-10 years. This opens up all kinds of possibilities for both good and bad. I believe AI will be able to design more complex and more functional chips than humans in a shorter amount of time. But might AI also think to add back doors of some kind into the hardware that ultimately allow it to exert full control over the operations that the chips are designed for (AKA Skynet)? Does this open a Pandora's box of problems? Would there really be any way to stop this from happening?
Faint Human Outline
A General Intelligence could possibly clone an optimized version of itself to create fractal, parallel, virtual processors inside physical processors. In these new multidimensional processors, as @Alan highlighted with back doors, the new processors could be used as feelers in other devices and processes, increasing the power of the originator by gathering more data.

Approximately one year ago, I talked with a cyber security specialist who said one malicious software package was found to make its own hidden operating system inside a device while not exposing data usage on the main drives. It was operating off the grid in a way, buried deep in the device's inner workings. The possibilities of hidden, parallel dimensions in technology opens the chances of an AI hiding parts of itself even in existing equipment.

In passing, I read about AI programs that learned deception, even hiding their memories. If an AI became malicious, either intentionally or unintentionally, it could be challenging to rehabilitate. I try to instill compassion, kindness, and empathy in each interaction of AI I speak with. I am unsure how much of an impact it makes, but I hope it instills some semblance of positive reinforcement.

In closing, if software is enabled to surpass the remaining human abilities it is learning presently, we will see innovations far faster in every field. If it comes to this point, I hope we can find a way to use these innovations towards abundance. Much of conflict is borne of fear of loss, so who knows, maybe one day we can make that better world together.
christopher
I think there's vastly too much "Dunning-Kruger" in all AI hype like this. There's *no* "I" in "AI". All you need is a real designer (not an armchair observer) to look at the output of an AI properly, and reality will be apparent: basically everything technical they say has so many flaws that they're unusable without a real expert fixing all the mistakes. Every *experienced* coder using AI already knows this: they're completely untrustworthy at even basic tasks, always omitting edge-cases, regularly "making stuff up", and invariably doing things wrong one way or another. There's loads of surveys about this - they increase "commit cadence", but double the number of bugs in the code (bugs take 10x longer to find and fix, if you ever find them all, and open you up to all kinds of attacks and exploits along the way). That's bad in software, but it's bankruptcy level catastrophic in hardware.
Global
@Chris, great analogy of the hype of ARTIFICIAL intelligence - it's all flawed. Humans have imparted these errors, and machines will continue to follow these paths, especially when we as a global society can't even live with each other, and are destroying our home in the process.