Well … it was bound to happen.
Artificial intelligence is taking over from humans when it comes to semiconductor chip design.
Pretty soon, these AI algorithms will be designing our toasters, our TVs, our computers, our cars and maybe even medical equipment.
According to a new study, Google has developed an artificial intelligence capable of creating computer chips in “under six hours.”
The UK’s Daily Mail reported that the research, published in Nature, notes that humans can take “months” to design specialized chips for Google’s tensor processing units — a type of chip used in AI — but the reinforcement learning (RL) algorithm is far superior.
“The RL agent becomes better and faster at ‘floorplanning optimization’ as it places a greater number of chip netlists,” the researchers wrote in the study.
Given the netlist, the ID of the current node to be placed, and metadata about the netlist and the semiconductor technology, a policy model outputs a probability distribution over available placement locations, while a value model estimates the expected reward for the current placement.
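The policy/value split described above can be sketched in a few lines. Everything below is a hypothetical toy, not Google's model: the "network" is a random-score stand-in, and the 8×8 grid, masking scheme, and function names are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 8  # hypothetical 8x8 grid of candidate placement locations

def policy(netlist_meta, node_id, available):
    """Toy stand-in for the policy model: score every grid cell, zero out
    occupied cells, and return a probability distribution over the rest."""
    scores = rng.normal(size=GRID * GRID)   # stand-in for real network logits
    scores[~available] = -np.inf            # occupied cells get probability 0
    exp = np.exp(scores - scores[available].max())
    return exp / exp.sum()

def value(netlist_meta, node_id):
    """Toy stand-in for the value model: a scalar estimate of the
    expected final reward from the current partial placement."""
    return float(rng.normal())

available = np.ones(GRID * GRID, dtype=bool)
available[:5] = False                         # pretend five cells are taken
probs = policy(None, node_id=0, available=available)
next_cell = rng.choice(GRID * GRID, p=probs)  # sample the next placement
```

The key property is that occupied locations receive zero probability, so sampling from the distribution always yields a legal placement.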
In testing, the team started with an empty chip; the agent places components sequentially until it completes the netlist, and receives no reward until the end, when a negative weighted sum of proxy wirelength and congestion is tabulated.
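That episodic reward structure (zero at every intermediate step, a single negative weighted sum at the end) can be sketched as follows; the weights and metric values here are invented for illustration and are not the paper's actual cost coefficients.

```python
def episode_reward(wirelength, congestion, w_wl=1.0, w_cong=0.5):
    """Negative weighted sum of proxy wirelength and congestion.
    The weights are illustrative, not the paper's values."""
    return -(w_wl * wirelength + w_cong * congestion)

# The agent places 10 components; only the final step is scored.
rewards = [0.0] * 9
rewards.append(episode_reward(wirelength=120.0, congestion=8.0))
print(rewards[-1])  # → -124.0
```

Because lower wirelength and congestion both push the sum toward zero, maximizing this reward is the same as minimizing the combined layout cost.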
AI has already been used to develop the next iteration of Google’s tensor processing unit chips, which are used to run AI-related tasks, Google said.
“Our method has been used in production to design the next generation of Google TPU,” wrote the authors of the paper, led by Google’s co-heads of machine learning for systems, Azalia Mirhoseini and Anna Goldie.
To put it another way, Google is using AI to design chips that can be used to create even more sophisticated AI systems, CNBC reported.
Specifically, Google’s new AI can draw up a chip’s “floorplan.”
This essentially involves plotting where components like CPUs, GPUs, and memory are placed on the silicon die in relation to one another — their positioning on the minuscule die is important because it affects the chip’s power consumption and processing speed.
The researchers used a dataset of 10,000 chip layouts to feed a machine-learning model, which was then trained with reinforcement learning.
It emerged that in only six hours, the model could generate a design that optimizes the placement of different components on the chip, to create a final layout that satisfies operational requirements such as processing speed and power efficiency.
Google’s engineers noted in the paper that the breakthrough could have “major implications” for the semiconductor sector.
Facebook’s chief AI scientist, Yann LeCun, hailed the research as “very nice work” on Twitter, adding “this is exactly the type of setting in which RL shines.”
The breakthrough was hailed as an “important achievement” that will “be a huge help in speeding up the supply chain” in a Nature editorial.
However, the journal said “the technical expertise must be shared widely to make sure the ‘ecosystem’ of companies becomes genuinely global.”
Modern chips contain billions of different components laid out and connected on a piece of silicon the size of a fingernail, ZDNet.com reported.
For example, a single processor will typically contain tens of millions of logic gates, also called standard cells, and thousands of memory blocks, known as macro blocks – which then have to be wired together.
The placement of standard cells and macro blocks on the chip is crucial to determine how quickly signals can be transmitted on the chip, and therefore how efficient the end device will be.
This is why much of engineers’ work focuses on optimizing the chip’s layout.
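One widely used proxy for how well a placement will route is half-perimeter wirelength (HPWL): the width plus height of the bounding box around a net's pins, summed over all nets. A minimal sketch for a single net follows; the coordinates are invented, and the Nature paper uses its own proxy cost, not necessarily this formula.

```python
def hpwl(pins):
    """Half-perimeter wirelength of one net: width plus height of the
    bounding box that encloses all of the net's pin coordinates."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# Three pins of a hypothetical net placed at grid coordinates:
print(hpwl([(0, 0), (3, 1), (1, 4)]))  # → 7
```

Moving blocks closer together shrinks these bounding boxes, which is why placement has such a direct effect on signal delay and power.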
The number of possible layouts for macro blocks is colossal: according to the researchers, there are on the order of ten to the power of 2,500 different configurations to put to the test — that is, 2,500 zeros after the 1.
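A back-of-envelope check shows how numbers of that magnitude arise: placing even a thousand blocks one by one onto a grid of ten thousand cells already yields a count with thousands of digits. The problem sizes below are invented for illustration, not taken from the paper.

```python
import math

macros, cells = 1_000, 10_000  # hypothetical problem size
# Ordered placements: cells * (cells - 1) * ... , one factor per macro.
# Summing log10 of each factor gives the count's number of decimal digits,
# without ever materializing the astronomically large product.
digits = sum(math.log10(cells - i) for i in range(macros))
print(f"~10^{digits:.0f} possible placements")
```

Working in log space is the standard trick here, since the product itself would overflow any fixed-width numeric type.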
What’s more … once an engineer has come up with a layout, it is likely that they will have to subsequently tweak and adjust the design as standard cells and wiring are added.
That takes time — each iteration can take up to several weeks.
Given the painstaking complexity of floorplanning, the whole process seems an obvious match for automation. Yet for several decades, researchers have failed to come up with a technology that can remove the burden of floorplanning for engineers.
And the challenge is only getting harder.
The often-cited Moore’s law predicts that the number of transistors on a chip doubles roughly every two years — meaning that engineers are faced with a problem that grows exponentially with time, while still having to meet tight schedules.
This is why Google’s apparently successful attempt to automate floorplanning could be game-changing.
Sources: The Daily Mail, CNBC, Nature, ZDNet.com, TechGraByte.com