Google is using machine learning to design the next generation of machine learning chips. The algorithm’s designs are “similar or superior” to those produced by humans, Google’s engineers say, but they can be generated much, much faster. According to the tech giant, work that takes humans months can be done by AI in less than six hours.
Google has been working on using machine learning to make chips for years, but this recent effort — described this week in a paper in the journal Nature — appears to be the first time the research has been applied to a commercial product: an upcoming version of Google’s proprietary TPU (tensor processing unit) chips, which are optimized for AI computation.
“Our method has been used in manufacturing to design the next generation of Google TPU,” write the authors of the paper, led by Azalia Mirhoseini, Google’s head of ML for systems.
In other words, AI is helping to accelerate the future of AI development.
In the paper, Google’s engineers note that this work has “major implications” for the chip industry. It should enable companies to explore the space of possible architectures for future designs more quickly and to more easily customize chips for specific workloads.
An editorial in Nature calls the research a “major achievement” and notes that such work could help offset the predicted end of Moore’s Law — the chip design axiom dating to the 1970s which states that the number of transistors on a chip doubles every two years. AI will not necessarily solve the physical challenges of squeezing more and more transistors onto chips, but it might help find other avenues to improve performance at the same pace.
The particular task that Google’s algorithms have tackled is known as “floorplanning.” This usually requires human designers working with computer tools to find the optimal layout on a silicon die for a chip’s subsystems. These components include things like CPUs, GPUs, and memory cores, which are interconnected via tens of miles of tiny wiring. Deciding where to place each component on the die affects the ultimate speed and efficiency of the chip. And, given both the scale of chip manufacturing and the number of computational cycles involved, nanometer-scale changes in placement can end up having huge effects.
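To make the stakes concrete, here is a minimal sketch of one standard way placement tools score a layout: half-perimeter wirelength (HPWL), which approximates how much wiring a placement needs. The component names and coordinates below are hypothetical, and this is a textbook proxy metric rather than the exact cost used in Google’s tool.

```python
def hpwl(placement, nets):
    """Half-perimeter wirelength of a placement.

    placement: component name -> (x, y) position on the die
    nets: lists of component names that must be wired together
    """
    total = 0.0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        # Half the perimeter of the bounding box enclosing the net's pins.
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Toy three-block layout: moving any block changes the score.
layout = {"cpu": (0, 0), "gpu": (4, 0), "mem": (4, 3)}
nets = [["cpu", "gpu"], ["gpu", "mem"], ["cpu", "mem"]]
print(hpwl(layout, nets))  # 4 + 3 + 7 = 14.0
```

Shorter total wirelength generally means a faster, more power-efficient chip, which is why small shifts in placement matter so much.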
Google’s engineers note that designing floor plans takes “months of intense effort” for humans, but from a machine learning perspective, there’s a well-known way to approach this problem: as a game.
AI has proven time and again that it can outperform humans at board games like chess and Go, and Google’s engineers note that floorplanning is analogous to such challenges. Instead of a game board, you have a silicon die. Instead of pieces like knights and rooks, you have components like CPUs and GPUs. The task, then, is simply to find each board’s “win conditions.” In chess that might be checkmate; in chip design, it’s computational efficiency.
Google’s engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been generated at random. Each design was tagged with a specific “reward” function based on its success across different metrics, such as the length of wire required and power usage. The algorithm then used this data to distinguish between good and bad floor plans and, in turn, generate its own designs.
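The scalar reward described above can be sketched as a weighted combination of cost metrics. The specific metrics, weights, and numbers below are purely illustrative — the paper’s actual reward uses its own proxy measures — but the shape is the same: lower cost on every metric yields a higher reward for the learning algorithm to maximize.

```python
def reward(wirelength, power, congestion,
           w_wire=1.0, w_power=0.5, w_cong=0.5):
    """Illustrative reward: the negated weighted sum of cost metrics.

    Lower wirelength, power, and congestion all push the reward
    closer to zero (the best possible score here).
    """
    return -(w_wire * wirelength + w_power * power + w_cong * congestion)

# A floor plan with shorter wires and lower power scores higher.
plan_a = reward(wirelength=120.0, power=3.0, congestion=0.8)
plan_b = reward(wirelength=90.0, power=2.5, congestion=0.9)
print(plan_b > plan_a)  # True
```

Tagging each of the 10,000 training layouts with such a score is what lets the algorithm learn to tell good placements from bad ones.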
As we’ve seen when AI systems compete against humans at board games, machines don’t necessarily think like humans and often come up with unexpected solutions to known problems. When DeepMind’s AlphaGo played human champion Lee Sedol at Go, this dynamic led to the infamous “move 37” — a seemingly illogical piece placement by the AI that nevertheless led to the win.
Nothing so dramatic has happened with Google’s chip design algorithm, but its floor plans nevertheless look quite different from those created by humans. Instead of neat rows of components laid out on the die, subsystems look like they’ve been scattered across the silicon almost at random. An illustration in Nature shows the difference, with the human design on the left and the machine learning design on the right. You can also see the general difference in the image below from Google’s paper (orderly human design on the left; jumbled AI design on the right), though the layouts have been blurred because they are confidential:
This work is noteworthy, especially as the research is now being used commercially by Google. But it’s far from the only aspect of AI-assisted chip design. Google itself has explored using AI in other parts of the process, such as “architecture exploration,” and rivals like Nvidia are looking into other methods to speed up the workflow. The virtuous cycle of AI designing chips for AI, it seems, is only just beginning.