TPU (Tensor Processing Unit) is a processor from Google designed to perform tensor computations efficiently and, as a result, to train neural networks efficiently. Essentially, it's a souped-up version of a GPU, specialized for this workload.
Designing a processor is a complex process. Here, "design" means searching for the optimal placement of the processor's components on the silicon die to achieve the greatest efficiency and speed. (In English, this process is called "floorplanning".) It is usually done by human engineers, and producing a new, improved layout takes months.
Google decided to entrust this process to AI. They trained a reinforcement learning (RL) algorithm on a dataset of 10,000 chip placements, each with measured efficiency metrics. These metrics served as the reward function for the RL algorithm.
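To give a feel for the "placement scored by a reward function" idea, here is a deliberately tiny sketch. The grid size, block count, and netlist below are made up for illustration, and instead of a learned neural policy (which is what Google actually trained) it uses simple random hill-climbing — only the reward-driven optimization loop is the point:

```python
import random

# Toy stand-in for floorplanning: place BLOCKS macro blocks on a grid so
# that connected blocks end up close together. The real system trains a
# neural policy with RL; here a random hill-climber plays that role.

GRID = 8    # 8x8 grid of candidate slots (hypothetical)
BLOCKS = 6  # number of blocks to place (hypothetical)
# hypothetical netlist: pairs of blocks that are wired together
NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]

def reward(placement):
    # Negative total Manhattan wirelength: shorter wires -> higher reward.
    total = 0
    for a, b in NETS:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return -total

def optimize(steps=2000, seed=0):
    rng = random.Random(seed)
    all_slots = [(x, y) for x in range(GRID) for y in range(GRID)]
    best = rng.sample(all_slots, BLOCKS)  # random initial placement
    best_r = reward(best)
    for _ in range(steps):
        cand = best[:]
        i = rng.randrange(BLOCKS)  # move one randomly chosen block
        cand[i] = (rng.randrange(GRID), rng.randrange(GRID))
        # keep the move only if blocks don't overlap and reward improves
        if len(set(cand)) == BLOCKS and reward(cand) > best_r:
            best, best_r = cand, reward(cand)
    return best, best_r

placement, r = optimize()
print("best reward (negative wirelength):", r)
```

In the real system the "propose a placement" step is a trained policy network rather than a random move, which is exactly why it can discover layouts a human would never try.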
The result was impressive: the trained algorithm started proposing completely unexpected component placements, rather illogical from a human point of view, yet working! People would hardly have come up with them on their own. In the picture attached to the post you can see two chip layouts: on the left, one proposed by a human designer; on the right, one proposed by the AI (the images are blurred due to NDA). The placement logic is clearly very different.
Some believe that boosting processor efficiency with AI in this way will help keep Moore's law alive in the near future.
Welp, AI now solves in 6 hours a problem that takes humans months =) The authors say the next generation of Google TPUs will already be based on an AI-produced design. Is it, once again, time for someone to look for a new job?
#machinelearning #artificialintelligence #ai #datascience #programming #Technology #Deeplearning