A research team led by Professor Hoi-Jun Yoo from the Department of Electrical Engineering has developed a low-power generative adversarial network (GAN) AI semiconductor chip. The chip consumes little power, allowing multi-layer deep neural networks to run in mobile environments. With this powerful neural chip, the team succeeded in image synthesis, image restoration, style transfer, and much more. Because the GAN runs on the processor itself, no data has to be transmitted to an external server, which improves data privacy.
PhD candidate Sanghoon Kang, the first author of this research, presented the development of the AI chip at the IEEE Symposium on Computers and Communications (ISCC). Unlike conventional discriminative AI models, GAN technology is not limited to classifying predefined inputs: it can generate and regenerate images, which makes it applicable not only to academic research but also to industry.
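To make the generative/discriminative distinction concrete, here is a minimal toy sketch, not the team's chip design: the single-layer networks, layer sizes, and weights below are illustrative assumptions. A discriminator maps an image to a probability of being real, while a generator maps random noise to an image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper)
NOISE_DIM, IMG_DIM = 16, 64

# Generator: noise vector -> "image" vector (one linear layer + tanh)
W_g = rng.normal(0.0, 0.1, (IMG_DIM, NOISE_DIM))
def generator(z):
    return np.tanh(W_g @ z)

# Discriminator: "image" vector -> probability that it is real (sigmoid)
W_d = rng.normal(0.0, 0.1, IMG_DIM)
def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(W_d @ x)))

z = rng.normal(size=NOISE_DIM)   # random noise input
fake = generator(z)              # generated sample
p_real = discriminator(fake)     # discriminator's guess on the fake
```

In a real GAN the two networks are trained adversarially, the discriminator learning to tell real from generated samples and the generator learning to fool it; the sketch only shows the data flow of one forward pass.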
However, GANs also have drawbacks compared to conventional deep learning networks. Because a GAN is composed of multi-layer deep neural networks, it is difficult to accelerate on a chip under certain conditions; current technology is limited to approximations or single-layer neural networks. Moreover, generating high-definition images demands far more computation than existing neural networks require.
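The resolution point can be made with rough arithmetic. For a convolutional layer, the multiply-accumulate (MAC) count scales with the output area, so quadrupling the side length of the generated image multiplies that layer's cost by sixteen. The layer sizes below are illustrative assumptions, not figures from the paper.

```python
# Rough MAC count for one convolutional layer:
# output_height * output_width * out_channels * in_channels * kernel^2
def conv_macs(h, w, c_in, c_out, k=3):
    return h * w * c_out * c_in * k * k

low  = conv_macs(64, 64, 64, 64)     # layer producing a 64x64 feature map
high = conv_macs(256, 256, 64, 64)   # same layer at 256x256

ratio = high // low  # cost grows with the square of the side length
```

Since a generator stacks many such layers to reach full image resolution, the total cost of high-definition generation grows quickly, which is why a dedicated low-power processor matters for mobile use.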
The research team pushed the limits of AI by developing a GAN processing unit that can handle multi-layer deep neural networks under mobile conditions. Professor Hoi-Jun Yoo's team built an interactive system using this technology: users took a picture of their face, and the GAN processor regenerated 17 facial features such as hair, eyes, and eyebrows. Just as the AI chip can create images without any human input, Professor Yoo believes the semiconductor chip will eventually be able to create original content such as music.