Professor Hoi-Jun Yoo of the School of Electrical Engineering and his research team developed an artificial intelligence (AI) semiconductor that runs deep learning efficiently by applying variable artificial neural network technology. This was accomplished in cooperation with the startup uxfactory, a company Professor Yoo co-founded. Artificial neural networks are built from a collection of nodes called artificial neurons. These accept weighted inputs from other nodes, perform calculations on them, and produce an output that is then passed to other nodes. The new chip developed by Professor Yoo can trade energy efficiency against accuracy by altering the precision of the artificial neural network weights within the semiconductor. It makes this adjustment in software, varying the weight precision from one bit to 16 bits to achieve operation optimized for the context.
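To make the precision trade-off concrete, below is a minimal sketch in Python of a single artificial neuron whose weights are quantized to a chosen bit width. The uniform quantizer and tanh activation are generic illustrations, not the chip's actual scheme, which the article does not detail.

import numpy as np

def quantize_weights(weights, bits):
    # Uniformly quantize weights to 2**bits levels (illustrative only;
    # not the chip's actual quantization method).
    levels = 2 ** bits
    w_min, w_max = weights.min(), weights.max()
    step = (w_max - w_min) / (levels - 1)
    return np.round((weights - w_min) / step) * step + w_min

def neuron(inputs, weights, bits):
    # One artificial neuron: weighted sum of inputs, then an activation.
    return np.tanh(inputs @ quantize_weights(weights, bits))

rng = np.random.default_rng(0)
x = rng.normal(size=8)
w = rng.normal(size=8)

# Narrower weights lose accuracy; on hardware they also cost less energy
# per multiply, which is the trade-off the chip exposes.
for bits in (1, 4, 8, 16):
    print(bits, neuron(x, w, bits))

On hardware that processes weights bit by bit, a one-bit multiply needs roughly a sixteenth of the work of a 16-bit one, which is where the energy saving comes from.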

Previously, implementing AI technologies on mobile phones was a complicated task: high-speed operations had to be carried out at low power, and the heat produced by processing large amounts of data at once could cause accidents such as battery explosions. The team stated that its newly developed chip may change this landscape.

Because it is easily adjustable, the chip produced by Professor Yoo’s research team can handle both Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) simultaneously. CNNs are used to classify and detect images, while RNNs are commonly used for data that changes over time, such as video and sound. The team claims that the chip’s CNN and RNN performance is 1.15 times and 13.8 times higher, respectively, than that of world-leading mobile AI chips. Additionally, through the chip’s Unified Neural Processing Unit (UNPU), different energy efficiencies and accuracies can be set depending on the recognition target, allowing it to achieve an efficiency 40 percent higher than that of existing chips.
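For readers unfamiliar with the two network types, the following sketch contrasts their core operations: a 2D convolution that extracts spatial features from an image (the CNN case) and a recurrent state update that accumulates information across a sequence (the RNN case). This is a generic textbook illustration, not code for the UNPU.

import numpy as np

def conv2d(image, kernel):
    # One 2D convolution with valid padding: the core CNN operation.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def rnn_step(h, x, W_h, W_x):
    # One recurrent update: the hidden state h carries context over time.
    return np.tanh(W_h @ h + W_x @ x)

rng = np.random.default_rng(1)
frame = rng.normal(size=(5, 5))               # a tiny stand-in "image"
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])
features = conv2d(frame, edge_kernel)         # spatial features, as in a CNN

h = np.zeros(3)
W_h, W_x = rng.normal(size=(3, 3)), rng.normal(size=(3, 4))
for x in rng.normal(size=(6, 4)):             # a six-step sequence, as in an RNN
    h = rnn_step(h, x, W_h, W_x)
print(features.shape, h)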

The research team also developed an emotion recognition system that uses a smartphone’s camera to recognize facial expressions and classify them into seven emotions: happiness, sadness, surprise, fear, and three others.
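The article does not describe the system’s internals, but the general shape of such a pipeline (camera frame, then face detection, then a seven-way classification) can be sketched as follows. The OpenCV Haar-cascade detector and the random-weight linear classifier are stand-ins chosen for illustration, not the team’s components, and the three unnamed emotion labels are placeholders because the source lists only four.

import numpy as np
import cv2  # assumes the opencv-python package is installed

# Four labels come from the article; the remaining three are unspecified there.
EMOTIONS = ["happiness", "sadness", "surprise", "fear",
            "other_1", "other_2", "other_3"]

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

rng = np.random.default_rng(0)
W = rng.normal(size=(7, 48 * 48))  # stand-in for trained classifier weights

def classify(face):
    # Illustrative linear classifier over a 48x48 grayscale face crop;
    # a real system would run a trained network here.
    x = cv2.resize(face, (48, 48)).flatten() / 255.0
    return EMOTIONS[int(np.argmax(W @ x))]

cap = cv2.VideoCapture(0)  # the device camera
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        print(classify(gray[y:y + h, x:x + w]))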

Professor Yoo expects that it “will take another year or so to commercialize the technology” and that “the technology will be applied in various ways in the future, including object recognition, emotion recognition, motion recognition, and automatic translation.” The research team’s work was presented at the IEEE International Solid-State Circuits Conference (ISSCC), held in San Francisco, US, on February 13.
