Updated: 2017.11.20 21:48
 
Neuroscience-Inspired AI Mimics Human Brain
[ Issue 156 Page 5 ] Friday, September 22, 2017, 20:25:43 Juhoon Lee Assistant Editor juhoonlee@kaist.ac.kr

On September 13, Professor Christopher Summerfield, a cognitive neuroscientist at the University of Oxford, gave a seminar on his research into neuroscience-inspired artificial intelligence (AI). Held as the second part of the 2017 KAIST Computational Psychiatry Seminar Series, the lecture, titled “Neural and computational mechanisms of human decision-making”, took place at the Yang Boon Soon Building (E16-1).

Professor Summerfield, a consultant at Google DeepMind, has been focusing on mapping human neural networks onto AI algorithms to enhance their decision-making skills. His research team takes the human mind as both “an inspiration and a validation” for the algorithms. In an interview with The KAIST Herald, the professor elucidated the basic foundations and challenges of the simulation process.

   
Professor Christopher Summerfield

He explained that many deep learning algorithms already utilize deep convolutional networks, modeled on mammalian sensory systems and the primate visual system, to perform strongly at specific tasks such as image classification. However, Professor Summerfield stated that the goal of the research is to expand beyond simple recognition systems: “Intelligent systems can do more than simply classify information into taught labels. For example, when you have children, as they grow, they are very capable of generating all types of complex behaviors. They’re learning from the statistics of the environment; they’re capable of ‘one-shot learning’,” he explained. The term refers to a person’s ability to quickly generate inferences through unsupervised learning, without having to parse a large amount of data over a long period of time.
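The one-shot idea described above is often illustrated with a nearest-prototype classifier: given a single example per class, a new input is simply assigned to the class whose example it most resembles. The sketch below is illustrative only and is not the research team’s actual method; the function name and toy vectors are invented for this example.

```python
import numpy as np

def one_shot_classify(prototypes, x):
    """Assign x to the class of its nearest single-example prototype.

    prototypes: dict mapping label -> one example vector per class
    x: query vector to classify
    """
    return min(prototypes, key=lambda label: np.linalg.norm(prototypes[label] - x))

# One example per class is enough to generalize to nearby inputs.
protos = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}
print(one_shot_classify(protos, np.array([0.9, 0.2])))  # -> cat
```

A single labeled example per class suffices here because classification reduces to a distance comparison, which is one simple way of capturing “learning without a large amount of data”.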

The team is trying to build AI that mimics such capabilities, setting them as “guiding principles”. For example, humans demonstrate attention, or weighted observation. Following suit, the desired algorithm would accept an image as input, learn to associate each pixel with a different weight, and thus build mechanisms that isolate the important aspects of the given data.
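The per-pixel weighting described above can be sketched as a soft attention map: relevance scores are normalized with a softmax so the weights sum to one, and the image is reweighted accordingly. This is a minimal sketch, not the team’s actual algorithm; in practice the scores would be learned, whereas here they are set by hand.

```python
import numpy as np

def attend(image, scores):
    """Weight each pixel of `image` by a softmax over `scores`.

    image:  H x W array of pixel values
    scores: H x W array of relevance scores (learned in a real model)
    """
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over all pixels
    return weights * image              # weighted observation

image = np.arange(9.0).reshape(3, 3)
scores = np.zeros((3, 3))
scores[1, 1] = 5.0                      # mark the center as important
out = attend(image, scores)
```

Because the center pixel receives almost all of the attention weight, the output is dominated by that region, which is the “isolate important aspects of the data” behavior described above.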

However, the human mind is not without its faults, and the professor addressed human insufficiencies in decision-making. The quality of human choices varies widely from case to case, some arguably worse than the ones existing AI would make. Humans are extremely efficient problem solvers, but mistakes arise when assumptions, namely biases the brain derives from the given conditions, are applied in the wrong situations. The professor reiterated, “Generally, recycling the policies is useful. But when applying them to a situation that may be unfamiliar, you are using a sensible policy, but it just happens to not work in the context of what the situation requires.”

A key difference between humans and artificial systems lies in the choice between optimization and efficiency. To illustrate, when choosing the best lunch option, humans forgo weighing every single option for the sake of time and money. People devote resources in proportion to the importance of the issue, even when that may not be the most optimal approach. Researchers, on the other hand, design AI with a specific, narrow problem in mind and optimize it to solve that problem in the most efficient way possible; the resulting algorithm may be optimal, but it takes a long time to train. Thus, though human systems may be considered “sub-optimal”, they are extremely efficient at allocating resources according to levels of importance. Humans also adopt policies that compensate for the “late noise” in their sensory inputs in order to maximize the accuracy of their selections.

Professor Summerfield ended the seminar with what he hoped aspiring KAIST students could take away from the study: he encouraged young researchers to sketch the big picture when exploring the inner workings of the human brain. He emphasized approaching the problem from different mechanisms of information processing and research streams, and affirmed, “In order to truly understand how the brain works, you need [to] build a model ... and that model has to be quantitative in nature. To rebuild [the neural] system in a simulation, you really need to understand it and approach [it] from a mechanistic point of view.”

Professor Summerfield and his team’s research has been published in several prominent journals, including Neuron and Nature.

ⓒ KAIST Herald 2011 (http://herald.kaist.ac.kr)
All materials on this site are protected under the Korean Copyright Law and may not be reproduced, distributed, transmitted, displayed, published without the prior consent of KAIST Herald.

     
About Us | Privacy Policy | Rights and Permissions | Article Submission | RSS | Contact Us
The KAIST Herald, Undergraduate Library, KAIST, Daejeon, Republic of Korea
Publisher: Sung Mo Kang | Managing Editor: Jeounghoon Kim | Editor: Gyuri Bae
Copyright 2011 The KAIST Herald | All rights reserved | Mail to: heraldwebmaster@gmail.com