Researchers Develop Neural Network Chip for Smartphones

Researchers at MIT have developed an energy-efficient computer chip for running neural networks on mobile devices.


MIT associate professor of electrical engineering and computer science Vivienne Sze and colleagues have introduced a computer chip designed to run neural networks, the software systems behind many artificial intelligence (AI) applications, on mobile devices such as smartphones.

According to a paper that Sze's group is scheduled to present at the Computer Vision and Pattern Recognition Conference in late July 2017, the new methods they developed for paring down neural networks can reduce the networks' power consumption by up to 73 percent compared with standard implementations.

Neural networks are loosely modeled on the anatomy and operation of the human brain. They consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. Networks are differentiated by the number of their layers, the number of nodes in each layer, and the number of connections between the nodes.
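To make the layered structure concrete, here is a minimal sketch of a feedforward network in Python with NumPy. The layer sizes, random weights, and ReLU activation are illustrative choices, not details from the MIT work.

```python
import numpy as np

def relu(x):
    """A common node activation: pass positive values, zero out the rest."""
    return np.maximum(0.0, x)

# Three layers of nodes: 4 inputs -> 8 hidden nodes -> 2 outputs.
# Each matrix holds the weights of the connections between two layers.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]

def forward(x, weights):
    """Pass an input through each layer; every connection applies a weight."""
    for w in weights:
        x = relu(x @ w)
    return x

print(forward(rng.normal(size=4), weights))
```

Each node simply sums its weighted inputs and applies a nonlinearity; the network's behavior comes from how many layers, nodes, and connections it has and what values the weights take.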

To design more energy-efficient networks, Sze's team first developed an analytic method that calculates how much power a network will consume when it runs on a particular type of hardware. They then used the method to assess the new techniques they created for paring down neural networks so that the networks can run effectively on handheld and mobile devices.

According to Sze, the team developed an energy-modeling tool that accounts for data movement, transactions, and data flow.

“If you give it a network architecture and the value of its weights, it will tell you how much energy this neural network will take. One of the questions that people had is ‘Is it more energy efficient to have a shallow network and more weights or a deeper network with fewer weights?’ This tool gives us better intuition as to where the energy is going, so that an algorithm designer could have a better understanding and use this as feedback. The second thing we did is that, now that we know where the energy is actually going, we started to use this model to drive our design of energy-efficient neural networks,” she added.
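The sketch below suggests what such an energy model might compute, in the same NumPy setting as above. The per-operation cost constants are made-up placeholders, and counting only multiply-accumulates, weight fetches, and activation movement is a simplification; the real tool models data movement through a specific hardware memory hierarchy.

```python
import numpy as np

# Hypothetical energy costs in arbitrary units (placeholders, not measured).
E_MAC = 1.0      # one multiply-accumulate operation
E_WEIGHT = 6.0   # fetching one weight from memory (data movement dominates)
E_DATA = 2.0     # moving one activation value between layers

def estimate_energy(weights):
    """Estimate total energy for one forward pass of a dense network."""
    energy = 0.0
    for w in weights:
        fan_in, fan_out = w.shape
        energy += fan_in * fan_out * E_MAC        # computation
        energy += w.size * E_WEIGHT               # weight fetches
        energy += (fan_in + fan_out) * E_DATA     # input/output activations
    return energy

# The shallow-vs-deep question from the quote: which costs more energy?
rng = np.random.default_rng(0)
shallow = [rng.normal(size=(100, 10))]                          # 1000 weights
deep = [rng.normal(size=(100, 8)), rng.normal(size=(8, 10))]    # 880 weights
print("shallow:", estimate_energy(shallow), "deep:", estimate_energy(deep))
```

Given an architecture and its weights, a model like this returns an energy estimate that an algorithm designer can use as feedback, which is exactly the role Sze describes.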

Sze and colleagues Tien-Ju Yang and Yu-Hsin Chen, both graduate students in electrical engineering and computer science, also experimented with a technique called "pruning" to reduce the power consumption of neural networks. They focused on the low-weight connections between nodes, because these have very little effect on a network's final output.
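A minimal magnitude-pruning sketch follows: connections whose weights are smallest in absolute value are zeroed out, since they contribute least to the output. The threshold policy here (drop the smallest half) is an illustrative assumption, not the paper's procedure.

```python
import numpy as np

def prune_low_weights(w, fraction=0.5):
    """Zero the given fraction of connections with the smallest |weight|."""
    threshold = np.quantile(np.abs(w), fraction)
    return np.where(np.abs(w) < threshold, 0.0, w)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))
pruned = prune_low_weights(w)
print(f"nonzero before: {np.count_nonzero(w)}, after: {np.count_nonzero(pruned)}")
```

Every zeroed connection is one fewer multiplication and one fewer weight to fetch from memory, which is where the energy savings come from.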

To achieve the largest possible energy savings, they pruned the layers with the highest energy consumption first. They called this method "energy-aware pruning."
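The sketch below illustrates the ordering idea behind energy-aware pruning under stated assumptions: estimate each layer's energy with a placeholder cost model (one MAC and one weight fetch per remaining connection, like the hypothetical constants above), then prune the most energy-hungry layers first until a hypothetical energy budget is met. The budget and cost constants are illustrative, not the paper's.

```python
import numpy as np

def layer_energy(w, e_mac=1.0, e_weight=6.0):
    """Hypothetical per-layer cost: one MAC and one weight fetch per
    remaining (nonzero) connection."""
    return np.count_nonzero(w) * (e_mac + e_weight)

def energy_aware_prune(weights, energy_budget, fraction=0.5):
    """Prune the highest-energy layers first, stopping once the
    estimated total energy falls under the budget."""
    pruned = list(weights)
    order = sorted(range(len(pruned)),
                   key=lambda i: layer_energy(pruned[i]), reverse=True)
    for i in order:
        if sum(layer_energy(w) for w in pruned) <= energy_budget:
            break
        w = pruned[i]
        threshold = np.quantile(np.abs(w), fraction)
        pruned[i] = np.where(np.abs(w) < threshold, 0.0, w)
    return pruned

rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 64)), rng.normal(size=(64, 10))]
before = sum(layer_energy(w) for w in weights)
after = sum(layer_energy(w)
            for w in energy_aware_prune(weights, energy_budget=before * 0.5))
print("energy before:", before, "after:", after)
```

Pruning the biggest consumers first means each pruning step removes as much energy as possible, which is the intuition the name "energy-aware pruning" captures.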

To further improve efficiency, the team also experimented with the weights themselves and with the way the associated nodes process training data. The result is neural networks that are more efficient and have fewer connections than those produced by earlier pruning techniques.
