AI Generates High-Performing Neural Nets In A Matter of Hours

Deep learning is one of the foundations of artificial intelligence (AI), and demand for it will only increase as more industries realize the benefits of a technology that has the potential to give our devices human-like pattern-matching capabilities.


A US research team at the Department of Energy's Oak Ridge National Laboratory (ORNL) has developed an algorithm called Multinode Evolutionary Neural Networks for Deep Learning (MENNDL), which the team claims can create custom AI deep neural networks in less than 24 hours.

The idea behind the research was to speed up the development of AI neural network systems that, unlike more typical speech-recognition or game-playing data-processing systems, attempt to mimic the activity in layers of neurons in the neocortex, the part of the brain that handles functions such as sensory perception, conscious thought, and language processing.

Using Titan, a Cray XK7 and the most powerful supercomputer in the U.S. for open science, ORNL developers were able to run MENNDL as a tool for testing and training thousands of potential neural networks simultaneously on unique science problems, thanks to the machine's 18,600+ graphics processing units (GPUs), hardware that excels at the massive matrix multiplications at the heart of deep learning. Better yet, the research showed that these auto-generated networks could be produced in a matter of hours, as opposed to the months it would take top data scientists to develop the AI software by hand.
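To see why GPUs fit this workload, consider that the forward pass of a neural network's dense layer boils down to a single matrix multiplication plus a bias, exactly the operation GPUs execute in bulk. Here is a minimal NumPy sketch; the shapes are arbitrary and chosen purely for illustration:

```python
import numpy as np

# A dense layer's forward pass is one matrix multiplication plus a bias:
# the operation GPUs accelerate. Shapes here are illustrative only.
batch = np.random.rand(256, 784)    # 256 inputs, 784 features each
weights = np.random.rand(784, 128)  # a layer with 128 units
bias = np.random.rand(128)

activations = np.maximum(batch @ weights + bias, 0.0)  # ReLU(xW + b)
print(activations.shape)  # (256, 128)
```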

According to ORNL's press release, MENNDL works by choosing the best model and set of hyperparameters for a particular dataset. The process is evolutionary: it eliminates poorly performing neural networks, then refines the higher-performing ones through computationally intensive search, repeating until an optimal network is found. Needless to say, the software eliminates the trial-and-error tuning traditionally performed by machine learning experts when they program these networks by hand.
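The press release does not publish MENNDL's internals, but the evolutionary loop it describes, scoring a population of candidate networks, discarding the weak ones, and mutating the survivors, can be sketched in a few lines of Python. Everything below is illustrative: the hyperparameter space and the toy fitness function are invented for this sketch, whereas in MENNDL a candidate's fitness is its actual validation performance, evaluated in parallel across thousands of GPUs.

```python
import random

# Hypothetical hyperparameter space -- the names and ranges are
# illustrative, not MENNDL's actual search space.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4, 5],
    "filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_network():
    """Sample one candidate network configuration."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(config):
    """Stand-in for the expensive step: in MENNDL, each candidate is
    trained on a GPU and scored by validation accuracy. This toy score
    just lets the sketch run without any training."""
    return (config["num_layers"] * config["filters"]
            / (config["kernel_size"] * 100)) + random.random()

def mutate(config):
    """Randomly perturb one hyperparameter of a surviving network."""
    child = dict(config)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evolve(pop_size=20, generations=10, survivors=5):
    population = [random_network() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank candidates and discard the poor performers...
        ranked = sorted(population, key=fitness, reverse=True)[:survivors]
        # ...then refill the population by mutating the best ones.
        population = ranked + [mutate(random.choice(ranked))
                               for _ in range(pop_size - survivors)]
    return max(population, key=fitness)

print(evolve())
```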

“There’s no clear set of instructions scientists can follow to tweak networks to work for their problem,” said research scientist Steven Young, a member of the ORNL team. “With MENNDL, they no longer have to worry about designing a network. Instead, the algorithm can quickly do that for them….”

One of MENNDL's first attempts at auto-generating networks involved the Department of Energy's Fermi National Accelerator Laboratory (Fermilab). Neutrinos are high-energy, ghost-like subatomic particles, and the only known particles that could account for part of the universe's dark matter; studying them could play a significant role in explaining the formation of the early universe and of normal matter.

To better understand neutrino behavior, Fermilab researchers running a high-intensity detector experiment wanted an AI system that could analyze and classify the detector data produced when neutrinos interact with ordinary matter. After only 24 hours, during which MENNDL processed 800,000 images of neutrino events and evaluated 500,000 neural networks, its final solution proved superior to custom, handcrafted AI systems.
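For context, the candidates MENNDL breeds for a task like this are convolutional image classifiers. The following is a hypothetical example of one such candidate in PyTorch; the layer sizes, single input channel, and five-way output are placeholders, not Fermilab's actual configuration.

```python
import torch
import torch.nn as nn

# A hypothetical candidate of the kind an evolutionary search would
# score: a small convolutional classifier for single-channel detector
# images. All sizes are placeholders for illustration.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 5),  # assumes 64x64 input images
)

images = torch.randn(8, 1, 64, 64)  # a batch of 8 fake detector images
logits = model(images)
print(logits.shape)  # torch.Size([8, 5])
```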

In another case, involving a human-designed algorithm for identifying mitochondria, the cell's "powerhouse", inside 3D electron microscopy images of brain tissue, SingularityHub noted that MENNDL proved very efficient, reducing the error rate by 30 percent.

“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young explains.

The team at ORNL is looking forward to the lab's next supercomputer, Summit, scheduled to come online sometime this year. The new system, which will pair more than 27,000 of Nvidia's (NASDAQ:NVDA) newest Volta GPUs with more than 9,000 IBM Power9 CPUs, is expected to deliver more than five times the performance of its predecessor. That could bring exascale-level performance to deep learning problems, allowing researchers to get a handle on some of science's most persistent mysteries.

“That means we’ll be able to evaluate larger networks much faster and evolve many more generations of networks in less time,” Young said.

References: SingularityHub, OLCF
