According to Doug Burger, a distinguished engineer at Microsoft, the system was designed for real-time artificial intelligence (AI) in the cloud, with remarkable processing speed.
Built on Intel’s new Stratix 10 field programmable gate array (FPGA) chip, the system can sustain 39.5 teraflops on machine learning tasks with less than 1 millisecond of latency. This means it can handle complex AI tasks as soon as it receives them.
Microsoft sees real-time AI as increasingly important because cloud infrastructures are processing ever more live data streams. Users need faster cloud services, and real-time AI can provide the answer.
The newly unveiled Project Brainwave system is designed with three major layers:
- A highly-efficient, distributed system architecture;
- A hardware deep neural network (DNN) engine synthesized onto FPGAs;
- A runtime and compiler intended for low-friction deployment of trained models.
According to the Microsoft Research Blog, the system was designed to show high actual performance across a broad range of complex models without batching requests. Brainwave can handle complex, memory-intensive models such as Long Short-Term Memory networks (LSTMs) and Gated Recurrent Units (GRUs).
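Models like GRUs are memory-intensive because their full set of weight matrices must be applied at every timestep of a sequence, which is part of what makes serving them with low latency and no batching hard. As a rough illustration only (not Microsoft's implementation), here is a single GRU cell written with the standard textbook equations; the sizes and parameter names are made up for the sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU timestep: gates decide how much old state to keep."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
    return (1.0 - z) * h + z * h_tilde             # blended new state

# Toy sizes; real speech/language models use thousands of hidden units,
# so these matrices dominate memory traffic at every single timestep.
d_in, d_hid = 4, 8
rng = np.random.default_rng(0)
shapes = [(d_hid, d_in), (d_hid, d_hid), (d_hid,)] * 3
params = [rng.standard_normal(s) * 0.1 for s in shapes]

h = np.zeros(d_hid)
for t in range(5):                      # run the cell over a short sequence
    h = gru_cell(rng.standard_normal(d_in), h, params)
```

Every step reuses all nine parameter arrays, so keeping the weights pinned close to the compute units, as an FPGA design can, pays off directly in latency.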
Deep learning, built on deep neural networks, has made AI smarter. By imitating the layered activity of neurons in the neocortex, a machine is able to genuinely acquire knowledge. This has driven impressive advances in image recognition, voice recognition, language processing, and machine translation.
The amazing thing about neural networks is that they start with a learning algorithm fed to a computer. The network is then trained through exposure to terabytes of data, and from there it learns on its own: it processes, analyzes, and arrives at its own conclusions.
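The idea of learning a rule from data rather than programming it in can be shown with something far simpler than a deep network. In this toy sketch (my own illustration, not from the article), the program is never told the rule y = 3x + 1; it recovers the two numbers purely from noisy examples via gradient descent:

```python
import numpy as np

# Generate noisy examples of a hidden rule: y = 3x + 1 plus small noise.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + 1.0 + rng.normal(0, 0.05, size=200)

# Start with no knowledge of the rule and learn it from the data.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    err = (w * X + b) - y               # prediction error on all examples
    w -= lr * 2 * (err * X).mean()      # gradient of mean squared error w.r.t. w
    b -= lr * 2 * err.mean()            # gradient of mean squared error w.r.t. b
```

After training, w is close to 3 and b close to 1: the same learn-from-examples loop, scaled up to millions of parameters and terabytes of data, is what deep networks do.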
Yet it may surprise you that the concept of neural networks goes way back to the 1950s, followed by breakthroughs in key algorithms in the 1980s and 1990s. We have been able to advance further today because we have learned to exploit the vast computational power and gargantuan stores of data that developing neural networks requires.
With Project Brainwave, Microsoft’s approach differs slightly from Google’s AI hardware. Google’s chip is built around a specific set of algorithms, while Microsoft’s FPGAs remain reprogrammable after manufacturing. This flexibility matters, as Microsoft sees it, because deep-learning advances now arrive at a monthly or even weekly pace.
“We wanted to build something bigger, more disruptive and more general than a point solution,” Burger related in an interview at the Hot Chips conference, where the company held Project Brainwave’s unveiling.
Another advantage Microsoft claims over other AI hardware is support for multiple deep-learning frameworks, including Facebook’s Caffe2, Google’s TensorFlow, and Microsoft’s own CNTK.
Project Brainwave will soon be available through Microsoft’s Azure cloud services, offering companies opportunities to use live AI. Microsoft is optimistic that, despite tough competition, many will find Project Brainwave practical, since customers themselves will control it to suit their companies’ needs.