Microprocessor giant Intel has launched its Movidius Neural Compute Stick, a USB-based deep learning inference kit and self-contained Artificial Intelligence (AI) accelerator. The technology is intended to provide dedicated deep neural network processing capabilities to a range of Internet of Things (IoT) host devices at the ‘edge’.
Truly, deeply, neural
Deep learning (also known as hierarchical learning) is the application of AI and neural networks (which aim to mimic the function of the human brain) to learning tasks, using networks built from more than one layer. While ‘basic’ or traditional machine learning algorithms are linear in nature, deep learning algorithms are differentiated by the fact that their layers are stacked in a hierarchical structure of increasing complexity and abstraction.
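To make that distinction concrete, here is a minimal NumPy sketch (not Intel code; all names and layer sizes are illustrative) contrasting a single linear model with a small stacked network, where each layer adds a non-linearity so later layers can build on earlier abstractions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 samples, 3 input features

# A traditional linear model: one weight matrix, one pass.
w_linear = rng.normal(size=(3, 2))
linear_out = x @ w_linear            # shape (4, 2)

# A 'deep' model: layers stacked in sequence.
def relu(z):
    return np.maximum(z, 0.0)        # non-linearity between layers

w1 = rng.normal(size=(3, 8))         # layer 1: raw features -> hidden
w2 = rng.normal(size=(8, 8))         # layer 2: hidden -> hidden
w3 = rng.normal(size=(8, 2))         # layer 3: hidden -> output
deep_out = relu(relu(x @ w1) @ w2) @ w3   # shape (4, 2)

print(linear_out.shape, deep_out.shape)
```

Both models map the same inputs to the same output shape; the difference is that the stacked version composes several transformations, which is what makes it ‘deep’.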
Intel’s move to put smarter brain power out there with this new piece of embedded, software-rich hardware is aimed at software application development professionals looking to push ever smarter predictive analytics out to the edge of the IoT.
The company calls this space the AI-centric digital economy. Spin and marketing subterfuge aside, this technology will work in two key streams:
- At the data level — this ‘chip’ brain will help train artificial neural networks on the Intel Nervana cloud to optimize workloads.
- At the device implementation level — this ‘chip’ brain will power automated driving and take AI to the edge with Movidius vision processing unit (VPU) technology.
“The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks (DNN) directly from the device,” said Remi El-Ouazzane, vice president and general manager of Movidius, an Intel company. “This enables a wide range of AI applications to be deployed offline.”
Read more: Facebook AI project halted after bots invent new language
Two stages of intelligence
Machine intelligence development is fundamentally composed of two stages:
(1) Training an algorithm on large sets of sample data via modern machine learning techniques;
(2) Running the trained algorithm in an end-application that needs to interpret real-world data.
This second stage is referred to as inference. Intel says that performing inference at the IoT edge – or natively inside the device – brings numerous benefits in terms of latency, power consumption and privacy.
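The two stages can be sketched in a few lines of NumPy (purely illustrative; this is not part of any Intel SDK). Stage one fits weights to labelled sample data; stage two runs the frozen model on unseen data — and on a device like the Compute Stick, only this second stage executes at the edge:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: training — fit weights to labelled sample data.
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 1.0])
y = X @ true_w                              # noiseless synthetic labels
w = np.zeros(3)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # gradient of mean squared error
    w -= 0.1 * grad                         # gradient-descent step

# Stage 2: inference — the trained, frozen model interprets new data.
x_new = np.array([1.0, 1.0, 1.0])
prediction = x_new @ w
print(round(prediction, 3))
```

Training is the expensive, data-hungry loop; inference is a single cheap forward pass, which is why it is the stage that fits inside a 1W USB stick.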
With lower latency come faster on-device responses and more finely tuned performance (validation scripts here allow developers to compare the accuracy of the optimized on-device model against the original PC-based model). The Movidius Neural Compute Stick can also behave as a discrete neural network accelerator, adding dedicated deep learning inference capabilities to existing computing platforms.
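The accuracy check described above can be illustrated with a hedged sketch of the general technique (the specifics of Intel’s validation scripts are not public here, so the 8-bit quantization below is a stand-in assumption for whatever optimization the device-side model undergoes):

```python
import numpy as np

rng = np.random.default_rng(2)
w_full = rng.normal(size=(3, 2)).astype(np.float32)  # original PC-side weights

# Crude symmetric 8-bit quantization of the weights, standing in for
# the 'optimized' model that would run on the device.
scale = np.abs(w_full).max() / 127.0
w_quant = np.round(w_full / scale).astype(np.int8)
w_dequant = w_quant.astype(np.float32) * scale

# Run the same inputs through both models and compare outputs.
x = rng.normal(size=(10, 3)).astype(np.float32)
ref_out = x @ w_full          # reference (PC-based) output
dev_out = x @ w_dequant       # simulated on-device output

max_abs_err = float(np.abs(ref_out - dev_out).max())
print(max_abs_err)
```

If the maximum divergence stays within an acceptable tolerance, the optimized model is considered faithful to the original; otherwise the optimization is too aggressive for that workload.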
Read more: Artificial intelligence needed to make sense of IoT data