Technical Document
Specifications
Brand: Intel
Kit Name: Movidius Neural Compute Stick
Kit Classification: Development Board
Processor Part Number: Myriad-2
Processor Family Name: Myriad
Product details
Movidius Neural Compute Stick
The Movidius™ Neural Compute Stick allows deep neural network development without the need for expensive, power-hungry supercomputer hardware. Simply prototype and tune deep neural networks with the 100 GFLOPS of computing power provided by the stick; no cloud connection is required. The USB stick form factor makes for easy connection to a host PC, while the on-board Myriad-2 Vision Processing Unit (VPU) delivers the necessary computational performance. The Myriad-2 achieves high-efficiency parallel processing through its twelve Very Long Instruction Word (VLIW) processors: parallel scheduling decisions are made at program compile time, relieving the processors of this work at run time.
Features
Movidius 600MHz Myriad-2 SoC with 12 x 128-bit VLIW SHAVE vector processors
2MB of on-chip memory with 400GB/s internal bandwidth
Supports FP16, FP32 and integer operations with 8-, 16- and 32-bit precision
All data and power provided over a single USB 3.0 port on a host PC
Real-time, on-device inference without Cloud connectivity
Quickly deploy existing CNN models or uniquely trained networks
Multiple Movidius Sticks can be connected to the host PC via a suitable USB hub (see the sketch after this list)
Dimensions: 72.5 x 27 x 14mm
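As an illustration of the multi-stick feature above, the sketch below enumerates and opens every attached stick using the Neural Compute SDK's Python API (the mvnc module, NCSDK v1 naming); the exact class and function names are assumptions from that SDK generation and differ in later releases.

    # Minimal sketch, assuming the NCSDK v1 Python API ("mvnc");
    # names such as EnumerateDevices/OpenDevice changed in NCSDK v2.
    from mvnc import mvncapi as mvnc

    device_names = mvnc.EnumerateDevices()      # one entry per attached stick
    if not device_names:
        raise SystemExit("No Movidius devices found")

    devices = []
    for name in device_names:
        dev = mvnc.Device(name)
        dev.OpenDevice()                        # claim the stick over USB
        devices.append(dev)

    print("Opened %d Movidius stick(s)" % len(devices))

    for dev in devices:
        dev.CloseDevice()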
Compile
Automatically convert a trained Caffe-based Convolutional Neural Network (CNN) into an embedded neural network optimized for the on-board Myriad-2 VPU. The SDK also supports TensorFlow.
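For illustration, a typical compile step might look like the sketch below, which calls the SDK's mvNCCompile command-line tool from Python; the tool name, flags and file names are assumptions based on the NCSDK and may differ between SDK releases.

    # Minimal sketch, assuming the NCSDK "mvNCCompile" tool is on the PATH;
    # deploy.prototxt and weights.caffemodel are placeholder file names.
    import subprocess

    subprocess.run(
        [
            "mvNCCompile", "deploy.prototxt",
            "-w", "weights.caffemodel",   # trained Caffe weights
            "-s", "12",                   # use all 12 SHAVE vector processors
            "-o", "graph",                # compiled graph file loaded onto the stick
        ],
        check=True,
    )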
Tune
Layer-by-layer performance metrics for both industry-standard and custom-designed neural networks enable effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimized model on the device to the original PC-based model.
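As a rough illustration of what such a validation step checks, the hypothetical helper below compares the FP16 output read back from the stick with the FP32 output of the original PC-based model; the function and its metrics are illustrative only and are not part of the SDK.

    # Hypothetical validation helper (not an SDK function): compares the
    # device result against the reference result from the PC-based model.
    import numpy as np

    def compare_outputs(device_out, reference_out, top_k=5):
        dev_top = np.argsort(device_out)[::-1][:top_k]      # device top-k classes
        ref_top = np.argsort(reference_out)[::-1][:top_k]   # reference top-k classes
        overlap = len(set(dev_top) & set(ref_top)) / float(top_k)
        max_err = float(np.max(np.abs(device_out.astype(np.float32) - reference_out)))
        return overlap, max_err   # e.g. require full top-k overlap and a small max_err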
Accelerate
The Movidius Stick can act as a discrete neural network accelerator, adding dedicated deep learning inference capability to an existing computing platform for improved performance and power efficiency.
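A minimal inference round trip on the stick, again assuming the NCSDK v1 Python API and a graph file produced by the compile step above, might look like this; exact API names differ in NCSDK v2.

    # Minimal inference sketch, assuming the NCSDK v1 Python API ("mvnc")
    # and a compiled "graph" file; the input image here is a placeholder.
    import numpy as np
    from mvnc import mvncapi as mvnc

    device = mvnc.Device(mvnc.EnumerateDevices()[0])
    device.OpenDevice()

    with open("graph", "rb") as f:               # graph produced by mvNCCompile
        graph = device.AllocateGraph(f.read())

    image = np.random.rand(224, 224, 3).astype(np.float16)  # placeholder input
    graph.LoadTensor(image, "user object")       # queue the input tensor
    output, _ = graph.GetResult()                # blocking read of the result
    print("Top class:", int(np.argmax(output)))

    graph.DeallocateGraph()
    device.CloseDevice()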
Where can you use me?
Smart home and consumer robotics
Surveillance and security industry
Retail industry
Healthcare