Neural networks are inspired by biological systems, in particular the human brain. Through the combination of powerful computing resources and novel architectures for neurons, neural networks have achieved state-of-the-art results in domains such as computer vision and machine translation. FPGAs are a natural choice for implementing neural networks because they can handle different algorithms using the computing, logic, and memory resources of the same device. They can also deliver faster performance than competing implementations because the user can hardcode operations into the hardware. Software developers can use the OpenCL™ C-level programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design.
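To give a sense of what that C-level programming model looks like, the sketch below shows a hypothetical OpenCL C kernel for a fully connected neural-network layer; with an offline compiler such as the Intel FPGA SDK for OpenCL, a kernel like this is compiled into FPGA hardware rather than written in RTL. Kernel and parameter names (dense_layer, in_dim, and so on) are illustrative assumptions, not taken from any Intel design.

```c
// Minimal sketch, assuming a hypothetical fully connected (dense) layer with
// in_dim inputs and one work-item per output neuron. Names are illustrative.
__kernel void dense_layer(__global const float *restrict input,    // in_dim activations
                          __global const float *restrict weights,  // out_dim x in_dim matrix
                          __global const float *restrict bias,     // out_dim biases
                          __global float *restrict output,         // out_dim activations
                          const int in_dim)
{
    int neuron = get_global_id(0);          // one work-item per output neuron
    float acc = bias[neuron];
    for (int i = 0; i < in_dim; i++)        // dot product of one weight row and the input
        acc += weights[neuron * in_dim + i] * input[i];
    output[neuron] = fmax(acc, 0.0f);       // ReLU activation
}
```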
More on Efficient Implementation of Neural Network Systems Built on FPGAs
CNN Implementation Using an FPGA and OpenCL™ Device
This is a power-efficient machine learning demo of the AlexNet convolutional neural network (CNN) topology on Intel® FPGAs.
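The core operation in a CNN topology such as AlexNet is the 2D convolution. The sketch below shows a simplified, naive OpenCL C kernel for a single convolution layer; it assumes unit stride, no padding, and illustrative names, and is not the optimized design used in the demo.

```c
// Simplified sketch of one CNN convolution layer in OpenCL C, assuming unit
// stride and no padding. All names are illustrative assumptions; a real FPGA
// design would use a far more heavily optimized, pipelined structure.
__kernel void conv_layer(__global const float *restrict in_maps,   // C_in x H x W inputs
                         __global const float *restrict filters,   // C_out x C_in x K x K weights
                         __global float *restrict out_maps,        // C_out x H_out x W_out outputs
                         const int C_in, const int H, const int W, const int K)
{
    int oc = get_global_id(2);              // output channel
    int oy = get_global_id(1);              // output row
    int ox = get_global_id(0);              // output column
    int H_out = H - K + 1, W_out = W - K + 1;

    float acc = 0.0f;
    for (int ic = 0; ic < C_in; ic++)       // accumulate over input channels
        for (int ky = 0; ky < K; ky++)      // and over the K x K filter window
            for (int kx = 0; kx < K; kx++)
                acc += in_maps[(ic * H + oy + ky) * W + ox + kx] *
                       filters[((oc * C_in + ic) * K + ky) * K + kx];

    out_maps[(oc * H_out + oy) * W_out + ox] = fmax(acc, 0.0f);   // ReLU
}
```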
These HPC applications greatly benefit from machine learning implementations on an FPGA:
For more details on hardware and software application packages for Machine Learning, go to the Machine Learning page.
For more information on how you can use FPGAs to accelerate your machine learning application, contact your local sales representative.
Learn how to leverage these application solutions to help meet your design challenges.
Learn how these powerful devices can be customized to accelerate key workloads and enable design engineers to adapt to emerging standards or changing requirements.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.