The OpenVINO™ toolkit offers software developers a single toolkit to accelerate their solutions across multiple hardware platforms, including FPGAs. Intel® FPGAs offer fast time-to-market, scalable, and customizable solutions.
The OpenVINO™ toolkit with the Deep Learning Deployment Toolkit (DLDT) enables you to go from a trained Caffe* or TensorFlow* model to deployment on hardware with no FPGA experience required.
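The DLDT workflow has two steps: the Model Optimizer converts the trained model into an Intermediate Representation (IR), and the Inference Engine runs that IR on the target device. A minimal sketch of this flow (the model name, file paths, and sample application are hypothetical; `mo.py`, the `-d` device flag, and the `HETERO` plugin are part of the OpenVINO™ toolkit):

```shell
# 1. Convert a trained Caffe model to Intermediate Representation (IR).
#    FP16 is commonly used when targeting FPGA bitstreams.
python3 mo.py --input_model squeezenet1.1.caffemodel --data_type FP16

# 2. Run inference on the FPGA, with CPU fallback for any layers the
#    loaded FPGA bitstream does not support (heterogeneous execution).
./classification_sample -m squeezenet1.1.xml -i image.jpg -d HETERO:FPGA,CPU
```

Because the same IR runs on CPU, GPU, VPU, or FPGA, changing the `-d` target is typically the only code change needed to move between devices.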
We have developed Intel® FPGA Deep Learning Acceleration Suite (Intel® FPGA DL Acceleration Suite) to scale to a wide array of networks.
The unique architecture of FPGAs provides the flexibility needed to support custom primitives.
The IEI Mustang-F100-A10 is an Intel® Vision Accelerator Design with Intel® Arria® 10 GX FPGA: a small-form-factor PCI Express* (PCIe*) acceleration card that is supported natively by the OpenVINO™ toolkit to deliver low-latency video inference for edge and cloud deployment. It is designed to add acceleration capabilities to PCIe* host platforms, and it has been validated on the IEI TANK-870AI compact IPC for deployments with space and power constraints.
Intel® FPGA-based acceleration platforms include PCIe*-based programmable acceleration cards, socket-based server platforms with integrated FPGAs, and other platforms supported by the Acceleration Stack for Intel® Xeon® CPU with FPGAs. Intel platforms are qualified and validated with several leading original equipment manufacturer (OEM) server providers to support large-scale FPGA deployment.
This video demonstrates how an Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) and OpenVINO™ toolkit can be used to accelerate vision inference where people are detected, identified, and tracked across multiple camera feeds. The design also contains a cryptographic block inside the Intel® FPGA that allows the user to encrypt the trained model before use to protect intellectual property.
Find technical documentation, videos, and training courses for Intel's AI solutions.