OpenVINO
Toolkit for deploying neural network inference on Intel hardware
OpenVINO is an open-source software toolkit for optimizing and deploying deep learning models. It enables programmers to develop scalable and efficient AI solutions with relatively few lines of code. It supports several popular model formats[2] and categories, such as large language models, computer vision, and generative AI.
Actively developed by Intel, it prioritizes high-performance inference on Intel hardware but also supports ARM/ARM64 processors[2] and encourages contributors to add new devices to the portfolio.
Written in C++, it offers C/C++, Python, and Node.js (early preview) APIs.
OpenVINO is cross-platform and free to use under the Apache License 2.0.[3]