diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 23a7bfac..a68a1a5e 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,7 +1,7 @@ -# DL streamer Contributor Guide +# DL Streamer Contributor Guide -The following are guidelines for contributing to the Dlstreamer project, including the code of conduct, submitting issues, and contributing code. +The following are guidelines for contributing to the DL Streamer project, including the code of conduct, submitting issues, and contributing code. # Table of Contents @@ -15,15 +15,15 @@ The following are guidelines for contributing to the Dlstreamer project, includi # Code of Conduct -This project and everyone participating in it are governed by the [`CODE_OF_CONDUCT`](CODE_OF_CONDUCT.md) document. By participating, you are expected to adhere to this code. +This project and everyone participating in it are governed by the [`CODE_OF_CONDUCT`](CODE_OF_CONDUCT.md) document. By participating, you are expected to adhere to this code. -# Security +# Security -Read the [`Security Policy`](SECURITY.md). +Read the [`Security Policy`](SECURITY.md). # Get Started -Clone the repository and follow the [`README`](README.md) to get started with the sample applications of interest. +Clone the repository and follow the [`README`](README.md) to get started with the sample applications of interest. ``` git clone https://github.com/open-edge-platform/dlstreamer.git ``` @@ -34,18 +34,18 @@ Clone the repository and follow the [`README`](README.md) to get started with th ## Contribute Code Changes -> If you want to help improve Dlstreamer, choose one of the issues reported in [`GitHub Issues`](issues) and create a [`Pull Request`](pulls) to address it. +> If you want to help improve DL Streamer, choose one of the issues reported in [`GitHub Issues`](issues) and create a [`Pull Request`](pulls) to address it. > Note: Please check that the change hasn't been implemented before you start working on it. ## Improve Documentation The easiest way to help with the `Developer Guide` and `User Guide` is to review it and provide feedback on the -existing articles. Whether you notice a mistake, see the possibility of improving the text, or think more +existing articles. Whether you notice a mistake, see the possibility of improving the text, or think more information should be added, you can reach out to discuss the potential changes. ## Report Bugs -If you encounter a bug, open an issue in [`Github Issues`](issues). Provide the following information to help us +If you encounter a bug, open an issue in [`GitHub Issues`](issues). Provide the following information to help us understand and resolve the issue quickly: - A clear and descriptive title @@ -59,7 +59,7 @@ understand and resolve the issue quickly: Intel welcomes suggestions for new features and improvements. Follow these steps to make a suggestion: -- Check if there's already a similar suggestion in [`Github Issues`](issues). +- Check if there's already a similar suggestion in [`GitHub Issues`](issues). - If not, open a new issue and provide the following information: - A clear and descriptive title - A detailed description of the enhancement @@ -75,7 +75,7 @@ Before submitting a pull request, ensure you follow these guidelines: - Test your changes thoroughly. - Document your changes (in code, readme, etc.). - Submit your pull request, detailing the changes and linking to any relevant issues. -- Wait for a review. Intel will review your pull request as soon as possible and provide you with feedback. +- Wait for a review. 
Intel will review your pull request as soon as possible and provide you with feedback. You can expect a merge once your changes are validated with automatic tests and approved by maintainers. # Development Guidelines diff --git a/README.md b/README.md index 0da12771..21fb12c0 100644 --- a/README.md +++ b/README.md @@ -27,10 +27,10 @@ Please refer to [Install Guide](./docs/source/get_started/install/install_guide_ 3. [Compile from source code](./docs/source/dev_guide/advanced_install/advanced_install_guide_compilation.md) 4. [Build Docker image from source code](./docs/source/dev_guide/advanced_install/advanced_build_docker_image.md) -To see the full list of installed components check the [dockerfile content for Ubuntu24](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/docker/ubuntu/ubuntu24.Dockerfile) +To see the full list of installed components, check the [dockerfile content for Ubuntu24](https://github.com/open-edge-platform/dlstreamer/blob/master/docker/ubuntu/ubuntu24.Dockerfile) ## Samples -[Samples](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples) available for C/C++ and Python programming, and as gst-launch command lines and scripts. +[Samples](https://github.com/open-edge-platform/dlstreamer/tree/master/samples) are available for C/C++ and Python programming, and as gst-launch command lines and scripts. ## NN models DL Streamer supports NN models in OpenVINO™ IR and ONNX* formats. diff --git a/RELEASE_NOTES.md b/RELEASE_NOTES.md index fee0a5f9..d600ec44 100644 --- a/RELEASE_NOTES.md +++ b/RELEASE_NOTES.md @@ -27,7 +27,7 @@ The complete solution leverages: | [gvamotiondetect](./docs/source/elements/gvamotiondetect.md) | Performs lightweight motion detection on NV12 video frames and emits motion regions of interest (ROIs) as analytics metadata. | | [gvapython](./docs/source/elements/gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. | | [gvarealsense](./docs/source/elements/gvarealsense.md) | Provides integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. | - | [gvatrack](./docs/source/elements/gvatrack.md) | Performs object tracking using zero-term, or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. | + | [gvatrack](./docs/source/elements/gvatrack.md) | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. | | [gvawatermark](./docs/source/elements/gvawatermark.md) | Overlays the metadata on the video frame to visualize the inference results. | For the details on supported platforms, please refer to [System Requirements](./docs/source/get_started/system_requirements.md). @@ -236,7 +236,7 @@ For installing Pipeline Framework with the prebuilt binaries or Docker\* or to b |[preview] Enabled Intel® Arc™ B-Series Graphics [products formerly Battlemage] | Validated with Ubuntu 24.04, 6.12.3-061203-generic and the latest Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver v24.52.32224.5 + the latest public Intel Graphics Media Driver version + pre-release Intel® Graphics Memory Management Library version | | OpenVINO 2024.6 support | Update to the latest version of OpenVINO | | Updated NPU driver | Updated NPU driver to 1.10.1 version. 
| -| Bug fixing | Running multiple gstreamer pipeline objects in the same process on dGPU leads to error; DLStreamer docker image build is failing (2024.2.2 and 2024.3.0 versions); Fixed installation scripts: minor fixes of GPU, NPU installation section; Updated documentation: cleanup, added missed parts, added DLS system requirements | +| Bug fixing | Running multiple gstreamer pipeline objects in the same process on dGPU leads to error; DL Streamer docker image build is failing (2024.2.2 and 2024.3.0 versions); Fixed installation scripts: minor fixes of GPU, NPU installation section; Updated documentation: cleanup, added missed parts, added DLS system requirements | # Deep Learning Streamer (DL Streamer) Pipeline Framework Release 2024.3.0 @@ -322,7 +322,7 @@ For installing Pipeline Framework with the prebuilt binaries or Docker\* or to b |----------------|------------------------| | [#425](https://github.com/dlstreamer/dlstreamer/issues/425) | when using inference-region=roi-list vs full-frame in my classification pipeline, classification data does not get published | | [#432](https://github.com/dlstreamer/dlstreamer/issues/432) | Installation issues with gst-ugly plugins | -| [#397](https://github.com/dlstreamer/dlstreamer/issues/397) | Installation Error DLStreamer - Both Debian Packages and Compile from Sources | +| [#397](https://github.com/dlstreamer/dlstreamer/issues/397) | Installation Error DL Streamer - Both Debian Packages and Compile from Sources | | Internal findings | custom efficientnetb0 fix, issue with selection region before inference, Geti classification model fix, dGPU vah264enc element not found error fix, sample: face_detection_and_classifiation fix| ## Deep Learning Streamer (DL Streamer) Pipeline Framework Release 2024.1.1 @@ -331,8 +331,8 @@ For installing Pipeline Framework with the prebuilt binaries or Docker\* or to b | **Title** | **High-level description** | |----------------|---------------------------------| -| Missing git package | Git package added to DLStreamer docker runtime image | -| VTune when running DLStreamer | Publish instructions to install and run VTune to analyze media + gpu when running DLStreamer | +| Missing git package | Git package added to DL Streamer docker runtime image | +| VTune when running DL Streamer | Publish instructions to install and run VTune to analyze media + gpu when running DL Streamer | | Update NPU drivers to version 1.5.0 | Update NPU driver version inside docker images| | Instance_segmentation sample | Add new Mask-RCNN segmentation sample | | Documentation updates | Enhance Performance Guide and Model Preparation section | @@ -355,7 +355,7 @@ For installing Pipeline Framework with the prebuilt binaries or Docker\* or to b | Update OpenVINO version to latest one (2024.2.0) | Update OpenVINO version to latest one (2024.2.0) | | Release docker images on DockerHUB: runtime and dev | Release docker images on DockerHUB: runtime and dev | | Bugs fixing | Bug fixed: GPU not detected in Docker container Dlstreamer - MTL platform; Updated docker images with proper GPU and NPU packages; yolo5 model failed with batch-size >1; Remove excessive ‘mbind failed:...’ warning logs | -| Documentation updates | Added sample applications for Mask-RCNN instance segmentation. Added list of supported models from Open Model Zoo and public repos. Added scripts to generate DLStreamer-consumable models from public repos. Document usage of ModelAPI properties in OpenVINO IR (model.xml) instead of creating custom model_proc files. 
Updated installation instructions for docker images. | +| Documentation updates | Added sample applications for Mask-RCNN instance segmentation. Added list of supported models from Open Model Zoo and public repos. Added scripts to generate DL Streamer-consumable models from public repos. Document usage of ModelAPI properties in OpenVINO IR (model.xml) instead of creating custom model_proc files. Updated installation instructions for docker images. | ### Fixed issues @@ -407,8 +407,8 @@ For installing Pipeline Framework with the prebuilt binaries or Docker\* or to b |----------------|------------------------|-----------------|-------------------------| | 390 | [How to install packages with sudo inside the docker container intel/dlstreamer:latest](https://github.com/dlstreamer/dlstreamer/issues/390) | start the container as mentioned above with root-user `(-u 0) docker run -it -u 0 --rm`... and then are able to update binaries | All | | 392 | [installation error dlstreamer with openvino 2023.2](https://github.com/dlstreamer/dlstreamer/issues/392) | 2024.0 version supports API 2.0 so I highly recommend to check it and in case if this problem is still valid please raise new issue | All | -| 393 | [Debian file location for DL streamer 2022.3](https://github.com/dlstreamer/dlstreamer/issues/393) | Error no longer occurring for user | All | -| 394 | [Custom YoloV5m Accuracy Drop in dlstreamer with model proc](https://github.com/dlstreamer/dlstreamer/issues/394) | Procedure to transform crowdhuman_yolov5m.pt model to the openvino version that can be used directly in DLstreamer with Yolo_v7 converter (no layer cutting required) * `git clone https://github.com/ultralytics/yolov5 * cd yolov5 * pip install -r requirements.txt openvino-dev * python export.py --weights crowdhuman_yolov5m.pt --include openvino` | All | +| 393 | [Debian file location for DL Streamer 2022.3](https://github.com/dlstreamer/dlstreamer/issues/393) | Error no longer occurring for user | All | +| 394 | [Custom YoloV5m Accuracy Drop in dlstreamer with model proc](https://github.com/dlstreamer/dlstreamer/issues/394) | Procedure to transform the crowdhuman_yolov5m.pt model to the OpenVINO version that can be used directly in DL Streamer with the Yolo_v7 converter (no layer cutting required): `git clone https://github.com/ultralytics/yolov5` * `cd yolov5` * `pip install -r requirements.txt openvino-dev` * `python export.py --weights crowdhuman_yolov5m.pt --include openvino` (written out as a sketch after the Samples section below) | All | | 396 | [Segfault when reuse same model with same model-instance-id.](https://github.com/dlstreamer/dlstreamer/issues/396) | 2024.0 version supports API 2.0 so I highly recommend to check it and in case if this problem is still valid please raise new issue | All | | 404 | [How to generate model proc file for yolov8?](https://github.com/dlstreamer/dlstreamer/issues/404) | Added as a feature in this release | All | | 406 | [yolox support](https://github.com/dlstreamer/dlstreamer/issues/406) | Added as a feature in this release | All | @@ -445,7 +445,7 @@ For more detailed instructions please refer to [DL Streamer Pipeline Framework i ## Samples -The [samples](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples) folder in DL Streamer Pipeline Framework repository contains command line, C++ and Python examples. +The [samples](https://github.com/open-edge-platform/dlstreamer/tree/master/samples) folder in the DL Streamer Pipeline Framework repository contains command line, C++ and Python examples. 
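The export procedure quoted in the issue #394 row above, written out as a runnable sequence. The commands are taken verbatim from that table row; nothing beyond them is assumed.

```bash
# Convert crowdhuman_yolov5m.pt to an OpenVINO IR usable directly in
# DL Streamer with the Yolo_v7 converter (no layer cutting required).
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt openvino-dev
python export.py --weights crowdhuman_yolov5m.pt --include openvino
```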
## Legal Information diff --git a/docs/source/architecture_2.0/cpp_elements.md b/docs/source/architecture_2.0/cpp_elements.md index f1361d15..b7179956 100644 --- a/docs/source/architecture_2.0/cpp_elements.md +++ b/docs/source/architecture_2.0/cpp_elements.md @@ -24,9 +24,8 @@ interfaces: ![c++-interfaces-and-base-classes](../_images/c++-interfaces-and-base-classes.svg) -Many examples how to create C++ elements can be found on github -repository in [folder -src](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/src) +Many examples of how to create C++ elements can be found in the GitHub +repository in the [src folder](https://github.com/open-edge-platform/dlstreamer/tree/master/src) and sub-folders. ## Element description @@ -86,7 +85,7 @@ auto ffmpeg_source = create_source(ffmpeg_multi_source, {{"inputs", inputs}}, ff ``` See direct programming samples -[ffmpeg_openvino](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/ffmpeg_openvino) +[ffmpeg_openvino](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/ffmpeg_openvino) and -[ffmpeg_dpcpp](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/ffmpeg_dpcpp) +[ffmpeg_dpcpp](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/ffmpeg_dpcpp) for examples. diff --git a/docs/source/architecture_2.0/cpp_interfaces.md b/docs/source/architecture_2.0/cpp_interfaces.md index c85465a9..a3939c82 100644 --- a/docs/source/architecture_2.0/cpp_interfaces.md +++ b/docs/source/architecture_2.0/cpp_interfaces.md @@ -1,8 +1,8 @@ -# ① Memory Interop and C++ abstract interfaces +# Memory Interop and C++ abstract interfaces -Deep Learning Streamer provides independent sub-component for zero-copy +Deep Learning Streamer (DL Streamer) provides an independent sub-component for zero-copy buffer sharing and memory interop between various frameworks and memory -handles on CPU and GPU +handles on the CPU and GPU. - CPU memory `void*` - FFmpeg `AVFrame` @@ -17,10 +17,10 @@ handles on CPU and GPU The memory interop sub-component is available via APT installation `sudo apt install intel-dlstreamer-cpp` and on -[github](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/include/dlstreamer). +[GitHub](https://github.com/open-edge-platform/dlstreamer/tree/master/include/dlstreamer). -> **Note:** This sub-component implemented as C++ header-only library. Python -> bindings for this library coming in next releases. +> **Note:** This sub-component is implemented as a C++ header-only library. Python +> bindings for this library will come in future releases. ## Why memory interop library? @@ -97,14 +97,11 @@ pre-allocated native memory object to C++ constructor (wrap already allocated object) or passing allocation parameters to C++ constructor (allocate new memory). 
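The two usage models read naturally as two constructor overloads. The sketch below is a generic illustration of that pattern, not the actual DL Streamer API: `ExampleTensor` and its constructors are hypothetical stand-ins, and the real interfaces live in the headers under `include/dlstreamer` linked above.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a tensor type from the memory interop library.
// The real DL Streamer classes and constructors differ; see the headers
// under include/dlstreamer for the actual interfaces.
class ExampleTensor {
  public:
    // Usage model 1: wrap an already-allocated native memory object.
    // Zero-copy: the tensor only references the caller's buffer.
    ExampleTensor(std::uint8_t *data, std::size_t size)
        : data_(data), size_(size) {}

    // Usage model 2: pass allocation parameters and let the tensor
    // allocate new memory itself.
    explicit ExampleTensor(std::size_t size)
        : owned_(size), data_(owned_.data()), size_(size) {}

    std::uint8_t *data() { return data_; }
    std::size_t size() const { return size_; }

  private:
    std::vector<std::uint8_t> owned_; // stays empty in the wrapping case
    std::uint8_t *data_;
    std::size_t size_;
};

int main() {
    std::uint8_t frame[640 * 480] = {};          // pre-allocated native memory
    ExampleTensor wrapped(frame, sizeof(frame)); // wrap: no copy made
    ExampleTensor allocated(640u * 480u);        // allocate new memory
    return wrapped.size() == allocated.size() ? 0 : 1;
}
```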
-Many examples how to allocate memory and create and use memory mappers -can be found by searching word `mapper` in [samples -https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples\>]{.title-ref}\_\_ -and [src -https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/src\>]{.title-ref}\_\_ -folders on github source code, for example FFmpeg+DPCPP sample -[rgb_to_grayscale -https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/ffmpeg_dpcpp/rgb_to_grayscale\>]() +Many examples on how to allocate memory and create and use memory mappers +can be found by searching for the word `mapper` in the [samples](https://github.com/open-edge-platform/dlstreamer/tree/master/samples) +and [src](https://github.com/open-edge-platform/dlstreamer/tree/master/src) +folders of the GitHub source code, for example in the FFmpeg+DPCPP sample +[rgb_to_grayscale](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/ffmpeg_dpcpp/rgb_to_grayscale) and almost every C++ element. There is special mapper diff --git a/docs/source/architecture_2.0/pytorch_inference.md b/docs/source/architecture_2.0/pytorch_inference.md index 19c0cbcd..a802f7b0 100644 --- a/docs/source/architecture_2.0/pytorch_inference.md +++ b/docs/source/architecture_2.0/pytorch_inference.md @@ -1,19 +1,19 @@ # PyTorch tensor inference -DLStreamer supports PyTorch inference backend via the +Deep Learning Streamer (DL Streamer) supports a PyTorch inference backend via the `pytorch_tensor_inference` GStreamer element implemented in Python. -Element description can be found in -[Elements 2.0 reference](elements_list) . +The element description can be found in [Elements 2.0 reference](elements_list). ## Prior to the first run Before using `pytorch_tensor_inference`, make sure that all of the following requirements are met. Visit -[Install Guide](../get_started/install/install_guide_ubuntu.md) for more information about installing DLStreamer. +[Install Guide](../get_started/install/install_guide_ubuntu.md) for more information about +installing DL Streamer. 1. `intel-dlstreamer-gst-python3-plugin-loader` and - `intel-dlstreamer-gst-python3` packages are installed. If not, add - DLStreamer apt repository and install the following packages: + `intel-dlstreamer-gst-python3` packages are installed. If not, add the DL Streamer apt + repository and install the following packages: ```bash apt-get update @@ -28,7 +28,7 @@ following requirements are met. Visit python3 -m pip install -r requirements.txt ``` -3. DLStreamer environment has been configured. If not: +3. DL Streamer environment has been configured. If not: ```bash source /opt/intel/dlstreamer/setupvars.sh ``` @@ -70,7 +70,7 @@ model. To obtain the size of the output tensors during caps negotiations phase, inference is performed on an random tensor, the size of which will be set in accordance with the capabilities. -## DLStreamer pipelines with pytorch_tensor_inference +## DL Streamer pipelines with pytorch_tensor_inference Below is an example using the `pytorch_tensor_inference` element in pipeline to classify objects. 
This example uses the `resnet50` model diff --git a/docs/source/dev_guide/advanced_install/advanced_build_docker_image.md b/docs/source/dev_guide/advanced_install/advanced_build_docker_image.md index ea8d0bf4..0118c736 100644 --- a/docs/source/dev_guide/advanced_install/advanced_build_docker_image.md +++ b/docs/source/dev_guide/advanced_install/advanced_build_docker_image.md @@ -14,7 +14,7 @@ Follow the instructions in ## Step 2: Download Dockerfiles All Dockerfiles are in -[DLStreamer GitHub repository](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/docker). +the [DL Streamer GitHub repository](https://github.com/open-edge-platform/dlstreamer/tree/master/docker). Ubuntu24 debian/dev Dockerfile diff --git a/docs/source/dev_guide/advanced_install/advanced_install_guide_index.md b/docs/source/dev_guide/advanced_install/advanced_install_guide_index.md index 2496bf9c..a78c171e 100644 --- a/docs/source/dev_guide/advanced_install/advanced_install_guide_index.md +++ b/docs/source/dev_guide/advanced_install/advanced_install_guide_index.md @@ -12,7 +12,7 @@ - [Step 6: Build OpenCV](./advanced_install_guide_compilation.md#optional-step-6-install-openvino-genai-only-for-ubuntu) - [Step 7: Clone Intel® DL Streamer repository](./advanced_install_guide_compilation.md#step-7-build-deep-learning-streamer) - [Step 8: Install OpenVINO™ Toolkit](./advanced_install_guide_compilation.md#step-8-install-deep-learning-streamer-optional) - - [Step 9: Build Intel DLStreamer](./advanced_install_guide_compilation.md#step-9-set-up-environment) + - [Step 9: Build Intel DL Streamer](./advanced_install_guide_compilation.md#step-9-set-up-environment) - [Step 10: Set up environment](./advanced_install_guide_compilation.md#step-10-install-python-dependencies-optional) - [Ubuntu advanced installation - build Docker image](./advanced_build_docker_image.md) - [Step 1: Install prerequisites](./advanced_build_docker_image.md#step-1-install-prerequisites) diff --git a/docs/source/dev_guide/advanced_install/advanced_install_on_windows.md b/docs/source/dev_guide/advanced_install/advanced_install_on_windows.md index 3c3d5c27..1f370576 100644 --- a/docs/source/dev_guide/advanced_install/advanced_install_on_windows.md +++ b/docs/source/dev_guide/advanced_install/advanced_install_on_windows.md @@ -3,7 +3,7 @@ The instructions below are intended for building Deep Learning Streamer Pipeline Framework from the source code provided in -[Open Edge Platform repository](https://github.com/open-edge-platform/edge-ai-libraries.git). +[Open Edge Platform repository](https://github.com/open-edge-platform/dlstreamer.git). 
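The checkout that Step 1 below walks through boils down to cloning the repository linked above. A minimal sketch (the URL is the one in the link; the branch is left at the repository default):

```bash
# Fetch the DL Streamer sources referenced above.
git clone https://github.com/open-edge-platform/dlstreamer.git
cd dlstreamer
```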
## Step 1: Clone Deep Learning Streamer repository diff --git a/docs/source/dev_guide/converting_deepstream_to_dlstreamer.md b/docs/source/dev_guide/converting_deepstream_to_dlstreamer.md index 400fa972..66ba91ac 100644 --- a/docs/source/dev_guide/converting_deepstream_to_dlstreamer.md +++ b/docs/source/dev_guide/converting_deepstream_to_dlstreamer.md @@ -20,10 +20,9 @@ a working example is described at each step, to help understand the applied modi - [Video Processing Elements](#video-processing-elements) - [Metadata Elements](#metadata-elements) - [Multiple Input Streams](#multiple-input-streams) -- [DeepStream to DLStreamer Mapping](#deepstream-to-dlstreamer-mapping) - - [Element Mapping](#element-mapping-table) - - [Property Mapping](#property-mapping-table) - +- [DeepStream to DL Streamer Mapping](#deepstream-to-dl-streamer-mapping) + - [Element Mapping](#element-mapping) + - [Property Mapping](#property-mapping) ## Preparing Your Model @@ -37,7 +36,7 @@ a working example is described at each step, to help understand the applied modi ### Command Line Applications The following sections show how to convert a DeepStream pipeline to -the DLStreamer. The DeepStream pipeline is taken from one of the +a DL Streamer one. The DeepStream pipeline is taken from one of the [examples](https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps). It reads a video stream from the input file, decodes it, runs inference, overlays the inferences on the video, re-encodes and outputs a new .mp4 @@ -61,11 +60,11 @@ pipeline. ### Python Applications -While GStreamer command line allows quick demonstration of a running pipeline, fine-grain control typically involves using a GStreamer pipeline object in a programmable way: either Python or C/C++ code. +While the GStreamer command line allows a quick demonstration of a running pipeline, fine-grained control typically involves using a GStreamer pipeline object in a programmable way: either Python or C/C++ code. -This section illustrates how to convert [DeepStream Python example](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-test1) into [DLStreamer Python example](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/python/hello_dlstreamer). Both applications implement same functionality, yet they use DeepStream or DLStreamer elements as illustrated in a table below. The elements in __bold__ are vendor-specific, while others are regular GStreamer elements. +This section illustrates how to convert a [DeepStream Python example](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-test1) into a [DL Streamer Python example](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/python/hello_dlstreamer). Both applications implement the same functionality, yet they use DeepStream or DL Streamer elements as illustrated in the table below. The elements in __bold__ are vendor-specific, while others are regular GStreamer elements. -| DeepStream Element | DLStreamer Element | Function | +| DeepStream Element | DL Streamer Element | Function | |---|---|---| | filesrc | filesrc | Read video file | | h264parse ! __nvv4l2decoder__ | decodebin3 | Decode video file | @@ -82,8 +81,8 @@ pipeline.add(element) element.link(next_element) ``` -Please note DeepStream and DLStreamer applications use same set of regular GStreamer library functions to construct pipelines. -The difference is in what elements are created and linked. 
In addition, DLStreamer `decodebin3` element uses late linking within a callback function. +Please note that DeepStream and DL Streamer applications use the same set of regular GStreamer library functions to construct pipelines. +The difference is in what elements are created and linked. In addition, the DL Streamer `decodebin3` element uses late linking within a callback function. @@ -123,7 +122,7 @@ pipeline.add(decoder) ... source.link(decoder) decoder.connect("pad-added", - lambda element, pad, data: element.link(data) if pad.get_name().find("video") != -1 and not pad.is_linked() else None, + lambda element, pad, data: element.link(data) if pad.get_name().find("video") != -1 and not pad.is_linked() else None, detect) @@ -154,7 +153,7 @@ watermarksinkpad.add_probe(Gst.PadProbeType.BUFFER, watermark_sink_pad_buffer_pr
-The probe function iterates over prediction metadata found by the AI model. Here, DeepStream and DLStreamer implementation differ significantly. DeepStream sample uses DeepStream-specific structures for batches of frames, frames and objects within a frame. On the contrary, DLStreamer sample uses regular GStreamer data structures from [GstAnalytics metadata library](https://gstreamer.freedesktop.org/documentation/analytics/index.html?gi-language=python#analytics-metadata-library). Please also note DLStreamer handler runs on per-frame frequency while DeepStream sample runs on per-batch (of frames) frequency. +The probe function iterates over prediction metadata found by the AI model. Here, the DeepStream and DL Streamer implementations differ significantly. The DeepStream sample uses DeepStream-specific structures for batches of frames, frames, and objects within a frame. In contrast, the DL Streamer sample uses regular GStreamer data structures from the [GstAnalytics metadata library](https://gstreamer.freedesktop.org/documentation/analytics/index.html?gi-language=python#analytics-metadata-library). Please also note that the DL Streamer handler runs per frame, while the DeepStream sample runs per batch of frames. @@ -178,7 +177,7 @@ while l_frame is not None: ... process object metadata

-# no batch meta in DLStreamer, probes run per-frame
+# no batch meta in DL Streamer, probes run per-frame
 ...
 frame_meta = GstAnalytics.buffer_get_analytics_relation_meta(buffer)
 for obj in frame_meta:
@@ -189,7 +188,7 @@ for obj in frame_meta:
 
The last table compares pipeline execution logic. Both applications set the pipeline state to `PLAYING` and run the main GStreamer event loop. -DeepStream sample invokes a predefined event loop from a DeepStream library, while DLStreamer application explicitly adds the message processing loop. +The DeepStream sample invokes a predefined event loop from a DeepStream library, while the DL Streamer application explicitly adds the message processing loop. Both implementations keep running the pipeline until end-of-stream message is received. @@ -228,7 +227,7 @@ while not terminate: if msg: if msg.type == Gst.MessageType.ERROR: ... handle errors - terminate = True + terminate = True if msg.type == Gst.MessageType.EOS: terminate = True pipeline.set_state(Gst.State.NULL) @@ -303,7 +302,7 @@ output the region of interests. `batch-size` is also added for consistency with what was removed above (the default value is `1` so it is not needed). The `config-file-path` property is replaced with `model` and `model-proc` properties as described in -[Configuring Model for Deep Learning Streamer](#configuring-model-for-deep-learning-streamer) +[Preparing Your Model](#preparing-your-model) above. ```shell @@ -425,14 +424,14 @@ filesrc ! decode ! gvadetect model-instance-id=model1 model=./model.xml batch-si filesrc ! decode ! gvadetect model-instance-id=model1 ! encode ! filesink ``` -## DeepStream to DLStreamer Mapping +## DeepStream to DL Streamer Mapping ### Element Mapping The table below provides quick reference for mapping typical DeepStream elements to Deep Learning Streamer elements or GStreamer. -| DeepStream Element | DLStreamer Element | +| DeepStream Element | DL Streamer Element | |---|---| | [nvinfer](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html) | [gvadetect](../elements/gvadetect), [gvaclassify](../elements/gvaclassify.md), [gvainference](../elements/gvainference.md) | | [nvdsosd](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvdsosd.html) | [gvawatermark](../elements/gvawatermark.md) | diff --git a/docs/source/dev_guide/custom_processing.md b/docs/source/dev_guide/custom_processing.md index 64a38c00..98da64d2 100644 --- a/docs/source/dev_guide/custom_processing.md +++ b/docs/source/dev_guide/custom_processing.md @@ -6,7 +6,7 @@ plugins is stored under frame [Metadata](./metadata.md) using a flexible [GstStructure](https://gstreamer.freedesktop.org/documentation/gstreamer/gststructure.html) key-value container. The **GVA::Tensor** C++ class with -[header-only implementation](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/include/dlstreamer/gst/videoanalytics/tensor.h#L38) +[header-only implementation](https://github.com/open-edge-platform/dlstreamer/tree/master/include/dlstreamer/gst/videoanalytics/tensor.h#L38) helps C++ applications access the tensor data. The integration of DL model inference into real application typically @@ -55,7 +55,7 @@ The C/C++ application can either: consumption The pad probe callback is demonstrated in the -[draw_face_attributes](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/cpp/draw_face_attributes/main.cpp) C++ sample. +[draw_face_attributes](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/cpp/draw_face_attributes/main.cpp) C++ sample. ## 2. Set C/Python callback in the middle of GStreamer pipeline @@ -95,13 +95,13 @@ Refer to the a GStreamer plugin. 
If the frame processing function is implemented in C++, it can utilize the -[GVA::Tensor](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/include/dlstreamer/gst/videoanalytics/tensor.h#L38) +[GVA::Tensor](https://github.com/open-edge-platform/dlstreamer/blob/master/include/dlstreamer/gst/videoanalytics/tensor.h#L38) helper class. ## 5. Modify source code of post-processors for gvadetect/gvaclassify elements You can add new or modify any suitable existing -[post-processor](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/src/monolithic/gst/inference_elements/common/post_processor/blob_to_meta_converter.cpp) +[post-processor](https://github.com/open-edge-platform/dlstreamer/blob/master/src/monolithic/gst/inference_elements/common/post_processor/blob_to_meta_converter.cpp) for `gvadetect`/`gvaclassify` elements. ## 6. Create custom post-processing library @@ -113,7 +113,7 @@ flexibility and modularity while maintaining clean separation between the core framework and custom processing logic. Practical examples of implementations are demonstrated in the -[sample](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/gst_launch/custom_postproc). +[sample](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/custom_postproc). **Important Requirements** @@ -134,14 +134,14 @@ At this time, only **detection** and **classification** tasks are supported: - **Object Detection** (`GstAnalyticsODMtd`) - works only with the `gvadetect` element (see [*Detection* sample][detection_sample]). - - [detection_sample]: https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/samples/gstreamer/gst_launch/custom_postproc/detect/README.md + + [detection_sample]: https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/custom_postproc/detect/ - **Classification** (`GstAnalyticsClsMtd`) - works with both the [gvadetect](../elements/gvadetect.md) and [gvaclassify](../elements/gvaclassify.md) elements (see [*Classification* sample][classify_sample]). 
- [classify_sample]: https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/samples/gstreamer/gst_launch/custom_postproc/classify/README.md + [classify_sample]: https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/custom_postproc/classify/ **Implementation Requirements** Your custom library must export a `Convert` function with the following signature: ```c -void Convert(GstTensorMeta *outputTensors,  +void Convert(GstTensorMeta *outputTensors, const GstStructure *network, const GstStructure *params, GstAnalyticsRelationMeta *relationMeta); ``` diff --git a/docs/source/dev_guide/dev_guide_index.md b/docs/source/dev_guide/dev_guide_index.md index 3ef1ef39..61771853 100644 --- a/docs/source/dev_guide/dev_guide_index.md +++ b/docs/source/dev_guide/dev_guide_index.md @@ -54,7 +54,7 @@ - [Configuring Model for Intel® DL Streamer](./converting_deepstream_to_dlstreamer.md#configuring-model-for-deep-learning-streamer) - [GStreamer Pipeline Adjustments](./converting_deepstream_to_dlstreamer.md#gstreamer-pipeline-adjustments) - [Multiple Input Streams](./converting_deepstream_to_dlstreamer.md#multiple-input-streams) - - [DeepStream to DLStreamer Elements Mapping Cheetsheet](./converting_deepstream_to_dlstreamer.md#deepstream-to-dlstreamer-elements-mapping-cheetsheet) + - [DeepStream to DL Streamer Elements Mapping Cheatsheet](./converting_deepstream_to_dlstreamer.md#deepstream-to-dlstreamer-elements-mapping-cheetsheet) - [How to Contribute](./how_to_contribute.md) - [Coding Style](./coding_style.md) - [Latency Tracer](./latency_tracer.md) diff --git a/docs/source/dev_guide/download_public_models.md b/docs/source/dev_guide/download_public_models.md index 5ddfca85..b0dc6409 100644 --- a/docs/source/dev_guide/download_public_models.md +++ b/docs/source/dev_guide/download_public_models.md @@ -1,7 +1,7 @@ # Download Public Models This page provides instructions on how to use the -[samples/download_public_models.sh](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/download_public_models.sh) +[samples/download_public_models.sh](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/download_public_models.sh) script to download the following models: - [YOLO](https://docs.ultralytics.com/models/) @@ -18,7 +18,7 @@ export MODELS_PATH=/path/to/models ``` You can refer to the list of -[supported models](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/download_public_models.sh#L20). +[supported models](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/download_public_models.sh#L23). 
For example, to download the YOLOv11s model, use: diff --git a/docs/source/dev_guide/how_to_create_model_proc_file.md b/docs/source/dev_guide/how_to_create_model_proc_file.md index 8615c49c..740c7187 100644 --- a/docs/source/dev_guide/how_to_create_model_proc_file.md +++ b/docs/source/dev_guide/how_to_create_model_proc_file.md @@ -69,8 +69,8 @@ formats described above: | Model | Model-proc | 2nd layer format | |---|---|---| -| [Faster-RCNN](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/faster_rcnn_resnet50_coco) | [preproc-image-info.json](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-image-info.json) | image_info | -| [license-plate-recognition-barrier-0007](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/license-plate-recognition-barrier-0007) | [license-plate-recognition-barrier-0007.json](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/license-plate-recognition-barrier-0007.json) | sequence_index | +| [Faster-RCNN](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/faster_rcnn_resnet50_coco) | [preproc-image-info.json](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-image-info.json) | image_info | +| [license-plate-recognition-barrier-0007](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/license-plate-recognition-barrier-0007) | [license-plate-recognition-barrier-0007.json](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/license-plate-recognition-barrier-0007.json) | sequence_index | ### Model requires more advance image pre-processing @@ -91,8 +91,8 @@ some of the operations described above: | Model | Model-proc | Operation | |---|---|---| -| [MobileNet](https://github.com/onnx/models/blob/main/validated/vision/classification/mobilenet) | [mobilenetv2-7.json](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/onnx/mobilenetv2-7.json) | normalization | -| [single-human-pose-estimation-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/single-human-pose-estimation-0001) | [single-human-pose-estimation-0001.json](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/single-human-pose-estimation-0001.json) | padding | +| [MobileNet](https://github.com/onnx/models/blob/main/validated/vision/classification/mobilenet) | [mobilenetv2-7.json](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/onnx/mobilenetv2-7.json) | normalization | +| [single-human-pose-estimation-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/single-human-pose-estimation-0001) | [single-human-pose-estimation-0001.json](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/single-human-pose-estimation-0001.json) | padding | For more details, see the [model-proc documentation](./model_proc_file.md). @@ -106,7 +106,7 @@ converter in *"output_postproc"* for separate processing. 
For example: | Model | Model-proc | |---|---| -| [age-gender-recognition-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/age-gender-recognition-retail-0013) |[age-gender-recognition-retail-0013.json](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/age-gender-recognition-retail-0013.json) | +| [age-gender-recognition-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/age-gender-recognition-retail-0013) |[age-gender-recognition-retail-0013.json](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/age-gender-recognition-retail-0013.json) | For joint processing of blobs from several output layers, it is enough to specify only one converter and the *"layer_names": ["layer_name_1", .. , "layer_name_n"]* @@ -116,7 +116,7 @@ Example: | Model | Model-proc | |---|---| -| [YOLOv3](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf) |[yolo-v3-tf.json](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v3-tf.json) | +| [YOLOv3](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf) |[yolo-v3-tf.json](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v3-tf.json) | > **NOTE:** In this example, you will not find the use of the *"layer_names"* > field, because it is not necessary to specify it when the @@ -154,9 +154,9 @@ Examples of labels in model-proc files: | Dataset | Model | Model-proc | |---|---|---| -| ImageNet | [resnet-18-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-18-pytorch) | [preproc-aspect-ratio.json](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | -| COCO | [YOLOv2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v2-tf) |[yolo-v2-tf.json](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v2-tf.json) | -| PASCAL VOC | [yolo-v2-ava-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/yolo-v2-ava-0001) | [yolo-v2-ava-0001.json](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/yolo-v2-ava-0001.json) | +| ImageNet | [resnet-18-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-18-pytorch) | [preproc-aspect-ratio.json](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | +| COCO | [YOLOv2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v2-tf) |[yolo-v2-tf.json](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v2-tf.json) | +| PASCAL VOC | [yolo-v2-ava-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/yolo-v2-ava-0001) | [yolo-v2-ava-0001.json](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/yolo-v2-ava-0001.json) | ## Practice diff --git a/docs/source/dev_guide/lvms.md b/docs/source/dev_guide/lvms.md index 829c92de..4c1e3bc9 100644 --- a/docs/source/dev_guide/lvms.md +++ 
b/docs/source/dev_guide/lvms.md @@ -5,7 +5,7 @@ CLIP models for integration with the Deep Learning Streamer pipeline. > **NOTE:** The instructions provided below are comprehensive, but for convenience, > it is recommended to use the -> [download_public_models.sh](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/download_public_models.sh) +> [download_public_models.sh](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/download_public_models.sh) > script. This script will download all supported models and perform the > necessary conversions automatically. See [download_public_models](./download_public_models.md) for more information. @@ -149,5 +149,5 @@ ov.save_model(ov_model, MODEL + ".xml") ## 3. Model usage -See the [generate_frame_embeddings.sh](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/samples/gstreamer/gst_launch/lvm/generate_frame_embeddings.sh) sample for detailed +See the [generate_frame_embeddings.sh](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/gst_launch/lvm/generate_frame_embeddings.sh) sample for detailed examples of Deep Learning Streamer pipelines using the model. diff --git a/docs/source/dev_guide/metadata.md b/docs/source/dev_guide/metadata.md index a5e83793..36b12660 100644 --- a/docs/source/dev_guide/metadata.md +++ b/docs/source/dev_guide/metadata.md @@ -6,13 +6,13 @@ for object detection and classification use cases (the [gvadetect](../elements/gvadetect.md), [gvaclassify](../elements/gvaclassify.md) elements), and define two custom metadata types: -- [GstGVATensorMeta](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/include/dlstreamer/gst/metadata/gva_tensor_meta.h) +- [GstGVATensorMeta](https://github.com/open-edge-platform/dlstreamer/blob/master/include/dlstreamer/gst/metadata/gva_tensor_meta.h) For output of the [gvainference](../elements/gvainference.md) element performing generic inference on any model with an image-compatible input layer and any format of output layer(s) -- [GstGVAJSONMeta](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/include/dlstreamer/gst/metadata/gva_json_meta.h) +- [GstGVAJSONMeta](https://github.com/open-edge-platform/dlstreamer/blob/master/include/dlstreamer/gst/metadata/gva_json_meta.h) For output of the [gvametaconvert](../elements/gvametaconvert.md) element performing conversion of `GstVideoRegionOfInterestMeta` into the JSON format @@ -60,7 +60,7 @@ gst-launch-1.0 --gst-plugin-path ${GST_PLUGIN_PATH} \ ``` > **NOTE:** More examples can be found in the -> [gst_launch](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/gst_launch) +> [gst_launch](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch) > folder. 
If the `gvadetect` element detected three faces, it will attach three diff --git a/docs/source/dev_guide/model_preparation.md b/docs/source/dev_guide/model_preparation.md index c2ad7d50..84b6e06c 100644 --- a/docs/source/dev_guide/model_preparation.md +++ b/docs/source/dev_guide/model_preparation.md @@ -3,9 +3,9 @@ When getting started with Deep Learning Streamer, the best way to obtain a collection of models ready for use in video analytics pipelines is to run -[download_omz_models.sh](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/download_omz_models.sh) +[download_omz_models.sh](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/download_omz_models.sh) and -[download_public_models.sh](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/download_public_models.sh). +[download_public_models.sh](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/download_public_models.sh). These scripts will download models from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo) and other sources, handle the necessary conversions and put model files in a @@ -15,7 +15,7 @@ This way, you will be able to easily perform the most popular tasks, such as object detection and classification, instance segmentation, face localization and many others. For examples of how to set up Deep Learning Streamer pipelines that carry out these functions, refer to the -[sample directory](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/gst_launch). +[sample directory](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch). If you are interested in designing custom pipelines, make sure to review the [Supported Models](../supported_models.md) table for @@ -140,4 +140,4 @@ yolo_models lvms download_public_models ::: -hide_directive--> \ No newline at end of file +hide_directive--> diff --git a/docs/source/dev_guide/model_proc_file.md b/docs/source/dev_guide/model_proc_file.md index 9cce1cd2..2456f340 100644 --- a/docs/source/dev_guide/model_proc_file.md +++ b/docs/source/dev_guide/model_proc_file.md @@ -35,7 +35,7 @@ to BGR. listed in `"labels"`. See the -[samples/gstreamer/model_proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc) +[samples/gstreamer/model_proc](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/model_proc) for examples of .json files using various models from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo) and some public models. @@ -186,19 +186,19 @@ converter can be applied. | **For gvainference:** | | | | raw_data_copy | Attach tensor data from all output layers in raw binary format and optionally tag the data format. | Basically any inference model | | **For gvadetect:** | | | -| [detection_output](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-retail-0004.json) | Parse the output blob produced by an object detection neural network with the DetectionOutput IR output layer’s type. Output is RegionOfInterest.
- labels - an array of strings representing labels or a path to a file with each label per line.
| [ssdlite_mobilenet_v2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2#output)
[person-vehicle-bike-detection-crossroad-0078](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-vehicle-bike-detection-crossroad-0078#outputs)
| -| [boxes_labels](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-0205.json) | Parse the output blob produced by an object detection neural network with two output layers: boxes and labels. Output is RegionOfInterest.
- labels - an array of strings representing labels or a path to a file with each label per line.
| [face-detection-0205](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0205#outputs)
[person-vehicle-bike-detection-2004](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-vehicle-bike-detection-2004#outputs)
| -| [yolo_v2](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v2-tf.json) | Parse the output blob produced by an object detection neural network with the YOLO v2 architecture. Output is RegionOfInterest.

- labels - an array of strings representing labels or a path to a file with each label per line;
- classes - an integer number of classes;
- `bbox_number_on_cell` - the box count that can be predicted in each cell;
- anchors - box size (x, y) is multiplied by this value. len(anchors) == `bbox_number_on_cell * 2 * number_of_outputs`;
- `cells_number` - an image is split on cells with this number (if model’s input layer has non-square form set cells_number_x & cells_number_y instead `cells_number`);
- `iou_threshold` - the parameter for NMS.

| [yolo-v2-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v2-tf#output)
[yolo-v2-tiny-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v2-tiny-tf#output)
| -| [yolo_v3](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v3-tf.json) | Parse the output blob produced by an object detection neural network with the YOLO v3 architecture.

- labels - an array of strings representing labels or a path to a file with each label per line;
- classes - an integer number of classes;
- `bbox_number_on_cell` - the box count that can be predicted in each cell;
- anchors - the box size (x, y) is multiplied by this value. `len(anchors) == bbox_number_on_cell * 2 * number_of_outputs`;
- `cells_number` - an image is split on cells with this number (if model’s input layer has non-square form set `cells_number_x` & `cells_number_y` instead `cells_number`);
- `iou_threshold` - the parameter for NMS;
- `masks`- determines which anchors are related to which output layer;
- `output_sigmoid_activation` - performs a sigmoid operation for coordinates and confidence.

See more details [there](./how_to_create_model_proc_file.md#build-model-proc-for-detection-model-with-advance-post-processing).
| [yolo-v3-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf#output)
[yolo-v4-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v4-tf#output)
| +| [detection_output](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-retail-0004.json) | Parse the output blob produced by an object detection neural network with the DetectionOutput IR output layer’s type. Output is RegionOfInterest.
- labels - an array of strings representing labels or a path to a file with each label per line.
| [ssdlite_mobilenet_v2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2#output)
[person-vehicle-bike-detection-crossroad-0078](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-vehicle-bike-detection-crossroad-0078#outputs)
| +| [boxes_labels](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-0205.json) | Parse the output blob produced by an object detection neural network with two output layers: boxes and labels. Output is RegionOfInterest.
- labels - an array of strings representing labels or a path to a file with each label per line.
| [face-detection-0205](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0205#outputs)
[person-vehicle-bike-detection-2004](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-vehicle-bike-detection-2004#outputs)
| +| [yolo_v2](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v2-tf.json) | Parse the output blob produced by an object detection neural network with the YOLO v2 architecture. Output is RegionOfInterest.

- labels - an array of strings representing labels or a path to a file with each label per line;
- classes - an integer number of classes;
- `bbox_number_on_cell` - the box count that can be predicted in each cell;
- anchors - the box size (x, y) is multiplied by this value. `len(anchors) == bbox_number_on_cell * 2 * number_of_outputs`;
- `cells_number` - an image is split into cells with this number (if the model’s input layer has a non-square form, set `cells_number_x` & `cells_number_y` instead of `cells_number`);
- `iou_threshold` - the parameter for NMS.

| [yolo-v2-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v2-tf#output)
[yolo-v2-tiny-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v2-tiny-tf#output)
| +| [yolo_v3](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v3-tf.json) | Parse the output blob produced by an object detection neural network with the YOLO v3 architecture.

- labels - an array of strings representing labels or a path to a file with each label per line;
- classes - an integer number of classes;
- `bbox_number_on_cell` - the box count that can be predicted in each cell;
- anchors - the box size (x, y) is multiplied by this value. `len(anchors) == bbox_number_on_cell * 2 * number_of_outputs`;
- `cells_number` - an image is split into cells with this number (if the model’s input layer has a non-square form, set `cells_number_x` & `cells_number_y` instead of `cells_number`);
- `iou_threshold` - the parameter for NMS;
- `masks`- determines which anchors are related to which output layer;
- `output_sigmoid_activation` - performs a sigmoid operation for coordinates and confidence.

See more details [there](./how_to_create_model_proc_file.md#build-model-proc-for-detection-model-with-advance-post-processing).
| [yolo-v3-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf#output)
[yolo-v4-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v4-tf#output)
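
For YOLO v3 the same relation holds across three output layers with 3 boxes per cell (`len(anchors) == 3 * 2 * 3 == 18`), and `masks` maps each triple of anchors to one output layer. A sketch with the widely used YOLO v3 COCO anchors (illustrative values; the shipped `yolo-v3-tf.json` is authoritative):

```
{
  "converter": "yolo_v3",
  "classes": 80,
  "bbox_number_on_cell": 3,
  "cells_number": 13,
  "iou_threshold": 0.4,
  "output_sigmoid_activation": true,
  "anchors": [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0,
              45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0],
  "masks": [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
}
```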
| | heatmap_boxes | Parse the output blob, which has the form of a probability heatmap, produced by a network with the DBNet architecture. Output is RegionOfInterest.

- `minimum_side` - any detected box whose smallest side is shorter than `minimum_side` will be dropped;
- `binarize_threshold` - a threshold value for OpenCV binary image thresholding, expected in the [0.0, 255.0] range.

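A hypothetical `heatmap_boxes` entry (both values chosen only for illustration):

```
{
  "converter": "heatmap_boxes",
  "minimum_side": 10,
  "binarize_threshold": 100.0
}
```
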
| | | **For gvaclassify:** | | | -| [text](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/age-gender-recognition-retail-0013.json) | Transform the output tensor to text.

- `text_scale` - scales data by this number;
- `text_precision` - sets precision for textual representation.

| [age-gender-recognition-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/age-gender-recognition-retail-0013#outputs) | -| [label](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/vehicle-attributes-recognition-barrier-0039.json) | Put an appropriate label for result.

- method: one of `[max, index, compound (threshold is required. 0.5 is default)]`;
- `labels` - an array of strings representing labels or a path to a file with each label per line.

| [emotions-recognition-retail-0003](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/age-gender-recognition-retail-0013#outputs)
[license-plate-recognition-barrier-0007](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/license-plate-recognition-barrier-0001#outputs)
[person-attributes-recognition-crossroad-0230](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-attributes-recognition-crossroad-0230#outputs)
| -| [keypoints_hrnet](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/single-human-pose-estimation-0001.json) | Parse the output blob produced by a network with the HRNet architecture. The output tensor will have an array of key points.

- `point_names` - an array of strings with the name of the points;
- `point_connections` - an array of strings with points connection. The length should be even.

| [single-human-pose-estimation-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/single-human-pose-estimation-0001#outputs) | -| [keypoints_openpose](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/human-pose-estimation-0001.json) | Parse the output blob produced by a network with OpenPose architecture. The output tensor will have an array of key points.

`point_names` - an array of strings with the name of the points;
`point_connections` - an array of strings with points connection. The length should be even.

| [human-pose-estimation-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/human-pose-estimation-0001#outputs) | +| [text](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/age-gender-recognition-retail-0013.json) | Transform the output tensor to text.

- `text_scale` - scales data by this number;
- `text_precision` - sets precision for textual representation.

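For example, the age head of `age-gender-recognition-retail-0013` emits age divided by 100, so scaling by 100 with zero decimal places produces a readable age string. A sketch (the layer and attribute names are assumptions for illustration):

```
{
  "layer_name": "age_conv3",
  "attribute_name": "age",
  "converter": "text",
  "text_scale": 100.0,
  "text_precision": 0
}
```
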
| [age-gender-recognition-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/age-gender-recognition-retail-0013#outputs) | +| [label](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/vehicle-attributes-recognition-barrier-0039.json) | Put an appropriate label on the result.

- `method` - one of `max`, `index`, or `compound` (for `compound`, a threshold is required; 0.5 is the default);
- `labels` - an array of strings representing labels or a path to a file with each label per line.

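A sketch of a `label` entry using the `max` method (the attribute name and label set are illustrative, patterned after a vehicle-color classifier):

```
{
  "attribute_name": "color",
  "converter": "label",
  "method": "max",
  "labels": ["white", "gray", "yellow", "red", "green", "blue", "black"]
}
```
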
| [emotions-recognition-retail-0003](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/emotions-recognition-retail-0003#outputs)
[license-plate-recognition-barrier-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/license-plate-recognition-barrier-0001#outputs)
[person-attributes-recognition-crossroad-0230](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-attributes-recognition-crossroad-0230#outputs)
| +| [keypoints_hrnet](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/single-human-pose-estimation-0001.json) | Parse the output blob produced by a network with the HRNet architecture. The output tensor will have an array of key points.

- `point_names` - an array of strings with the names of the points;
- `point_connections` - an array of strings defining the connections between points; its length should be even.

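The same two parameters drive `keypoints_hrnet`, `keypoints_openpose`, and `keypoints_3d`. In the sketch below (point names shortened for illustration; the shipped model-proc files list the full skeleton), `point_connections` pairs up names, which is why its length must be even:

```
{
  "converter": "keypoints_hrnet",
  "point_names": ["nose", "eye_l", "eye_r"],
  "point_connections": ["nose", "eye_l", "nose", "eye_r"]
}
```
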
| [single-human-pose-estimation-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/single-human-pose-estimation-0001#outputs) | +| [keypoints_openpose](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/human-pose-estimation-0001.json) | Parse the output blob produced by a network with the OpenPose architecture. The output tensor will have an array of key points.

- `point_names` - an array of strings with the names of the points;
- `point_connections` - an array of strings defining the connections between points; its length should be even.

| [human-pose-estimation-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/human-pose-estimation-0001#outputs) | | keypoints_3d | Parse the output blob produced by a network with the HRNet architecture. The output tensor will have an array of 3D key points.

- `point_names` - an array of strings with the names of the points;
- `point_connections` - an array of strings defining the connections between points; its length should be even.

| None | | **For gvaaudiodetect:** | | | -| [audio_labels](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/aclnet.json) | Output tensor - audio detections tensor.

- layer_name - name of the layer to process;
- labels - an array of JSON objects with index, label, threshold fields.

| [aclnet](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/aclnet/README.md#output) | +| [audio_labels](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/aclnet.json) | Parse the output tensor containing audio detections.

- `layer_name` - the name of the layer to process;
- `labels` - an array of JSON objects with `index`, `label`, and `threshold` fields.

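A sketch of an `audio_labels` entry (the layer name, labels, and thresholds are illustrative; see the shipped `aclnet.json` for the real values):

```
{
  "converter": "audio_labels",
  "layer_name": "output",
  "labels": [
    { "index": 0, "label": "Dog barking", "threshold": 0.0 },
    { "index": 1, "label": "Alarm", "threshold": 0.2 }
  ]
}
```
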
| [aclnet](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/aclnet/README.md#output) | ### Example of Output Post-processing @@ -303,4 +303,4 @@ Below is an example of `output_postproc` and its parameters: how_to_create_model_proc_file ::: -hide_directive--> \ No newline at end of file +hide_directive--> diff --git a/docs/source/dev_guide/object_tracking.md b/docs/source/dev_guide/object_tracking.md index 8cea2670..6212e290 100644 --- a/docs/source/dev_guide/object_tracking.md +++ b/docs/source/dev_guide/object_tracking.md @@ -53,7 +53,7 @@ Example: ## Sample Refer to the -[vehicle_pedestrian_tracking](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/gst_launch/vehicle_pedestrian_tracking) sample +[vehicle_pedestrian_tracking](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/vehicle_pedestrian_tracking) sample for a pipeline with `gvadetect`, `gvatrack`, and `gvaclassify` elements. ## Deep SORT Tracking diff --git a/docs/source/dev_guide/openvino_custom_operations.md b/docs/source/dev_guide/openvino_custom_operations.md index 9eedc2ad..9fd4d57f 100644 --- a/docs/source/dev_guide/openvino_custom_operations.md +++ b/docs/source/dev_guide/openvino_custom_operations.md @@ -9,10 +9,10 @@ Custom operations may be required in two scenarios: 1. **New or rarely used operations** - Operations from frameworks (TensorFlow, PyTorch, ONNX, etc.) that are not yet supported in OpenVINO™ 2. **User-defined operations** - Custom operations created specifically for a model using framework extension capabilities -The `ov-extension-lib` parameter is available in the following DLStreamer elements: +The `ov-extension-lib` parameter is available in the following DL Streamer elements: - `gvadetect` - Object detection -- `gvaclassify` - Object classification +- `gvaclassify` - Object classification - `gvainference` - Generic inference ## Prerequisites diff --git a/docs/source/dev_guide/profiling.md b/docs/source/dev_guide/profiling.md index aca4d7d7..eb0483cf 100644 --- a/docs/source/dev_guide/profiling.md +++ b/docs/source/dev_guide/profiling.md @@ -80,7 +80,7 @@ For example, the screenshot below shows that whole pipeline duration was took 9.068s of whole pipeline execution and was called 300 times (Because input media file had 300 frames.). 4.110s of this inference was taken by Inference completion callback (`completion_callback_lambda`) -where DLStreamer processes inference results from OV. And about 0.445s +where DL Streamer processes inference results from OV. And about 0.445s for Submitting image. This means that the remaining time 9.068 - 4.110 - 0.445 = 4.513s was taken by executing inference inside OV. diff --git a/docs/source/dev_guide/python_bindings.md b/docs/source/dev_guide/python_bindings.md index 8441026c..69544800 100644 --- a/docs/source/dev_guide/python_bindings.md +++ b/docs/source/dev_guide/python_bindings.md @@ -17,15 +17,15 @@ connecting elements programmatically), set the pad probe callback(s) on source or sink pad of any element in the pipeline, etc. See the -[draw_face_attributes.py](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/python/draw_face_attributes/draw_face_attributes.py) +[draw_face_attributes.py](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/python/draw_face_attributes/draw_face_attributes.py) Python sample. ## 2. 
Video-analytics specific Python bindings As GVA plugin registers inference specific metadata, another Python library - -[gstgva](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/python/gstgva) -in this repository is complimentary to *pygst* and additionally provides +[gstgva](https://github.com/open-edge-platform/dlstreamer/tree/master/python/gstgva) +in this repository is complementary to *pygst* and additionally provides Python bindings for GVA specific types such as *GstGVATensorMeta* and *GstGVAJSONMeta* and access to inference specific fields in `GstVideoRegionOfInterestMeta`. diff --git a/docs/source/dev_guide/yolo_models.md b/docs/source/dev_guide/yolo_models.md index 9dc3dc16..d8f01182 100644 --- a/docs/source/dev_guide/yolo_models.md +++ b/docs/source/dev_guide/yolo_models.md @@ -4,8 +4,8 @@ This article describes how to prepare models from the **YOLO** family for integration with the Deep Learning Streamer pipeline. > **NOTE:** The instructions provided below are comprehensive, but for convenience, -> it is recommended to use the -> [download_public_models.sh](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/download_public_models.sh) +> it is recommended to use the +> [download_public_models.sh](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/download_public_models.sh) > script. This script will download all supported Yolo models and perform > the necessary conversions automatically. > diff --git a/docs/source/elements/elements.md b/docs/source/elements/elements.md index abed1d85..363735f8 100644 --- a/docs/source/elements/elements.md +++ b/docs/source/elements/elements.md @@ -12,7 +12,7 @@ gst-inspect-1.0 utility. | [gvaclassify](./gvaclassify.md) | Performs object classification/segmentation/pose estimation. Inputs: ROIs or full frame. Output: prediction metadata. The `queue` element must be put directly after the `gvaclassify` element in the pipeline.
Example:
gst-launch-1.0 … ! decodebin3 ! gvadetect model=$mDetect device=GPU ! queue ! gvaclassify model=$mClassify device=CPU ! queue ! … OUT
| | [gvainference](./gvainference.md) | Executes any inference model and outputs raw results. Does not interpret data and does not generate metadata. The `queue` element must be put directly after the `gvainference` element in the pipeline.
Example:
gst-launch-1.0 … ! decodebin3 ! gvadetect model=$mDetect device=GPU ! queue ! gvainference model=$mHeadPoseEst device=CPU ! queue ! … OUT
| | [gvatrack](./gvatrack.md) | Tracks objects across video frames using zero-term or short-term tracking algorithms. Zero-term tracking assigns unique object IDs and requires object detection to run on every frame. Short-term tracking allows for tracking objects between frames, reducing the need to run object detection on each frame.
Example:
gst-launch-1.0 … ! decodebin3 ! gvadetect model=$mDetect device=GPU ! gvatrack tracking-type=short-term-imageless ! … OUT
| -| [gvaaudiodetect](./gvaaudiodetect.md) | Legacy plugin. Performs audio event detection using the `AclNet` model.
Example:
gst-launch-1.0 … ! decodebin3 ! audioresample ! audioconvert ! audio/x-raw … ! audiomixer … ! gvaaudiodetect model=$mAudioDetect ! … OUT
+| [gvaaudiodetect](./gvaaudiodetect.md) | Legacy plugin. Performs audio event detection using the `AclNet` model.
Example:
gst-launch-1.0 … ! decodebin3 ! audioresample ! audioconvert ! audio/x-raw … ! audiomixer … ! gvaaudiodetect model=$mAudioDetect ! … OUT
| [gvaaudiotranscribe](./gvaaudiotranscribe.md) | ASR plugin. Performs audio transcription using the `Whisper` model.
Example:
gst-launch-1.0 … ! decodebin3 ! audioresample ! audioconvert ! audio/x-raw … ! audiomixer … ! gvaaudiotranscribe model=$mASR device=CPU ! … OUT
| | [gvagenai](./gvagenai.md) | Performs inference using GenAI models. It can be used to generate text descriptions from images or video.
Example:
gst-launch-1.0 … ! decodebin3 ! videoconvert ! gvagenai model=$mGenAI device=GPU ! … OUT
| @@ -26,7 +26,7 @@ gst-inspect-1.0 utility. | [gvametaaggregate](./gvametaaggregate.md) | Aggregates inference results from multiple pipeline branches.
Example:
gst-launch-1.0 … ! decodebin3 ! tee name=t t. ! queue ! gvametaaggregate name=a ! gvaclassify … ! gvaclassify … ! gvametaconvert … ! gvametapublish … ! fakesink t. ! queue ! gvadetect … ! a.
| | [gvametaconvert](./gvametaconvert.md) | Converts the metadata structure to JSON or raw text formats. Can write output to a file.| | [gvametapublish](./gvametapublish.md) | Publishes the JSON metadata to MQTT or Kafka message brokers or files.
Example:
gst-launch-1.0 … ! decodebin3 ! gvadetect model=$mDetect device=GPU … ! gvametaconvert format=json … ! gvametapublish … ! … OUT
| -| [gvapython](./gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. It is used to augment DLStreamer with user-defined algorithms (e.g. metadata conversion, inference post-processing).
Example:
gst-launch-1.0 … ! gvaclassify ! gvapython module={gvapython.callback_module.classAge_pp} ! … OUT
| +| [gvapython](./gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. It is used to augment DL Streamer with user-defined algorithms (e.g. metadata conversion, inference post-processing).
Example:
gst-launch-1.0 … ! gvaclassify ! gvapython module={gvapython.callback_module.classAge_pp} ! … OUT
| | [gvarealsense](./gvarealsense.md) | Provides integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. | | [gvawatermark](./gvawatermark.md) | Overlays the metadata on the video frame to visualize the inference results.
Example:
gst-launch-1.0 … ! decodebin3 ! gvadetect … ! gvawatermark ! … | | [gvamotiondetect](./gvamotiondetect.md) | Performs lightweight motion detection on NV12 frames and emits motion ROIs as analytics metadata. Uses VA-API acceleration when VAMemory caps are negotiated, otherwise system-memory path.
Example:
gst-launch-1.0 … ! vaapih264dec ! gvamotiondetect confirm-frames=2 motion-threshold=0.08 ! gvawatermark ! … | diff --git a/docs/source/get_started/get_started_index.md b/docs/source/get_started/get_started_index.md index a1d516b4..95d18c58 100644 --- a/docs/source/get_started/get_started_index.md +++ b/docs/source/get_started/get_started_index.md @@ -15,8 +15,7 @@ - [Exercise 3: Use object tracking to improve performance](./tutorial.md#exercise-3-use-object-tracking-to-improve-performance-object-tracking) - [Exercise 4: Publish Inference Results](./tutorial.md#exercise-4-publish-inference-results) - [Next Steps](./tutorial.md#additional-resources) -- [Samples](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/README.md) - +- [Samples](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/README.md) \ No newline at end of file +hide_directive--> diff --git a/docs/source/get_started/install/install_guide_ubuntu.md b/docs/source/get_started/install/install_guide_ubuntu.md index aa9791a8..fce47037 100644 --- a/docs/source/get_started/install/install_guide_ubuntu.md +++ b/docs/source/get_started/install/install_guide_ubuntu.md @@ -119,7 +119,7 @@ sudo apt-get install intel-dlstreamer use!** To see the full list of installed components check the -[Dockerfile content for Ubuntu 24](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/docker/ubuntu/ubuntu24.Dockerfile) +[Dockerfile content for Ubuntu 24](https://github.com/open-edge-platform/dlstreamer/blob/master/docker/ubuntu/ubuntu24.Dockerfile) ### [Optional] Step 4: Python dependencies diff --git a/docs/source/get_started/tutorial.md b/docs/source/get_started/tutorial.md index 4ea30841..0c012c9a 100644 --- a/docs/source/get_started/tutorial.md +++ b/docs/source/get_started/tutorial.md @@ -484,7 +484,7 @@ In the above pipeline: 4. `gvawatermark` displays the ROIs and their attributes. See -[model-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc) +[model-proc](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/model_proc) for `model-proc` file examples as well as its input and output specifications. ## Exercise 3: Use object tracking to improve performance {#object-tracking} @@ -604,14 +604,14 @@ In the above pipeline: are published. For publishing the results to MQTT or Kafka, please refer to -[metapublish samples](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/gst_launch/metapublish). +[metapublish samples](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/metapublish). You have completed this tutorial. Now, start creating your video analytics pipelines with Deep Learning Streamer Pipeline Framework! ## Additional Resources -- [Samples overview](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/README.md) +- [Samples overview](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/README.md) - [Elements](../elements/elements.md) - [How to create model-proc file](../dev_guide/how_to_create_model_proc_file.md) diff --git a/docs/source/index.md b/docs/source/index.md index c22f422e..b1df48e7 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -70,7 +70,7 @@ deploy, and benchmark. 
They require: **DL Streamer** uses OpenVINO™ Runtime inference back-end, optimized for Intel hardware platforms and supports over -[70 NN Intel and open-source community pre-trained models](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/docs/scripts/supported_models.json), and models converted +[70 NN Intel and open-source community pre-trained models](https://github.com/open-edge-platform/dlstreamer/blob/master/docs/scripts/supported_models.json), and models converted [from other training frameworks](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-to-ir.html). These models include object detection, object classification, human pose detection, sound classification, semantic segmentation, and other use @@ -129,4 +129,4 @@ api_ref/api_reference architecture_2.0/architecture_2.0 release-notes ::: -hide_directive--> \ No newline at end of file +hide_directive--> diff --git a/docs/source/release-notes.md b/docs/source/release-notes.md index ee9e4452..80991f42 100644 --- a/docs/source/release-notes.md +++ b/docs/source/release-notes.md @@ -33,17 +33,17 @@ For installing Pipeline Framework with the prebuilt binaries or Docker\* or to b | Title | High-level description | |---|---| -| Custom model post-processing | End user can now create a custom post-processing library (.so); [sample](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/samples/gstreamer/gst_launch/custom_postproc) added as reference.  | +| Custom model post-processing | End user can now create a custom post-processing library (.so); [sample](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/custom_postproc) added as reference.  | | Latency mode support | Default scheduling policy for DL Streamer is throughput. With this change user can add scheduling-policy=latency for scenarios that prioritize latency requirements over throughput. | | | | -| Visual Embeddings enabled | New models enabled to convert input video into feature embeddings, validated with Clip-ViT-Base-B16/Clip-ViT-Base-B32 models; [sample](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/samples/gstreamer/gst_launch/lvm) added as reference. | -| VLM models support | new gstgenai element added to convert video into text (with VLM models), validated with miniCPM2.6, available in advanced installation option when building from sources; [sample](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/samples/gstreamer/gst_launch/gvagenai) added as reference. | +| Visual Embeddings enabled | New models enabled to convert input video into feature embeddings, validated with Clip-ViT-Base-B16/Clip-ViT-Base-B32 models; [sample](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/lvm) added as reference. | +| VLM models support | new gstgenai element added to convert video into text (with VLM models), validated with miniCPM2.6, available in advanced installation option when building from sources; [sample](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/gvagenai) added as reference. 
| | INT8 automatic quantization support for Yolo models | Performance improvement, automatic INT8 quantization for Yolo models | | MS Windows 11 support | Native support for Windows 11 | | New Linux distribution (Azure Linux derivative) | New distribution added, DL Streamer can now be installed on Edge Microvisor Toolkit. | -| License plate recognition use case support | Added support for models that allow to recognize license plates; [sample](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/samples/gstreamer/gst_launch/license_plate_recognition) added as reference.  | +| License plate recognition use case support | Added support for models that recognize license plates; [sample](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/license_plate_recognition) added as reference. | | Deep Scenario model support | Commercial 3D model support | -| Anomaly model support | Added support for anomaly model, [sample](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/gst_launch/geti_deployment) added as reference, sample added as reference. | +| Anomaly model support | Added support for the anomaly model; [sample](https://github.com/open-edge-platform/dlstreamer/tree/master/samples/gstreamer/gst_launch/geti_deployment) added as reference. | | RealSense element support | New [gvarealsense](./elements/gvarealsense.md) element implementation providing basic integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. | | OpenVINO 2025.3 version support | Support of recent OpenVINO version added. | | GStreamer 1.26.6 version support | Support of recent GStreamer version added. | diff --git a/docs/source/supported_models.md b/docs/source/supported_models.md index e3fd2203..f53efb43 100644 --- a/docs/source/supported_models.md +++ b/docs/source/supported_models.md @@ -11,8 +11,8 @@ but some of them come from other sources.
| 1 | Lvm | [clip\-vit\-base\-patch32](https://huggingface.co/openai/clip-vit-base-patch32) | | | not required | | | 2 | Lvm | [clip\-vit\-base\-patch16](https://huggingface.co/openai/clip-vit-base-patch16) | | | not required | | | 3 | Lvm | [clip\-vit\-large\-patch14](https://huggingface.co/openai/clip-vit-large-patch14) | | | not required | | -| 4 | Detection | [YOLOv5](./dev_guide/yolo_models.md) | | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v5.json) | | -| 5 | Detection | [YOLOv7](./dev_guide/yolo_models.md) | | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v7.json) | | +| 4 | Detection | [YOLOv5](./dev_guide/yolo_models.md) | | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v5.json) | | +| 5 | Detection | [YOLOv7](./dev_guide/yolo_models.md) | | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v7.json) | | | 6 | Detection | [YOLOv8](./dev_guide/yolo_models.md) | | | not required | | | 7 | Detection | [YOLOv8\-OBB](./dev_guide/yolo_models.md) | | | not required | | | 8 | Instance Segmentation | [YOLOv8\-SEG](./dev_guide/yolo_models.md) | | | not required | | @@ -20,129 +20,129 @@ but some of them come from other sources. 
| 10 | Detection | [YOLOv9](./dev_guide/yolo_models.md) | | | not required | | | 11 | Detection | [YOLOv10](./dev_guide/yolo_models.md) | | | not required | | | 12 | Detection | [YOLO11](./dev_guide/yolo_models.md) | | | not required | | -| 13 | Detection | [YOLOX](./dev_guide/yolo_models.md) | | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-x.json) | | +| 13 | Detection | [YOLOX](./dev_guide/yolo_models.md) | | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-x.json) | | | 14 | Ocr | [ch\_PP\-OCRv4\_rec\_infer](https://github.com/PaddlePaddle/PaddleOCR) | | | not required | | | 15 | Detection | [CenterFace](https://github.com/Star-Clouds/CenterFace/tree/master) | | | not required | | | 16 | Emotion Recognition | [HSEmotion](https://github.com/av-savchenko/face-emotion-recognition/tree/main) | | | not required | | -| 17 | Instance Segmentation | [mask\_rcnn\_inception\_resnet\_v2\_atrous\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mask_rcnn_inception_resnet_v2_atrous_coco) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/mask-rcnn.json) | [mask\_rcnn\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/mask_rcnn_demo/cpp) | -| 18 | Instance Segmentation | [mask\_rcnn\_resnet50\_atrous\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mask_rcnn_resnet50_atrous_coco) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/mask-rcnn.json) | [mask\_rcnn\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/mask_rcnn_demo/cpp) | -| 19 | Classification | [efficientnet\-v2\-b0](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/efficientnet-v2-b0) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 20 | Classification | [efficientnet\-v2\-s](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientnet-v2-s) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 17 | Instance Segmentation | [mask\_rcnn\_inception\_resnet\_v2\_atrous\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mask_rcnn_inception_resnet_v2_atrous_coco) | | | 
[model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/mask-rcnn.json) | [mask\_rcnn\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/mask_rcnn_demo/cpp) | +| 18 | Instance Segmentation | [mask\_rcnn\_resnet50\_atrous\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mask_rcnn_resnet50_atrous_coco) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/mask-rcnn.json) | [mask\_rcnn\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/mask_rcnn_demo/cpp) | +| 19 | Classification | [efficientnet\-v2\-b0](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/efficientnet-v2-b0) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 20 | Classification | [efficientnet\-v2\-s](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientnet-v2-s) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | | 21 | Detection | [GETI\_Detection](https://geti.intel.com/) | | | not required | | | 22 | Detection | [GETI\_Detection\_Oriented](https://geti.intel.com/) | | | not required | | | 23 | Segmentation | [GETI\_Instance\_Segmentation](https://geti.intel.com/) | | | not required | | | 24 | Classification | [GETI\_Classification\_Single\_Label](https://geti.intel.com/) | | | not required | | | 25 | Classification | [GETI\_Classification\_Multi\_Label](https://geti.intel.com/) | | | not required | | -| 26 | Sound Classification | [aclnet](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/aclnet) | 1\.42 | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/aclnet.json) | [sound\_classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/sound_classification_demo/python) | -| 27 | Action Recognition | [action\-recognition\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/action-recognition-0001) | | [kinetics\_400\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/kinetics_400.txt) | | [action\_recognition\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/action_recognition_demo/python) | -| 28 | Object Attributes | [age\-gender\-recognition\-retail\-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/age-gender-recognition-retail-0013) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/age-gender-recognition-retail-0013.json) | 
[interactive\_face\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/interactive_face_detection_demo/cpp_gapi) | -| 29 | Classification | [anti\-spoof\-mn3](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/anti-spoof-mn3) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/anti-spoof-mn3.json) | [interactive\_face\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/interactive_face_detection_demo/cpp_gapi) | -| 30 | Classification | [densenet\-121\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/densenet-121-tf) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 31 | Classification | [dla\-34](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/dla-34) | 6\.1368 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 32 | Action Recognition | [driver\-action\-recognition\-adas\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/driver-action-recognition-adas-0002) | | [driver\_actions.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/driver_actions.txt) | | [action\_recognition\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/action_recognition_demo/python) | -| 33 | Detection | [efficientdet\-d0\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientdet-d0-tf) | 2\.54 | [coco\_91cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_91cl.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 34 | Detection | [efficientdet\-d1\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientdet-d1-tf) | 6\.1 | [coco\_91cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_91cl.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 35 | Classification | [efficientnet\-b0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientnet-b0) | 0\.819 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | 
[classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 36 | Classification | [efficientnet\-b0\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientnet-b0-pytorch) | 0\.819 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 37 | Object Attributes | [emotions\-recognition\-retail\-0003](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/emotions-recognition-retail-0003) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/emotions-recognition-retail-0003.json) | [interactive\_face\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/interactive_face_detection_demo/cpp_gapi) | -| 38 | Detection | [face\-detection\-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0200) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-0200.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 39 | Detection | [face\-detection\-0202](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0202) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-0202.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 40 | Detection | [face\-detection\-0204](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0204) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-0204.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 41 | Detection | [face\-detection\-0205](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0205) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-0205.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 42 | Detection | [face\-detection\-0206](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0206) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-0206.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 43 | Detection | 
[face\-detection\-adas\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-adas-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-adas-0001.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 44 | Detection | [face\-detection\-retail\-0004](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-retail-0004) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-retail-0004.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 45 | Detection | [face\-detection\-retail\-0005](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-retail-0005) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/face-detection-retail-0005.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 46 | Object Attributes | [facial\-landmarks\-35\-adas\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/facial-landmarks-35-adas-0002) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/facial-landmarks-35-adas-0002.json) | [gaze\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/gaze_estimation_demo/cpp_gapi) | -| 47 | Object Attributes | [facial\-landmarks\-98\-detection\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/facial-landmarks-98-detection-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/facial-landmarks-98-detection-0001.json) | [gaze\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/gaze_estimation_demo/cpp) | -| 48 | Detection | [faster\_rcnn\_inception\_resnet\_v2\_atrous\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/faster_rcnn_inception_resnet_v2_atrous_coco) | | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_91cl_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-image-info.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 49 | Detection | [faster\_rcnn\_resnet50\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/faster_rcnn_resnet50_coco) | | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_91cl_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-image-info.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | -| 50 | 
Classification | [googlenet\-v1\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v1-tf) | 3\.016 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 51 | Classification | [googlenet\-v2\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v2-tf) | 4\.058 | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 52 | Classification | [googlenet\-v3](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v3) | 11\.469 | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 53 | Classification | [googlenet\-v3\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v3-pytorch) | 11\.469 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 54 | Classification | [googlenet\-v4\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v4-tf) | 24\.584 | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 55 | Classification | [hbonet\-0\.25](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/hbonet-0.25) | 0\.037 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | -| 56 | Classification | 
[hbonet\-1\.0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/hbonet-1.0) | 0\.305 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 26 | Sound Classification | [aclnet](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/aclnet) | 1\.42 | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/aclnet.json) | [sound\_classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/sound_classification_demo/python) | +| 27 | Action Recognition | [action\-recognition\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/action-recognition-0001) | | [kinetics\_400\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/kinetics_400.txt) | | [action\_recognition\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/action_recognition_demo/python) | +| 28 | Object Attributes | [age\-gender\-recognition\-retail\-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/age-gender-recognition-retail-0013) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/age-gender-recognition-retail-0013.json) | [interactive\_face\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/interactive_face_detection_demo/cpp_gapi) | +| 29 | Classification | [anti\-spoof\-mn3](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/anti-spoof-mn3) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/anti-spoof-mn3.json) | [interactive\_face\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/interactive_face_detection_demo/cpp_gapi) | +| 30 | Classification | [densenet\-121\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/densenet-121-tf) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 31 | Classification | [dla\-34](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/dla-34) | 6\.1368 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 32 | Action Recognition | [driver\-action\-recognition\-adas\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/driver-action-recognition-adas-0002) | | 
[driver\_actions.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/driver_actions.txt) | | [action\_recognition\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/action_recognition_demo/python) |
+| 33 | Detection | [efficientdet\-d0\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientdet-d0-tf) | 2\.54 | [coco\_91cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_91cl.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 34 | Detection | [efficientdet\-d1\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientdet-d1-tf) | 6\.1 | [coco\_91cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_91cl.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 35 | Classification | [efficientnet\-b0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientnet-b0) | 0\.819 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 36 | Classification | [efficientnet\-b0\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/efficientnet-b0-pytorch) | 0\.819 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 37 | Object Attributes | [emotions\-recognition\-retail\-0003](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/emotions-recognition-retail-0003) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/emotions-recognition-retail-0003.json) | [interactive\_face\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/interactive_face_detection_demo/cpp_gapi) |
+| 38 | Detection | [face\-detection\-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0200) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-0200.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 39 | Detection | [face\-detection\-0202](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0202) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-0202.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 40 | Detection | [face\-detection\-0204](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0204) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-0204.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 41 | Detection | [face\-detection\-0205](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0205) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-0205.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 42 | Detection | [face\-detection\-0206](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-0206) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-0206.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 43 | Detection | [face\-detection\-adas\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-adas-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-adas-0001.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 44 | Detection | [face\-detection\-retail\-0004](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-retail-0004) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-retail-0004.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 45 | Detection | [face\-detection\-retail\-0005](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/face-detection-retail-0005) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/face-detection-retail-0005.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 46 | Object Attributes | [facial\-landmarks\-35\-adas\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/facial-landmarks-35-adas-0002) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/facial-landmarks-35-adas-0002.json) | [gaze\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/gaze_estimation_demo/cpp_gapi) |
+| 47 | Object Attributes | [facial\-landmarks\-98\-detection\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/facial-landmarks-98-detection-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/facial-landmarks-98-detection-0001.json) | [gaze\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/gaze_estimation_demo/cpp) |
+| 48 | Detection | [faster\_rcnn\_inception\_resnet\_v2\_atrous\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/faster_rcnn_inception_resnet_v2_atrous_coco) | | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_91cl_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-image-info.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 49 | Detection | [faster\_rcnn\_resnet50\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/faster_rcnn_resnet50_coco) | | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_91cl_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-image-info.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 50 | Classification | [googlenet\-v1\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v1-tf) | 3\.016 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 51 | Classification | [googlenet\-v2\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v2-tf) | 4\.058 | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 52 | Classification | [googlenet\-v3](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v3) | 11\.469 | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 53 | Classification | [googlenet\-v3\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v3-pytorch) | 11\.469 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 54 | Classification | [googlenet\-v4\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/googlenet-v4-tf) | 24\.584 | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 55 | Classification | [hbonet\-0\.25](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/hbonet-0.25) | 0\.037 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 56 | Classification | [hbonet\-1\.0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/hbonet-1.0) | 0\.305 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
 | 57 | Head Pose Estimation | [head\-pose\-estimation\-adas\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/head-pose-estimation-adas-0001) | | | | [gaze\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/gaze_estimation_demo/cpp_gapi) |
-| 58 | Detection | [horizontal\-text\-detection\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/horizontal-text-detection-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/horizontal-text-detection-0001.json) | [text\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/text_detection_demo/cpp) |
-| 59 | Human Pose Estimation | [human\-pose\-estimation\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/human-pose-estimation-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/human-pose-estimation-0001.json) | [multi\_channel\_human\_pose\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/multi_channel_human_pose_estimation_demo/cpp) |
-| 60 | Classification | [inception\-resnet\-v2\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/inception-resnet-v2-tf) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 58 | Detection | [horizontal\-text\-detection\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/horizontal-text-detection-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/horizontal-text-detection-0001.json) | [text\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/text_detection_demo/cpp) |
+| 59 | Human Pose Estimation | [human\-pose\-estimation\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/human-pose-estimation-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/human-pose-estimation-0001.json) | [multi\_channel\_human\_pose\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/multi_channel_human_pose_estimation_demo/cpp) |
+| 60 | Classification | [inception\-resnet\-v2\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/inception-resnet-v2-tf) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
 | 61 | Instance Segmentation | [instance\-segmentation\-person\-0007](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-person-0007) | | | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
-| 62 | Instance Segmentation | [instance\-segmentation\-security\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-0002) | | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
-| 63 | Instance Segmentation | [instance\-segmentation\-security\-0091](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-0091) | | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
-| 64 | Instance Segmentation | [instance\-segmentation\-security\-0228](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-0228) | | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
-| 65 | Instance Segmentation | [instance\-segmentation\-security\-1039](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-1039) | | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
-| 66 | Instance Segmentation | [instance\-segmentation\-security\-1040](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-1040) | | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
-| 67 | Object Attributes | [landmarks\-regression\-retail\-0009](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/landmarks-regression-retail-0009) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/landmarks-regression-retail-0009.json) | [face\_recognition\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/face_recognition_demo/python) |
-| 68 | Optical Character Recognition | [license\-plate\-recognition\-barrier\-0007](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/license-plate-recognition-barrier-0007) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/license-plate-recognition-barrier-0007.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) |
-| 69 | Classification | [mixnet\-l](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mixnet-l) | 0\.565 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 70 | Classification | [mobilenet\-v1\-0\.25\-128](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v1-0.25-128) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 71 | Classification | [mobilenet\-v1\-1\.0\-224\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v1-1.0-224-tf) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 72 | Classification | [mobilenet\-v2\-1\.0\-224](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v2-1.0-224) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 73 | Classification | [mobilenet\-v2\-1\.4\-224](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v2-1.4-224) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 74 | Classification | [mobilenet\-v2\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v2-pytorch) | 0\.615 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 75 | Classification | [mobilenet\-v3\-large\-1\.0\-224\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v3-large-1.0-224-tf) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 76 | Classification | [mobilenet\-v3\-small\-1\.0\-224\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v3-small-1.0-224-tf) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 77 | Detection | [mobilenet\-yolo\-v4\-syg](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-yolo-v4-syg) | 65\.984 | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/mobilenet-yolo-v4-syg.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 78 | Classification | [nfnet\-f0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/nfnet-f0) | 24\.8053 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 79 | Classification | [open\-closed\-eye\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/open-closed-eye-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/open-closed-eye-0001.json) | [gaze\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/gaze_estimation_demo/cpp_gapi) |
-| 80 | Detection | [pedestrian\-and\-vehicle\-detector\-adas\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/pedestrian-and-vehicle-detector-adas-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/pedestrian-and-vehicle-detector-adas-0001.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 81 | Detection | [pedestrian\-detection\-adas\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/pedestrian-detection-adas-0002) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/pedestrian-detection-adas-0002.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 82 | Object Attributes | [person\-attributes\-recognition\-crossroad\-0230](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-attributes-recognition-crossroad-0230) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-attributes-recognition-crossroad-0230.json) | [crossroad\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/crossroad_camera_demo/cpp) |
-| 83 | Object Attributes | [person\-attributes\-recognition\-crossroad\-0234](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-attributes-recognition-crossroad-0234) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-attributes-recognition-crossroad-0234.json) | [crossroad\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/crossroad_camera_demo/cpp) |
-| 84 | Object Attributes | [person\-attributes\-recognition\-crossroad\-0238](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-attributes-recognition-crossroad-0238) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-attributes-recognition-crossroad-0238.json) | [crossroad\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/crossroad_camera_demo/cpp) |
-| 85 | Detection | [person\-detection\-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-0200) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-detection-0200.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 86 | Detection | [person\-detection\-0201](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-0201) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-detection-0201.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 87 | Detection | [person\-detection\-0202](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-0202) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-detection-0202.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 88 | Detection | [person\-detection\-0203](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-0203) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-detection-0203.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 89 | Detection | [person\-detection\-asl\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-asl-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-detection-0203.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 90 | Detection | [person\-detection\-retail\-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-retail-0013) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-detection-retail-0013.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 91 | Detection | [person\-vehicle\-bike\-detection\-2000](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2000) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2000.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 92 | Detection | [person\-vehicle\-bike\-detection\-2001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2001.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 93 | Detection | [person\-vehicle\-bike\-detection\-2002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2002) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2002.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 94 | Detection | [person\-vehicle\-bike\-detection\-2003](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2003) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2003.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 95 | Detection | [person\-vehicle\-bike\-detection\-2004](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2004) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2004.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 96 | Detection | [person\-vehicle\-bike\-detection\-crossroad\-0078](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-crossroad-0078) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-crossroad-0078.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 97 | Detection | [person\-vehicle\-bike\-detection\-crossroad\-1016](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-crossroad-1016) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-crossroad-1016.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 98 | Detection | [person\-vehicle\-bike\-detection\-crossroad\-yolov3\-1020](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-crossroad-yolov3-1020) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-crossroad-yolov3-1020.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 99 | Detection | [product\-detection\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/product-detection-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/product-detection-0001.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 100 | Classification | [regnetx\-3\.2gf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/regnetx-3.2gf) | 6\.3893 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 101 | Classification | [repvgg\-a0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/repvgg-a0) | 2\.7286 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 102 | Classification | [repvgg\-b1](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/repvgg-b1) | 23\.6472 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 103 | Classification | [repvgg\-b3](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/repvgg-b3) | 52\.4407 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 104 | Classification | [resnest\-50\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnest-50-pytorch) | 10\.8148 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 105 | Classification | [resnet\-18\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnet-18-pytorch) | 3\.637 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 106 | Classification | [resnet\-34\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnet-34-pytorch) | 7\.3409 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 107 | Classification | [resnet\-50\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnet-50-pytorch) | 8\.216 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 108 | Classification | [resnet\-50\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnet-50-tf) | 8\.2164 | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 109 | Classification | [resnet18\-xnor\-binary\-onnx\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/resnet18-xnor-binary-onnx-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/resnet18-xnor-binary-onnx-0001.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 110 | Classification | [resnet50\-binary\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/resnet50-binary-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/resnet50-binary-0001.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 111 | Detection | [retinanet\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/retinanet-tf) | | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 112 | Classification | [rexnet\-v1\-x1\.0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/rexnet-v1-x1.0) | 0\.8325 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 113 | Detection | [rfcn\-resnet101\-coco\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/rfcn-resnet101-coco-tf) | | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_91cl_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-image-info.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 114 | Classification | [shufflenet\-v2\-x1\.0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/shufflenet-v2-x1.0) | 0\.2957 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 115 | Human Pose Estimation | [single\-human\-pose\-estimation\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/single-human-pose-estimation-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/single-human-pose-estimation-0001.json) | [single\_human\_pose\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/single_human_pose_estimation_demo/python) |
-| 116 | Detection | [ssd\_mobilenet\_v1\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/ssd_mobilenet_v1_coco) | 2\.494 | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_91cl_bkgr.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 117 | Detection | [ssd\_mobilenet\_v1\_fpn\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/ssd_mobilenet_v1_fpn_coco) | 123\.309 | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_91cl_bkgr.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 118 | Detection | [ssdlite\_mobilenet\_v2](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/ssdlite_mobilenet_v2) | 1\.525 | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_91cl_bkgr.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 119 | Classification | [swin\-tiny\-patch4\-window7\-224](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/swin-tiny-patch4-window7-224) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
-| 120 | Object Attributes | [vehicle\-attributes\-recognition\-barrier\-0039](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-attributes-recognition-barrier-0039) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/vehicle-attributes-recognition-barrier-0039.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) |
-| 121 | Object Attributes | [vehicle\-attributes\-recognition\-barrier\-0042](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-attributes-recognition-barrier-0042) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/vehicle-attributes-recognition-barrier-0042.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) |
-| 122 | Detection | [vehicle\-detection\-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-detection-0200) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/vehicle-detection-0200.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 123 | Detection | [vehicle\-detection\-0201](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-detection-0201) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/vehicle-detection-0201.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 124 | Detection | [vehicle\-detection\-0202](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-detection-0202) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/vehicle-detection-0202.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 125 | Detection | [vehicle\-detection\-adas\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-detection-adas-0002) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/vehicle-detection-adas-0002.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 126 | Detection | [vehicle\-license\-plate\-detection\-barrier\-0106](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-license-plate-detection-barrier-0106) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/vehicle-license-plate-detection-barrier-0106.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) |
-| 127 | Detection | [vehicle\-license\-plate\-detection\-barrier\-0123](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/vehicle-license-plate-detection-barrier-0123) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/vehicle-license-plate-detection-barrier-0123.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) |
-| 128 | Action Recognition | [weld\-porosity\-detection\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/weld-porosity-detection-0001) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/intel/weld-porosity-detection-0001.json) | [action\_recognition\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/action_recognition_demo/python) |
-| 129 | Detection | [yolo\-v3\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/yolo-v3-tf) | 65\.984 | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v3-tf.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 130 | Detection | [yolo\-v3\-tiny\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/yolo-v3-tiny-tf) | 5\.582 | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v3-tiny-tf.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 131 | Detection | [yolo\-v4\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/yolo-v4-tf) | 129\.5567 | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v4-tf.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 132 | Detection | [yolo\-v4\-tiny\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/yolo-v4-tiny-tf) | 6\.9289 | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/public/yolo-v4-tiny-tf.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
-| 133 | Classification | [mobilenetv2\-7](https://github.com/onnx/models/tree/main/validated/vision/classification/mobilenet) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/onnx/mobilenetv2-7.json) | |
-| 134 | Classification | [emotion\-ferplus\-8](https://github.com/onnx/models/tree/main/validated/vision/body_analysis/emotion_ferplus) | | | [model\-proc](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/gstreamer/model_proc/onnx/emotion-ferplus-8.json) | |
-| 135 | Detection | [torchvision.models.detection.ssdlite320\_mobilenet\_v3\_large](https://pytorch.org/vision/main/models/generated/torchvision.models.detection.ssdlite320_mobilenet_v3_large.html) | 0\.583 | [coco\_80cl.txt](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer/samples/labels/coco_80cl.txt) | | |
+| 62 | Instance Segmentation | [instance\-segmentation\-security\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-0002) | | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
+| 63 | Instance Segmentation | [instance\-segmentation\-security\-0091](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-0091) | | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
+| 64 | Instance Segmentation | [instance\-segmentation\-security\-0228](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-0228) | | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
+| 65 | Instance Segmentation | [instance\-segmentation\-security\-1039](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-1039) | | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
+| 66 | Instance Segmentation | [instance\-segmentation\-security\-1040](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/instance-segmentation-security-1040) | | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | | [background\_subtraction\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/background_subtraction_demo/cpp_gapi) |
+| 67 | Object Attributes | [landmarks\-regression\-retail\-0009](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/landmarks-regression-retail-0009) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/landmarks-regression-retail-0009.json) | [face\_recognition\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/face_recognition_demo/python) |
+| 68 | Optical Character Recognition | [license\-plate\-recognition\-barrier\-0007](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/license-plate-recognition-barrier-0007) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/license-plate-recognition-barrier-0007.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) |
+| 69 | Classification | [mixnet\-l](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mixnet-l) | 0\.565 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 70 | Classification | [mobilenet\-v1\-0\.25\-128](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v1-0.25-128) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 71 | Classification | [mobilenet\-v1\-1\.0\-224\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v1-1.0-224-tf) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 72 | Classification | [mobilenet\-v2\-1\.0\-224](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v2-1.0-224) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 73 | Classification | [mobilenet\-v2\-1\.4\-224](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v2-1.4-224) | | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 74 | Classification | [mobilenet\-v2\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v2-pytorch) | 0\.615 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 75 | Classification | [mobilenet\-v3\-large\-1\.0\-224\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v3-large-1.0-224-tf) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 76 | Classification | [mobilenet\-v3\-small\-1\.0\-224\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-v3-small-1.0-224-tf) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 77 | Detection | [mobilenet\-yolo\-v4\-syg](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/mobilenet-yolo-v4-syg) | 65\.984 | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/mobilenet-yolo-v4-syg.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 78 | Classification | [nfnet\-f0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/nfnet-f0) | 24\.8053 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 79 | Classification | [open\-closed\-eye\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/open-closed-eye-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/open-closed-eye-0001.json) | [gaze\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/gaze_estimation_demo/cpp_gapi) |
+| 80 | Detection | [pedestrian\-and\-vehicle\-detector\-adas\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/pedestrian-and-vehicle-detector-adas-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/pedestrian-and-vehicle-detector-adas-0001.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 81 | Detection | [pedestrian\-detection\-adas\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/pedestrian-detection-adas-0002) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/pedestrian-detection-adas-0002.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 82 | Object Attributes | [person\-attributes\-recognition\-crossroad\-0230](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-attributes-recognition-crossroad-0230) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-attributes-recognition-crossroad-0230.json) | [crossroad\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/crossroad_camera_demo/cpp) |
+| 83 | Object Attributes | [person\-attributes\-recognition\-crossroad\-0234](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-attributes-recognition-crossroad-0234) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-attributes-recognition-crossroad-0234.json) | [crossroad\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/crossroad_camera_demo/cpp) |
+| 84 | Object Attributes | [person\-attributes\-recognition\-crossroad\-0238](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-attributes-recognition-crossroad-0238) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-attributes-recognition-crossroad-0238.json) | [crossroad\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/crossroad_camera_demo/cpp) |
+| 85 | Detection | [person\-detection\-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-0200) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-detection-0200.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 86 | Detection | [person\-detection\-0201](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-0201) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-detection-0201.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 87 | Detection | [person\-detection\-0202](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-0202) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-detection-0202.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 88 | Detection | [person\-detection\-0203](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-0203) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-detection-0203.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 89 | Detection | [person\-detection\-asl\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-asl-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-detection-0203.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 90 | Detection | [person\-detection\-retail\-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-detection-retail-0013) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-detection-retail-0013.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 91 | Detection | [person\-vehicle\-bike\-detection\-2000](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2000) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2000.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 92 | Detection | [person\-vehicle\-bike\-detection\-2001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2001.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 93 | Detection | [person\-vehicle\-bike\-detection\-2002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2002) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2002.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 94 | Detection | [person\-vehicle\-bike\-detection\-2003](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2003) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2003.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 95 | Detection | [person\-vehicle\-bike\-detection\-2004](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-2004) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-2004.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 96 | Detection | [person\-vehicle\-bike\-detection\-crossroad\-0078](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-crossroad-0078) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-crossroad-0078.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 97 | Detection | [person\-vehicle\-bike\-detection\-crossroad\-1016](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-crossroad-1016) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-crossroad-1016.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 98 | Detection | [person\-vehicle\-bike\-detection\-crossroad\-yolov3\-1020](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/person-vehicle-bike-detection-crossroad-yolov3-1020) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/person-vehicle-bike-detection-crossroad-yolov3-1020.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 99 | Detection | [product\-detection\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/product-detection-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/product-detection-0001.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) |
+| 100 | Classification | [regnetx\-3\.2gf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/regnetx-3.2gf) | 6\.3893 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 101 | Classification | [repvgg\-a0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/repvgg-a0) | 2\.7286 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 102 | Classification | [repvgg\-b1](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/repvgg-b1) | 23\.6472 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 103 | Classification | [repvgg\-b3](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/repvgg-b3) | 52\.4407 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 104 | Classification | [resnest\-50\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnest-50-pytorch) | 10\.8148 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) |
+| 105 |
Classification | [resnet\-18\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnet-18-pytorch) | 3\.637 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 106 | Classification | [resnet\-34\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnet-34-pytorch) | 7\.3409 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 107 | Classification | [resnet\-50\-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnet-50-pytorch) | 8\.216 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 108 | Classification | [resnet\-50\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/resnet-50-tf) | 8\.2164 | [imagenet\_2012\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 109 | Classification | [resnet18\-xnor\-binary\-onnx\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/resnet18-xnor-binary-onnx-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/resnet18-xnor-binary-onnx-0001.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 110 | Classification | [resnet50\-binary\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/resnet50-binary-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/resnet50-binary-0001.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 111 | Detection | [retinanet\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/retinanet-tf) | | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 112 | Classification | [rexnet\-v1\-x1\.0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/rexnet-v1-x1.0) | 0\.8325 | 
[imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 113 | Detection | [rfcn\-resnet101\-coco\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/rfcn-resnet101-coco-tf) | | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_91cl_bkgr.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-image-info.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 114 | Classification | [shufflenet\-v2\-x1\.0](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/shufflenet-v2-x1.0) | 0\.2957 | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 115 | Human Pose Estimation | [single\-human\-pose\-estimation\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/single-human-pose-estimation-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/single-human-pose-estimation-0001.json) | [single\_human\_pose\_estimation\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/single_human_pose_estimation_demo/python) | +| 116 | Detection | [ssd\_mobilenet\_v1\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/ssd_mobilenet_v1_coco) | 2\.494 | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_91cl_bkgr.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 117 | Detection | [ssd\_mobilenet\_v1\_fpn\_coco](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/ssd_mobilenet_v1_fpn_coco) | 123\.309 | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_91cl_bkgr.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 118 | Detection | [ssdlite\_mobilenet\_v2](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/ssdlite_mobilenet_v2) | 1\.525 | [coco\_91cl\_bkgr.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_91cl_bkgr.txt) | | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 119 | Classification | [swin\-tiny\-patch4\-window7\-224](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/swin-tiny-patch4-window7-224) | | [imagenet\_2012\.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/imagenet_2012.txt) | 
[model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json) | [classification\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/classification_demo/python) | +| 120 | Object Attributes | [vehicle\-attributes\-recognition\-barrier\-0039](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-attributes-recognition-barrier-0039) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/vehicle-attributes-recognition-barrier-0039.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) | +| 121 | Object Attributes | [vehicle\-attributes\-recognition\-barrier\-0042](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-attributes-recognition-barrier-0042) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/vehicle-attributes-recognition-barrier-0042.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) | +| 122 | Detection | [vehicle\-detection\-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-detection-0200) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/vehicle-detection-0200.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 123 | Detection | [vehicle\-detection\-0201](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-detection-0201) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/vehicle-detection-0201.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 124 | Detection | [vehicle\-detection\-0202](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-detection-0202) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/vehicle-detection-0202.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 125 | Detection | [vehicle\-detection\-adas\-0002](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-detection-adas-0002) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/vehicle-detection-adas-0002.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 126 | Detection | [vehicle\-license\-plate\-detection\-barrier\-0106](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/vehicle-license-plate-detection-barrier-0106) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/vehicle-license-plate-detection-barrier-0106.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) | +| 127 | Detection | 
[vehicle\-license\-plate\-detection\-barrier\-0123](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/vehicle-license-plate-detection-barrier-0123) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/vehicle-license-plate-detection-barrier-0123.json) | [security\_barrier\_camera\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/security_barrier_camera_demo/cpp) | +| 128 | Action Recognition | [weld\-porosity\-detection\-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/intel/weld-porosity-detection-0001) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/intel/weld-porosity-detection-0001.json) | [action\_recognition\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/action_recognition_demo/python) | +| 129 | Detection | [yolo\-v3\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/yolo-v3-tf) | 65\.984 | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v3-tf.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 130 | Detection | [yolo\-v3\-tiny\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/yolo-v3-tiny-tf) | 5\.582 | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v3-tiny-tf.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 131 | Detection | [yolo\-v4\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/yolo-v4-tf) | 129\.5567 | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v4-tf.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 132 | Detection | [yolo\-v4\-tiny\-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master//models/public/yolo-v4-tiny-tf) | 6\.9289 | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v4-tiny-tf.json) | [object\_detection\_demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master//demos/object_detection_demo/cpp) | +| 133 | Classification | [mobilenetv2\-7](https://github.com/onnx/models/tree/main/validated/vision/classification/mobilenet) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/onnx/mobilenetv2-7.json) | | +| 134 | Classification | [emotion\-ferplus\-8](https://github.com/onnx/models/tree/main/validated/vision/body_analysis/emotion_ferplus) | | | [model\-proc](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/gstreamer/model_proc/onnx/emotion-ferplus-8.json) | | +| 135 | Detection | 
[torchvision.models.detection. ssdlite320\_mobilenet\_v3\_large](https://pytorch.org/vision/main/models/generated/torchvision.models.detection.ssdlite320_mobilenet_v3_large.html) | 0\.583 | [coco\_80cl.txt](https://github.com/open-edge-platform/dlstreamer/blob/master/samples/labels/coco_80cl.txt) | | | ## Legal Information diff --git a/samples/gstreamer/README.md b/samples/gstreamer/README.md index 5a983fd0..0a306310 100644 --- a/samples/gstreamer/README.md +++ b/samples/gstreamer/README.md @@ -6,7 +6,7 @@ Samples separated into several categories 1. gst_launch command-line samples (samples construct GStreamer pipeline via [gst-launch-1.0](https://gstreamer.freedesktop.org/documentation/tools/gst-launch.html) command-line utility) * [Face Detection And Classification Sample](./gst_launch/face_detection_and_classification/README.md) - constructs object detection and classification pipeline example with [gvadetect](../../../dl-streamer/docs/source/elements/gvadetect.md) and [gvaclassify](../../../dl-streamer/docs/source/elements/gvaclassify.md) elements to detect faces and estimate age, gender, emotions and landmark points * [Audio Event Detection Sample ](./gst_launch/audio_detect/README.md) - constructs audio event detection pipeline example with [gvaaudiodetect](../../../dl-streamer/docs/source/elements/gvaaudiodetect.md) element and uses [gvametaconvert](../../../dl-streamer/docs/source/elements/gvametaconvert.md), [gvametapublish](../../../dl-streamer/docs/source/elements/gvametapublish.md) elements to convert audio event metadata with inference results into JSON format and to print on standard out - * [Audio Transcription Sample](./gst_launch/audio_transcribe/README.md) - performs audio transcription using OpenVino GenAI model (whisper) with [gvaaudiotranscribe](../../../dl-streamer/docs/source/elements/gvaaudiotranscribe.md) + * [Audio Transcription Sample](./gst_launch/audio_transcribe/README.md) - performs audio transcription using OpenVino GenAI model (whisper) with [gvaaudiotranscribe](../../../dl-streamer/docs/source/elements/gvaaudiotranscribe.md) * [Vehicle and Pedestrian Tracking Sample](./gst_launch/vehicle_pedestrian_tracking/README.md) - demonstrates object tracking via [gvatrack](../../../dl-streamer/docs/source/elements/gvatrack.md) element * [Human Pose Estimation Sample](./gst_launch/human_pose_estimation/README.md) - demonstrates human pose estimation with full-frame inference via [gvaclassify](../../../dl-streamer/docs/source/elements/gvaclassify.md) element * [Metadata Publishing Sample](./gst_launch/metapublish/README.md) - demonstrates how [gvametaconvert](../../../dl-streamer/docs/source/elements/gvametaconvert.md) and [gvametapublish](../../../dl-streamer/docs/source/elements/gvametapublish.md) elements are used for converting metadata with inference results into JSON format and publishing to file or Kafka/MQTT message bus @@ -25,7 +25,7 @@ Samples separated into several categories 2. C++ samples * [Draw Face Attributes C++ Sample](./cpp/draw_face_attributes/README.md) - constructs pipeline and sets "C" callback to access frame metadata and visualize inference results 3. 
3. Python samples
-   * [Hello DLStreamer Sample](./python/hello_dlstreamer/README.md) - constructs an object detection pipeline, add logic to analyze metadata and count objects and visualize results along with object count summary in a local window
+   * [Hello DL Streamer Sample](./python/hello_dlstreamer/README.md) - constructs an object detection pipeline, adds logic to analyze metadata and count objects, and visualizes the results along with an object count summary in a local window
   * [Draw Face Attributes Python Sample](./python/draw_face_attributes/README.md) - constructs pipeline and sets Python callback to access frame metadata and visualize inference results
   * [Open Close Valve Sample](./python/open_close_valve/README.md) - constructs pipeline with two sinks. One of them has a [GStreamer valve element](https://gstreamer.freedesktop.org/documentation/coreelements/valve.html?gi-language=python), which is managed based on the object detection result and opened/closed by a callback.
4. Benchmark

diff --git a/samples/gstreamer/python/hello_dlstreamer/README.md b/samples/gstreamer/python/hello_dlstreamer/README.md
index 22bec2f7..57adbb9e 100644
--- a/samples/gstreamer/python/hello_dlstreamer/README.md
+++ b/samples/gstreamer/python/hello_dlstreamer/README.md
@@ -1,15 +1,16 @@
-# Hello DLStreamer
+# Hello DL Streamer

-This sample demonstrates how to build a Python application that constructs and executes a DLStreamer pipeline.
+This sample demonstrates how to build a Python application that constructs and executes a DL Streamer pipeline.

> filesrc -> decodebin3 -> gvadetect -> gvawatermark -> autovideosink

-The individual pipeline stages implement the following functions:
+The individual pipeline stages implement the following functions:
+
* __filesrc__ element reads video stream from a local file
-* __decodebin3__ element decodes video stream into individual frames
+* __decodebin3__ element decodes video stream into individual frames
* __gvadetect__ element runs AI inference object detection for each frame
* __gvawatermark__ element draws (overlays) object bounding boxes on top of analyzed frames
-* __autovideosink__ element renders video stream on local display
+* __autovideosink__ element renders video stream on local display

In addition, the sample uses 'queue' and 'videoconvert' elements to adapt the interface between functional stages. The resulting behavior is similar to [hello_dlstreamer.sh](../../scripts/hello_dlstreamer.sh) using command line.
@@ -17,7 +18,7 @@ In addition, the sample uses 'queue' and 'videoconvert' elements to adapt interf
### STEP 1 - Pipeline Construction

-First, the application creates a GStreamer `pipeline` object.
+First, the application creates a GStreamer `pipeline` object.
The sample code demonstrates two methods for pipeline creation:
* OPTION A: Use `gst_parse_launch` method to construct the pipeline from a string representation. This is the default method. It uses a single API call to create a set of elements and links them together into a pipeline.
@@ -34,19 +35,19 @@ The sample code demonstrates two methods for pipeline creation:
```
Both methods are equivalent and produce the same output pipeline.

-### STEP 2 - Adding custom probe.
+### STEP 2 - Adding custom probe

-The application registers a custom callback (GStreamer `probe`) on the sink pad of `gvawatermark` element. The GStreamer pipeline will invoke the callback function on each buffer pushed to the sink pad.
+The application registers a custom callback (GStreamer `probe`) on the sink pad of the `gvawatermark` element. The GStreamer pipeline will invoke the callback function on each buffer pushed to the sink pad.
```code
watermarksinkpad = watermark.get_static_pad("sink")
watermarksinkpad.add_probe(watermark_sink_pad_buffer_probe, ...)
```

-In this example, the callback function inspects `GstAnalytics` metadata produced by the `gvadetect` element. The callback counts the number of detected objects in each category, and attaches a custom classification string to the processed frame.
+In this example, the callback function inspects `GstAnalytics` metadata produced by the `gvadetect` element. The callback counts the number of detected objects in each category and attaches a custom classification string to the processed frame.
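To make the probe mechanics concrete, below is a minimal, hypothetical sketch of what such a callback could look like. The callback body and the explicit `Gst.PadProbeType.BUFFER` mask are illustrative assumptions, not the sample's exact code; the sample's actual metadata handling is more elaborate.

```code
# Minimal sketch of a buffer-probe callback (illustrative, not the sample's exact code).
# Assumes a PyGObject/GStreamer 1.x environment with Gst already initialized.
from gi.repository import Gst

def watermark_sink_pad_buffer_probe(pad, info):
    buffer = info.get_buffer()  # the video frame pushed to the gvawatermark sink pad
    if buffer is None:
        return Gst.PadProbeReturn.OK
    # ... inspect GstAnalytics metadata here: count detected objects per category
    # and attach a custom classification string to the frame ...
    return Gst.PadProbeReturn.OK  # let the buffer continue downstream

# Registration with an explicit buffer-probe mask (the sample abbreviates this call):
watermarksinkpad.add_probe(Gst.PadProbeType.BUFFER, watermark_sink_pad_buffer_probe)
```

Returning `Gst.PadProbeReturn.OK` keeps the stream flowing while the metadata is read; the probe observes buffers without changing the pipeline topology.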
-### STEP 3 - Pipeline execution.
+### STEP 3 - Pipeline execution

-The last step is to run the pipeline. The application sets the pipeline state to `PLAYING` and implements the message processing loop. Once the input video file is fully replayed, the `filesrc` element will send end-of-stream message.
+The last step is to run the pipeline. The application sets the pipeline state to `PLAYING` and implements the message processing loop. Once the input video file is fully replayed, the `filesrc` element will send an end-of-stream message.
```code
pipeline.set_state(Gst.State.PLAYING)
terminate = False
@@ -59,7 +60,7 @@ pipeline.set_state(Gst.State.NULL)

## Running
The sample application requires two local files with input video and an object detection model. Here is an example command line to download sample assets.
-Please note the model download step may take up to several minutes as it includes model quantization to INT8.
+Please note that the model download step may take up to several minutes, as it includes model quantization to INT8.

```sh
cd 
@@ -67,7 +68,7 @@ export MODELS_PATH=${PWD}
wget https://videos.pexels.com/video-files/1192116/1192116-sd_640_360_30fps.mp4
../../../download_public_models.sh yolo11n coco128
```
-Once assets are downloaded to the local disk, the sample application can be started as any other regular python application.
+Once assets are downloaded to the local disk, the sample application can be started as any other regular Python application.

```sh
python3 ./hello_dlstreamer.py 1192116-sd_640_360_30fps.mp4 public/yolo11n/INT8/yolo11n.xml
```
@@ -76,4 +77,3 @@ The sample opens a window and renders a video stream along with object detection results.

## See also
* [Samples overview](../../README.md)
-