Commit fcf54d6 ("restore docs, tests"), 1 parent 8564632

64 files changed: +6788 / -0 lines changed


dockerfiles/README.md

Lines changed: 107 additions & 0 deletions
@@ -0,0 +1,107 @@
# Dockerfiles with [Intel® Distribution of OpenVINO™ toolkit](https://github.com/openvinotoolkit/openvino)

This repository folder contains Dockerfiles to build a Docker image with the Intel® Distribution of OpenVINO™ toolkit.
You can use the Docker CI framework to build an image; please follow [Get Started with DockerHub CI for Intel® Distribution of OpenVINO™ toolkit](../get-started.md).

1. [Supported Operating Systems for Docker image](#supported-operating-systems-for-docker-image)
2. [Supported devices and distributions](#supported-devices-and-distributions)
3. [Where to get OpenVINO package](#where-to-get-openvino-package)
4. [How to build](#how-to-build)
5. [Prebuilt images](#prebuilt-images)
6. [How to run a container](#how-to-run-a-container)

## Supported Operating Systems for Docker image

- `ubuntu18` folder (Ubuntu* 18.04 LTS)
- `ubuntu20` folder (Ubuntu* 20.04 LTS)
- `rhel8` folder (RHEL* 8)
- `winserver2019` folder (Windows* Server Core base OS LTSC 2019)
- `windows20h2` folder (Windows* OS 20H2)

*Note*: the `dl-workbench` folder contains Dockerfiles for OpenVINO™ Deep Learning Workbench.

## Supported devices and distributions

![OpenVINO Dockerfile Name](../docs/img/dockerfile_name.png)

**Devices:**
- CPU
- GPU
- VPU (NCS2)
- HDDL (VPU HDDL) (_Prerequisite_: run the HDDL daemon on the host machine; follow the [configuration guide for HDDL device](../install_guide_vpu_hddl.md))

See the OpenVINO documentation for [supported devices](https://docs.openvino.ai/latest/openvino_docs_IE_DG_supported_plugins_Supported_Devices.html).

**Distributions:**

- **runtime**: IE core, nGraph, plugins
- **dev**: IE core, nGraph, plugins, samples, Python dev tools: Model Optimizer, Post-training Optimization Tool, Accuracy Checker, Open Model Zoo tools (downloader, converter), OpenCV
- **base** (only for CPU): IE core, nGraph

You can generate a Dockerfile with your own settings; please follow the [DockerHub CI documentation](../get-started.md).
* The _runtime_ and _dev_ distributions are based on the archive package of the OpenVINO product. You can simply remove unnecessary parts.
* The _base_ distribution is created by the [OpenVINO™ Deployment Manager](https://docs.openvino.ai/latest/openvino_docs_install_guides_deployment_manager_tool.html).

## Where to get OpenVINO package

You can get OpenVINO distribution packages (runtime, dev) directly from the [public storage](https://storage.openvinotoolkit.org/repositories/openvino/packages/).
For example:
* take the 2022.2 > linux > ubuntu20 `l_openvino_toolkit_ubuntu20_2022.2.0.7713.af16ea1d79a_x86_64.tgz` package.
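
The download URL follows the storage layout above (version, then OS family, then package name), matching the `package_url` used in the build example below. As a small sketch, the URL for the example package can be composed like this (the variable names are illustrative):

```shell
# Compose the public-storage download URL for an OpenVINO archive package.
# The version/linux/package layout is taken from the examples in this document.
BASE="https://storage.openvinotoolkit.org/repositories/openvino/packages"
VERSION="2022.2"
PACKAGE="l_openvino_toolkit_ubuntu20_2022.2.0.7713.af16ea1d79a_x86_64.tgz"
URL="${BASE}/${VERSION}/linux/${PACKAGE}"
echo "$URL"
```

The archive can then be fetched with, for example, `curl -O "$URL"`.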

## How to build

**Note:** Please use the Docker CI framework release version corresponding to the version of OpenVINO™ toolkit that you need to build.

* Base image with CPU only:

You can use the Docker CI framework to build an image; please follow [Get Started with DockerHub CI for Intel® Distribution of OpenVINO™ toolkit](../get-started.md).

```bash
python3 docker_openvino.py build --file "dockerfiles/ubuntu18/openvino_c_base_2022.2.0.dockerfile" -os ubuntu18 -dist base -p 2022.2.0
```

----------------

* Dev/runtime image:

You can use the Docker CI framework to build an image; please follow [Get Started with DockerHub CI for Intel® Distribution of OpenVINO™ toolkit](../get-started.md).

```bash
python3 docker_openvino.py build --file "dockerfiles/ubuntu18/openvino_cgvh_dev_2022.2.0.dockerfile" -os ubuntu18 -dist dev -p 2022.2.0
```
For the runtime distribution, set the appropriate `-dist` and `--file` options.
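
As a sketch, the runtime counterpart of the dev command above could look like the following. The runtime dockerfile name here is an assumption patterned on the dev one, so check the `dockerfiles/ubuntu18` folder for the exact file name:

```shell
# Hypothetical runtime variant of the build command above; the dockerfile
# name follows the dev naming pattern and must be verified against the repo.
OS=ubuntu18
DIST=runtime
VER=2022.2.0
CMD="python3 docker_openvino.py build --file \"dockerfiles/${OS}/openvino_cgvh_${DIST}_${VER}.dockerfile\" -os ${OS} -dist ${DIST} -p ${VER}"
echo "$CMD"
```

The command is only printed here; running it requires a checkout of the Docker CI framework.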

Alternatively, you can build via Docker Engine directly, but then you need to specify the `package_url` build argument (see the [Where to get OpenVINO package](#where-to-get-openvino-package) section):
```bash
docker build --build-arg package_url=https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.2/linux/l_openvino_toolkit_ubuntu18_2022.2.0.7713.af16ea1d79a_x86_64.tgz \
-t ubuntu18_dev:2022.2.0 -f dockerfiles/ubuntu18/openvino_cgvh_dev_2022.2.0.dockerfile .
```
----------------

* Custom image with CPU, iGPU, VPU support:

You can use Dockerfiles from the `build_custom` folders to build a custom version of OpenVINO™ from source code for development. To learn more, follow:
* [Build custom Intel® Distribution of OpenVINO™ toolkit Docker image on Ubuntu 18](ubuntu18/build_custom/README.md)
* [Build custom Intel® Distribution of OpenVINO™ toolkit Docker image on Ubuntu 20](ubuntu20/build_custom/README.md)

## Prebuilt images

Prebuilt images are available on:
- [Docker Hub](https://hub.docker.com/u/openvino)
- [Red Hat* Quay.io](https://quay.io/organization/openvino)
- [Red Hat* Ecosystem Catalog (runtime image)](https://catalog.redhat.com/software/containers/intel/openvino-runtime/606ff4d7ecb5241699188fb3)
- [Red Hat* Ecosystem Catalog (development image)](https://catalog.redhat.com/software/containers/intel/openvino-dev/613a450dc9bc35f21dc4a1f7)
- [Azure* Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/intel_corporation.openvino)

## How to run a container

Please follow the [Run a container](../get-started.md#run-a-container) section in the DockerHub CI getting started guide.

## Documentation

* [Install Intel® Distribution of OpenVINO™ toolkit for Linux* from a Docker* Image](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_docker_linux.html)
* [Install Intel® Distribution of OpenVINO™ toolkit for Windows* from a Docker* Image](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_docker_windows.html)
* [Official Dockerfile reference](https://docs.docker.com/engine/reference/builder/)

---
\* Other names and brands may be claimed as the property of others.

docs/accelerators.md

Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
# Using OpenVINO™ Toolkit containers with GPU accelerators

Containers can be used to execute inference operations with GPU acceleration or with the [virtual devices](https://docs.openvino.ai/nightly/openvino_docs_Runtime_Inference_Modes_Overview.html).

There are the following prerequisites:

- Use a Linux kernel with GPU models supported by your integrated GPU or discrete GPU. Check the documentation at https://dgpu-docs.intel.com/driver/kernel-driver-types.html.
On a Linux host, confirm that a character device /dev/dri is available.

- On Windows Subsystem for Linux (WSL2), refer to the guidelines at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#
Note that on WSL2, the character device `/dev/dxg` must be present.

- The Docker image for the container must include GPU runtime drivers, as described at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#

Once the host and a preconfigured Docker engine are up and running, use the `docker run` parameters described below.

## Linux

The command below should report both CPU and GPU devices available for inference execution:
```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE ./samples/cpp/samples_bin/hello_query_device
```

`--device /dev/dri` passes the GPU device to the container.
`--group-add` adds the group that owns the GPU device to the container user, granting permission to use it.
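
The group id passed to `--group-add` can also be resolved from `/etc/group`. A minimal sketch, parsing a sample entry (the `render:x:134:` line and gid 134 mirror the example used elsewhere in these docs and are an assumption about your host):

```shell
# Resolve a group id from an /etc/group-style entry.
# On a real host you would run: getent group render | cut -d: -f3
# A sample line stands in here, since the render gid differs per host.
sample_entry="render:x:134:"
render_gid=$(printf '%s' "$sample_entry" | cut -d: -f3)
echo "$render_gid"
# → 134
```

The resolved value can then be passed as `--group-add=$render_gid`.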

## Windows Subsystem for Linux

On WSL2, use the following command to start the container:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device=/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl $IMAGE ./samples/cpp/samples_bin/hello_query_device
```
`--device=/dev/dxg` passes the virtual GPU device to the container.
`-v /usr/lib/wsl:/usr/lib/wsl` mounts the required WSL libraries into the container.

## Usage example

Run the benchmark app on the GPU accelerator with the `-use_device_mem` parameter, showcasing inference without copies between CPU and GPU memory:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d GPU -use_device_mem -inference_only=false"
```
In the benchmark app, the `-use_device_mem` parameter employs `ov::RemoteTensor` as the input buffer. It demonstrates the gain from avoiding data copies between the host and the GPU device.

Run the benchmark app using both GPU and CPU. The load will be distributed across both device types:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d MULTI:GPU,CPU"
```

**Check also:**

[Prebuilt images](#prebuilt-images)

[Working with OpenVINO Containers](docs/containers.md)

[Generating dockerfiles and building the images in Docker_CI tools](docs/openvino_docker.md)

[OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)

docs/configure_gpu_ubuntu20.md

Lines changed: 116 additions & 0 deletions
@@ -0,0 +1,116 @@
# Configuration Guide for the Intel® Graphics Compute Runtime for OpenCL™ on Ubuntu* 20.04

Intel® Graphics Compute Runtime for OpenCL™ driver components are required to use the GPU plugin and write custom layers for Intel® Integrated Graphics.
The driver is installed in the OpenVINO™ Docker image, but you need to activate it in the container for a non-root user if you have Ubuntu 20.04 on your host.
To access GPU capabilities, you need the correct permissions on the host and in the Docker container.
Run the following command to list the group that owns the render nodes on your host:

```bash
$ stat -c "group_name=%G group_id=%g" /dev/dri/render*
group_name=render group_id=134
```

OpenVINO™ Docker images do not contain a render group for the openvino non-root user, because the render group does not have a fixed group ID, unlike the video group.
Choose one of the options below to set up access to a GPU device from a container.

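The numeric group id printed above is what the `--group-add` options in these docs need. A small sketch of splitting it off the `stat` output (the sample string stands in for a live `stat` call, since the real gid varies per host):

```shell
# Extract the numeric group id from the stat output shown above.
# "group_name=render group_id=134" is the sample value; your host will differ.
stat_out="group_name=render group_id=134"
render_gid="${stat_out##*group_id=}"
echo "$render_gid"
# → 134
```

On a real host, replace the sample string with `$(stat -c "group_name=%G group_id=%g" /dev/dri/renderD128)`.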
## 1. Configure a Host Non-Root User to Use a GPU Device from an OpenVINO Container on Ubuntu 20 Host [RECOMMENDED]

To run an OpenVINO container with the default non-root user (openvino) with access to a GPU device, you need a non-root user on the host with the same ID as the `openvino` user inside the container.
By default, the `openvino` user has user ID 1000.
Create a non-root user, for example, host_openvino, on the host with the same user ID and access to the video, render, and docker groups:

```bash
$ useradd -u 1000 -G users,video,render,docker host_openvino
```

Now you can use the OpenVINO container with GPU access under the non-root user.

```bash
$ docker run -it --rm --device /dev/dri <image_name>
```

## 2. Configure a Container to Use a GPU Device on Ubuntu 20 Host Under a Non-Root User

To run an OpenVINO container as non-root with access to a GPU device, specify the render group ID from your host:

```bash
$ docker run -it --rm --device /dev/dri --group-add=<render_group_id_on_host> <image_name>
```

For example, get the render group ID from your host directly in the command:

```bash
$ docker run -it --rm --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) <image_name>
```

Now you can use the container with GPU access under the non-root user.

## 3. Configure an Image to Use a GPU Device on Ubuntu 20 Host and Save It

To run an OpenVINO container as root with access to a GPU device, use the command below:

```bash
$ docker run -it --rm --user root --device /dev/dri --name my_container <image_name>
```

Check the groups for the GPU device in the container:

```bash
$ ls -l /dev/dri/
```

The output should look like the following:

```bash
crw-rw---- 1 root video 226, 0 Feb 20 14:28 card0
crw-rw---- 1 root 134 226, 128 Feb 20 14:28 renderD128
```

Create a render group in the container with the same group ID as on your host:

```bash
$ addgroup --gid 134 render
```

Check the groups for the GPU device in the container again:

```bash
$ ls -l /dev/dri/
```

The output should look like the following:

```bash
crw-rw---- 1 root video 226, 0 Feb 20 14:28 card0
crw-rw---- 1 root render 226, 128 Feb 20 14:28 renderD128
```

Add the non-root user to the render group:

```bash
$ usermod -a -G render openvino
$ id openvino
```

Check that the group now contains the user:

```bash
uid=1000(openvino) gid=1000(openvino) groups=1000(openvino),44(video),100(users),134(render)
```

Then log in again as the non-root user:

```bash
$ su openvino
```

Now you can use the container with GPU access under the non-root user, or you can save that container as an image and push it to your registry.
Open another terminal and run the commands below:

```bash
$ docker commit my_container my_image
$ docker run -it --rm --device /dev/dri --user openvino my_image
```

---
\* Other names and brands may be claimed as the property of others.

docs/containers.md

Lines changed: 74 additions & 0 deletions
@@ -0,0 +1,74 @@
# Working with OpenVINO™ Toolkit Images

## Runtime images

The runtime images include the OpenVINO toolkit with all dependencies required to run inference operations, along with the OpenVINO API in both Python and C++.
No development tools are installed.
Here are examples of how the runtime image can be used:

```
export IMAGE=openvino/ubuntu20_runtime:2023.0.0
```

### Building and using the OpenVINO samples

```
docker run -it -u root $IMAGE bash -c "/opt/intel/openvino/install_dependencies/install_openvino_dependencies.sh -y -c dev && ./samples/cpp/build_samples.sh && \
/root/openvino_cpp_samples_build/intel64/Release/hello_query_device"
```

### Using Python samples
```
docker run -it $IMAGE python3 samples/python/hello_query_device/hello_query_device.py
```

## Development images

Dev images include the OpenVINO runtime components and the development tools as well, providing a complete environment for experimenting with OpenVINO.
Examples of how the development container can be used are below:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
```

### Listing Open Model Zoo models
```
docker run $IMAGE omz_downloader --print_all
```

### Download a model
```
mkdir model
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_downloader --name mozilla-deepspeech-0.6.1 -o /tmp/model
```

### Convert the model to IR format
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_converter --name mozilla-deepspeech-0.6.1 -d /tmp/model -o /tmp/model/converted/
```
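
Because the download and convert steps bind-mount the host `model` directory at `/tmp/model`, the converted IR files also land on the host. A minimal sketch of that path mapping (the variable names are illustrative, not part of the toolkit):

```shell
# Map a container path under the bind mount back to its host path.
# Mount used above: -v $(pwd)/model:/tmp/model
host_root="$(pwd)/model"
container_root="/tmp/model"
container_path="/tmp/model/converted/public/mozilla-deepspeech-0.6.1/FP32/mozilla-deepspeech-0.6.1.xml"
# Strip the container-side mount prefix and prepend the host-side one.
host_path="${host_root}${container_path#"$container_root"}"
echo "$host_path"
```

This is why later commands can keep referring to `/tmp/model/converted/...` inside the container while the files remain inspectable under `./model` on the host.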

### Run the benchmark app to test the model performance
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE benchmark_app -m /tmp/model/converted/public/mozilla-deepspeech-0.6.1/FP32/mozilla-deepspeech-0.6.1.xml
```

### Run a demo from the Open Model Zoo
```
docker run $IMAGE bash -c "git clone --depth=1 --recurse-submodules --shallow-submodules https://github.com/openvinotoolkit/open_model_zoo.git && \
cd open_model_zoo/demos/classification_demo/python && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.bin && \
curl -O https://raw.githubusercontent.com/openvinotoolkit/model_server/main/demos/common/static/images/zebra.jpeg && \
python3 classification_demo.py -m resnet50-binary-0001.xml -i zebra.jpeg --labels ../../../data/dataset_classes/imagenet_2012.txt --no_show -nstreams 1 -r"
```

**Check also:**

[Prebuilt images](#prebuilt-images)

[Deployment with GPU accelerator](docs/accelerators.md)

[Generating dockerfiles and building the images in Docker_CI tools](docs/openvino_docker.md)

[OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)