Commit e001ea9: Update Readme.md

Added documentation for the Yolov4 inference model support.
hadikoub authored Jun 23, 2020
1 parent 196a5fa commit e001ea9
Showing 1 changed file with 10 additions and 6 deletions.
# YOLOv3-v4 Darknet GPU Inference API

This is a repository for an object detection inference API using the Yolov3 Darknet framework.

This repository also supports state-of-the-art Yolov4 models.

This repo is based on [AlexeyAB darknet repository](https://github.com/AlexeyAB/darknet).

The inference REST API works on GPU and is supported only on Linux operating systems.

Models trained using our Yolov3 and Yolov4 training automation repositories can be deployed in this API. Several object detection models can be loaded and used at the same time.

![predict image](./docs/4.gif)
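Since several models can be served at once, each model lives in its own subfolder under the mounted `/models` directory. As a sketch only (assuming the API discovers models by scanning those subfolders, which this page does not confirm), enumerating the available models could look like:

```python
from pathlib import Path

def list_models(models_root: str) -> list[str]:
    """Return the names of model subfolders under models_root.

    Sketch only: assumes each subfolder of the mounted /models
    directory corresponds to one loadable model.
    """
    root = Path(models_root)
    return sorted(p.name for p in root.iterdir() if p.is_dir())
```

Here `list_models` and the folder-scanning behavior are illustrative assumptions, not part of the documented API.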

Install NVIDIA Drivers (410.x or higher) and NVIDIA Docker for GPU by following the official installation instructions.
In order to build the project run the following command from the project's root directory:

```sh
sudo docker build -t yolov4_inference_api_gpu -f ./docker/dockerfile .
```
### Behind a proxy

```sh
sudo docker build --build-arg http_proxy='' --build-arg https_proxy='' -t yolov4_inference_api_gpu -f ./docker/dockerfile .
```

## Run The Docker Container
To run the API, go to the API's directory and run the following:
#### Using Linux based docker:

```sh
sudo NV_GPU=0 nvidia-docker run -itv $(pwd)/models:/models -p <docker_host_port>:1234 yolov4_inference_api_gpu
```
The <docker_host_port> can be any available port of your choice.

Inside each subfolder there should be a:
- Cfg file (yolo-obj.cfg): contains the configuration of the model

- data file (obj.data): contains the number of classes and the path to the names file

```
classes=<number_of_classes>
names=/models/<model_name>/obj.names
```
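As a concrete sketch of the layout above (the model name `sample_model` and the two class names are placeholders, not part of the repository):

```sh
# Create an example model subfolder with the obj.data and obj.names files
# described above. "sample_model", "person", and "car" are placeholders;
# a real model also needs its cfg and weights files alongside them.
mkdir -p models/sample_model
cat > models/sample_model/obj.data <<EOF
classes=2
names=/models/sample_model/obj.names
EOF
printf 'person\ncar\n' > models/sample_model/obj.names
ls models/sample_model
```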
Antoine Charbel, inmind.ai , Beirut, Lebanon

Charbel El Achkar, Beirut, Lebanon

Hadi Koubeissy, Beirut, Lebanon
