Operating the Multi-Graph Runner
Starting the Multi-Graph Runner
This guide describes how to start MGR for head detection on the frames of video streams.
Note: This guide describes how to start MGR for production use only. To start MGR for running the UVAP Feature Demos, see Starting Multi-Graph Runner in the Feature Demos section.
Note: This guide describes MGR-specific operation information only. Please read the Generic Operation Guide before proceeding with this guide.
Prerequisites
Starting MGR requires a UVAP license and AI resources as well. For information about these, see License Key. It is assumed that MGR is properly configured. For information on MGR configuration, see Configuring Multi-Graph Runner.
MGR Docker image
As described in the Docker images section of the Generic Operation Guide, the name of the MGR Docker image can be determined based on the Git tags present in the UVAP Git repository. The UVAP component name of MGR is mgr, so for a given UVAP version ([version]) the Docker image name is ultinous/uvap:mgr_[version], and for the most recent version it is ultinous/uvap:mgr_latest.
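As a sketch, the image name can be assembled from a version tag in the shell. The version value below (v1.0.0) is a hypothetical example; substitute a real tag from the UVAP Git repository.

```shell
# Assemble the MGR image name from a version tag.
# "v1.0.0" is a hypothetical example version, not a real release.
version="v1.0.0"
image="ultinous/uvap:mgr_${version}"
echo "${image}"
# prints: ultinous/uvap:mgr_v1.0.0
```

The resulting name could then be used with docker pull, or substituted for [image] in the commands below.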
MGR Docker container
A Docker container for an MGR instance can be created as described in the Docker containers section of the Generic Operation Guide with the following additional information:
- The name of the environment variable [properties_env_var_name] is MGR_PROPERTY_FILE_PATHS.
- MGR requires the NVIDIA container runtime library. This can be enabled by adding the --runtime nvidia flag to the command that creates the Docker container. By default, all NVIDIA devices (GPU cards) are visible in the Docker container, but this can be overridden with the NVIDIA_VISIBLE_DEVICES environment variable. On a host with multiple GPU devices, it is advised to run one MGR instance per GPU device, and to make only that one device visible in the container with this environment variable. See the NVIDIA Container Runtime documentation for the possible values of this environment variable. Let's refer to the value of this environment variable as [gpu_specification].
- MGR requires the AI models when running, but since these are not present in the Docker image, they have to be mounted into the container. Let's refer to the directory of the models as [model_directory_on_the_host] on the host, and as [model_directory_in_the_container] in the container.
- MGR requires a UVAP license when running, but since it is not included in the Docker image, it has to be mounted into the container. The license, be it an online license or an offline license, consists of a license data file and a license key file. It is advised to create a separate subdirectory for the license files of the MGR instance and to place both the license data file and the key file there, so that mounting this one subdirectory makes all the license files present in the container. Let's refer to this directory as [license_directory_on_the_host] on the host, and as [license_directory_in_the_container] in the container.
  Note: If you are using an offline license, take care to set the NVIDIA_VISIBLE_DEVICES environment variable to the ID of the same GPU device as the one given in the license data file.
- If video devices (such as USB web cameras) are configured for MGR to be analysed, these devices have to be mounted into the container. Let's refer to one such device as [dev]. Additional devices can be mounted similarly. Also, the Unix user of the container has to be a member of the video Unix group in the container; this can be ensured with the --group-add video flag when creating the container.
- If pre-recorded video files are configured for MGR to be analysed, these files have to be mounted into the container. Let's refer to one video file as [file_on_the_host] on the host, and as [file_in_the_container] in the container. Additional video files can be mounted similarly.
- In addition to the information about the Docker network [network] in the Docker containers section of the Generic Operation Guide, it must be ensured that all RTSP streams configured for MGR to be analysed can be reached from that network. To quickly test whether the RTSP stream [rtsp_url] is reachable from [network], try to connect to the stream with the following test container (the IP and port parts of the RTSP stream URL are referred to as [ip] and [port]):

  $ (echo 'DESCRIBE [rtsp_url] RTSP/1.0
  CSeq: 1
  '; sleep 1 `# this value can be increased if the network is slower`) \
    | docker run --rm -i --net [network] busybox telnet [ip] [port]

  The expected output is:

  RTSP/1.0 200 OK
  [...]

- There is a monitoring port defined in the configuration of MGR. Let's refer to it as [port].
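Before running the create command below, the bracketed placeholders can be collected into shell variables so the long command stays readable. The following is only a sketch with hypothetical values; the instance number, network name, monitoring port, and GPU index are all assumptions to be replaced with deployment-specific values.

```shell
# Hypothetical placeholder values -- replace each with your deployment's own.
instance_number=1
network=uvap                      # assumed Docker network name
port=9999                         # assumed monitoring port from the MGR configuration
gpu_specification=0               # first GPU; see NVIDIA_VISIBLE_DEVICES values
image="ultinous/uvap:mgr_latest"

echo "Creating mgr_${instance_number} on GPU ${gpu_specification} from ${image}"
# prints: Creating mgr_1 on GPU 0 from ultinous/uvap:mgr_latest
```

With the variables set, the placeholders in the docker command can be written as $instance_number, $network, and so on.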
Creating and starting a container
Docker CLI
Given all the above information, the container can be created with the following example command:
$ docker container create \
--name mgr_[instance_number] \
--user [uid]:[gid] \
-v /sys/firmware/:/host_sys/firmware/:ro \
-v "[conf_on_host]:[conf_in_container]:ro" \
--env MGR_PROPERTY_FILE_PATHS="[properties_env_var_value]" \
--net [network] \
--publish [port]:[port] \
--runtime nvidia \
--env NVIDIA_VISIBLE_DEVICES="[gpu_specification]" \
-v "[model_directory_on_the_host]:[model_directory_in_the_container]:ro" \
-v "[license_directory_on_the_host]:[license_directory_in_the_container]:ro" \
--device "[dev]:[dev]:rw" \
--group-add video \
-v "[file_on_the_host]:[file_in_the_container]:ro" \
[image]
Refer to the Docker CLI section of the Generic Operation Guide for information on how to start the container created above.
Docker Compose
An example configuration for Docker Compose follows:
version: '2.3'
services:
  mgr_[instance_number]:
    image: '[image]'
    user: '[uid]:[gid]'
    volumes:
      - type: bind
        source: '/sys/firmware/'
        target: '/host_sys/firmware/'
        read_only: true
      - type: bind
        source: '[conf_on_host]'
        target: '[conf_in_container]'
        read_only: true
      - type: bind
        source: '[model_directory_on_the_host]'
        target: '[model_directory_in_the_container]'
        read_only: true
      - type: bind
        source: '[license_directory_on_the_host]'
        target: '[license_directory_in_the_container]'
        read_only: true
      - type: bind
        source: '[file_on_the_host]'
        target: '[file_in_the_container]'
        read_only: true
    devices:
      - '[dev]:[dev]:rw'
    environment:
      - 'MGR_PROPERTY_FILE_PATHS=[properties_env_var_value]'
      - 'NVIDIA_VISIBLE_DEVICES=[gpu_specification]'
    runtime: 'nvidia'
    group_add:
      - 'video'
    ports:
      - [port]:[port]
networks:
  default:
    external:
      name: [network]