Starting Multi-Graph Runner

The Multi-Graph Runner (MGR) starts head detection on the frames of a previously configured video stream.

Note: This guide describes how to start MGR only for running the UVAP Feature Demos. To start MGR for production use, see the Operating the Multi-Graph Runner guide in the Operation Guide section.

Prerequisites

It is assumed that UVAP is properly configured. For more information on configuration, see Configuring UVAP.

For information on Multi-Graph Runner (MGR) configuration, see Configuring Multi-Graph Runner.

Starting the Multi-Graph Runner service

To start MGR:

  1. Run the microservice:

    Attention! Before starting this microservice, the command below silently stops and removes the Docker container named uvap_mgr, if one already exists.

    $ "${UVAP_HOME}"/scripts/run_mgr.sh -- --net=uvap
    

    Attention! The first startup of MGR can take a few minutes, especially on a Jetson TX2 or other small machine, because the neural networks are optimized for the given runtime environment. The optimized model files are then stored in a cache (usually ~/.cache/multi-graph-runner/), which this script mounts to /ultinous_app/cache in the guest; MGR is configured to use that location.

    The output of the above command contains the following:

    • Information about pulling the required Docker image
    • The ID of the Docker container created
    • The name of the Docker container created: uvap_mgr

    The run_mgr.sh script accepts further optional parameters to override defaults:

    • --: any options after -- are passed to the docker container create command, which the script calls
    • --help: prints brief usage information for the script
    • --models-dir: directory path of the AI models. Default value: ${UVAP_HOME}/models
    • --config-dir: directory path of the configuration files. Default value: ${UVAP_HOME}/config/uvap_mgr
    • --cache-dir: directory path of the model cache files. Default value: ~/.cache/multi-graph-runner
    • --image-name: tag of the Docker image to use. By default, the tag is determined from Git tags
    • --license-data-file: data file of your UVAP license. Default value: ${UVAP_HOME}/license/license.txt
    • --license-key-file: key file of your UVAP license. Default value: ${UVAP_HOME}/license/license.key
    • --gpu-specification: NVIDIA® GPU specification for Docker. Controls which GPUs are visible to the container. See the NVIDIA Container Runtime documentation for the possible values. Default value:
      • if the license is bound to GPU information: the GPU UUID found in the license data file
      • otherwise: GPU index 0
    • --run-mode: determines how the service is started. Possible values: background or foreground. Default value: background
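    As an illustrative sketch (not part of the original guide), the options can be combined as follows. The snippet is guarded so it does nothing on a machine without UVAP; the /opt/uvap fallback path is an assumption:

```shell
# Hypothetical example: start MGR in the foreground (logs go to the terminal),
# passing extra Docker options after "--". Runs only if the script exists.
mgr_script="${UVAP_HOME:-/opt/uvap}/scripts/run_mgr.sh"  # /opt/uvap is an assumed fallback
if [ -x "$mgr_script" ]; then
  "$mgr_script" --run-mode foreground -- --net=uvap
  started=yes
else
  echo "run_mgr.sh not found at $mgr_script; nothing started"
  started=no
fi
```

    Foreground mode is convenient for first-time debugging, as the MGR logs appear directly in the terminal instead of the container log.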

    All video devices (/dev/video*) on the host (where MGR is started with run_mgr.sh) are mounted into the uvap_mgr container.

    If prerecorded videos (stored on the local filesystem) are configured as streams to be analyzed, the files need to be mounted into the uvap_mgr container – this can be done by passing regular Docker mount parameters at the end of the above command line (after the -- parameter). For more information on Docker mounting, see the Add bind mounts or volumes using the --mount flag section in the documentation of Docker.

    For example, if there is a video file on the host /mnt/videos/video1.avi, and it is configured for MGR as /some/directory/my_video.avi, the following command runs MGR accordingly:

    $ "${UVAP_HOME}"/scripts/run_mgr.sh -- --net=uvap \
      --mount type=bind,readonly,src=/mnt/videos/video1.avi,dst=/some/directory/my_video.avi
    
  2. Check if the uvap_mgr container is running:

    $ docker container inspect --format '{{.State.Status}}' uvap_mgr
    

    Expected output:

    running
    

    Note: If the status of the UVAP container is not running, send the output of the following command to support@ultinous.com:

    $ docker logs uvap_mgr
    

    These Docker containers can be managed with standard Docker commands. For more information, see the Docker CLI (docker) reference documentation.
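    The status check and log collection above can be combined into a small sketch (the log file name is just an example):

```shell
# Poll the container status; if it is not running, save the logs for support.
# "unknown" covers the case where Docker or the uvap_mgr container is absent.
status=$(docker container inspect --format '{{.State.Status}}' uvap_mgr 2>/dev/null || echo unknown)
if [ "$status" != "running" ]; then
  docker logs uvap_mgr > uvap_mgr.log 2>&1 || true
  echo "uvap_mgr is not running (status: $status); logs saved to uvap_mgr.log"
fi
```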

  3. Check if the Kafka topics are created:

    $ docker exec kafka kafka-topics --list --zookeeper zookeeper:2181
    

    Expected output:

    • In case of Base Mode Demos:
      base.cam.0.ages.AgeRecord.json
      base.cam.0.anonymized_original.Image.jpg
      base.cam.0.dets.ObjectDetectionRecord.json
      base.cam.0.frameinfo.FrameInfoRecord.json
      base.cam.0.genders.GenderRecord.json
      base.cam.0.masks.FaceMaskRecord.json
      base.cam.0.original.Image.jpg
      base.cam.0.poses.HeadPose3DRecord.json
      base.cam.1.ages.AgeRecord.json
      base.cam.1.anonymized_original.Image.jpg
      base.cam.1.dets.ObjectDetectionRecord.json
      base.cam.1.frameinfo.FrameInfoRecord.json
      base.cam.1.genders.GenderRecord.json
      base.cam.1.masks.FaceMaskRecord.json
      base.cam.1.original.Image.jpg
      base.cam.1.poses.HeadPose3DRecord.json
      
    • In case of Feature Vector Mode Demos:
      fve.cam.0.dets.ObjectDetectionRecord.json
      fve.cam.0.fvecs.FeatureVectorRecord.json
      fve.cam.0.frameinfo.FrameInfoRecord.json
      fve.cam.0.ages.AgeRecord.json
      fve.cam.0.original.Image.jpg
      fve.cam.1.dets.ObjectDetectionRecord.json
      fve.cam.1.fvecs.FeatureVectorRecord.json
      fve.cam.1.frameinfo.FrameInfoRecord.json
      fve.cam.1.ages.AgeRecord.json
      fve.cam.1.original.Image.jpg
      
    • In case of Skeleton Mode Demos:
      skeleton.cam.0.original.Image.jpg
      skeleton.cam.0.skeletons.SkeletonRecord.json
      skeleton.cam.1.original.Image.jpg
      skeleton.cam.1.skeletons.SkeletonRecord.json
      
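    The topic check can also be scripted: grep the output of the kafka-topics command for each expected name. In the sketch below, a hard-coded sample list stands in for that output so the loop can be read and run on its own:

```shell
# In practice: topic_list=$(docker exec kafka kafka-topics --list --zookeeper zookeeper:2181)
topic_list="base.cam.0.dets.ObjectDetectionRecord.json
base.cam.0.original.Image.jpg
base.cam.0.poses.HeadPose3DRecord.json"

missing=0
for t in base.cam.0.dets.ObjectDetectionRecord.json \
         base.cam.0.original.Image.jpg; do
  # grep -qx matches the whole line exactly
  printf '%s\n' "$topic_list" | grep -qx "$t" || { echo "missing topic: $t"; missing=$((missing+1)); }
done
echo "missing topics: $missing"
```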
  4. Fetch data from a Kafka topic:

    $ docker exec kafka kafka-console-consumer --bootstrap-server kafka:9092 \
      --topic base.cam.0.dets.ObjectDetectionRecord.json
    

    Expected example output:

    {"type":"PERSON_HEAD","detection_confidence":0,"end_of_frame":true}
    {"type":"PERSON_HEAD","detection_confidence":0,"end_of_frame":true}
    {"type":"PERSON_HEAD","bounding_box":{"x":747,"y":471,"width":189,"height":256},"detection_confidence":0.99951756,"end_of_frame":false}
    {"type":"PERSON_HEAD","detection_confidence":0,"end_of_frame":true}
    {"type":"PERSON_HEAD","bounding_box":{"x":730,"y":484,"width":190,"height":255},"detection_confidence":0.991036654,"end_of_frame":false}
    {"type":"PERSON_HEAD","detection_confidence":0,"end_of_frame":true}
    {"type":"PERSON_HEAD","bounding_box":{"x":713,"y":467,"width":173,"height":252},"detection_confidence":0.999676228,"end_of_frame":false}
    {"type":"PERSON_HEAD","detection_confidence":0,"end_of_frame":true}
    {"type":"PERSON_HEAD","bounding_box":{"x":713,"y":467,"width":172,"height":252},"detection_confidence":0.999602616,"end_of_frame":false}
    {"type":"PERSON_HEAD","detection_confidence":0,"end_of_frame":true}
    {"type":"PERSON_HEAD","bounding_box":{"x":701,"y":468,"width":178,"height":253},"detection_confidence":0.999979258,"end_of_frame":false}
    {"type":"PERSON_HEAD","detection_confidence":0,"end_of_frame":true}
    
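    Records with "end_of_frame":true only mark frame boundaries; records with "end_of_frame":false are actual detections. A quick way to count the actual detections is to grep for that flag. The sketch below uses an embedded two-line sample in place of a live consumer, so it is self-contained:

```shell
# In practice, pipe the output of kafka-console-consumer into the grep below.
sample='{"type":"PERSON_HEAD","bounding_box":{"x":747,"y":471,"width":189,"height":256},"detection_confidence":0.99951756,"end_of_frame":false}
{"type":"PERSON_HEAD","detection_confidence":0,"end_of_frame":true}'

# grep -c counts the lines that contain the flag
dets=$(printf '%s\n' "$sample" | grep -c '"end_of_frame":false')
echo "actual detections: $dets"
```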