Video Analysis Environment Configuration

The Video Analysis and Machine Learning functions provided by SuperMap iDesktopX depend on a Python environment and scripts. Because the environment package occupies a large amount of disk space, it is not included in the basic product package. To use functions such as Video Analysis and Machine Learning, you need to download the extension pack and perform a simple environment configuration.

The recommended computer hardware configuration is described below:

  • NVIDIA graphics card
  • Video memory ≥ 10 GB recommended, with a minimum of 6 GB; if the Object Extraction or SegmentAnything model is used for AI annotation, the video memory must be greater than 8 GB
  • The latest graphics driver
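
Before installing, you can check the graphics card model, video memory, and driver version from a Windows command prompt. This is a minimal check that assumes the NVIDIA driver is already installed and that nvidia-smi is on the PATH:

  rem Show the GPU model, driver version, and total video memory
  nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv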

1. Configure the Python environment

1.1 Get the Extension Pack

Download the SuperMap iDesktopX Extension_AI for Windows extension package (hereinafter referred to as the extension package) that matches the version of your SuperMap iDesktopX product package (hereinafter referred to as the product package) from the SuperMap official website or the Technical Resource Center.

The following resources are available in the extension pack:

  • resources_ml: Machine Learning resource package, containing resources such as sample data and model configuration files;
  • support
    • MiniConda: the running environment for AI analysis.
  • templates: Video Effects resources.

1.2 Configuring the Extension Pack

Copy the resources_ml, support, and templates folders from the extension package to the root directory of the product package. The path of the product package cannot contain Chinese characters.
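
If you prefer to do the copy from a Windows command prompt, the sketch below shows one way to do it. The paths are examples only (they assume the extension package was decompressed to D:\SuperMap-iDesktopX-Extension_AI and the product package is installed at D:\SuperMap-iDesktopX); substitute your actual paths:

  rem Copy the three resource folders into the root directory of the product package (example paths)
  robocopy "D:\SuperMap-iDesktopX-Extension_AI\resources_ml" "D:\SuperMap-iDesktopX\resources_ml" /E
  robocopy "D:\SuperMap-iDesktopX-Extension_AI\support" "D:\SuperMap-iDesktopX\support" /E
  robocopy "D:\SuperMap-iDesktopX-Extension_AI\templates" "D:\SuperMap-iDesktopX\templates" /E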

2. Configure the Video Analysis environment

2.1 Configuring the Video Analysis Model

2.1.1 Download the Video Analysis model that matches the version of the product package. Download address: https://pan.baidu.com/s/1aLaUMHubD9x66Mw2FRNhPw?pwd=2024 , extraction code: 2024.
2.1.2 Copy the downloaded video-detection folder to the support folder in the root directory of the product package.
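
The copy in step 2.1.2 can also be done from a command prompt. A sketch, assuming the model was decompressed to D:\Downloads\video-detection and the product package root is D:\SuperMap-iDesktopX (both paths are examples):

  rem Place the Video Analysis model under [product package root]\support (example paths)
  robocopy "D:\Downloads\video-detection" "D:\SuperMap-iDesktopX\support\video-detection" /E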

2.2 Configuring Video Analysis Code

2.2.1 Download the Video Analysis code resource package that matches the product package version from https://gitee.com/SuperMapDesktop/deep-sort-yolov4 .

2.2.2 Copy the downloaded Video Analysis code resource package deep-sort-yolov4-X (where X is the version number of the selected branch; for example, if the 11.2.0 branch is selected, the package name is "deep-sort-yolov4-11.2.0") to the support/video-detection/deep-sort-yolov4 folder in the root directory of the product package.
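
One way to perform the copy in step 2.2.2 from a command prompt is sketched below. It assumes the 11.2.0 branch was downloaded to D:\Downloads\deep-sort-yolov4-11.2.0, that the product package root is D:\SuperMap-iDesktopX, and that the contents of the downloaded package go directly into the deep-sort-yolov4 folder; adjust the paths and layout to match your environment:

  rem Copy the code resource package into the deep-sort-yolov4 folder (example paths, 11.2.0 branch)
  robocopy "D:\Downloads\deep-sort-yolov4-11.2.0" "D:\SuperMap-iDesktopX\support\video-detection\deep-sort-yolov4" /E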

2.3 High Performance Video Analysis Environment

To improve the detection performance of Video Analysis, it is recommended to configure the Redis environment. Make sure Redis is started before using high performance detection. The configuration of the Redis environment is described as follows:

2.3.1 Download Redis and unzip it.

2.3.2 In the Redis package directory, double-click redis-server.exe to start the Redis server.

2.3.3 Open the configuration/Desktop.Parameter.xml file in the iDesktopX product package and change HighPerformanceDetection="false" to HighPerformanceDetection="true".

2.3.4 Launch SuperMap iDesktopX.
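
Before using high performance detection, you can confirm that the Redis server started in step 2.3.2 is reachable. The check below assumes your Redis build includes redis-cli.exe (the common Windows Redis packages do); run it from the Redis package directory:

  rem Should print PONG if the Redis server is running and accepting connections
  redis-cli.exe ping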

2.4 Push Live Stream

Video Analysis results can be pushed as a live stream so that analysis results can be viewed easily on the Web. This requires additional environment deployment, which is described as follows:

2.4.1 Download ffmpeg from the official website, download address: https://github.com/BtbN/FFmpeg-Builds/releases .

2.4.2 After decompression, copy ffmpeg.exe to the support/video-detection/Tools/ folder in the root directory of the product package.
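
To confirm that ffmpeg.exe was copied correctly and is runnable, you can call it once from a command prompt. A sketch, assuming the product package root is D:\SuperMap-iDesktopX (example path):

  rem Print the ffmpeg build information; an error here indicates a broken copy or build
  "D:\SuperMap-iDesktopX\support\video-detection\Tools\ffmpeg.exe" -version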

2.5 TensorRT Model Conversion

SuperMap iDesktopX supports converting YOLOv5 Torch models to the TensorRT format; this conversion is currently supported only on Windows 10. The environment deployment is described as follows:

2.5.1 Download Miniconda3 (download address: https://docs.conda.io/en/latest/miniconda.html ), select the Python 3.8 version, click Download, and install it.

2.5.2 Download and decompress the TensorRT package. Download address: https://developer.nvidia.com/nvidia-tensorrt-8x-download .

2.5.3 From the Windows Start menu, run Miniconda3 as an administrator to open a console window.

2.5.4 Enter conda activate <conda path>, where <conda path> is the path of the conda environment deployed when the AI extension package was decompressed and configured.
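
For example, if the MiniConda environment from the extension package is located in the support folder of the product package, the command might look like the following (the path is an assumption; use the actual path of your configured conda environment):

  rem Activate the conda environment configured with the AI extension package (example path)
  conda activate D:\SuperMap-iDesktopX\support\MiniConda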

2.5.5 In the console, switch to the directory where the TensorRT package was decompressed, then install graphsurgeon, uff, onnx_graphsurgeon, and TensorRT by executing the following commands in turn:


  cd .\graphsurgeon\
  pip install .\graphsurgeon-0.4.6-py2.py3-none-any.whl
  cd ../
  cd .\uff\
  pip install .\uff-0.6.9-py2.py3-none-any.whl
  cd ../
  cd .\onnx_graphsurgeon\
  pip install .\onnx_graphsurgeon-0.3.12-py2.py3-none-any.whl
  cd ../
  cd .\python\
  pip install .\tensorrt-8.6.1-cp38-none-win_amd64.whl
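  rem Optional check (not part of the original steps): confirm the four wheels were installed
  pip show graphsurgeon uff onnx_graphsurgeon tensorrt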
  

2.5.6 Copy all DLL files from the lib folder of the decompressed TensorRT package to the support/video-detection/TensorRT/ folder in the root directory of the product package.
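
The DLL copy can also be scripted. A sketch, assuming the TensorRT package was decompressed to D:\TensorRT-8.6.1 and the product package root is D:\SuperMap-iDesktopX (example paths):

  rem Copy the TensorRT runtime DLLs into the video-detection\TensorRT folder (example paths)
  copy "D:\TensorRT-8.6.1\lib\*.dll" "D:\SuperMap-iDesktopX\support\video-detection\TensorRT\"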