Model Evaluation

Instructions For Use

To assess the performance of an interpretation model, such as a change detection model, you can quantify the difference between the model's inference results and the real labels (ground truth) rather than relying on manual visual comparison alone. Several evaluation metrics can assist in this judgment, such as F1-score, IoU, Recall, and Precision. In practice, it is usually necessary to combine multiple metrics, chosen according to the specific detection task and the meaning of each metric, to judge a model's performance comprehensively.
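As a minimal sketch (not the SuperMap API), the per-class metrics named above can all be derived from true-positive (TP), false-positive (FP), and false-negative (FN) counts obtained by comparing inference results against the real labels:

```python
# Illustrative sketch: how Precision, Recall, F1-score, and IoU relate to
# true-positive (TP), false-positive (FP), and false-negative (FN) counts.

def precision(tp, fp):
    # Fraction of predicted positives that are correct.
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    # Fraction of real positives that were found.
    return tp / (tp + fn) if (tp + fn) else 0.0

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall.
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if (p + r) else 0.0

def iou(tp, fp, fn):
    # Intersection over union of the predicted and real positive sets.
    return tp / (tp + fp + fn) if (tp + fp + fn) else 0.0

# Example: 80 correct detections, 20 false alarms, 10 misses.
print(precision(80, 20))     # 0.8
print(recall(80, 10))        # ≈ 0.889
print(f1_score(80, 20, 10))  # ≈ 0.842
print(iou(80, 20, 10))       # ≈ 0.727
```

Note how a model can score high Precision but low Recall (few false alarms, many misses) or the reverse, which is why F1-score and IoU are often consulted alongside them.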

SuperMap iDesktopX provides the Model Evaluation function, which supports evaluating image target detection, binary classification, classification of ground objects, and universal change detection models.

The accuracy metrics that Model Evaluation can output are as follows:

| Interpretation Task | Precision | Recall | F1-score | Kappa | IoU | mAP | OA |
|---|---|---|---|---|---|---|---|
| Target Detection | ✓ | ✓ | ✓ | | | ✓ | |
| Binary Classification | ✓ | ✓ | ✓ | | ✓ | | |
| Classification of Ground Objects | ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Universal Change Detection | ✓ | ✓ | ✓ | | ✓ | | |
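Among these metrics, Overall Accuracy (OA) and the Kappa coefficient are computed from the full class confusion matrix rather than from per-class counts. As an illustrative sketch (not the SuperMap API):

```python
# Illustrative sketch: Overall Accuracy (OA) and the Kappa coefficient
# computed from a class confusion matrix.

def oa_and_kappa(confusion):
    """confusion[i][j] = number of samples of true class i predicted as class j."""
    total = sum(sum(row) for row in confusion)
    n = len(confusion)
    # Observed agreement (OA): fraction of samples on the diagonal.
    po = sum(confusion[i][i] for i in range(n)) / total
    # Agreement expected by chance, from the row and column marginals.
    row_sums = [sum(row) for row in confusion]
    col_sums = [sum(confusion[i][j] for i in range(n)) for j in range(n)]
    pe = sum(row_sums[k] * col_sums[k] for k in range(n)) / total ** 2
    # Kappa: agreement beyond chance, normalized to the achievable range.
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Example: three ground-object classes, mostly correct predictions.
cm = [[50, 2, 3],
      [4, 40, 1],
      [2, 3, 45]]
oa, kappa = oa_and_kappa(cm)
print(round(oa, 3), round(kappa, 3))  # → 0.9 0.849
```

Kappa discounts agreement that would occur by chance, so it is typically lower than OA and is less flattering when one class dominates the scene.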

Parameter Description

  • Inference Result: Select the vector interpretation result that requires model evaluation.

  • Real Label: The real label vector dataset used for comparison with the inference result.

  • Model Type: Supports Target Detection, Binary Classification, Classification of Ground Objects, and Universal Change Detection.

  • Inference Result Category Field: The field in the inference result vector dataset that identifies the feature or target type. Its type should be consistent with that of the real label category field. This parameter does not take effect when "Model Type" is Binary Classification or Universal Change Detection. If no field is specified (None), the Value field is used; if the Value field does not exist either, all records are treated as one class.

  • Real Label Category Field: The field in the real label vector dataset that identifies the feature or target type. Its type should be consistent with that of the inference result category field. This parameter does not take effect when "Model Type" is Binary Classification or Universal Change Detection. If no field is specified (None), the Value field is used; if the Value field does not exist either, all records are treated as one class.

  • Evaluation Range Data Source/Dataset: A vector region (polygon) dataset that defines the range of the model evaluation. The evaluation range must contain complete inference results and real labels to ensure that the evaluation results are reliable.

  • Whether the Topology is Checked: Only takes effect when "Model Type" is Classification of Ground Objects. Topology errors in the real label can affect the accuracy of the evaluation. When this option is checked, the topology check results for the classification of ground objects real label dataset are output; the real label can then be corrected based on those results before running model evaluation again. Note that performing a topology check increases the time required for model evaluation.

  • Overlap Threshold: The threshold for determining whether a target detection inference bounding box is correct: if the overlap ratio between the inference bounding box and the real bounding box exceeds this threshold, the inference result is considered correct. The overlap ratio is an intersection-over-union (IoU) value, whose numerator is the area of intersection of the two bounding boxes and whose denominator is the area of their union, so it lies in the range 0 to 1. This parameter only takes effect when "Model Type" is Target Detection. The default value is 0.5.

  • Result Data: The table for saving the model evaluation results. Set the datasource in which to save it and the name under which it is saved.
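The Overlap Threshold rule for target detection can be sketched as follows. This is an illustrative implementation, not SuperMap code; the box layout (xmin, ymin, xmax, ymax) and the function names are assumptions for the example:

```python
# Illustrative sketch of the Overlap Threshold rule: a predicted bounding box
# counts as correct when its IoU with the real bounding box exceeds the threshold.

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_correct(pred, truth, threshold=0.5):
    # Default threshold of 0.5 mirrors the parameter's default value.
    return box_iou(pred, truth) > threshold

pred  = (0, 0, 10, 10)
truth = (2, 0, 12, 10)             # same size, shifted 2 units right
print(box_iou(pred, truth))        # 80 / 120 ≈ 0.667
print(is_correct(pred, truth))     # True at the default 0.5 threshold
```

Raising the threshold demands tighter localization: the same prediction that passes at 0.5 would fail at a threshold of 0.7, so Precision and Recall both depend on this setting.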

Related Topics

Model Training

Binary Classification

Classification of Ground Objects

Target Detection

Universal Change Detection