Generate Tie Points
Feature Description
Tie points are corresponding image points that can be used to construct stereoscopic models or establish connection relationships between adjacent models (images). Generating tie points helps improve accuracy during geometric correction, ensuring the spatial consistency of the imagery.
Supported starting from SuperMap iDesktopX 11i (2023).
Function Entrance
Satellite Image Processing Tab -> Geometric Correction Group -> Generate Tie Points.
Parameter Description
- Input Image Type: Select the type of imagery used to generate tie points. The default is Panchromatic Image; it can be switched to Multispectral Image, Forward-Looking Image, Backward-Looking Image, or Forward-and-Backward-Looking Image according to the imagery being processed.
- Reference Adjustment File: Use information from existing block adjustment files so that the newly generated tie points approach the accuracy of the existing points. The Add and Delete buttons on the toolbar make it easy to manage multiple referenced adjustment files.
- Plane Accuracy: The planar accuracy of the imagery, which determines whether preprocessing is performed during tie point matching.
- Low: The planar accuracy error is typically greater than 40 pixels; preprocessing is required.
- High: The planar accuracy error is typically less than 15 pixels; no preprocessing is needed.
- Medium: Select this option when the image accuracy is uncertain; the program automatically estimates it.
- Error Threshold: The error threshold used for gross-error elimination during image matching when generating tie points. The value range is [0, 40], the default is 5, and the unit is px. A larger threshold preserves more tie points but also increases the probability of retaining incorrect points.
- Point Distribution Mode: Choose the tie point distribution mode. Offers Conventional and Uniform modes; the default is Conventional.
- Conventional: Divides each overlap area into N×M sub-regions, then selects n image blocks of size 512×512 from each sub-region for corresponding point matching, ensuring stable and reliable points. The generated tie points cover the overlap area as completely as possible.
- Number of Blocks in Column Direction: Number of blocks each overlap area is divided into in the column direction. Default is 4.
- Number of Blocks in Row Direction: Number of blocks each overlap area is divided into in the row direction. Default is 4.
- Matching Method: Provides the six matching methods below; choose one based on the data characteristics and requirements. Among them, AFHORP and RIFT support multimodal data matching; CASP and DEEPFT are deep-learning based and require additional AI model configuration and a CUDA environment. In general, MOTIF, CASP, or DEEPFT is recommended.
- MOTIF (default): A template matching algorithm for multimodal imagery, characterized by using lightweight feature descriptors. MOTIF can overcome nonlinear radiometric distortions caused by differences between SAR and optical images.
- CASP: A novel cascade matching pipeline that benefits from integrating high-level features, helping to reduce the computational cost of low-level feature extraction. This pipeline decomposes the matching stage into two progressive phases, first establishing one-to-many correspondences at a coarser scale as cascade priors. Then, using these priors for guidance, one-to-one matches are determined at the target scale.[1]
- DEEPFT: A deep learning-based image matching method.
- SIFT: A method for extracting distinctive invariant features from images, useful for reliable matching between objects or scenes under different viewpoints.
- RIFT: A feature matching algorithm robust to large-scale nonlinear radiometric distortions. It can enhance the stability of feature detection and overcome the limitations of feature description based on gradient information.
- AFHORP: A feature matching algorithm for multimodal imagery. AFHORP has strong resistance to radiometric distortion and contrast differences in multimodal images, performing excellently in solving issues of orientation reversal and phase extremum mutation.
- Maximum Points per Block: The maximum number of points retained within a block during image matching. Value range is [1,2048], default is 256.
- Uniform: Generated tie points are evenly distributed across the overlap area. Fewer points are produced than in Conventional mode, but their distribution is more uniform, which suits imagery with significant internal distortion.
- Number of Seed Points: Set the number of seed points used for corresponding point matching on each image scene. The value range is [64, 6400]; the default is 512. When the image texture is poor, increase the number of seed points to ensure enough tie points are matched, improving the quality of subsequent processing.
- Seed Point Search: Set the method for finding seed points. Provides Grid Center Point and Corner Point methods, default is Corner Point.
- Corner Point: Uses points with distinct features within the selected region as seed points.
- Grid Center Point: Uses the center point of each grid cell as a seed point. This search method involves some randomness.
- Template Size: Set the size of the matching template used around each seed point. The value range is [1, 256], the default is 40, and the unit is px. A larger template yields more reliable matches but takes longer to process.
- Search Radius: Set the search radius for seed points during image matching. Value range is [0,256], default is 40, unit is px. A larger search radius increases the matching range but also the processing time.
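The two seed-point search strategies above can be sketched as follows. This is a toy illustration under stated assumptions, not SuperMap's implementation: the gradient-magnitude score here is a simplified stand-in for a real corner detector, and the `spacing` parameter is used as the grid-cell size.

```python
import numpy as np

def grid_center_seeds(h, w, spacing):
    """Grid Center Point search: one seed at the centre of each grid cell."""
    ys = np.arange(spacing // 2, h, spacing)
    xs = np.arange(spacing // 2, w, spacing)
    return [(int(y), int(x)) for y in ys for x in xs]

def corner_seeds(img, spacing):
    """Corner Point search (toy stand-in): within each grid cell, pick the
    pixel with the strongest gradient response instead of the cell centre."""
    gy, gx = np.gradient(img.astype(float))
    score = gx * gx + gy * gy  # crude corner-ness proxy
    h, w = img.shape
    seeds = []
    for y0 in range(0, h, spacing):
        for x0 in range(0, w, spacing):
            cell = score[y0:y0 + spacing, x0:x0 + spacing]
            dy, dx = np.unravel_index(np.argmax(cell), cell.shape)
            seeds.append((y0 + int(dy), x0 + int(dx)))
    return seeds
```

The corner variant anchors each seed on a locally distinctive pixel, which is why the help text describes Corner Point as using "points with distinct features", while grid centers may land on featureless areas.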


Figure: Conventional Distribution    Figure: Uniform Distribution
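As an illustration of the Conventional mode's sub-region division, the sketch below splits an overlap area into a row/column grid and places one 512×512 candidate block per sub-region. This is a minimal sketch under assumptions: the actual software may select several blocks per sub-region and position them differently.

```python
def conventional_blocks(overlap_w, overlap_h, n_cols=4, n_rows=4, block=512):
    """Divide the overlap area into n_rows x n_cols sub-regions and return one
    candidate block origin (top-left pixel, clipped to the image) per sub-region."""
    sub_w = overlap_w / n_cols
    sub_h = overlap_h / n_rows
    origins = []
    for r in range(n_rows):
        for c in range(n_cols):
            # centre the matching block inside each sub-region, clipped so the
            # 512x512 block never extends past the overlap area
            x = min(max(int(c * sub_w + (sub_w - block) / 2), 0), max(overlap_w - block, 0))
            y = min(max(int(r * sub_h + (sub_h - block) / 2), 0), max(overlap_h - block, 0))
            origins.append((x, y))
    return origins
```

With the default 4×4 grid, matching is attempted in every sub-region, which is what lets the generated tie points cover the whole overlap area rather than clustering where texture happens to be strong.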
- Semantic Culling of Non-Ground Points: Not checked by default. When checked, automatically culls tie points in cloud areas and building areas based on AI semantic technology.
- Cloud Area: This parameter appears after Semantic Culling of Non-Ground Points is checked. It is checked by default, meaning tie points within cloud areas are automatically culled based on the specified dataset; if unchecked, tie points in cloud areas are retained. The dataset must contain an ImageName field whose values correspond to the names of the images being processed.
- Building Area: This parameter appears after checking Semantic Culling of Non-Ground Points. Checked by default, meaning building areas will be automatically identified and tie points in those areas will be culled. If unchecked, tie points in building areas are retained.
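Conceptually, the culling step removes any tie point that falls inside a mask polygon. A minimal sketch (standard ray-casting point-in-polygon test; the mask polygons here are hypothetical stand-ins for the cloud/building regions that the AI semantic model would produce):

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray casting; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # does edge (x1,y1)-(x2,y2) cross a horizontal ray from (x,y) to the right?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def cull_tie_points(points, masks):
    """Keep only tie points that fall outside every mask polygon."""
    return [p for p in points
            if not any(point_in_polygon(p[0], p[1], m) for m in masks)]
```

Culling such points matters because clouds move between acquisitions and building roofs sit above the ground surface, so matches in those areas violate the geometric model the tie points are meant to constrain.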
Related Topics
Generate Ground Control Points
References
[1] Chen, P., Yu, L., Wan, Y., Pei, Y., Liu, X., Yao, Y., ... & Zhang, Y. (2025). CasP: Improving Semi-Dense Feature Matching Pipeline Leveraging Cascaded Correspondence Priors for Guidance. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 28063-28072).