AR Visual Positioning

Principle

Point cloud data is associated with real-world geographic coordinates in advance. During the AR session, point clouds in the current scene are recognized and matched against the stored data; when a match is found, the geographic coordinates bound to the matched point cloud are returned as the current position.
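The association step can be pictured as a lookup from a recognized point-cloud identifier to its surveyed geographic coordinate. The sketch below is purely illustrative; the class and method names (`PointCloudGeoIndex`, `register`, `locate`) are our own and not part of the SDK.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: bind point-cloud IDs to geographic coordinates,
// then look the coordinate up when a scene match is found.
public class PointCloudGeoIndex {
    // Simple geographic coordinate holder (longitude, latitude, altitude)
    public static class GeoCoord {
        public final double lon, lat, alt;
        public GeoCoord(double lon, double lat, double alt) {
            this.lon = lon; this.lat = lat; this.alt = alt;
        }
    }

    private final Map<String, GeoCoord> index = new HashMap<>();

    // Registration step: associate a captured point cloud with a surveyed coordinate
    public void register(String pointCloudId, GeoCoord coord) {
        index.put(pointCloudId, coord);
    }

    // Matching step: return the geographic coordinate of the recognized
    // point cloud, or null when the scene is unknown
    public GeoCoord locate(String matchedPointCloudId) {
        return index.get(matchedPointCloudId);
    }
}
```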

Application Scenarios

AR visual positioning is similar to AR image positioning. The difference is that AR image positioning only provides the initial position, whereas AR visual positioning works throughout the whole session. It is suitable for scenarios such as scene restoration and "find my location".

Implementation

Required Libraries

Developing the AR visual positioning feature requires the Java libraries com.supermap.ar.jar and com.supermap.data.jar, and the native libraries libarcore_sdk_c.so, libarcore_sdk_jni.so, and libimb.so. Before development, please contact us to obtain these libraries.

Key Code

// 1. Feature point collection: map building
// Acquire the current camera frame
cameraImage = frame.acquireCameraImage();
// Convert the YUV image to a Bitmap
ByteBuffer cameraPlaneY = cameraImage.getPlanes()[0].getBuffer();
ByteBuffer cameraPlaneU = cameraImage.getPlanes()[1].getBuffer();
ByteBuffer cameraPlaneV = cameraImage.getPlanes()[2].getBuffer();
int ySize = cameraPlaneY.capacity();
int uSize = cameraPlaneU.capacity();
int vSize = cameraPlaneV.capacity();
byte[] compositeByteArray = new byte[ySize + uSize + vSize];
cameraPlaneY.get(compositeByteArray, 0, ySize);
// NV21 expects the V samples before the U samples
cameraPlaneV.get(compositeByteArray, ySize, vSize);
cameraPlaneU.get(compositeByteArray, ySize + vSize, uSize);
ByteArrayOutputStream baOutputStream = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(compositeByteArray, ImageFormat.NV21, cameraImage.getWidth(), cameraImage.getHeight(), null);
yuvImage.compressToJpeg(new Rect(0, 0, cameraImage.getWidth(), cameraImage.getHeight()), 75, baOutputStream);
byte[] byteForBitmap = baOutputStream.toByteArray();
Bitmap mCacheBitmap = BitmapFactory.decodeByteArray(byteForBitmap, 0, byteForBitmap.length);
mFrameChain.put(0, 0, compositeByteArray);
// Copy the current translation (x, y, z)
System.arraycopy(frame.getCamera().getPose().getTranslation(), 0, globalPoseData, 0, 3);
// Copy the rotation quaternion (x, y, z, w)
System.arraycopy(frame.getCamera().getPose().getRotationQuaternion(), 0, globalPoseData, 3, 4);
// Save the feature points together with the camera pose
Mat modified = onCameraFramePose(mCameraFrame, globalPoseData);
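The copy order above (Y plane, then V, then U) targets the NV21 layout that YuvImage expects: the full Y plane followed by chroma bytes interleaved V-first. The direct buffer copy works on devices where the U and V ByteBuffers of a YUV_420_888 image are interleaved views of the same chroma memory. The helper below is our own illustration of the target byte order, not SDK code.

```java
// Illustrative sketch: NV21 stores the full Y plane followed by
// chroma pairs interleaved V-first (V U V U ...).
public class Nv21Layout {
    // Build NV21 bytes from a Y plane and per-pair V/U samples
    public static byte[] fromPlanes(byte[] y, byte[] v, byte[] u) {
        byte[] nv21 = new byte[y.length + v.length + u.length];
        System.arraycopy(y, 0, nv21, 0, y.length);
        for (int i = 0; i < v.length; i++) {
            nv21[y.length + 2 * i] = v[i];     // V comes first in each pair
            nv21[y.length + 2 * i + 1] = u[i]; // then U
        }
        return nv21;
    }
}
```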

// 2. Visual relocalization: output the position
public Mat onCameraFrame(CameraGLViewBase.CvCameraViewFrame inputFrame) {
  // Get the RGBA image
  mRgba = inputFrame.rgba();
  // Get the grayscale image
  mGray = inputFrame.gray();
  if (initFinished) {
    Arrays.fill(outRelPose, 0);
    mHasMatchedLocationFlag = false;
    mCurTimestamp = System.currentTimeMillis();
    // Match feature points against the current frame and output the position;
    // when no match is found, outRelPose keeps its all-zero sentinel value
    int trackingResult = nativeHelper.processCameraFrameWithOutPose(mGray.getNativeObjAddr(), mRgba.getNativeObjAddr(), outRelPose);
    // Update the UI at most once per second
    if (mCurTimestamp - mPreTimestamp > 1000) {
      // Skip only the all-zero "not found" sentinel
      if (outRelPose[0] != 0.0f || outRelPose[1] != 0.0f || outRelPose[2] != 0.0f) {
        // Update the UI with the matched position
        model.select(new Point3D(outRelPose[0], outRelPose[1], outRelPose[2]));
      }
      mPreTimestamp = mCurTimestamp;
    }
  }
  return mRgba;
}
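The timestamp comparison above throttles UI updates so the map position is refreshed at most once per second even though frames arrive much faster. That pattern can be isolated into a small helper; the class below (`UpdateThrottle`) is a sketch of ours, not part of the SDK.

```java
// Minimal sketch of the 1-second throttle used above: accept an update
// only when at least intervalMs has elapsed since the last accepted one.
public class UpdateThrottle {
    private final long intervalMs;
    private long lastAcceptedMs = 0;

    public UpdateThrottle(long intervalMs) {
        this.intervalMs = intervalMs;
    }

    // Returns true when the caller should perform the update now
    public boolean shouldUpdate(long nowMs) {
        if (nowMs - lastAcceptedMs > intervalMs) {
            lastAcceptedMs = nowMs;
            return true;
        }
        return false;
    }
}
```

In the frame callback this would be called with System.currentTimeMillis(), wrapping the UI update in the returned condition.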