Mapping and Localization Using Vision and Lidar Data

Simultaneous localization and mapping, map building, odometry using vision and lidar data

Use simultaneous localization and mapping (SLAM) algorithms to build a map of the environment while estimating the pose of the ego vehicle within it. You can use SLAM algorithms with either visual or point cloud data. For more information on implementing visual SLAM using camera image data, see Implement Visual SLAM in MATLAB and Develop Visual SLAM Algorithm Using Unreal Engine Simulation. For more information on implementing point cloud SLAM using lidar data, see Implement Point Cloud SLAM in MATLAB and Design Lidar SLAM Algorithm Using Unreal Engine Simulation Environment.
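As a minimal sketch of the monocular visual SLAM workflow (requires Computer Vision Toolbox; the intrinsics values and image folder below are illustrative placeholders, not values from this page):

```matlab
% Monocular visual SLAM sketch using the monovslam object.
% Camera intrinsics and the image folder are placeholders you must supply.
focalLength    = [535.4 539.2];      % focal length in pixels (placeholder)
principalPoint = [320.1 247.6];      % principal point in pixels (placeholder)
imageSize      = [480 640];          % [rows cols] (placeholder)
intrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize);

vslam = monovslam(intrinsics);
imds  = imageDatastore("imageSequenceFolder");  % your camera frames

while hasdata(imds)
    addFrame(vslam, read(imds));     % track features and extend the map
end

xyzPoints = mapPoints(vslam);        % reconstructed 3-D map points
camPoses  = poses(vslam);            % estimated key-frame camera poses
```

The map points and key-frame poses can then be used for trajectory analysis or passed to downstream localization.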

You can use measurements from sensors such as inertial measurement units (IMUs) and global positioning system (GPS) receivers to improve the map building process with visual or lidar data. For an example, see Build a Map from Lidar Data.

In environments with known maps, you can localize the ego vehicle by estimating its pose relative to the map coordinate frame origin. For an example on localization using a known visual map, see Visual Localization in a Parking Lot. For an example on localization using a known point cloud map, see Lidar Localization with Unreal Engine Simulation.
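A minimal localization sketch against a known point cloud map, using the NDT-based map object (illustrative: mapPointCloud, liveScan, and the voxel size are placeholders you must supply):

```matlab
% Localize a lidar scan in a prebuilt point cloud map using NDT.
voxelSize = 1;                                   % NDT voxel size in meters (assumed)
ndtMap = pcmapndt(mapPointCloud, voxelSize);     % map built offline

initPose = rigidtform3d;                         % rough initial guess (e.g., from GPS)
poseEst  = findPose(ndtMap, liveScan, initPose); % refined ego pose in the map frame
```

The estimated pose is expressed relative to the map coordinate frame origin, as described above.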

In environments without known maps, you can use visual-inertial odometry by fusing visual and IMU data to estimate the pose of the ego vehicle relative to the starting pose. For an example, see Visual-Inertial Odometry Using Synthetic Data.

For an application of mapping and localization algorithms to detect empty parking spots in a parking lot, see Perception-Based Parking Spot Detection Using Unreal Engine Simulation.

Functions

quaternion - Create quaternion array (Since R2020a)
dist - Angular distance in radians (Since R2020a)
rotateframe - Quaternion frame rotation (Since R2020a)
rotatepoint - Quaternion point rotation (Since R2020a)
rotmat - Convert quaternion to rotation matrix (Since R2020a)
rotvec - Convert quaternion to rotation vector (radians) (Since R2020a)
rotvecd - Convert quaternion to rotation vector (degrees) (Since R2020a)
parts - Extract quaternion parts (Since R2020a)
euler - Convert quaternion to Euler angles (radians) (Since R2020a)
eulerd - Convert quaternion to Euler angles (degrees) (Since R2020a)
compact - Convert quaternion array to N-by-4 matrix (Since R2020a)
monovslam - Visual simultaneous localization and mapping (vSLAM) with monocular camera (Since R2023b)
imageviewset - Manage data for structure-from-motion, visual odometry, and visual SLAM (Since R2020a)
optimizePoses - Optimize absolute poses using relative pose constraints (Since R2020a)
createPoseGraph - Create pose graph (Since R2020a)
relativeCameraPose - (Not recommended) Calculate relative rotation and translation between camera poses
triangulate - 3-D locations of undistorted matching points in stereo images
bundleAdjustment - Adjust collection of 3-D points and camera poses
bundleAdjustmentMotion - Adjust collection of 3-D points and camera poses using motion-only bundle adjustment (Since R2020a)
bundleAdjustmentStructure - Refine 3-D points using structure-only bundle adjustment (Since R2020a)
pcviewset - Manage data for point cloud based visual odometry and SLAM (Since R2020a)
optimizePoses - Optimize absolute poses using relative pose constraints (Since R2020a)
createPoseGraph - Create pose graph (Since R2020a)
scanContextDistance - Distance between scan context descriptors (Since R2020b)
scanContextDescriptor - Extract scan context descriptor from point cloud (Since R2020b)
pctransform - Transform 3-D point cloud
pcalign - Align array of point clouds (Since R2020b)
pcregistercorr - Register two point clouds using phase correlation (Since R2020b)
pcregistercpd - Register two point clouds using CPD algorithm
pcregistericp - Register two point clouds using ICP algorithm
pcregisterndt - Register two point clouds using NDT algorithm
pcregisterloam - Register two point clouds using LOAM algorithm (Since R2022a)
pcmapndt - Localization map based on normal distributions transform (NDT) (Since R2021a)
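Several of the point cloud functions above combine into a basic pairwise mapping step. A hedged sketch (the file names, downsample grid, and merge grid step are placeholder values):

```matlab
% Align a moving lidar scan to a fixed scan with ICP and fuse them.
fixed  = pcread("scan1.pcd");                 % placeholder file names
moving = pcread("scan2.pcd");

moving = pcdownsample(moving, "gridAverage", 0.2);  % speed up registration
tform  = pcregistericp(moving, fixed);              % estimated rigid transform

aligned = pctransform(moving, tform);               % move scan into fixed frame
merged  = pcmerge(fixed, aligned, 0.05);            % fuse into a growing map
```

Chaining this step over a scan sequence, then refining the accumulated poses with createPoseGraph and optimizePoses on a pcviewset, is the basic structure of the lidar SLAM examples linked above.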

Topics