Calibration and Sensor Fusion
Most modern autonomous or semi-autonomous vehicles are equipped with sensor suites that contain multiple sensors. To understand and correlate the output data from these sensors, you must establish a geometric correspondence between them. Calibrating and fusing data from these sensors requires rotational and translational transformations. Fusing lidar data with corresponding camera data is particularly useful in the perception pipeline. The lidar and camera calibration (LCC) workflow serves this purpose, using the checkerboard pattern calibration method. To learn more, see What Is Lidar Camera Calibration?.
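For illustration, the following sketch applies a rigid (rotation plus translation) transformation to express lidar points in a camera coordinate frame. It assumes a release that supports the rigidtform3d object (older releases use rigid3d); the rotation angle, translation, and example file are placeholders, not real calibration results.

```matlab
% A minimal sketch: apply a rigid (rotation + translation) transformation
% to express lidar points in a camera coordinate frame. Values below are
% illustrative placeholders, not real calibration results.
ptCloud = pcread("teapot.ply");          % example point cloud shipped with MATLAB

theta = deg2rad(5);                      % hypothetical rotation about the z-axis
R = [cos(theta) -sin(theta) 0; ...
     sin(theta)  cos(theta) 0; ...
     0           0          1];
t = [0.2 -0.1 1.5];                      % hypothetical lidar-to-camera translation (meters)

tform = rigidtform3d(R,t);               % rigid 3-D transformation object
ptCloudCamera = pctransform(ptCloud,tform); % lidar points expressed in the camera frame
```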
Lidar Toolbox™ provides functions to extract checkerboard features from images and point clouds, and to use those features to estimate the transformation between the camera and the lidar sensor. The toolbox also provides downstream LCC functionality: projecting lidar points onto images, fusing color information into lidar point clouds, and transferring bounding boxes between camera data and lidar data. All of this functionality is integrated into the Lidar Camera Calibrator app, which you can use to calibrate the sensors interactively. The table below lists the app and the related functions; a script-level sketch follows the table.
| App or Function | Description |
| --- | --- |
| Lidar Camera Calibrator | Interactively estimate rigid transformation between lidar sensor and camera |
| projectLidarPointsOnImage | Project lidar point cloud data onto image coordinate frame |
| fuseCameraToLidar | Fuse image information to lidar point cloud |
| bboxCameraToLidar | Estimate 3-D bounding boxes in point cloud from 2-D bounding boxes in image |
| bboxLidarToCamera | Estimate 2-D bounding box in camera frame using 3-D bounding box in lidar frame |
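The following sketch outlines how these functions might fit together in a script-based workflow. It assumes you already have paired lists of calibration image and point cloud file names (imageFileNames, ptCloudFileNames), camera intrinsics (intrinsics), the checkerboard square size, and, for the bounding box step, 2-D detections (bboxesCamera); all of these are assumed inputs. Exact signatures can vary across releases, so check the reference page of each function.

```matlab
% A minimal sketch of the LCC workflow; the inputs described above are assumptions.
squareSize = 81;  % checkerboard square size in millimeters (example value)

% Extract checkerboard corners from the calibration images.
[imageCorners3d,boardDimension,imagesUsed] = ...
    estimateCheckerboardCorners3d(imageFileNames,squareSize);
imageFileNames   = imageFileNames(imagesUsed);   % keep only frames with detections
ptCloudFileNames = ptCloudFileNames(imagesUsed); % keep the image-cloud pairing intact

% Detect the checkerboard plane in the corresponding point clouds.
[lidarCheckerboardPlanes,framesUsed] = ...
    detectRectangularPlanePoints(ptCloudFileNames,boardDimension);
imageCorners3d = imageCorners3d(:,:,framesUsed);

% Estimate the rigid transformation from the lidar sensor to the camera.
[tform,errors] = estimateLidarCameraTransform(lidarCheckerboardPlanes, ...
    imageCorners3d,"CameraIntrinsics",intrinsics);

% Downstream uses of the estimated transformation.
ptCloud = pcread(ptCloudFileNames{1});
I = imread(imageFileNames{1});
imPts = projectLidarPointsOnImage(ptCloud,intrinsics,tform);    % lidar points in image coordinates
ptCloudColored = fuseCameraToLidar(I,ptCloud,intrinsics,tform); % colorize the point cloud
bboxesLidar = bboxCameraToLidar(bboxesCamera,ptCloud,intrinsics,tform); % 2-D boxes to 3-D boxes
```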
Related topics and examples:

- Integrate lidar and camera data.
- Follow guidelines and procedures for lidar-camera calibration.
- Interactively calibrate lidar and camera sensors.
- Read and save images and point cloud data from a rosbag file (see the sketch after this list).
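For the rosbag example, a minimal sketch of the extraction step might look like the following (requires ROS Toolbox). The rosbag file name and topic names are hypothetical placeholders; newer releases also offer struct-based message reading with rosReadImage and rosReadXYZ.

```matlab
% A minimal sketch: extract and save camera images and lidar point clouds
% from a rosbag (requires ROS Toolbox). File and topic names are placeholders.
bag = rosbag("lidar_camera_data.bag");           % hypothetical rosbag file

imageSel = select(bag,"Topic","/camera/image");  % hypothetical image topic
cloudSel = select(bag,"Topic","/lidar/points");  % hypothetical point cloud topic

imageMsgs = readMessages(imageSel);
cloudMsgs = readMessages(cloudSel);

% Save each frame to disk for later use in the calibration workflow.
for k = 1:numel(imageMsgs)
    I = readImage(imageMsgs{k});                 % decode sensor_msgs/Image
    imwrite(I,sprintf("image_%04d.png",k))
end
for k = 1:numel(cloudMsgs)
    xyz = readXYZ(cloudMsgs{k});                 % sensor_msgs/PointCloud2 to N-by-3 array
    pcwrite(pointCloud(xyz),sprintf("cloud_%04d.pcd",k))
end
```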