Lidar Toolbox™ provides algorithms, functions, and apps for designing, analyzing, and testing lidar processing systems. You can perform object detection and tracking, semantic segmentation, shape fitting, lidar registration, and obstacle detection. Lidar Toolbox supports lidar-camera cross calibration for workflows that combine computer vision and lidar processing.
You can train custom detection and semantic segmentation models using deep learning and machine learning algorithms such as PointSeg, PointPillars, and SqueezeSegV2. The Lidar Labeler app supports manual and semi-automated labeling of lidar point clouds for training deep learning and machine learning models. The toolbox lets you stream data from Velodyne® lidars and read data recorded by Velodyne and Ibeo lidar sensors.
Lidar Toolbox provides reference examples illustrating the use of lidar processing for perception and navigation workflows. Most toolbox algorithms support C/C++ code generation for integrating with existing code, desktop prototyping, and deployment.
Lidar Toolbox provides algorithms, functions, and an app for designing, analyzing, and testing lidar processing systems.
You can process 2D and 3D point clouds, apply deep learning algorithms to lidar point clouds, cross-calibrate lidar and camera sensors, and implement 3D SLAM algorithms for autonomous driving and robotics applications.
Lidar Toolbox enables you to read lidar point clouds stored in a variety of file formats, such as PCAP, LAS, PCD, and PLY, directly into MATLAB. You can also stream live data from Velodyne lidar sensors using the corresponding support package.
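As a minimal sketch of the file-reading workflow, the snippet below loads point clouds from several of the supported formats. The file names and the `"HDL32E"` device model are placeholders; `pcread`, `lasFileReader`, and `velodyneFileReader` are the relevant toolbox functions.

```matlab
% Sketch: reading point clouds from several supported file formats.
% File names below are placeholders; files must exist on the MATLAB path.
ptCloudPCD = pcread("scene.pcd");            % PCD (and PLY) via pcread
ptCloudPLY = pcread("scene.ply");

lasReader  = lasFileReader("scene.las");     % LAS files via lasFileReader
ptCloudLAS = readPointCloud(lasReader);

% PCAP recordings from a Velodyne sensor (device model is an assumption)
veloReader  = velodyneFileReader("drive.pcap", "HDL32E");
ptCloudPCAP = readFrame(veloReader, 1);      % read the first frame

pcshow(ptCloudPCD)                           % visualize one of the clouds
```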
The toolbox provides functionality to train, test, and deploy deep learning networks on lidar point clouds for object detection and semantic segmentation.
The Lidar Labeler app simplifies ground truth labeling of lidar point clouds. Its interactive user interface enables manual and semi-automated labeling for training deep learning models.
Lidar Toolbox provides lidar-camera calibration functionality to enhance perception algorithms. After estimating the rotation and translation between the camera and the lidar sensor, you can use this transform to fuse color information from the camera onto the lidar point cloud, or to transform bounding box coordinates between the lidar and camera frames.
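The fusion step described above can be sketched as follows. This assumes a camera intrinsics object (`intrinsics`) and an estimated lidar-to-camera rigid transform (`tform`) are already available from calibration; the file names are placeholders.

```matlab
% Sketch: using an estimated lidar-camera transform, assuming intrinsics
% and tform already exist from a prior calibration step.
im      = imread("frame.png");   % placeholder camera image
ptCloud = pcread("frame.pcd");   % placeholder lidar point cloud

% Colorize lidar points with the corresponding camera pixels
coloredCloud = fuseCameraToLidar(im, ptCloud, intrinsics, tform);

% Or project the lidar points into the image plane
imPts = projectLidarPointsOnImage(ptCloud, intrinsics, tform);
```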
The toolbox also provides functionality to register lidar point clouds, which helps you implement 3D SLAM from ground and aerial lidar data.
You can compare lidar point clouds by extracting and matching Fast Point Feature Histogram (FPFH) descriptors. Using these matched features, you can register sequences of lidar point clouds to progressively build 3D maps.
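A minimal sketch of this registration loop for one pair of overlapping clouds is shown below. The variable names (`fixed`, `moving`) and the downsampling/merge grid sizes are placeholder assumptions; the sketch uses the toolbox's `extractFPFHFeatures` and `pcmatchfeatures` together with a rigid-transform estimate.

```matlab
% Sketch: pairwise registration with FPFH descriptors. fixed/moving are
% two overlapping pointCloud objects; grid sizes are assumed values.
fixedDown  = pcdownsample(fixed, "gridAverage", 0.2);
movingDown = pcdownsample(moving, "gridAverage", 0.2);

fixedFeat  = extractFPFHFeatures(fixedDown);
movingFeat = extractFPFHFeatures(movingDown);

% Match descriptors, then estimate a rigid transform from the matches
indexPairs    = pcmatchfeatures(movingFeat, fixedFeat, movingDown, fixedDown);
matchedMoving = select(movingDown, indexPairs(:, 1));
matchedFixed  = select(fixedDown, indexPairs(:, 2));
tform = estgeotform3d(matchedMoving.Location, matchedFixed.Location, "rigid");

% Apply the transform and merge the clouds to grow the 3D map
aligned = pctransform(moving, tform);
map3d   = pcmerge(fixed, aligned, 0.1);
```

Repeating this over a sequence of scans, each new cloud is aligned to the map built so far, which is how the progressive 3D mapping mentioned above proceeds.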
The toolbox also includes 2D lidar processing functionality, such as estimating sensor poses and creating occupancy grids from real or simulated 2D lidar sensor readings. You can use these results in 2D object detection and real-time collision-warning workflows.
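The 2D workflow above can be sketched as scan matching followed by occupancy-grid insertion. This assumes range/angle arrays from a 2D lidar are already in the workspace; note that `matchScans` and `occupancyMap` ship with Navigation Toolbox, and the map dimensions and max range below are assumed values.

```matlab
% Sketch: pose estimation and occupancy mapping from 2-D lidar scans.
% refRanges/refAngles and currRanges/currAngles are assumed inputs.
refScan  = lidarScan(refRanges, refAngles);
currScan = lidarScan(currRanges, currAngles);

% Estimate the relative pose [x y theta] between the two scans
pose = matchScans(currScan, refScan);

% Insert the current scan into an occupancy grid at the estimated pose
map = occupancyMap(30, 30, 20);        % 30 m x 30 m at 20 cells per meter
insertRay(map, pose, currScan, 15);    % 15 m assumed max lidar range
```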