Lidar Toolbox

Design, analyze, and test lidar processing systems

Lidar Toolbox™ provides algorithms, functions, and apps for designing, analyzing, and testing lidar processing systems. You can perform object detection and tracking, semantic segmentation, shape fitting, lidar registration, and obstacle detection. The toolbox provides workflows and an app for lidar-camera cross-calibration.

The toolbox lets you stream data from Velodyne®, Ouster®, and Hokuyo™ lidars and read data recorded by Velodyne, Ouster, and Hesai® lidar sensors. The Lidar Viewer app enables interactive visualization and analysis of lidar point clouds. You can train detection, semantic segmentation, and classification models using machine learning and deep learning algorithms such as PointPillars, SqueezeSegV2, and PointNet++. The Lidar Labeler app supports manual and semi-automated labeling of lidar point clouds for training deep learning and machine learning models.

Lidar Toolbox provides lidar processing reference examples for perception and navigation workflows. Most toolbox algorithms support C/C++ code generation for integrating with existing code, desktop prototyping, and deployment.
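As a rough sketch of that code generation workflow (assuming MATLAB Coder is installed and that the operations in the entry-point function support code generation; the function name downsampleMap and the sample file are illustrative), you wrap the algorithm in a function and call codegen on it:

    % downsampleMap.m -- illustrative entry-point function for code generation.
    function ptCloudOut = downsampleMap(ptCloudIn) %#codegen
    ptCloudOut = pcdownsample(ptCloudIn, "gridAverage", 0.2);
    end

    % Generate a C library from the entry point, using a sample point cloud
    % to define the input type.
    ptCloud = pcread("teapot.ply");
    codegen downsampleMap -args {ptCloud} -config:lib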

Get Started

Learn the basics of Lidar Toolbox

I/O

Read, write, and visualize lidar data
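For example, a minimal sketch of reading and visualizing recorded data (the PCAP file name is a placeholder for your own Velodyne HDL-32E recording):

    % Read one frame from a recorded Velodyne PCAP file and display it.
    veloReader = velodyneFileReader("lidarData_ConstructionRoad.pcap", "HDL32E");
    ptCloud = readFrame(veloReader, 1);   % returns a pointCloud object
    pcshow(ptCloud)                       % interactive 3-D display
    title("Recorded Lidar Frame")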

Preprocessing

Downsample, filter, transform, align, block, organize, and extract features from 3-D point clouds
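A minimal sketch of a typical preprocessing chain (the input file is a stand-in for your own point cloud data):

    % Denoise, downsample, and rigidly transform a point cloud.
    ptCloud = pcread("teapot.ply");                        % any pointCloud object
    ptCloud = pcdenoise(ptCloud);                          % remove statistical outliers
    ptCloud = pcdownsample(ptCloud, "gridAverage", 0.1);   % voxel-grid downsampling
    tform = rigidtform3d(eye(3), [0 0 1]);                 % translate 1 m along z
    ptCloudTformed = pctransform(ptCloud, tform);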

Labeling, Segmentation, and Detection

Label, segment, detect, and track objects in point cloud data using deep learning and geometric algorithms
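As one geometric example (a minimal sketch; the input file is a placeholder for a real lidar scan with a ground surface), you can remove the ground plane and cluster the remaining points into candidate objects:

    % Segment ground with a plane fit, then cluster the remaining points.
    ptCloud = pcread("teapot.ply");                 % substitute a real lidar scan
    maxDistance = 0.3;                              % plane-fit inlier tolerance (m)
    referenceVector = [0 0 1];                      % ground is roughly horizontal
    [~, inlierIdx, outlierIdx] = pcfitplane(ptCloud, maxDistance, referenceVector);
    nonGround = select(ptCloud, outlierIdx);        % points not on the ground plane
    [labels, numClusters] = pcsegdist(nonGround, 0.5);  % 0.5 m cluster tolerance
    pcshow(nonGround.Location, labels)              % color points by cluster label
    colormap(hsv(numClusters))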

Calibration and Sensor Fusion

Interactively perform lidar-camera calibration, estimate the transformation matrix, and fuse data from multiple sensors
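As a hedged sketch of what such fusion builds on (the intrinsic matrix, transform, and input file below are illustrative placeholders, not calibration results), lidar points can be projected into a camera image frame with basic matrix arithmetic once the calibration is known:

    % Project lidar points into a camera image frame using known calibration.
    K = [1000 0 640; 0 1000 360; 0 0 1];           % pinhole intrinsic matrix (placeholder)
    tform = eye(4);                                % 4-by-4 lidar-to-camera transform (placeholder)
    ptCloud = pcread("teapot.ply");                % substitute a real lidar scan
    pts = double(ptCloud.Location);                % N-by-3 lidar points
    ptsCam = [pts, ones(size(pts,1),1)] * tform.'; % transform into the camera frame
    ptsCam = ptsCam(:,1:3);
    inFront = ptsCam(:,3) > 0;                     % keep points in front of the camera
    proj = (K * ptsCam(inFront,:).').';            % apply the pinhole projection
    uv = proj(:,1:2) ./ proj(:,3);                 % pixel coordinates of lidar points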

Navigation and Mapping

Point cloud registration and map building, 2-D and 3-D SLAM, and 2-D obstacle detection
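A minimal sketch of pairwise registration and map building (the scan file names are placeholders for your own data):

    % Register a new scan against a reference scan with NDT, then merge into a map.
    fixed  = pcread("scan1.ply");                  % placeholder file names
    moving = pcread("scan2.ply");
    gridStep = 1.0;                                % NDT voxel size in meters
    tform = pcregisterndt(pcdownsample(moving, "gridAverage", 0.5), ...
                          pcdownsample(fixed, "gridAverage", 0.5), gridStep);
    aligned = pctransform(moving, tform);          % bring the new scan into the map frame
    map = pcmerge(fixed, aligned, 0.1);            % accumulate into a single point cloud
    pcshow(map)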

Lidar Toolbox Supported Hardware

Support for third-party hardware