configureDetectorMonoCamera

Configure object detector for using calibrated monocular camera

Description


configuredDetector = configureDetectorMonoCamera(detector,sensor,objectSize) configures an ACF (aggregate channel features), Faster R-CNN (regions with convolutional neural networks), Fast R-CNN, or YOLO v2 object detector to detect objects of a known size on a ground plane. Specify your trained object detector, detector, a camera configuration for transforming image coordinates to world coordinates, sensor, and the range of object widths and lengths, objectSize.

Examples


Configure an ACF object detector for use with a monocular camera mounted on an ego vehicle. Use this detector to detect vehicles within video frames captured by the camera.

Load an acfObjectDetector object pretrained to detect vehicles.

detector = vehicleDetectorACF;

Model a monocular camera sensor by creating a monoCamera object. This object contains the camera intrinsics and the location of the camera on the ego vehicle.

focalLength = [309.4362 344.2161];    % [fx fy]
principalPoint = [318.9034 257.5352]; % [cx cy]
imageSize = [480 640];                % [mrows ncols]
height = 2.1798;                      % height of camera above ground, in meters
pitch = 14;                           % pitch of camera, in degrees
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);

monCam = monoCamera(intrinsics,height,'Pitch',pitch);

Configure the detector for use with the camera. Limit the width of detected objects to a typical range for vehicle widths: 1.5–2.5 meters. The configured detector is an acfObjectDetectorMonoCamera object.

vehicleWidth = [1.5 2.5];
detectorMonoCam = configureDetectorMonoCamera(detector,monCam,vehicleWidth);

Load a video captured from the camera, and create a video reader and player.

videoFile = fullfile(toolboxdir('driving'),'drivingdata','caltech_washington1.avi');
reader = vision.VideoFileReader(videoFile,'VideoOutputDataType','uint8');
videoPlayer = vision.VideoPlayer('Position',[29 597 643 386]);

Run the detector in a loop over the video. Annotate the video with the bounding boxes for the detections and the detection confidence scores.

cont = ~isDone(reader);
while cont
   I = reader();

   % Run the detector.
   [bboxes,scores] = detect(detectorMonoCam,I);
   if ~isempty(bboxes)
       I = insertObjectAnnotation(I, ...
                           'rectangle',bboxes, ...
                           scores, ...
                           'Color','g');
   end
   videoPlayer(I)
   % Exit the loop if the video player figure is closed.
   cont = ~isDone(reader) && isOpen(videoPlayer);
end
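
When the loop exits, you can release the video reader and player to close the video file and free system resources. This cleanup step is not part of the listing above.

% Release the video reader and player.
release(videoPlayer);
release(reader);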

Input Arguments


detector — Object detector to configure

Object detector to configure, specified as an acfObjectDetector, fastRCNNObjectDetector, fasterRCNNObjectDetector, or yolov2ObjectDetector object.

Train the object detector before configuring it by using the trainACFObjectDetector, trainFastRCNNObjectDetector, trainFasterRCNNObjectDetector, or trainYOLOv2ObjectDetector function.
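
As an illustrative sketch only, training an ACF vehicle detector from labeled data might look like the following. The MAT-file name and the table variable vehicleTrainingData are hypothetical placeholders, not part of this page.

% Hypothetical training data: a table whose first column lists image file names
% and whose remaining columns contain bounding boxes for the object class.
data = load('vehicleTrainingData.mat');   % hypothetical MAT-file
trainingData = data.vehicleTrainingData;  % hypothetical table variable

% Train an ACF object detector; NumStages controls the number of training stages.
detector = trainACFObjectDetector(trainingData,'NumStages',5);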

sensor — Camera configuration

Camera configuration, specified as a monoCamera object. The object contains the camera intrinsics; the camera location; the pitch, yaw, and roll placement; and the world units for these parameters. The intrinsics are used to transform object points in the image to world coordinates, which you can then compare to the WorldObjectSize property of the detector.
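
You can perform the same image-to-world conversion directly with the monoCamera object. A minimal sketch, assuming the monCam object from the example above; the pixel coordinates are illustrative:

% Convert a point from image coordinates (pixels) to vehicle coordinates
% (world units on the road surface). The pixel location is illustrative.
imagePoint = [320 400];
vehiclePoint = imageToVehicle(monCam,imagePoint)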

objectSize — Range of object widths and lengths

Range of object widths and lengths in world units, specified as a [minWidth maxWidth] vector or a [minWidth maxWidth; minLength maxLength] matrix. Specifying the range of object lengths is optional.
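
For example, both of these forms are valid for vehicles; the length range shown here is only an illustrative value, not taken from this page:

vehicleSize = [1.5 2.5];            % widths only: [minWidth maxWidth], in meters
vehicleSize = [1.5 2.5; 2.5 5.5];   % widths and lengths: second row is [minLength maxLength]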

Output Arguments

configuredDetector — Configured object detector

Configured object detector, returned as an object detector configured for use with the monocular camera. The type of the returned detector corresponds to the type of the input detector. For example, configuring an acfObjectDetector returns an acfObjectDetectorMonoCamera object.

Introduced in R2017a