
detect

Detect objects using YOLO v2 object detector configured for monocular camera

Description

bboxes = detect(detector,I) detects objects within image I using a you only look once version 2 (YOLO v2) object detector configured for a monocular camera. The locations of the detected objects are returned as a set of bounding boxes.

When using this function, use of a CUDA®-enabled NVIDIA® GPU is highly recommended. The GPU reduces computation time significantly. Usage of the GPU requires Parallel Computing Toolbox™. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

[bboxes,scores] = detect(detector,I) also returns the detection confidence scores for each bounding box.


[___,labels] = detect(detector,I) returns a categorical array of labels assigned to the bounding boxes in addition to the output arguments from the previous syntax. The labels used for object classes are defined during training using the trainYOLOv2ObjectDetector function.


[___] = detect(___,roi) detects objects within the rectangular search region specified by roi. Specify input arguments and use output arguments from any of the previous syntaxes.

detectionResults = detect(detector,ds) detects objects within the series of images returned by the read function of the input datastore.

[___] = detect(___,Name,Value) also specifies options using one or more Name,Value pair arguments in addition to the input arguments in any of the preceding syntaxes.

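A minimal sketch of a basic call, assuming detectorMonoCam is a yolov2ObjectDetectorMonoCamera object (created as in the example below) and 'highway.png' is a hypothetical test image:

I = imread('highway.png');                  % hypothetical test image
[bboxes,scores,labels] = detect(detectorMonoCam,I,'Threshold',0.6);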

Examples


Configure a YOLO v2 object detector for detecting vehicles within a video captured by a monocular camera.

Load a yolov2ObjectDetector object pretrained to detect vehicles.

vehicleDetector = load('yolov2VehicleDetector.mat','detector');
detector = vehicleDetector.detector;

Model a monocular camera sensor by creating a monoCamera object. This object contains the camera intrinsics and the location of the camera on the ego vehicle.

focalLength = [309.4362 344.2161];    % [fx fy]
principalPoint = [318.9034 257.5352]; % [cx cy]
imageSize = [480 640];                % [mrows ncols]
height = 2.1798;                      % Height of camera above ground, in meters
pitch = 14;                           % Pitch of camera, in degrees
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);

sensor = monoCamera(intrinsics,height,'Pitch',pitch);

Configure the detector for use with the camera. Limit the width of detected objects to 1.5-2.5 meters. The configured detector is a yolov2ObjectDetectorMonoCamera object.

vehicleWidth = [1.5 2.5];
detectorMonoCam = configureDetectorMonoCamera(detector,sensor,vehicleWidth);

Set up the video reader and read the input monocular video.

videoFile = '05_highway_lanechange_25s.mp4';
reader = VideoReader(videoFile);

Create a video player to display the video and the output detections.

videoPlayer = vision.DeployableVideoPlayer();

Detect vehicles in the video by using the detector. Specify the detection threshold as 0.6. Annotate the video with the bounding boxes for the detections, labels, and detection confidence scores.

cont = hasFrame(reader);
while cont
    I = readFrame(reader);
    [bboxes,scores,labels] = detect(detectorMonoCam,I,'Threshold',0.6); % Run the YOLO v2 object detector
    
    if ~isempty(bboxes)
        displayLabel = strcat(cellstr(labels),':',num2str(scores));
        I = insertObjectAnnotation(I,'rectangle',bboxes,displayLabel);
    end
    step(videoPlayer, I);    
    cont = hasFrame(reader) && isOpen(videoPlayer); % Exit the loop if the video player figure window is closed
end

Input Arguments


YOLO v2 object detector configured for monocular camera, specified as a yolov2ObjectDetectorMonoCamera object. To create this object, use the configureDetectorMonoCamera function with a monoCamera object and a trained yolov2ObjectDetector object as inputs.

Input image, specified as an H-by-W-by-C-by-B numeric array of images. The images must be real, nonsparse, grayscale or RGB images.

  • H — Height in pixels.

  • W — Width in pixels.

  • C — The channel size in each image must be equal to the network's input channel size. For example, for grayscale images, C must be equal to 1. For RGB color images, it must be equal to 3.

  • B — Number of images in the array.

The detector is sensitive to the range of the input image. Therefore, ensure that the input image range is similar to the range of the images used to train the detector. For example, if the detector was trained on uint8 images, rescale this input image to the range [0, 255] by using the im2uint8 or rescale function. The size of this input image should be comparable to the sizes of the images used in training. If these sizes are very different, the detector has difficulty detecting objects because the scale of the objects in the input image differs from the scale of the objects the detector was trained to identify. Consider whether you used the SmallestImageDimension property during training to modify the size of training images.

Data Types: uint8 | uint16 | int16 | double | single | logical
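For example, a minimal sketch that converts an input frame to uint8 before detection; the image file name and the detectorMonoCam variable are assumptions for illustration.

I = imread('highwayFrame.png');   % hypothetical test image
if ~isa(I,'uint8')
    I = im2uint8(I);              % rescale to the range [0, 255]
end
[bboxes,scores] = detect(detectorMonoCam,I);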

Datastore, specified as a datastore object containing a collection of images. Each image must be a grayscale, RGB, or multichannel image. If the read function of the datastore returns a cell array or a table with multiple columns, the images must be in the first column; the function processes only that column.
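For example, a minimal sketch of batch detection over a folder of test images; the folder name and the detectorMonoCam variable are assumptions for illustration.

imds = imageDatastore('testFrames');   % hypothetical folder of test images
detectionResults = detect(detectorMonoCam,imds,'Threshold',0.6);
% detectionResults is a table with Boxes, Scores, and Labels variables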

Search region of interest, specified as a four-element vector of the form [x y width height]. The vector specifies the upper left corner and size of a region in pixels.
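For example, a minimal sketch that restricts the search to the lower half of a 480-by-640 frame; the frame size, the image I, and the detectorMonoCam variable are assumptions for illustration.

roi = [1 241 640 240];   % [x y width height], lower half of a 480-by-640 frame
[bboxes,scores,labels] = detect(detectorMonoCam,I,roi);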

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: detect(detector,I,'Threshold',0.25)

Detection threshold, specified as a comma-separated pair consisting of 'Threshold' and a scalar in the range [0, 1]. Detections that have scores less than this threshold value are removed. To reduce false positives, increase this value.

Select the strongest bounding box for each detected object, specified as the comma-separated pair consisting of 'SelectStrongest' and true or false.

  • true — Returns the strongest bounding box per object. The method calls the selectStrongestBboxMulticlass function, which uses nonmaximal suppression to eliminate overlapping bounding boxes based on their confidence scores.

    By default, the selectStrongestBboxMulticlass function is called as follows:

     selectStrongestBboxMulticlass(bbox,scores,...
                                   'RatioType','Min',...
                                   'OverlapThreshold',0.5);

  • false — Returns all the detected bounding boxes. You can then write your own custom method to eliminate overlapping bounding boxes.
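For example, a minimal sketch that disables the built-in selection and applies a stricter overlap threshold; the 0.3 value, the image I, and the detectorMonoCam variable are assumptions for illustration.

[bboxes,scores,labels] = detect(detectorMonoCam,I,'SelectStrongest',false);
% Apply custom suppression with a tighter overlap threshold
[bboxes,scores,labels] = selectStrongestBboxMulticlass(bboxes,scores,labels, ...
    'RatioType','Min','OverlapThreshold',0.3);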

Minimum region size, specified as the comma-separated pair consisting of 'MinSize' and a vector of the form [height width]. Units are in pixels. The minimum region size defines the size of the smallest region containing the object.

By default, 'MinSize' is 1-by-1.

Maximum region size, specified as the comma-separated pair consisting of 'MaxSize' and a vector of the form [height width]. Units are in pixels. The maximum region size defines the size of the largest region containing the object.

By default, 'MaxSize' is set to the height and width of the input image, I. To reduce computation time, set this value to the known maximum region size for the objects that can be detected in the input test image.
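For example, a minimal sketch that bounds the expected object size in pixels; the size values, the image I, and the detectorMonoCam variable are assumptions for illustration.

[bboxes,scores] = detect(detectorMonoCam,I, ...
    'MinSize',[24 24],'MaxSize',[240 320]);   % assumed vehicle size range in pixels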

Hardware resource on which to run the detector, specified as the comma-separated pair consisting of 'ExecutionEnvironment' and 'auto', 'gpu', or 'cpu'.

  • 'auto' — Use a GPU if it is available. Otherwise, use the CPU.

  • 'gpu' — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA-enabled NVIDIA GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

  • 'cpu' — Use the CPU.
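For example, a minimal sketch that forces CPU execution, such as on a machine without a supported GPU; the image I and the detectorMonoCam variable are assumptions for illustration.

[bboxes,scores] = detect(detectorMonoCam,I,'ExecutionEnvironment','cpu');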

Performance optimization, specified as the comma-separated pair consisting of 'Acceleration' and one of the following:

  • 'auto' — Automatically apply a number of optimizations suitable for the input network and hardware resource.

  • 'mex' — Compile and execute a MEX function. This option is available only when using a GPU. Using a GPU requires Parallel Computing Toolbox and a CUDA-enabled NVIDIA GPU. If Parallel Computing Toolbox or a suitable GPU is not available, then the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

  • 'none' — Disable all acceleration.

The default option is 'auto'. If you specify 'auto', MATLAB® applies a number of compatible optimizations but never generates a MEX function.

Using the 'Acceleration' options 'auto' and 'mex' can offer performance benefits, but at the expense of an increased initial run time. Subsequent calls with compatible parameters are faster. Use performance optimization when you plan to call the function multiple times using new input data.

The 'mex' option generates and executes a MEX function based on the network and parameters used in the function call. You can have several MEX functions associated with a single network at one time. Clearing the network variable also clears any MEX functions associated with that network.

The 'mex' option is only available for input data specified as a numeric array, cell array of numeric arrays, table, or image datastore. No other types of datastore support the 'mex' option.

The 'mex' option is only available when you are using a GPU. You must also have a C/C++ compiler installed. For setup instructions, see MEX Setup (GPU Coder).

'mex' acceleration does not support all layers. For a list of supported layers, see Supported Layers (GPU Coder).
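For example, a minimal sketch that requests MEX acceleration for repeated calls; the image variables and the detectorMonoCam variable are assumptions for illustration.

% First call compiles a MEX function; later compatible calls reuse it
[bboxes,scores] = detect(detectorMonoCam,I,'Acceleration','mex');
[bboxes,scores] = detect(detectorMonoCam,I2,'Acceleration','mex');   % I2 is another test image (assumed)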

Output Arguments


Location of objects detected within the input image or images, returned as an M-by-4 matrix or a B-by-1 cell array. M is the number of bounding boxes in an image, and B is the number of M-by-4 matrices when the input contains an array of images.

Each row of bboxes contains a four-element vector of the form [x y width height]. This vector specifies the upper left corner and size of that corresponding bounding box in pixels.

Detection confidence scores, returned as an M-by-1 vector or a B-by-1 cell array. M is the number of bounding boxes in an image, and B is the number of M-by-1 vectors when the input contains an array of images. A higher score indicates higher confidence in the detection.

Labels for bounding boxes, returned as an M-by-1 categorical array or a B-by-1 cell array. M is the number of labels in an image, and B is the number of M-by-1 categorical arrays when the input contains an array of images. You define the class names used to label the objects when you train the input detector.

Detection results, returned as a three-column table with variables Boxes, Scores, and Labels. The Boxes column contains M-by-4 matrices of M bounding boxes for the objects found in each image. Each row contains a bounding box as a four-element vector of the form [x y width height], which specifies the upper-left corner location and size, in pixels, of the bounding box in the corresponding image.
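As a sketch, the results table can be passed to an evaluation function such as evaluateDetectionPrecision; the groundTruthData variable is an assumption for illustration.

% Evaluate the detection results against labeled ground truth (assumed variable)
[ap,recall,precision] = evaluateDetectionPrecision(detectionResults,groundTruthData);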

Version History

Introduced in R2019a