objectDetectorTrainingData

Create training data for an object detector

Description


[imds,blds] = objectDetectorTrainingData(gTruth) creates an image datastore and a box label datastore of training data from the specified ground truth.

You can combine the image and box label datastores using combine(imds,blds) to create a single datastore for training. Use the combined datastore with training functions such as trainACFObjectDetector, trainYOLOv2ObjectDetector, trainFastRCNNObjectDetector, trainFasterRCNNObjectDetector, and trainRCNNObjectDetector.
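A minimal sketch of this combining step, assuming a groundTruth object named gTruth already exists in the workspace:

```matlab
% Extract an image datastore and a box label datastore from ground truth
% (gTruth is assumed to exist in the workspace).
[imds,blds] = objectDetectorTrainingData(gTruth);

% Combine them into a single datastore suitable for the training functions.
cds = combine(imds,blds);

% Each read returns a 1-by-3 cell: {image, M-by-4 boxes, M-by-1 labels}.
out = read(cds);
```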

This function supports parallel computing using multiple MATLAB® workers. Enable parallel computing using the Computer Vision Toolbox Preferences dialog.


trainingDataTable = objectDetectorTrainingData(gTruth) returns a table of training data from the specified ground truth. You can use the table to train an object detector using the training functions.

___ = objectDetectorTrainingData(gTruth,Name,Value) returns the training data with additional options specified by one or more name-value pair arguments. If you create the groundTruth objects in gTruth using a video file or a custom data source, then you can specify any combination of name-value pair arguments. If you create the groundTruth objects from an image collection or image sequence data source, then you can specify only the 'SamplingFactor' name-value pair argument.
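For example, when the groundTruth object was created from a video file, the extracted frames can be written to a specific folder. In this sketch, gTruth is assumed to come from a video file, and the output folder name is illustrative and must already exist:

```matlab
% gTruth is assumed to have been created from a video file.
% 'extractedFrames' is an illustrative folder that must already exist
% with write permission.
trainingDataTable = objectDetectorTrainingData(gTruth, ...
    'SamplingFactor',5, ...
    'WriteLocation',fullfile(pwd,'extractedFrames'));
```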

Examples


Train a vehicle detector based on a YOLO v2 network.

Add the folder containing images to the MATLAB path.

imageDir = fullfile(matlabroot,'toolbox','vision','visiondata','vehicles');
addpath(imageDir);

Load the vehicle ground truth data.

data = load('vehicleTrainingGroundTruth.mat');
gTruth = data.vehicleTrainingGroundTruth;

Load the detector containing the layerGraph object for training.

vehicleDetector = load('yolov2VehicleDetector.mat');
lgraph = vehicleDetector.lgraph
lgraph = 
  LayerGraph with properties:

         Layers: [25×1 nnet.cnn.layer.Layer]
    Connections: [24×2 table]

Create an image datastore and box label datastore using the ground truth object.

[imds,bxds] = objectDetectorTrainingData(gTruth);

Combine the datastores.

cds = combine(imds,bxds);

Configure training options.

options = trainingOptions('sgdm', ...
       'InitialLearnRate', 0.001, ...
       'Verbose',true, ...
       'MiniBatchSize',16, ...
       'MaxEpochs',30, ...
       'Shuffle','every-epoch', ...
       'VerboseFrequency',10); 

Train the detector.

[detector,info] = trainYOLOv2ObjectDetector(cds,lgraph,options);
Training on single CPU.
|========================================================================================|
|  Epoch  |  Iteration  |  Time Elapsed  |  Mini-batch  |  Mini-batch  |  Base Learning  |
|         |             |   (hh:mm:ss)   |     RMSE     |     Loss     |      Rate       |
|========================================================================================|
|       1 |           1 |       00:00:00 |         7.83 |         61.4 |          0.0010 |
|       1 |          10 |       00:00:05 |         2.12 |          4.5 |          0.0010 |
|       2 |          20 |       00:00:11 |         1.39 |          1.9 |          0.0010 |
|       2 |          30 |       00:00:16 |         1.83 |          3.3 |          0.0010 |
|       3 |          40 |       00:00:22 |         1.56 |          2.4 |          0.0010 |
|       3 |          50 |       00:00:27 |         1.60 |          2.5 |          0.0010 |
|       4 |          60 |       00:00:32 |         1.52 |          2.3 |          0.0010 |
|       4 |          70 |       00:00:37 |         1.58 |          2.5 |          0.0010 |
|       5 |          80 |       00:00:43 |         1.54 |          2.4 |          0.0010 |
|       5 |          90 |       00:00:48 |         1.20 |          1.5 |          0.0010 |
|       6 |         100 |       00:00:53 |         1.16 |          1.3 |          0.0010 |
|       7 |         110 |       00:00:58 |         1.02 |          1.0 |          0.0010 |
|       7 |         120 |       00:01:03 |         1.05 |          1.1 |          0.0010 |
|       8 |         130 |       00:01:09 |         1.13 |          1.3 |          0.0010 |
|       8 |         140 |       00:01:14 |         1.06 |          1.1 |          0.0010 |
|       9 |         150 |       00:01:19 |         1.15 |          1.3 |          0.0010 |
|       9 |         160 |       00:01:24 |         1.03 |          1.1 |          0.0010 |
|      10 |         170 |       00:01:30 |         1.10 |          1.2 |          0.0010 |
|      10 |         180 |       00:01:35 |         0.90 |          0.8 |          0.0010 |
|      11 |         190 |       00:01:40 |         0.67 |          0.4 |          0.0010 |
|      12 |         200 |       00:01:45 |         0.87 |          0.8 |          0.0010 |
|      12 |         210 |       00:01:50 |         0.73 |          0.5 |          0.0010 |
|      13 |         220 |       00:01:56 |         1.00 |          1.0 |          0.0010 |
|      13 |         230 |       00:02:01 |         0.73 |          0.5 |          0.0010 |
|      14 |         240 |       00:02:06 |         0.97 |          0.9 |          0.0010 |
|      14 |         250 |       00:02:11 |         0.76 |          0.6 |          0.0010 |
|      15 |         260 |       00:02:17 |         0.99 |          1.0 |          0.0010 |
|      15 |         270 |       00:02:22 |         0.76 |          0.6 |          0.0010 |
|      16 |         280 |       00:02:27 |         0.71 |          0.5 |          0.0010 |
|      17 |         290 |       00:02:32 |         0.79 |          0.6 |          0.0010 |
|      17 |         300 |       00:02:38 |         0.77 |          0.6 |          0.0010 |
|      18 |         310 |       00:02:43 |         0.80 |          0.6 |          0.0010 |
|      18 |         320 |       00:02:48 |         0.74 |          0.5 |          0.0010 |
|      19 |         330 |       00:02:53 |         0.90 |          0.8 |          0.0010 |
|      19 |         340 |       00:02:59 |         0.79 |          0.6 |          0.0010 |
|      20 |         350 |       00:03:04 |         1.01 |          1.0 |          0.0010 |
|      20 |         360 |       00:03:09 |         0.70 |          0.5 |          0.0010 |
|      21 |         370 |       00:03:14 |         0.63 |          0.4 |          0.0010 |
|      22 |         380 |       00:03:20 |         0.77 |          0.6 |          0.0010 |
|      22 |         390 |       00:03:25 |         0.61 |          0.4 |          0.0010 |
|      23 |         400 |       00:03:30 |         0.63 |          0.4 |          0.0010 |
|      23 |         410 |       00:03:35 |         0.56 |          0.3 |          0.0010 |
|      24 |         420 |       00:03:41 |         0.84 |          0.7 |          0.0010 |
|      24 |         430 |       00:03:46 |         0.63 |          0.4 |          0.0010 |
|      25 |         440 |       00:03:51 |         0.77 |          0.6 |          0.0010 |
|      25 |         450 |       00:03:56 |         0.62 |          0.4 |          0.0010 |
|      26 |         460 |       00:04:01 |         0.60 |          0.4 |          0.0010 |
|      27 |         470 |       00:04:07 |         0.66 |          0.4 |          0.0010 |
|      27 |         480 |       00:04:12 |         0.55 |          0.3 |          0.0010 |
|      28 |         490 |       00:04:17 |         0.57 |          0.3 |          0.0010 |
|      28 |         500 |       00:04:23 |         0.51 |          0.3 |          0.0010 |
|      29 |         510 |       00:04:28 |         0.72 |          0.5 |          0.0010 |
|      29 |         520 |       00:04:33 |         0.60 |          0.4 |          0.0010 |
|      30 |         530 |       00:04:38 |         0.65 |          0.4 |          0.0010 |
|      30 |         540 |       00:04:43 |         0.62 |          0.4 |          0.0010 |
|========================================================================================|

Read a test image.

I = imread('highway.png');

Run the detector.

[bboxes,scores] = detect(detector,I);

Display the results.

if(~isempty(bboxes))
  I = insertObjectAnnotation(I,'rectangle',bboxes,scores);
end
figure
imshow(I)

Use training data to train an ACF-based object detector for stop signs.

Add the folder containing images to the MATLAB path.

imageDir = fullfile(matlabroot, 'toolbox', 'vision', 'visiondata', 'stopSignImages');
addpath(imageDir);

Load ground truth data, which contains data for stop signs and cars.

load('stopSignsAndCarsGroundTruth.mat','stopSignsAndCarsGroundTruth')

View the label definitions to see the label types in the ground truth.

stopSignsAndCarsGroundTruth.LabelDefinitions

Select the stop sign data for training.

stopSignGroundTruth = selectLabels(stopSignsAndCarsGroundTruth,'stopSign');

Create the training data for a stop sign object detector.

trainingData = objectDetectorTrainingData(stopSignGroundTruth);
summary(trainingData)
Variables:

    imageFilename: 41×1 cell array of character vectors

    stopSign: 41×1 cell

Train an ACF-based object detector.

acfDetector = trainACFObjectDetector(trainingData,'NegativeSamplesFactor',2);
ACF Object Detector Training
The training will take 4 stages. The model size is 34x31.
Sample positive examples(~100% Completed)
Compute approximation coefficients...Completed.
Compute aggregated channel features...Completed.
--------------------------------------------
Stage 1:
Sample negative examples(~100% Completed)
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 84 negative examples...Completed.
The trained classifier has 19 weak learners.
--------------------------------------------
Stage 2:
Sample negative examples(~100% Completed)
Found 84 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 84 negative examples...Completed.
The trained classifier has 20 weak learners.
--------------------------------------------
Stage 3:
Sample negative examples(~100% Completed)
Found 84 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 84 negative examples...Completed.
The trained classifier has 54 weak learners.
--------------------------------------------
Stage 4:
Sample negative examples(~100% Completed)
Found 84 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 84 negative examples...Completed.
The trained classifier has 61 weak learners.
--------------------------------------------
ACF object detector training is completed. Elapsed time is 30.3579 seconds.

Test the ACF-based detector on a sample image.

I = imread('stopSignTest.jpg');
bboxes = detect(acfDetector,I);

Display the detected object.

annotation = acfDetector.ModelName;
I = insertObjectAnnotation(I,'rectangle',bboxes,annotation);

figure 
imshow(I)

Remove the image folder from the path.

rmpath(imageDir); 

Use training data to train an ACF-based object detector for vehicles.

Add the folder containing images to the MATLAB path.

imageDir = fullfile(matlabroot,'toolbox','driving','drivingdata','vehiclesSequence');
addpath(imageDir);

Load the ground truth data.

load vehicleGroundTruth.mat

Create the training data for an object detector for vehicles.

trainingData = objectDetectorTrainingData(gTruth,'SamplingFactor',2);

Train the ACF-based object detector.

acfDetector = trainACFObjectDetector(trainingData,'ObjectTrainingSize',[20 20]);
ACF Object Detector Training
The training will take 4 stages. The model size is 20x20.
Sample positive examples(~100% Completed)
Compute approximation coefficients...Completed.
Compute aggregated channel features...Completed.
--------------------------------------------
Stage 1:
Sample negative examples(~100% Completed)
Compute aggregated channel features...Completed.
Train classifier with 71 positive examples and 355 negative examples...Completed.
The trained classifier has 68 weak learners.
--------------------------------------------
Stage 2:
Sample negative examples(~100% Completed)
Found 76 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 71 positive examples and 355 negative examples...Completed.
The trained classifier has 120 weak learners.
--------------------------------------------
Stage 3:
Sample negative examples(~100% Completed)
Found 54 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 71 positive examples and 355 negative examples...Completed.
The trained classifier has 170 weak learners.
--------------------------------------------
Stage 4:
Sample negative examples(~100% Completed)
Found 63 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 71 positive examples and 355 negative examples...Completed.
The trained classifier has 215 weak learners.
--------------------------------------------
ACF object detector training is completed. Elapsed time is 28.4547 seconds.

Test the ACF detector on a test image.

I = imread('highway.png');
[bboxes, scores] = detect(acfDetector,I,'Threshold',1);

Select the detection with the highest classification score.

[~,idx] = max(scores);

Display the detected object.

annotation = acfDetector.ModelName;
I = insertObjectAnnotation(I,'rectangle',bboxes(idx,:),annotation);

figure 
imshow(I)

Remove the image folder from the path.

rmpath(imageDir);

Input Arguments


gTruth — Ground truth data
scalar | array of groundTruth objects

Ground truth data, specified as a scalar or an array of groundTruth objects. You can create ground truth objects from existing ground truth data by using the groundTruth object.

If you use custom data sources in groundTruth with parallel computing enabled, then the reader function is expected to work with a pool of MATLAB workers to read images from the data source in parallel.

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'SamplingFactor',5

'SamplingFactor' — Factor for subsampling images
'auto' | integer | vector of integers

Factor for subsampling images in the ground truth data source, specified as 'auto', an integer, or a vector of integers. For a sampling factor of N, the returned training data includes every Nth image in the ground truth data source. The function ignores ground truth images with empty label data.

Value              | Sampling Factor
-------------------|----------------
'auto'             | The sampling factor N is 5 for data sources with timestamps, and 1 for a collection of images.
Integer            | All ground truth data sources in gTruth are sampled with the same sampling factor N.
Vector of integers | The kth ground truth data source in gTruth is sampled with a sampling factor of N(k).
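For instance, a vector of sampling factors applies a different rate to each ground truth source. In this sketch, the array of groundTruth objects is assumed:

```matlab
% gTruthArray is an assumed 1-by-2 array of groundTruth objects.
% Keep every 2nd image from the first source and every 5th from the second.
trainingData = objectDetectorTrainingData(gTruthArray,'SamplingFactor',[2 5]);
```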

'WriteLocation' — Folder name to write extracted images to
string scalar | character vector

Folder name to write extracted images to, specified as a string scalar or character vector. The specified folder must exist and have write permissions. This argument applies only to groundTruth objects created using a video file or a custom data source.

'ImageFormat' — Image file format
string scalar | character vector

Image file format, specified as a string scalar or character vector. File formats must be supported by imwrite. This argument applies only to groundTruth objects created using a video file or a custom data source.

'NamePrefix' — Prefix for output image file names
string scalar | character vector

Prefix for output image file names, specified as a string scalar or character vector. The image files are named as:

<name_prefix><image_number>.<image_format>

The default value uses the name of the data source that the images were extracted from, strcat(sourceName,'_'). This argument applies only to groundTruth objects created using a video file or a custom data source.

'Verbose' — Flag to display training progress
true | false

Flag to display training progress at the MATLAB command line, specified as either true or false. This argument applies only to groundTruth objects created using a video file or a custom data source.

Output Arguments


imds — Image datastore
imageDatastore object

Image datastore, returned as an imageDatastore object containing images extracted from the gTruth objects. The images in imds contain at least one class of annotated labels. The function ignores images that are not annotated.

blds — Box label datastore
boxLabelDatastore object

Box label datastore, returned as a boxLabelDatastore object. The datastore contains categorical vectors for ROI label names and M-by-4 matrices of M bounding boxes. The locations and sizes of the bounding boxes are represented as M-by-4 matrices of doubles in the format [x,y,width,height].
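Reading from the box label datastore returns the boxes and labels for one image at a time; a sketch, assuming blds was returned by objectDetectorTrainingData as in the syntax above:

```matlab
% blds is assumed to come from [imds,blds] = objectDetectorTrainingData(gTruth).
data = read(blds);   % 1-by-2 cell array
bboxes = data{1};    % M-by-4 double, each row [x y width height]
labels = data{2};    % M-by-1 categorical ROI label names
```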

trainingDataTable — Training data table
table

Training data table, returned as a table with two or more columns. The first column contains image file names with paths. The images can be grayscale or truecolor (RGB) and in any format supported by imread. Each of the remaining columns contains M-by-4 matrices that represent a single object class, such as vehicle, flower, or biological cell type. Each row of these matrices is one bounding box in the format [x,y,width,height], which specifies the upper-left corner location and the size of the bounding box in the corresponding image. To create a ground truth table, you can use the Image Labeler app or Video Labeler app.

The output table ignores any sublabel or attribute data present in the input gTruth object.
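A table in this format can also be assembled manually, for example to merge in data from another source. A minimal sketch with hypothetical file names and boxes:

```matlab
% Hypothetical two-image training table with a single class, 'vehicle'.
% File names and box coordinates are illustrative.
imageFilename = {'image01.jpg'; 'image02.jpg'};
vehicle = {[10 20 50 40]; ...               % one box in image01
           [30 15 45 35; 100 80 60 50]};    % two boxes in image02, [x y w h] rows
trainingData = table(imageFilename,vehicle);
```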

Introduced in R2017a