Bounding Box Not Drawn/Some Variables Are Empty

9 views (last 30 days)
Matpar on 19 Jan 2020
Commented: France on 19 Mar 2020
Hi Professionals,
Me here again. The advice I received only got me this far; I've tried Google and this community as well, and I am grateful!
What I am trying to do is draw the yellow box around the object of interest, and it's challenging!
Can someone tell me why these variables are empty when the data loads and trains without error?
Please point me in the right direction so I can get this sorted!
Thank you in advance for acknowledging me!
I will upload an image of the empty variables and share my code. Please see the screenshot for details.
%% Training the R-CNN detector. Training can take a few minutes to complete.
% Load the pretrained network and the ground truth table
net = alexnet;
load('gimlab.mat', 'gTruth');
% Training options (values illustrative)
opts = trainingOptions('sgdm', 'InitialLearnRate', 1e-4, 'MaxEpochs', 10);
rcnn = trainRCNNObjectDetector(gTruth, net, opts, 'NegativeOverlapRange', [0 0.3]);
%% Testing the R-CNN detector on a test image.
img = imread('Gun00011.jpg');
[bbox, score, label] = detect(rcnn, img, 'MiniBatchSize', 32);
%% Displaying strongest detection result.
[score, idx] = max(score);
bbox = bbox(idx, :);
annotation = sprintf('%s: (Confidence = %f)', label(idx), score);
detectedImg = insertObjectAnnotation(img, 'rectangle', bbox, annotation);
figure
imshow(detectedImg)

Answers (3)

Dinesh Yadav on 22 Jan 2020
Hi Matpar,
Even though you have loaded and trained on the data without error, your bounding box shows empty because, during testing, the R-CNN detector is unable to find an object of a matching class in the image, and hence it does not draw any bounding box (therefore, bbox is an empty matrix).
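One way to make this failure mode visible is to test whether bbox is empty before indexing into the results. A minimal sketch, assuming the rcnn detector and img from the question's code:

```matlab
% Run the detector, then guard against the empty-detection case
% before picking the strongest result.
[bbox, score, label] = detect(rcnn, img, 'MiniBatchSize', 32);
if isempty(bbox)
    warning('No detections: check the ground truth labels and training data.');
else
    [maxScore, idx] = max(score);
    annotation = sprintf('%s: (Confidence = %f)', label(idx), maxScore);
    detectedImg = insertObjectAnnotation(img, 'rectangle', bbox(idx,:), annotation);
    figure; imshow(detectedImg)
end
```

Indexing with max(score) on an empty score vector would otherwise error, and the silent empty bbox is exactly what the question describes.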
  4 Comments
Matpar on 10 Feb 2020
You won't believe that I am currently working on this as we speak! Same result, and I have yet to understand the issue!
I have deleted every line and started over, line by line, and the issue has still been challenging me for weeks now!
I would really love it if a professional could assist with this so I can move onward!
Sorry pal, all I can do is post my code so you can see what I have done and whether it makes sense to you!
Some things may seem out of place! I am a student, so please don't hold me to anything; I am trying to get this to do the same! Forgive me!
My code thus far:
%% Extract region proposals with selective search
%% Conducting Feature Extraction With RCNN
%% Classifying Features With SVM
%% Improving The Bounding Box
close all
%% Step 1 Creating Filenames /Loading Data
% Load the pretrained network and the ground truth table
anet = alexnet;
load('rcnnGuns.mat', 'Wgtruth');
%% Step 2 Highlighting Image Input Size
inputSize = anet.Layers(1).InputSize;
total_images = size(Wgtruth,1);
%% Step 3 Adding Image Directory For Path To Image Data
imDir = '/Users/mmgp/Documents/MATLAB/2020/RCNN/Wgtruth';
% imDir = fullfile(matlabroot, 'toolbox', 'vision', 'visiondata','Wgtruth');
% addpath(imDir);
%% Step 4 Accessing Contents Of Folder TrainingSet Using Datastore
imds = imageDatastore(imDir, 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
%% Step 5 Splitting Inputs Into Training and Testing Sets
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
%% Step 6 Replacing Final Layer/Last 3 Configure For Network classes
% Complex Architecture Layers Has Inputs/Outputs From Multiple Layers
% Finetuning These 3 Layers For New Classification
% Extracting All Layers Except The Last 3
layersTransfer = anet.Layers(1:end-3);
%% Step 7 Specifying Image Categories/Classes From 1000 to Gun (One Class):
numClasses = numel(categories(imdsTrain.Labels));
Tlayers = [
    layersTransfer
    fullyConnectedLayer(numClasses, 'Name', 'fcGun', ...
        'WeightLearnRateFactor', 20, 'BiasLearnRateFactor', 20)
    softmaxLayer('Name', 'Softmax')
    classificationLayer('Name', 'ClassOutput')];
%% Step 8 Displaying and Visualising Layer Features Of FC8
% layer(16) = maxPooling2dLayer(5,'stride',2)
% disp(Tlayers)
% layer = 22;
% channels = 1:30;
% I = deepDreamImage(net,layer,channels,'PyramidLevels',1);
% figure
% I = imtile(I,'ThumbnailSize',[64 64]);
% imshow(I)
% name = net.Layers(layer).Name;
% title(['Layer ',name,' Features'])
%% Warp Image & Pixel Labels
% Creates A Randomized 2-D Affine Transformation From A Combination Of Rotation,
% Translation, Scaling (Resizing), Reflection, And Shearing
% Rotate Input Properties By An Angle Selected Randomly From Range [-70,70] Degrees.
%% Step 9 Setting Output Function (images may vary in size; resizing for consistency with the pretrained net)
pixelRange = [-70 70];
imageAugmenter = imageDataAugmenter('RandRotation', [-70 70], ...
    'RandXTranslation', pixelRange, ...
    'RandYTranslation', pixelRange);
augimdsTrain = augmentedImageDatastore(inputSize(1:2), imdsTrain, ...
    'DataAugmentation', imageAugmenter);
%% Step 10 Resizing Images, Assists With Preventing Overfitting
% Utilising data augmentation for resizing the validation data,
% implemented without specifying additional random transformations.
% Data augmentation helps prevent the network from overfitting /
% memorizing exact details of the training images.
augmentedTrainingSet = augmentedImageDatastore(inputSize(1:2), imdsTrain, 'ColorPreprocessing', 'gray2rgb');
augimdsValidation = augmentedImageDatastore(inputSize(1:2), imdsValidation, 'ColorPreprocessing', 'gray2rgb');
%% Step 11 Specifying Training Options
% Keep features from the earlier layers of the pretrained network for transfer learning
% Specify the epoch training cycle, the mini-batch size, and validation data
% (SGDM) groups the full dataset into disjoint mini-batches. This reaches
% convergence faster, as it updates the network's weights more frequently
% and increases computational speed.
% Implementing **WITH** The RCNN Object Detector
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 32, ...
    'InitialLearnRate', 1e-4, ...
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropFactor', 0.1, ...
    'Shuffle', 'every-epoch', ...
    'LearnRateDropPeriod', 8, ...
    'L2Regularization', 1e-4, ...
    'MaxEpochs', 10, ...
    'ValidationData', augimdsValidation, ...
    'Verbose', true);
%% Step 12 Training network Consisting Of Transferred & New Layers.
netTransfer = trainNetwork(augmentedTrainingSet, Tlayers, options);
rcnn = trainRCNNObjectDetector(Wgtruth, netTransfer, options, 'NegativeOverlapRange', [0 0.3]);
save('rcnn.mat', 'rcnn')
%% Step 13 Testing R-CNN Detector On Test Image.
img = imread('11.jpg');
[bbox, score, label] = detect(rcnn, img, 'MiniBatchSize', 80);
%% Step 14 Displaying Strongest Detection Results.
% If bbox is empty, the detector found no object of a known class in this image.
if isempty(bbox)
    warning('No detections in this image.');
else
    [score, idx] = max(score);
    bbox = bbox(idx, :);
    annotation = sprintf('%s: (Confidence = %f)', label(idx), score);
    detectedImg = insertObjectAnnotation(img, 'rectangle', bbox, annotation);
    figure
    imshow(detectedImg)
end
Matpar on 10 Feb 2020
Edited: Matpar on 10 Feb 2020
What I do is go to the workspace with the same link you provided and observe what is being produced while the code is running!
Ohhhhhh! By the way, Dinesh Yadav provided some insight, and this same code produced the result perfectly, but when I tried another image it did not detect anything, and when I swapped back the image that had the perfect result, the bounding box was empty, and this has been the case since then!
I have been searching for a method to get this accurate, but this is ongoing and the challenge is still with me!
If you have another link I would gladly take a look at it to see if I can move forward, but thus far this is where I have gotten to!


France on 19 Mar 2020
Dear Matpar,
I'm in the same situation. Have you understood the problem so far?
Thank you!

Matpar on 19 Mar 2020
Edited: Matpar on 19 Mar 2020
Hey France,
Yes, I managed to solve this by myself and I am so proud; it was frustrating, but I got through it!
  1. Ensure your labelling is completed properly; this is what causes the issue. If the system has nothing to compare the positive region of interest with, guess what? It will not present the bounding box and the results you are looking for! Unfortunately, lots of details are missing in the examples, and you have to figure this out by trial and error, or hope a kind soul decides to assist you!
  2. Specify the negative blob in the labelling of the same image, or point the system to a folder of images similar in gradient intensities and highlight the negative region of interest, i.e. everything that is not the object you are trying to find!
  3. It is easier to specify the negative blob in the same image.
  4. Ensure that blob is small so that you can select the negative area as well; I will attach an image, check it out.
  5. The positive blob must be the same size as the negative blob on the same image.
  6. If you are confused, that is normal; check the image and you will see what I am talking about!
  7. Export the labelling to the workspace as a table, not as a groundTruth object.
  8. Then the code above will work.
  9. The labelling must be done for every image in the dataset, and yes, it is tedious; I suggest playing some music whilst you do this!
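For reference, the ground truth exported as a table has one column of image file names followed by one column of [x y width height] bounding boxes per class. A minimal sketch of the shape trainRCNNObjectDetector expects; the file names, box values, and variable name gunDataset below are illustrative, while netTransfer and options come from the code posted earlier in the thread:

```matlab
% Ground truth as a table: column 1 = image file names,
% column 2 = gun bounding boxes as [x y width height] per image.
gunDataset = table( ...
    {'Gun00001.jpg'; 'Gun00002.jpg'}, ...
    {[100 80 60 40]; [150 90 55 45]}, ...
    'VariableNames', {'imageFilename', 'gun'});
% Train against the table, as in the code above.
rcnn = trainRCNNObjectDetector(gunDataset, netTransfer, options, ...
    'NegativeOverlapRange', [0 0.3]);
```

Region proposals whose overlap with a labelled box falls inside NegativeOverlapRange are used as negative training samples, which is why the labelling of negative regions described above matters.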
