Precision-recall curve problem
Hi everyone,
I am facing a problem while computing the precision-recall curve: I get precision values, but my recall values are all zero. I changed the threshold values but still did not get any nonzero recall. This is my code:
clear
load('C:\Users\ZBook\OneDrive\Desktop\detector\detectorFasterRCNN.mat');
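% The MAT-file is expected to contain the trained detector in a variable named 'detector', which is used by detect further below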
dataRoad = load("D:\this is.mat");
img = imageDatastore(dataRoad.gTruth.DataSource.Source);
labeldata = dataRoad.gTruth.LabelData;
blds = boxLabelDatastore(labeldata);
cds = combine(img, blds);
preview(cds)
tbl = countEachLabel(blds);
% Define the split ratios (20% training, 10% validation, 70% testing)
trainRatio = 0.2;
valRatio = 0.10;
testRatio = 0.7;
% Count the total number of images
numImages = numel(img.Files);
% Calculate the number of images for each split
numTrain = round(trainRatio * numImages);
numVal = round(valRatio * numImages);
numTest = numImages - numTrain - numVal;
% Shuffle the datastore
cds = shuffle(cds);
inputSize = [224 224 3];
% Split the datastore into training, validation, and testing sets
trainingData = subset(cds, 1:numTrain);
validationData = subset(cds, numTrain+1:numTrain+numVal);
testData = subset(cds, numTrain+numVal+1:numTrain+numVal+numTest);
% Preprocess the test data to match the network input size, and use the
% transformed datastore for both detection and evaluation
testData = transform(testData, @(data) preprocessData(data, inputSize));
% Make predictions
detectionResults = detect(detector, testData, 'MiniBatchSize', 1, 'Threshold', 0.4);
numClasses = 8; % Replace with your specific number of classes
% Initialize arrays to store precision and recall for each class
precision = zeros(numClasses, 1);
recall = zeros(numClasses, 1);
% Evaluate object detection metrics once for all classes
metrics = evaluateObjectDetection(detectionResults, testData);
for classID = 1:numClasses
% Get the mean precision and recall for the current class
precision(classID) = mean(metrics.ClassMetrics.Precision{classID});
recall(classID) = mean(metrics.ClassMetrics.Recall{classID});
fprintf('Class %d - Average Precision: %.4f, Average Recall: %.4f\n', classID, precision(classID), recall(classID));
end
% Optionally, plot the full precision-recall curve for each class
figure;
hold on;
for classID = 1:numClasses
plot(metrics.ClassMetrics.Recall{classID}, metrics.ClassMetrics.Precision{classID});
end
hold off;
xlabel('Recall');
ylabel('Precision');
title('Precision-Recall Curves for Each Class');
legend("Class " + (1:numClasses));
grid on;
function data = preprocessData(data, targetSize)
% Resize image and bounding boxes to the targetSize.
sz = size(data{1}, [1, 2]);
scale = targetSize(1:2) ./ sz;
data{1} = imresize(data{1}, targetSize(1:2));
% Sanitize the boxes against the original image size; at this point the
% boxes are still in the original image's coordinate system
data{2} = helperSanitizeBoxes(data{2}, sz);
% Resize boxes to new image size.
data{2} = bboxresize(data{2}, scale);
end
Answers (1)
Shivansh
on 7 Dec 2023
Hi Ahmad,
I understand that you are getting an issue where your recall values are zero with non-zero precision values when evaluating your object detection model using a Faster R-CNN detector.
This indicates that the model is not producing any true positives in those cases. The issue can arise for a few potential reasons:
- You can check the model's performance on a class-wise basis; it is possible that the model is not working for a subset of classes (see the sketch below).
- You can also check the model's predictions manually on some test images to get more insight into its behavior.
- There can also be other reasons, such as wrong annotations, a threshold that is too high, or faulty preprocessing, any of which can lead to zero recall.
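A minimal sketch of the first two checks, reusing the detector and testData variables from your script (evaluateObjectDetection requires R2023b or later, and the low threshold value here is only illustrative):
% Detect with a deliberately low threshold so weak detections are not
% filtered out before evaluation
detectionResults = detect(detector, testData, 'Threshold', 0.1);
% Evaluate once and look for classes whose metrics are close to zero
metrics = evaluateObjectDetection(detectionResults, testData);
disp(metrics.ClassMetrics)
% Manually inspect the predictions on a single test image
data = read(testData);
I = data{1};
[bboxes, scores, labels] = detect(detector, I, 'Threshold', 0.1);
if ~isempty(bboxes)
I = insertObjectAnnotation(I, 'rectangle', bboxes, string(labels) + ": " + scores);
end
figure; imshow(I);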
If the problem persists after addressing these points, you can provide more information about the problem statement and the code files, and I would be happy to look into it!
Hope it helps!
2 Comments
Shivansh
on 3 Jan 2024
Hi Ahmad!
I understand that you have observed low recall values across different thresholds, along with some recall for individual classes. This suggests that your model detects some true positives but misses many others. Since precision is not zero, the model does make correct predictions, but the low recall indicates a high number of false negatives.
Based on your explanation, the problem most likely lies in training or preprocessing. You can start by reviewing the training process and inspecting the preprocessed data to verify that the necessary features are preserved. You can then analyze the instances of false negatives (missed detections) to get a better idea of the issue. Also check the error on the training data to assess the possibility of underfitting or overfitting, for example as sketched below.
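One rough way to run that check (a sketch, reusing the detector, trainingData, testData, and numClasses variables from the question; high training recall with near-zero test recall suggests overfitting, while low recall on both splits suggests underfitting or a data problem):
% Evaluate the detector on both splits with the same settings
resultsTrain = detect(detector, trainingData, 'MiniBatchSize', 1, 'Threshold', 0.4);
metricsTrain = evaluateObjectDetection(resultsTrain, trainingData);
resultsTest = detect(detector, testData, 'MiniBatchSize', 1, 'Threshold', 0.4);
metricsTest = evaluateObjectDetection(resultsTest, testData);
% Compare the mean recall per class on the two splits
for classID = 1:numClasses
fprintf('Class %d - mean recall (train): %.4f, (test): %.4f\n', classID, ...
mean(metricsTrain.ClassMetrics.Recall{classID}), ...
mean(metricsTest.ClassMetrics.Recall{classID}));
end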
You can also try tweaking the training parameters and the network architecture, and performing hyperparameter tuning.
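As a starting point for that tuning, here is a sketch of common knobs in trainingOptions (the solver and values are illustrative assumptions, not recommendations, and lgraph stands in for whatever network you originally trained):
% Illustrative options: a lower learning rate and more epochs can help
% if the detector is underfitting
options = trainingOptions('sgdm', ...
'InitialLearnRate', 1e-3, ...
'MaxEpochs', 20, ...
'MiniBatchSize', 2, ...
'Shuffle', 'every-epoch', ...
'Verbose', true);
% Retrain the Faster R-CNN detector with the new options; replace
% lgraph with your own layer graph or network
detector = trainFasterRCNNObjectDetector(trainingData, lgraph, options);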
Hope it helps!