Detecting eyes and mouth with a real-time camera

tsuyoshi tsunoda on 5 Jul 2021
Commented: tsuyoshi tsunoda on 11 Jul 2021
I am trying to detect the eyes and mouth simultaneously with a real-time camera, and I would like to know how I should write the code to do this.
My current code can detect only the eyes.
% Create the eye-pair detector object.
eyeDetector=vision.CascadeObjectDetector('EyePairBig');
% Create the point tracker object.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);
% Create the webcam object.
cam = webcam();
% Capture one frame to get its size.
videoFrame = snapshot(cam);
frameSize = size(videoFrame);
% Create the video player object.
videoPlayer = vision.VideoPlayer('Position', [100 100 [frameSize(2), frameSize(1)]+30]);
runLoop = true;
numPts = 0;
frameCount = 0;
while runLoop && frameCount < 400
    % Get the next frame.
    videoFrame = snapshot(cam);
    videoFrameGray = rgb2gray(videoFrame);
    frameCount = frameCount + 1;
    if numPts < 10
        % Detection mode.
        bboxeye = eyeDetector.step(videoFrameGray);
        if ~isempty(bboxeye)
            % Find corner points inside the detected region.
            points = detectMinEigenFeatures(videoFrameGray, 'ROI', bboxeye(1, :));
            % Re-initialize the point tracker.
            xyPoints = points.Location;
            numPts = size(xyPoints,1);
            release(pointTracker);
            initialize(pointTracker, xyPoints, videoFrameGray);
            % Save a copy of the points.
            oldPoints = xyPoints;
            % Convert the rectangle represented as [x, y, w, h] into an
            % M-by-2 matrix of [x,y] coordinates of the four corners. This
            % is needed to be able to transform the bounding box to display
            % the orientation of the tracked region.
            bboxPoints = bbox2points(bboxeye(1, :));
            % Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
            % format required by insertShape.
            bboxPolygon = reshape(bboxPoints', 1, []);
            % Display a bounding box around the detected eye region.
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon, 'LineWidth', 3);
            % Display detected corners.
            videoFrame = insertMarker(videoFrame, xyPoints, '+', 'Color', 'white');
        end
    else
        % Tracking mode.
        [xyPoints, isFound] = step(pointTracker, videoFrameGray);
        visiblePoints = xyPoints(isFound, :);
        oldInliers = oldPoints(isFound, :);
        numPts = size(visiblePoints, 1);
        if numPts >= 10
            % Estimate the geometric transformation between the old points
            % and the new points.
            [xform, inlierIdx] = estimateGeometricTransform2D(...
                oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);
            oldInliers = oldInliers(inlierIdx, :);
            visiblePoints = visiblePoints(inlierIdx, :);
            % Apply the transformation to the bounding box.
            bboxPoints = transformPointsForward(xform, bboxPoints);
            % Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
            % format required by insertShape.
            bboxPolygon = reshape(bboxPoints', 1, []);
            % Display a bounding box around the eye region being tracked.
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon, 'LineWidth', 3);
            % Display tracked points.
            videoFrame = insertMarker(videoFrame, visiblePoints, '+', 'Color', 'white');
            % Reset the points.
            oldPoints = visiblePoints;
            setPoints(pointTracker, oldPoints);
        end
    end
    % Display the annotated video frame using the video player object.
    step(videoPlayer, videoFrame);
    % Check whether the video player window has been closed.
    runLoop = isOpen(videoPlayer);
end
% Clean up.
clear cam;
release(videoPlayer);
release(pointTracker);
release(eyeDetector);
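
For reference, a minimal sketch of one way the mouth could be detected in the same loop: create a second vision.CascadeObjectDetector with the 'Mouth' classification model (one of the models shipped with the Computer Vision Toolbox) and draw its detections on every frame. The 'MergeThreshold' value of 16 and the red box color are only illustrative choices to cut down on false mouth detections, not a tested setting.

% Before the while loop: create a mouth detector next to the eye detector.
mouthDetector = vision.CascadeObjectDetector('Mouth', 'MergeThreshold', 16);

% Inside the while loop, just before step(videoPlayer, videoFrame):
bboxmouth = step(mouthDetector, videoFrameGray);
if ~isempty(bboxmouth)
    % Draw every mouth candidate as a red rectangle on the current frame.
    videoFrame = insertShape(videoFrame, 'Rectangle', bboxmouth, ...
        'LineWidth', 3, 'Color', 'red');
end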

Accepted Answer

Kenta on 10 Jul 2021
Edited: Kenta on 10 Jul 2021
I see that only the eye detector has been set up here.
If you look at this, it seems that other facial parts can be detected as well, so how about trying that file first and then modifying it to work with your webcam?
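
If the goal is just the eyes and the mouth, the cascade models that ship with vision.CascadeObjectDetector ('EyePairBig' and 'Mouth') can also be tried directly on a still image before adapting anything to the webcam. A minimal sketch, assuming an image file of your own ('myFace.jpg' is a placeholder, and the 'MergeThreshold' value is only an illustrative way to suppress false mouth detections):

% Detect the eye pair and the mouth in a single still image.
eyeDetector   = vision.CascadeObjectDetector('EyePairBig');
mouthDetector = vision.CascadeObjectDetector('Mouth', 'MergeThreshold', 16);
img = imread('myFace.jpg');   % replace with your own image
bboxEye   = step(eyeDetector, img);
bboxMouth = step(mouthDetector, img);
% Annotate whatever was found and show the result.
out = img;
if ~isempty(bboxEye)
    out = insertObjectAnnotation(out, 'rectangle', bboxEye, 'Eyes');
end
if ~isempty(bboxMouth)
    out = insertObjectAnnotation(out, 'rectangle', bboxMouth, 'Mouth');
end
imshow(out)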
3 Comments
Kenta on 11 Jul 2021
If you run demo.m, you will get results like the following. Here it reads in lena.png, but you can simply specify the image you want to analyze there instead.
tsuyoshi tsunoda on 11 Jul 2021
Thank you very much for all your advice.


More Answers (0)
