MATLAB Answers

Why does my video player hang at the first frame while streaming from a live camera?

Asked by Raymond
on 17 Mar 2016
Latest activity Commented on by Dima Lisin
on 18 Mar 2016
Hi,
I am testing my camera for face detection and tracking. When I run the code below, the video player always hangs at the first frame of the stream, even though the frame count keeps increasing. After that, I receive this error:
Error using vision.VideoPlayer/step
Changing the size on input 1 is not allowed without first calling the release() method.
% Create the face detector object.
faceDetector = vision.CascadeObjectDetector();

% Create the point tracker object.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);

% Create the webcam object.
% Connect the camera and select Format7 Mode0.
vid = videoinput('pointgrey', 1, 'F7_Mono8_1024x4608_Mode0');
% vid = videoinput('pointgrey', 1, 'Mono8_1024x768');
src = getselectedsource(vid);
% src.FrameRate = '30';
src.FrameRatePercentage = 100;
vid.ReturnedColorspace = 'grayscale';
vid.FramesPerTrigger = inf;

% Capture one frame to get its size.
videoFrame = getsnapshot(vid);
frameSize = size(videoFrame);

% Create the video player object.
videoPlayer = vision.VideoPlayer('Position', [100 100 [frameSize(2), frameSize(1)]+30]);

runLoop = true;
numPts = 0;
frameCount = 0;

while runLoop && frameCount < 400
    % Get the next frame.
    videoFrameGray = getsnapshot(vid);
    % videoFrameGray = rgb2gray(videoFrame);
    frameCount = frameCount + 1;

    if numPts < 10
        % Detection mode.
        bbox = faceDetector.step(videoFrameGray);
        if ~isempty(bbox)
            % Find corner points inside the detected region.
            points = detectMinEigenFeatures(videoFrameGray, 'ROI', bbox(1, :));

            % Re-initialize the point tracker.
            xyPoints = points.Location;
            numPts = size(xyPoints, 1);
            release(pointTracker);
            initialize(pointTracker, xyPoints, videoFrameGray);

            % Save a copy of the points.
            oldPoints = xyPoints;

            % Convert the rectangle represented as [x, y, w, h] into an
            % M-by-2 matrix of [x,y] coordinates of the four corners. This
            % is needed to be able to transform the bounding box to display
            % the orientation of the face.
            bboxPoints = bbox2points(bbox(1, :));

            % Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
            % format required by insertShape.
            bboxPolygon = reshape(bboxPoints', 1, []);

            % Display a bounding box around the detected face.
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon, 'LineWidth', 3);

            % Display detected corners.
            videoFrame = insertMarker(videoFrame, xyPoints, '+', 'Color', 'white');
        end
    else
        % Tracking mode.
        [xyPoints, isFound] = step(pointTracker, videoFrameGray);
        visiblePoints = xyPoints(isFound, :);
        oldInliers = oldPoints(isFound, :);
        numPts = size(visiblePoints, 1);

        if numPts >= 10
            % Estimate the geometric transformation between the old points
            % and the new points.
            [xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
                oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);

            % Apply the transformation to the bounding box.
            bboxPoints = transformPointsForward(xform, bboxPoints);

            % Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
            % format required by insertShape.
            bboxPolygon = reshape(bboxPoints', 1, []);

            % Display a bounding box around the face being tracked.
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon, 'LineWidth', 3);

            % Display tracked points.
            videoFrame = insertMarker(videoFrame, visiblePoints, '+', 'Color', 'white');

            % Reset the points.
            oldPoints = visiblePoints;
            setPoints(pointTracker, oldPoints);
        end
    end

    % Display the annotated video frame using the video player object.
    step(videoPlayer, videoFrame);

    % Check whether the video player window has been closed.
    runLoop = isOpen(videoPlayer);
end
The camera I am using is a Point Grey Ladybug2:
  • It is an omnidirectional camera.
  • It returns grayscale images through the MATLAB Point Grey hardware support.
Thanks

  0 Comments


1 Answer

Answer by Dima Lisin
on 17 Mar 2016
 Accepted Answer

Sounds like the size of your frame has changed. Most likely it went from being an M-by-N grayscale image to an M-by-N-by-3 RGB image, or vice versa.
It looks like you are getting grayscale images from the camera, but insertShape and insertMarker always return RGB images. Could there be a situation where you sometimes call insertShape and sometimes don't? That would result in videoFrame having a different size from one iteration to the next, which would cause the error you are seeing.
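To illustrate the point, here is a minimal sketch (not the exact fix for Raymond's code) of a loop that keeps the player's input size constant: the frame captured in the current iteration is converted to RGB up front, so the size passed to step is M-by-N-by-3 whether or not insertShape runs. It assumes vid, faceDetector, and videoPlayer were created as in the question.

```matlab
% Sketch: keep the frame size constant for vision.VideoPlayer.
runLoop = true;
frameCount = 0;
while runLoop && frameCount < 400
    videoFrameGray = getsnapshot(vid);
    frameCount = frameCount + 1;

    % Annotate the frame captured in THIS iteration, and make it RGB
    % up front so its size never changes, with or without annotations.
    videoFrame = cat(3, videoFrameGray, videoFrameGray, videoFrameGray);

    bbox = faceDetector.step(videoFrameGray);
    if ~isempty(bbox)
        % insertShape also returns RGB, so the size stays M-by-N-by-3.
        videoFrame = insertShape(videoFrame, 'Rectangle', bbox(1, :), ...
            'LineWidth', 3);
    end

    % The input size is now identical on every call, so no release() error.
    step(videoPlayer, videoFrame);
    runLoop = isOpen(videoPlayer);
end
```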

  2 Comments

I managed to get it to run, yet I am not able to see the shape and the marker, and the video player is streaming very slowly as well.
Hi Raymond,
The reason it is slow is that you are streaming 4k video. Try using vision.DeployableVideoPlayer instead of vision.VideoPlayer. That should help.
However, whatever you do, acquiring, processing, and displaying 4k video is not likely to be very fast. Does your application really need this much resolution?
As to why you are not seeing the markers, I can't really say without debugging your code. Try setting a breakpoint after you insert the markers, and then display the resulting image using imshow. Also, set a breakpoint right before you call step on the video player, and see what that image looks like.
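The swap Dima suggests is a one-line change in the setup; the loop code stays the same, since both players use the same step/isOpen interface (sketch, assuming the rest of the setup from the question):

```matlab
% vision.DeployableVideoPlayer uses a faster display pipeline than
% vision.VideoPlayer, which helps with high-resolution streams.
videoPlayer = vision.DeployableVideoPlayer;

% The loop body is unchanged:
%   step(videoPlayer, videoFrame);
%   runLoop = isOpen(videoPlayer);
```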
