Projecting 2D (x, y) coordinates onto a 3D plane

I'm looking to project 2D points (x, y) from my image onto a 3D plane (X, Y, Z). The plane onto which I want to project my points is defined by the position of my checkerboard in one of my calibration images (image 9). However, when using the following formula:
w * [x; y; 1] = K * [R, t] * [X; Y; Z; 1]
The calculated 3D points do not lie on the specified plane. In fact, I am attaching a photo to my question that shows both the extrinsic parameters of my camera calibration and the detected checkerboard points, reprojected onto the 3D plane using the previously mentioned formula.
I am also attaching a snippet of my code associated with this transformation. Does anyone have a solution to provide? Is there an error in my code or my reasoning?
[param_int]= estimateCameraParameters(imagepoints_tot(:,:,:,1), worldPoints,'EstimateSkew', false, 'EstimateTangentialDistortion', true, ...
'NumRadialDistortionCoefficients', 3, 'WorldUnits', 'mm', ...
'InitialIntrinsicMatrix', [], 'InitialRadialDistortion', [], ...
'ImageSize',imagesize );
%%
Mat_transfo=[param_int.RotationMatrices(:,:,9),transpose(param_int.TranslationVectors(9,:))]
Mat_transfo=transpose(param_int.IntrinsicMatrix)*Mat_transfo
Mat_transfo=pinv(Mat_transfo)
translationVector=[param_int.TranslationVectors(9,:),1]
%%
for i = 1:88
pointsdamier=Mat_transfo*[imagepoints_tot(i,1,9,1);imagepoints_tot(i,2,9,1);1]
pointsdamier=pointsdamier/pointsdamier(4)+transpose(translationVector)
hold on
plot3(pointsdamier(1),pointsdamier(3),pointsdamier(2),'bO','MarkerSize',1)
end
showExtrinsics(param_int)
Matt J on 18 Dec 2023
Attach a .mat file with param_init and imagepoints.


Accepted Answer

William Rose on 17 Dec 2023
I think you know more than I do about Matlab's camera projection algorithms. Therefore I do not expect that I will be of much help to you.
You use 9 images of the same set of 88 world points to calibrate the camera. I think that this generates one estimate of the camera intrinsic parameters and nine estimates of the extrinsic parameters. It is as if you took 9 images of the same set of 88 points, while relocating and re-orienting the camera for each image.
Your world points do not have a W (equivalent to Z) coordinate. In this case, does Matlab assume that each world point is on a plane with W=0? I am not sure why the world points lack a W coordinate.
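For what it is worth, the checkerboard world points produced by generateCheckerboardPoints are indeed Mx2, and my understanding of the calibration model is that it places the pattern in the plane W = 0 (worth double-checking against the documentation). A minimal sketch, with an example square size:

```matlab
% generateCheckerboardPoints returns Mx2 [U V] coordinates;
% calibration assumes the pattern lies in the plane W = 0.
squareSize = 10;                                 % mm (example value)
worldPoints = generateCheckerboardPoints([9 12], squareSize);
size(worldPoints)                                % 88x2, matching the 88 points
worldPoints3D = [worldPoints, zeros(size(worldPoints,1),1)];  % append W = 0
```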
You devised a method to reconstruct the 88 points from image 9. You plot those reconstructed points on the same 3D plot as produced by 'showExtrinsics', which shows the planes of the 9 original images, in camera coordinates. The reconstructed points are close to plane 9, but do not coincide exactly with the plane. I suspect that you would like to know why the reconstructed points do not exactly coincide. I cannot say, since I do not fully understand your method for back-projection of the points in image 9.
You assemble the total matrix that converts world points to image points, for image 9. Let us call this Mat1. Mat1 is 3x4. I think that you believe that Mat1*[U;V;W;1]=[x;y;1], where [U;V;W] are the coordinates of each point, in the world coordinate system, and x,y are the image coordinates of each point. Am I correct? Homogeneous coordinates are used so that the matrix can do translation as well as rotation. I am looking at your lines of code, which I have modified slightly, to give new names to the matrices on the left hand side:
Mat_transfo=[param_int.RotationMatrices(:,:,9),transpose(param_int.TranslationVectors(9,:))];
Mat1=transpose(param_int.IntrinsicMatrix)*Mat_transfo;
Why do you use the transpose of IntrinsicMatrix, rather than the un-transposed IntrinsicMatrix, when constructing Mat1? If Matlab's camera matrices are designed to operate on row vectors, then I would expect you would have to transpose the RotationMatrix and the IntrinsicMatrix. But you did not transpose the rotation matrix.
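To illustrate the point (a sketch, assuming the pre-R2022b convention in which the toolbox's stored matrices operate on row vectors): transposing the whole row-vector product shows that the column-vector form needs both the intrinsic matrix and the rotation matrix transposed, not just the intrinsic matrix:

```matlab
% Row-vector convention (pre-R2022b toolbox):
%   w*[x y 1] = [X Y Z 1] * [R; t] * K
R = param_int.RotationMatrices(:,:,9);
t = param_int.TranslationVectors(9,:);
K = param_int.IntrinsicMatrix;
camMatrixRow = [R; t] * K;            % 4x3, acts on row vectors
% Transposing the whole product gives the column-vector form:
%   w*[x;y;1] = K' * [R', t'] * [X;Y;Z;1]
camMatrixCol = K' * [R', t'];         % 3x4, acts on column vectors
% Check: the two forms are transposes of each other
max(abs(camMatrixCol - camMatrixRow.'), [], 'all')   % should be ~0
```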
Then you compute the pseudoinverse of Mat1:
Mat2=pinv(Mat1);
Mat2 is 4x3. One can verify that Mat1*Mat2=eye(3).
You compute Mat2*[x;y;1]. This produces a 4-component column vector. Then you normalize the vector by the value of its 4th component. Why? I would expect this component to be one, anyway. Then you add the translation vector. Why? I would expect that the use of homogeneous coordinates, and the incorporation of the translation vector into Mat1, would mean that Mat2 takes care of the translation (or its inverse), and therefore you would not have to translate again.
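For comparison, here is one way to make the inverse mapping exact (a sketch, not tested against your data): since the checkerboard points lie in the world plane W = 0, drop the third column of the rotation and invert the resulting 3x3 homography, instead of taking the pseudoinverse of the 3x4 matrix. The axis ordering used for plotting alongside showExtrinsics may still need adjusting:

```matlab
R = param_int.RotationMatrices(:,:,9);
t = param_int.TranslationVectors(9,:);
K = param_int.IntrinsicMatrix;
% With W = 0:  w*[x;y;1] = K' * [R'(:,1), R'(:,2), t'] * [U;V;1]
H = K' * [R(1,:)', R(2,:)', t'];      % 3x3 homography for image 9
for i = 1:88
    p = H \ [imagepoints_tot(i,1,9,1); imagepoints_tot(i,2,9,1); 1];
    p = p / p(3);                      % [U; V; 1] in world coordinates
    % world -> camera coordinates, to overlay on the showExtrinsics plot:
    pc = R' * [p(1); p(2); 0] + t';
    plot3(pc(1), pc(2), pc(3), 'bO', 'MarkerSize', 3); hold on
end
```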
Why do you do
plot3(pointsdamier(1),pointsdamier(3),pointsdamier(2))
instead of
plot3(pointsdamier(1),pointsdamier(2),pointsdamier(3))
?
The projection from a plane to a plane (unlike the projection from three dimensions to a plane) is reversible. It can be represented by a 3x3 matrix, if the projection is projective, or close to it (i.e. not too much distortion). One may find the 3x3 matrix that minimizes the sum squared error by singular value decomposition. I need to think a bit more about this. I wonder if this approach could be used instead of Matlab's camera estimation routine.
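The 3x3 plane-to-plane matrix can be estimated from the point correspondences themselves by the standard Direct Linear Transform: stack two equations per correspondence and take the right singular vector belonging to the smallest singular value. A minimal sketch, with hypothetical variable names uv (Nx2 plane points) and xy (Nx2 image points):

```matlab
% DLT: fit H (3x3) such that [x;y;1] ~ H*[U;V;1],
% minimizing algebraic error via SVD. Requires N >= 4 correspondences.
N = size(uv,1);
A = zeros(2*N, 9);
for k = 1:N
    U = uv(k,1); V = uv(k,2); x = xy(k,1); y = xy(k,2);
    A(2*k-1,:) = [-U -V -1  0  0  0  x*U x*V x];
    A(2*k,  :) = [ 0  0  0 -U -V -1  y*U y*V y];
end
[~, ~, Vsvd] = svd(A);
H = reshape(Vsvd(:,end), 3, 3)';      % smallest-singular-value vector -> H
H = H / H(3,3);                        % fix the scale ambiguity
```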
Giuseppe Cecchelli on 19 Dec 2023
Hello, thank you so much for your response; it's exactly what I needed! Furthermore, your explanations are very clear and have helped me fully understand the process you followed and why my approach was incorrect. Once again, thank you for taking the time to address my issue.
William Rose on 19 Dec 2023
@Giuseppe, you’re welcome, and thank you for your kind comments.


More Answers (0)

Release: R2021a