
Estimate camera projection matrix from world-to-image point correspondences

`camMatrix = estimateCameraMatrix(imagePoints,worldPoints)`

`[camMatrix,reprojectionErrors] = estimateCameraMatrix(imagePoints,worldPoints)`

`camMatrix = estimateCameraMatrix(imagePoints,worldPoints)` returns the camera projection matrix determined from known world points and their corresponding image projections by using the direct linear transformation (DLT) approach.

`[camMatrix,reprojectionErrors] = estimateCameraMatrix(imagePoints,worldPoints)` also returns the reprojection errors that quantify the accuracy of the projected image coordinates.

You can use the `estimateCameraMatrix` function to estimate a camera projection matrix:

- When the world-to-image point correspondences are known, but the camera intrinsic and extrinsic parameters are not.

- For use with the `findNearestNeighbors` object function of the `pointCloud` object. Using a camera projection matrix speeds up the nearest-neighbors search in a point cloud generated by an RGB-D sensor, such as Microsoft® Kinect®.

Given the world points **X** and the corresponding image points **x**, the camera projection matrix *C* satisfies

λ**x** = **X***C*,

where **x** = [*u* *v* 1] and **X** = [*X* *Y* *Z* 1] are row vectors in homogeneous coordinates, and λ is a scale factor.

The equation is solved using the direct linear transformation (DLT) approach [1]. This approach formulates a homogeneous linear system of equations, and the solution is obtained through generalized eigenvalue decomposition.
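As a concrete illustration of that linear system, here is a minimal, hypothetical Python/NumPy sketch (not MathWorks code). It uses the column-vector convention λ**x**ᵀ = *C***X**ᵀ, so the 3-by-4 matrix it returns is the transpose of MATLAB's 4-by-3 `camMatrix`, and it solves the homogeneous system with an SVD, which is equivalent to the eigenvalue-decomposition view in the text:

```python
import numpy as np

def dlt_camera_matrix(image_points, world_points):
    """Minimal, unnormalized DLT sketch (illustrative only).

    image_points: (N, 2) pixel coordinates; world_points: (N, 3); N >= 6.
    Returns a 3x4 matrix P with lambda * [u, v, 1]^T = P @ [X, Y, Z, 1]^T
    (the transpose of MATLAB's row-vector camMatrix convention).
    """
    A = []
    for (u, v), Xw in zip(image_points, world_points):
        X = np.append(Xw, 1.0)  # homogeneous world point
        # Each correspondence contributes two rows to the homogeneous
        # system A p = 0, where p stacks the three rows of P.
        A.append(np.concatenate([np.zeros(4), -X, v * X]))
        A.append(np.concatenate([X, np.zeros(4), -u * X]))
    A = np.asarray(A)
    # The solution is the right singular vector for the smallest singular
    # value of A (equivalently, the eigenvector of A^T A with the smallest
    # eigenvalue).
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

With exact, noise-free correspondences, the recovered matrix equals the true projection matrix up to an overall scale.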

Because the image point coordinates are given in pixel values, the computation of the camera projection matrix is sensitive to numerical errors. To reduce these errors, the input image point coordinates are normalized so that their centroid is at the origin and their root mean squared distance from the origin is √2. These steps summarize the process for estimating the camera projection matrix.

1. Normalize the input image point coordinates with a transform *T*.
2. Estimate the camera projection matrix *C*^{N} from the normalized input image points.
3. Compute the denormalized camera projection matrix *C* as *C*^{N}*T*^{-1}.
4. Compute the reprojected image point coordinates **x**^{E} as **X***C*.
5. Compute the reprojection errors as *reprojectionErrors* = |**x** − **x**^{E}|.
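The normalize–estimate–denormalize–reproject process can be sketched end to end as follows. This is again a hypothetical Python/NumPy illustration, not the MATLAB implementation; it uses the column convention λ**x** = *C***X**, so the denormalization becomes *C* = *T*^{-1}*C*^{N} (the transpose of the row-vector form in the text):

```python
import numpy as np

def estimate_camera_matrix(image_points, world_points):
    """Normalized-DLT sketch of the estimation steps (illustrative only).

    Column convention lambda * x = C @ X, so C is 3x4.
    Returns (C, per-point reprojection errors).
    """
    n = len(image_points)
    xh = np.hstack([image_points, np.ones((n, 1))])  # homogeneous image pts
    Xh = np.hstack([world_points, np.ones((n, 1))])  # homogeneous world pts

    # Step 1: similarity transform T that moves the centroid to the origin
    # and scales the RMS distance from the origin to sqrt(2).
    c = image_points.mean(axis=0)
    rms = np.sqrt(np.mean(np.sum((image_points - c) ** 2, axis=1)))
    s = np.sqrt(2.0) / rms
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    xn = (T @ xh.T).T

    # Step 2: DLT on the normalized points (null vector of A via SVD).
    A = []
    for (u, v, _), X in zip(xn, Xh):
        A.append(np.concatenate([np.zeros(4), -X, v * X]))
        A.append(np.concatenate([X, np.zeros(4), -u * X]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    C_n = Vt[-1].reshape(3, 4)

    # Step 3: denormalize the estimated matrix.
    C = np.linalg.inv(T) @ C_n

    # Step 4: reproject the world points with the denormalized matrix.
    proj = (C @ Xh.T).T
    x_e = proj[:, :2] / proj[:, 2:3]

    # Step 5: per-point reprojection errors |x - x^E|.
    errors = np.linalg.norm(image_points - x_e, axis=1)
    return C, errors
```

For exact correspondences the reprojection errors are numerically zero; with noisy pixel measurements they quantify how well the estimated matrix fits the data.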

[1] Hartley, R., and A. Zisserman. *Multiple View Geometry in Computer Vision*. Cambridge: Cambridge University Press, 2000.

`cameraMatrix` | `estimateCameraParameters` | `estimateEssentialMatrix` | `estimateFundamentalMatrix` | `estimateWorldCameraPose` | `findNearestNeighbors`