How can I convert from the pixel position in an image with orthographic projection to 3D "world" coordinates?
I'd like to do the same thing here, but assuming orthographic projection.
Answers (1)
Sudarsanan A K
on 22 Dec 2023
Hi John,
The MATLAB Answers thread you mentioned outlines a method for converting pixel positions in an image to 3D world coordinates under a perspective projection. The steps reverse the graphics pipeline used to project 3D points onto a 2D image plane.
You are, however, interested in an orthographic projection. That case differs in two ways: there is no perspective division (step 7 of the forward transformation and step 3 of the reverse transformation in that thread), and the projection matrix itself is different.
For an orthographic projection, the projection matrix does not scale x and y as a function of depth (z-value); it simply scales and translates the coordinates, and the homogeneous w component stays 1. The reverse process would involve:
- Converting the pixel location from viewport coordinates to normalized device coordinates (NDC).
- Using the inverse of the orthographic projection matrix to transform the NDC back to camera space coordinates.
- Applying the inverse of the camera's view transformation matrix to convert camera space coordinates back to world space coordinates.
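As a minimal sketch of just these three inverse steps (the matrices P_ortho and V_cam and all numeric bounds below are purely illustrative placeholders, not taken from any particular axes), the reverse mapping is plain matrix inversion, with w staying 1 throughout:
% Reverse-mapping sketch for an orthographic projection (placeholder values only)
l = -2; r = 2; b = -1; t = 1; n = 0.1; f = 10;   % illustrative view-volume bounds
P_ortho = [2/(r-l), 0, 0, -(r+l)/(r-l); ...
           0, 2/(t-b), 0, -(t+b)/(t-b); ...
           0, 0, -2/(f-n), -(f+n)/(f-n); ...
           0, 0, 0, 1];
V_cam = makehgtform('xrotate', pi/6);            % some rigid camera (view) transform
uNDC = 0.25; vNDC = -0.5;                        % pixel position already converted to NDC
pNDC = [uNDC; vNDC; -1; 1];                      % homogeneous point on the near plane
pCam = P_ortho \ pNDC;                           % NDC -> camera space (inverse projection)
pWorld = V_cam \ pCam;                           % camera space -> world space (inverse view)
pWorld = pWorld(1:3);                            % w is still 1, so no perspective division
The complete example below additionally folds in the axes' model (normalization) transform and the viewport-to-NDC mapping.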
Below is a working MATLAB example that first projects a 3D point to a pixel position under an orthographic projection, and then reverses the process to recover the corresponding 3D line (ray) in world coordinates.
% Step 1: Define a point in world coordinates
x = [1; 2; 3]; % Column vector representing a point in 3D space
% Convert the model's Cartesian coordinates to homogeneous coordinates
xHomogeneous = [x; 1];
% Step 2: Perform a model transform
% Create an axes
figure;
a = axes;
% Get the limits of the axes
xl = xlim(a);
yl = ylim(a);
zl = zlim(a);
% Calculate the scale factors
xscale = 1/diff(xl);
yscale = 1/diff(yl);
zscale = 1/diff(zl);
% Construct the model transform matrix
model_xfm = [xscale, 0, 0, -xl(1)*xscale; ...
             0, yscale, 0, -yl(1)*yscale; ...
             0, 0, zscale, -zl(1)*zscale; ...
             0, 0, 0, 1];
% Step 3: Obtain the view matrix
v = view(a);
% Step 4: Flip the Z axis to convert between MATLAB's right-handed coordinates
% and the left-handed convention used by the rest of the pipeline
leftHandedToRightHanded = [1.0  0.0  0.0  0.0; ...
                           0.0  1.0  0.0  0.0; ...
                           0.0  0.0 -1.0  0.0; ...
                           0.0  0.0  0.0  1.0];
view_xfm = leftHandedToRightHanded * v;
% Step 5: Construct the viewport
old_units = a.Units;
a.Units = 'pixels';
viewport = a.Position;
a.Units = old_units;
ar = viewport(3)/viewport(4); % Aspect ratio (not used further in this orthographic example)
% Step 6: Compute the orthographic projection transform
% Define the orthographic projection limits
n = 0.1; % Near clipping plane
f = 10; % Far clipping plane
r = xl(2);
l = xl(1);
t = yl(2);
b = yl(1);
% Construct the orthographic projection transform matrix
proj_xfm = [2/(r-l), 0, 0, -(r+l)/(r-l); ...
            0, 2/(t-b), 0, -(t+b)/(t-b); ...
            0, 0, -2/(f-n), -(f+n)/(f-n); ...
            0, 0, 0, 1];
% Step 7: Apply the transformations
yHomogeneous = proj_xfm * view_xfm * model_xfm * xHomogeneous;
% No perspective division is needed for orthographic projection
yNDC = yHomogeneous(1:3);
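% (Added check, not part of the original answer:) every matrix above has [0 0 0 1]
% as its last row, so the homogeneous w component stays (essentially) 1 here;
% that is exactly why no division by w is needed before taking yNDC.
assert(abs(yHomogeneous(4) - 1) < 1e-10);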
% Convert to viewport coordinates
yViewport = [viewport(1) + 0.5*viewport(3)*(1 + yNDC(1)), ...
             viewport(2) + 0.5*viewport(4)*(1 + yNDC(2))];
% Now reverse the process: apply the inverse transformations,
% with no perspective division needed for an orthographic projection.
% Step 1: Pick a pixel location and convert to NDC
uPix = yViewport(1); % use new names so we do not overwrite v, the view matrix from above
vPix = yViewport(2);
uNDC = 2*(uPix - viewport(1))/viewport(3) - 1;
vNDC = 2*(vPix - viewport(2))/viewport(4) - 1;
% Step 2: Expand to two 4D homogeneous coordinates
% These points are on the near and far planes respectively
pixNear = [uNDC; vNDC; -1; 1]; % Near plane point
pixFar = [uNDC; vNDC; 1; 1]; % Far plane point
% Step 3: Reverse the forward transformation for both points
total_xfm = proj_xfm * view_xfm * model_xfm;
pNear = total_xfm \ pixNear;
pFar = total_xfm \ pixFar;
% Step 4: Since there is no perspective division in orthographic projection,
% we can directly use the x, y, and z components of the transformed points
pNear = pNear(1:3);
pFar = pFar(1:3);
% Step 5: The direction from pNear to pFar is simply the world-space view direction;
% together with either point it defines the ray through the chosen pixel.
% Step 6: The points pNear and pFar already bound the segment of that ray inside the
% view volume, so we can plot it directly.
% Step 7: Plot the line in a new figure with 3D axes
figure;
ax = axes('Projection', 'orthographic'); % Explicitly create axes with an orthographic projection
plot3(ax, [pNear(1) pFar(1)], ...
          [pNear(2) pFar(2)], ...
          [pNear(3) pFar(3)], ...
          'k-', 'LineWidth', 2);
xlabel('X');
ylabel('Y');
zlabel('Z');
title('Orthographic Projection Line');
grid on;
axis equal;
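As a quick sanity check (not part of the original answer, and assuming the variables x, pNear, and pFar from the example above are still in the workspace), the original point should lie on the recovered line, because the combined transform is affine and invertible:
% Distance from the original point x to the line through pNear and pFar (should be ~0)
d = pFar - pNear;                                    % direction of the recovered line
toX = x - pNear;                                     % vector from pNear to the original point
distToLine = norm(toX - (dot(toX, d)/dot(d, d))*d);  % perpendicular distance to the line
fprintf('Distance from x to the recovered line: %g\n', distToLine);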
Further, kindly refer to the answer of the following MATLAB Answers question for useful resources in this context:
I hope this helps!