Generation of a 3D map using 2D images, Photogrammetry
41 views (last 30 days)
Hi,
I am currently working on a project to develop a mapping drone that will use photogrammetry to generate 3D maps of a desired terrain, specifically a rocky area with varying elevations.
I am simulating the drone's flight (before integrating the system on the drone) with a robotic arm that I have programmed in MATLAB to move in a grid pattern while alternating the camera's orientation at the end of each row. That way I take a picture of the objects placed on the table at each grid point and save it with the corresponding (X,Y,Z) coordinates and the orientation (Rx,Ry,Rz) that I receive in real time from the robotic arm. I have already calibrated the camera.
I am facing a challenge in creating a point cloud from these 2D images.
I have found several examples and forum threads discussing similar topics.
However, since I already have the exact coordinates and orientation of each image, I would like to know what tools and methods in MATLAB I can use to generate a sufficiently dense and accurate point cloud to proceed with 3D map creation.
Thank you in advance.
0 Comments
Answers (1)
Aravind
on 6 Mar 2025
While Structure from Motion (SfM) is one approach for creating a 3D map from images, since you already have the camera positions and orientations, you can directly use the Computer Vision Toolbox in MATLAB to generate a 3D map or point cloud from the 2D images. Here is a simplified guide to help you begin this process:
1) Image Preprocessing: Ensure the images have adequate contrast and features for matching. You might need to apply filters or enhancement techniques to improve feature detection.
2) Finding Camera Intrinsics: Determine the intrinsic parameters of the camera used, which include calibration and lens distortion details. You can use the "Camera Calibrator" app for this purpose. More information is available at https://www.mathworks.com/help/vision/ref/cameracalibrator-app.html.
3) Feature Detection and Matching: Identify key points in the images for 3D reconstruction. Use functions like "detectSURFFeatures" or "detectORBFeatures". After detecting features, use "extractFeatures" to compute descriptors and "matchFeatures" to match them across images. More information on these functions can be found here:
- detectSURFFeatures: https://www.mathworks.com/help/vision/ref/detectsurffeatures.html
- detectORBFeatures: https://www.mathworks.com/help/vision/ref/detectorbfeatures.html
- extractFeatures: https://www.mathworks.com/help/vision/ref/extractfeatures.html
- matchFeatures: https://www.mathworks.com/help/vision/ref/matchfeatures.html
4) Camera Pose Estimation: Use the known coordinates and orientations to build the camera pose for each view and, from it, a camera projection matrix. If noise or errors are present in the measurements, you can refine the poses with "estworldpose". More details are available at https://www.mathworks.com/help/vision/ref/estworldpose.html. Typically, at the end of this step you should have, for each image, a 3-by-4 camera projection matrix computed with the "cameraProjection" function from the camera intrinsics and the camera pose. More information about the "cameraProjection" function can be found at: https://www.mathworks.com/help/vision/ref/cameraprojection.html.
5) Triangulation: Use triangulation to find the 3D locations of the matched points. You can use the "triangulate" function in MATLAB, which takes the matched feature points and the camera projection matrices of the two views. More information can be found at: https://www.mathworks.com/help/vision/ref/triangulate.html.
6) Generate Point Cloud: Steps 3 to 5 give you the 3D locations of the matched features for one pair of consecutive images. Using a "for" loop, repeat these steps for each consecutive pair of images and accumulate the triangulated 3D points from all pairs (see the code sketch after this list). The "pointCloud" class in MATLAB allows you to create and manipulate point cloud data. Optionally, refine the point cloud using tools like "pcdenoise", "pcdownsample", and "pcmerge". More information can be found at: https://www.mathworks.com/help/vision/point-cloud-processing.html.
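Below is a minimal sketch of steps 3 to 6 under a few assumptions: the images are in a cell array "images", the recorded positions and orientations are N-by-3 arrays "positions" and "angles" (angles in radians), and the calibration result is a "cameraIntrinsics" object "intrinsics". The intrinsic X-Y-Z rotation convention and the reprojection-error threshold are placeholders you will need to adapt to your arm and scene; "rigidtform3d", "pose2extr", and "cameraProjection" require a fairly recent MATLAB release.

```matlab
% Rotation matrices for the arm's (Rx,Ry,Rz) angles.
% ASSUMPTION: intrinsic X-Y-Z rotation order in radians; adapt to your arm's convention.
rotX = @(a) [1 0 0; 0 cos(a) -sin(a); 0 sin(a) cos(a)];
rotY = @(a) [cos(a) 0 sin(a); 0 1 0; -sin(a) 0 cos(a)];
rotZ = @(a) [cos(a) -sin(a) 0; sin(a) cos(a) 0; 0 0 1];

numImages   = numel(images);
camMatrices = cell(1, numImages);
grayImages  = cell(1, numImages);

% Step 4: build a camera projection matrix for every view from the known pose.
for i = 1:numImages
    R          = rotX(angles(i,1)) * rotY(angles(i,2)) * rotZ(angles(i,3));
    camPose    = rigidtform3d(R, positions(i,:));  % camera pose in the world frame
    extrinsics = pose2extr(camPose);               % world-to-camera transformation
    camMatrices{i} = cameraProjection(intrinsics, extrinsics);
    grayImages{i}  = im2gray(images{i});
end

% Steps 3 and 5: match features between consecutive views and triangulate them.
allWorldPoints = [];
for i = 1:numImages-1
    pts1 = detectSURFFeatures(grayImages{i});
    pts2 = detectSURFFeatures(grayImages{i+1});
    [f1, vpts1] = extractFeatures(grayImages{i},   pts1);
    [f2, vpts2] = extractFeatures(grayImages{i+1}, pts2);
    idxPairs = matchFeatures(f1, f2, "Unique", true);

    matched1 = vpts1(idxPairs(:,1));
    matched2 = vpts2(idxPairs(:,2));

    [worldPoints, reprojErrors] = triangulate(matched1, matched2, ...
        camMatrices{i}, camMatrices{i+1});

    % Keep only well-triangulated points (2-pixel threshold is an assumption; tune it).
    allWorldPoints = [allWorldPoints; worldPoints(reprojErrors < 2, :)]; %#ok<AGROW>
end

% Step 6: accumulate all triangulated points into one point cloud.
ptCloud = pointCloud(allWorldPoints);
```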
By the end of step 6, you should have a fairly dense point cloud, depending on how many features are detected and matched across the image pairs. You can then use the point cloud processing tools mentioned above to fine-tune the point cloud and visualize it.
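For that final clean-up, a short example (the denoising defaults, the 5 mm grid step, and the assumption that your world units are meters are all placeholders to tune for your scene):

```matlab
% Optional refinement and visualization of the accumulated point cloud.
ptCloud = pcdenoise(ptCloud);                            % remove sparse outliers
ptCloud = pcdownsample(ptCloud, "gridAverage", 0.005);   % 5 mm voxel grid (assumes meters)
pcshow(ptCloud);
xlabel("X"); ylabel("Y"); zlabel("Z");
title("Triangulated point cloud");
```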
I hope this helps answer your question.
0 Comments