How to read PCD files with different heights in an ML automation algorithm for the Lidar Labeler app

I am trying to use a PCD file with a height of 1 in my machine learning algorithm, but for some reason I am getting this error:
Unable to perform assignment because the size of the left side is 123398-by-3 and the size of the right side is 123398-by-1.
Error in (line 139)
    image(:,:,4) = ptcloud.Intensity;
    I = helperPointCloudToImage(pointCloud);
    videoLabels = run(this, frame);
Error in lidar.internal.lidarLabeler.tool.TemporalLabelingTool/runAlgorithm
Error in vision.internal.labeler.tool.AlgorithmTab/setAlgorithmModeAndExecute
Error in vision.internal.labeler.tool.AlgorithmTab
    feval(callback, src, event);
    internal.Callback.execute(this.PushPerformedFcn, this, eventdata);
    this.PeerEventListener = addlistener(this.Peer, 'peerEvent', @(event, data) PeerEventCallback(this, event, data));
Error in hgfeval (line 62)
    feval(fcn{1},varargin{:},fcn{2:end});
    hgfeval(response, java(o), e.JavaEvent)
    @(o,e) cbBridge(o,e,response));
The algorithm works for PandaSet PCD files, which have a height of 64 and a size of 8, but the other files I am testing have a size of 1. How can I overcome this difference between .pcd files so that my ML algorithm works with multiple sizes and heights? My automation class is below (a quick way to see the difference between the two kinds of files is shown right after the code).
classdef LidarSemanticSegmentation < lidar.labeler.AutomationAlgorithm
    % LidarSemanticSegmentation Automation algorithm that performs semantic
    % segmentation of the point cloud.
    % LidarSemanticSegmentation is an automation algorithm for segmenting
    % a point cloud using a SqueezeSegV2 semantic segmentation network
    % which is trained on the Pandaset data set.
    %
    % See also lidarLabeler, groundTruthLabeler,
    % lidar.labeler.AutomationAlgorithm.

    % Copyright 2021 The MathWorks, Inc.

    % ----------------------------------------------------------------------
    % Step 1: Define the required properties describing the algorithm. This
    % includes Name, Description, and UserDirections.
    properties(Constant)
        % Name Algorithm Name
        %   Character vector specifying the name of the algorithm.
        Name = 'Lidar Semantic Segmentation';

        % Description Algorithm Description
        %   Character vector specifying the short description of the algorithm.
        Description = 'Segment the point cloud using SqueezeSegV2 network.';

        % UserDirections Algorithm Usage Directions
        %   Cell array of character vectors specifying directions for
        %   algorithm users to follow to use the algorithm.
        UserDirections = {['ROI Label Definition Selection: select one of ' ...
            'the ROI definitions to be labeled'], ...
            'Run: Press RUN to run the automation algorithm. ', ...
            ['Review and Modify: Review automated labels over the interval ', ...
            'using playback controls. Modify/delete/add ROIs that were not ' ...
            'satisfactorily automated at this stage. If the results are ' ...
            'satisfactory, click Accept to accept the automated labels.'], ...
            ['Accept/Cancel: If the results of automation are satisfactory, ' ...
            'click Accept to accept all automated labels and return to ' ...
            'manual labeling. If the results of automation are not ' ...
            'satisfactory, click Cancel to return to manual labeling ' ...
            'without saving the automated labels.']};
    end

    % ---------------------------------------------------------------------
    % Step 2: Define properties you want to use during the algorithm
    % execution.
    properties
        % AllCategories
        %   AllCategories holds the default 'unlabelled', 'Vegetation',
        %   'Ground', 'Road', 'RoadMarkings', 'SideWalk', 'Car', 'Truck',
        %   'OtherVehicle', 'Pedestrian', 'RoadBarriers', 'Signs',
        %   'Buildings' categorical types.
        AllCategories = {'unlabelled'};

        % PretrainedNetwork
        %   PretrainedNetwork saves the pretrained SqueezeSegV2 network.
        PretrainedNetwork
    end

    %----------------------------------------------------------------------
    % Note: this method needs to be included for the lidarLabeler app to
    % recognize the algorithm as one that operates on point cloud signals.
    methods (Static)
        % This method is static to allow the apps to call it and check the
        % signal type before instantiation. When users refresh the
        % algorithm list, we can quickly check and discard algorithms for
        % any signal that is not supported in a given app.
        function isValid = checkSignalType(signalType)
            isValid = (signalType == vision.labeler.loading.SignalType.PointCloud);
        end
    end

    %----------------------------------------------------------------------
    % Step 3: Define methods used for setting up the algorithm.
    methods
        function isValid = checkLabelDefinition(algObj, labelDef)
            % Only Voxel ROI label definitions are valid for the lidar
            % semantic segmentation algorithm.
            isValid = labelDef.Type == lidarLabelType.Voxel;
            if isValid
                algObj.AllCategories{end+1} = labelDef.Name;
            end
        end

        function isReady = checkSetup(algObj)
            % Check that there is at least one selected ROI label definition to automate.
            isReady = ~isempty(algObj.SelectedLabelDefinitions);
        end
    end

    %----------------------------------------------------------------------
    % Step 4: Specify algorithm execution. This controls what happens when
    %         the user presses RUN. Algorithm execution proceeds by first
    %         executing initialize on the first frame, followed by run on
    %         every frame, and terminate on the last frame.
    methods
        function initialize(algObj, ~)
            % Load the pretrained SqueezeSegV2 semantic segmentation network.
            outputFolder = fullfile(tempdir, 'Pandaset');
            pretrainedSqueezeSeg = load(fullfile(outputFolder, 'trainedSqueezeSegV2PandasetNet.mat'));

            % Store the network in the 'PretrainedNetwork' property of this object.
            algObj.PretrainedNetwork = pretrainedSqueezeSeg.net;
        end

        function autoLabels = run(algObj, pointCloud)
            % Set up a categorical matrix with categories including
            % 'Vegetation', 'Ground', 'Road', 'RoadMarkings', 'SideWalk',
            % 'Car', 'Truck', 'OtherVehicle', 'Pedestrian', 'RoadBarriers',
            % and 'Signs'.
            autoLabels = categorical(zeros(size(pointCloud.Location,1), size(pointCloud.Location,2)), ...
                0:12, algObj.AllCategories);

            % Convert the input point cloud to a five-channel image.
            I = helperPointCloudToImage(pointCloud);

            % Predict the segmentation result.
            predictedResult = semanticseg(I, algObj.PretrainedNetwork);
            autoLabels(:) = predictedResult;
        end
    end
end

function image = helperPointCloudToImage(ptcloud)
% helperPointCloudToImage converts the point cloud to a 5-channel image.
image = ptcloud.Location;
image(:,:,4) = ptcloud.Intensity;
rangeData = iComputeRangeData(image(:,:,1), image(:,:,2), image(:,:,3));
image(:,:,5) = rangeData;

index = isnan(image);
image(index) = 0;
end

function rangeData = iComputeRangeData(xChannel, yChannel, zChannel)
rangeData = sqrt(xChannel.*xChannel + yChannel.*yChannel + zChannel.*zChannel);
end
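For reference, the difference between the two kinds of files shows up directly in the Location property of the point cloud that pcread returns (the file names below are placeholders):
% Organized point cloud (e.g. a PandaSet scan): Location is 64-by-N-by-3,
% so image(:,:,4) = ptcloud.Intensity works.
ptOrganized = pcread('pandasetScan.pcd');    % placeholder file name
size(ptOrganized.Location)
% Unorganized point cloud (height 1): Location is 123398-by-3, which is why
% the third-dimension assignment fails with the size mismatch above.
ptUnorganized = pcread('singleRowScan.pcd'); % placeholder file name
size(ptUnorganized.Location)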

Answers (1)

Milan Bansal on 28 Dec 2023
Hi Kevin Harianto,
It is my understanding that you are facing an error while assigning the intensity values of a .pcd file to the "image" variable.
Your height-1 file is an unorganized point cloud, so its Location property is an N-by-3 matrix rather than the M-by-N-by-3 array of an organized point cloud. After assigning the locations of the .pcd file to the "image" variable, the size of "image" is therefore 123398-by-3. Assign the intensity values, whose size is 123398-by-1, as the fourth column of "image" instead of as a third-dimension channel. The inputs to the "iComputeRangeData" function should also be changed accordingly. Please refer to the code snippet below:
function image = helperPointCloudToImage(ptcloud)
% helperPointCloudToImage converts an unorganized point cloud to an N-by-5 matrix (x, y, z, intensity, range)
image = ptcloud.Location;
image(:,4) = ptcloud.Intensity;
rangeData = iComputeRangeData(image(:,1),image(:,2),image(:,3));
image(:,5) = rangeData;
index = isnan(image);
image(index) = 0;
end
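If you also want the same automation algorithm to accept both kinds of files, one possible approach is to branch on the shape of the Location array inside the helper, as in the sketch below (an untested suggestion, not part of the shipped example). Keep in mind that the pretrained SqueezeSegV2 network was trained on organized 64-beam PandaSet data, and the run method also sizes its categorical output from the first two dimensions of Location, so for unorganized clouds it is probably more robust to first convert them with pcorganize and a lidarParameters object that matches your sensor.
function image = helperPointCloudToImage(ptcloud)
% Convert an organized (M-by-N-by-3) or unorganized (M-by-3) point cloud
% into a 5-channel array of x, y, z, intensity, and range values.
loc = ptcloud.Location;
if ndims(loc) == 3
    % Organized cloud (e.g. PandaSet, height 64): keep the M-by-N-by-5
    % layout that the pretrained SqueezeSegV2 network expects.
    image = loc;
    image(:,:,4) = ptcloud.Intensity;
    image(:,:,5) = sqrt(sum(loc.^2, 3));
else
    % Unorganized cloud (height 1): treat it as an M-by-1 "image" so the
    % rest of the code can keep using 3-D indexing. Segmentation quality
    % is not guaranteed for this layout; converting the cloud with
    % pcorganize and your sensor's parameters is likely the better fix.
    image = reshape(loc, [], 1, 3);
    image(:,:,4) = reshape(ptcloud.Intensity, [], 1);
    image(:,:,5) = sqrt(sum(loc.^2, 2));
end
image(isnan(image)) = 0;
end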
Hope it helps!

Release: R2022a
