Get positions and orientations of targets in sensor range relative to host vehicle

Since R2023a



poses = targetPoses(sensorSim,sensorID) returns poses of all the targets that are in the range of the sensor specified by sensorID, with respect to its host vehicle actor. The sensor simulation object sensorSim specifies the RoadRunner scenario that contains the host vehicle actor.



Define sensor models in MATLAB®, and add them to vehicle actors in a RoadRunner Scenario. Then, obtain ground truth measurements from RoadRunner Scenario and process them into detections for visualization.

Set Up RoadRunner Scenario — MATLAB Interface

Configure your RoadRunner installation and project folder properties. Open the RoadRunner app.

rrInstallationPath = "C:\Program Files\RoadRunner R2022b\bin\win64";
rrProjectPath = "D:\RR\TestProjects";

s = settings;
s.roadrunner.application.InstallationFolder.PersonalValue = rrInstallationPath;
rrApp = roadrunner(rrProjectPath);

To open the scenario this example uses, you must add the TrajectoryCutIn-longRun.rrscenario file from the example folder to your RoadRunner project folder. Then, open the scenario.
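One way to do this, sketched here assuming the scenario file is in your current MATLAB folder, is to copy it into the Scenarios folder of the project and then open it with the openScenario function:

```matlab
% Copy the scenario file into the RoadRunner project, then open it
copyfile("TrajectoryCutIn-longRun.rrscenario",fullfile(rrProjectPath,"Scenarios"))
openScenario(rrApp,"TrajectoryCutIn-longRun.rrscenario")
```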


Create a ScenarioSimulation object to connect MATLAB to the RoadRunner Scenario simulation and set the step size.

scenarioSim = createSimulation(rrApp);
stepSize = 0.1;
set(scenarioSim,"StepSize",stepSize)

Create a SensorSimulation object to control the sensor configuration for the RoadRunner Scenario simulation.

sensorSim = get(scenarioSim,"SensorSimulation");

Configure Sensors and Add to RoadRunner Scenario

Configure a sensor model for one vision sensor mounted at the front of the ego vehicle by using the visionDetectionGenerator object. To add multiple sensors, specify a unique ID for each sensor.

sensorID = 1;
sensor = visionDetectionGenerator(SensorIndex=sensorID, ...
            SensorLocation=[2.4 0], ...
            MaxRange=50, ...
            DetectorOutput="Objects only");

Add the sensor to the ego vehicle actor in the RoadRunner scenario. Specify the Actor ID property for the vehicle.

egoVehicleID = 1;
addSensors(sensorSim,{sensor},egoVehicleID);

Set up a bird's-eye plot for visualization.

[lbPlotter,detPlotter,bepAxes] = helperSetupBEP(sensor);

Simulate RoadRunner Scenario and Visualize Sensor Data

Use the ScenarioSimulation object to step through the RoadRunner scenario. Retrieve target poses in the sensor range using the targetPoses function, and process them into detections using the sensor model. Visualize detections and ground truth lane boundaries using birdsEyePlot.

simTime = 0.0;
set(scenarioSim,"SimulationCommand","Start")

while ~isequal(get(scenarioSim,"SimulationStatus"),"Stopped")

    % Get ground truth target poses and lane boundaries from the sensor
    tgtPoses = targetPoses(sensorSim,1);
    gTruthLbs = laneBoundaries(sensorSim,1,OutputOption="EgoAdjacentLanes",inHostCoordinate=true);
    if ~isempty(gTruthLbs)
        % Plot ground-truth lane boundaries
        helperPlotLaneBoundaries(lbPlotter,gTruthLbs)
        % Generate detections
        [objDets,numObjDets,objDetsValid] = sensor(tgtPoses,simTime);
        % Plot detections
        if objDetsValid
            detPos = cellfun(@(d)d.Measurement(1:2),objDets,UniformOutput=false);
            detPos = vertcat(zeros(0,2),cell2mat(detPos')');
            plotDetection(detPlotter,detPos)
        end
    end

    % Wait for the next simulation step before polling again
    if ~isequal(get(scenarioSim,"SimulationStatus"),"Stopped")
        pause(stepSize)
    end

    simTime = simTime + stepSize;
end

Helper Functions

The helperSetupBEP function creates a bird's-eye plot and configures all the plotters for visualization.

The helperPlotLaneBoundaries function plots the lane boundaries on the bird's-eye plot.
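The actual helpers ship with the example; as an illustration of what helperSetupBEP might look like, here is a minimal sketch using the birdsEyePlot, coverageAreaPlotter, laneBoundaryPlotter, and detectionPlotter functions. The axis limits chosen here are assumptions.

```matlab
function [lbPlotter,detPlotter,bepAxes] = helperSetupBEP(sensor)
% Create a bird's-eye plot sized around the sensor range (limits are assumptions)
bep = birdsEyePlot(XLim=[0 60],YLim=[-20 20]);
bepAxes = bep.Parent;
% Show the coverage area of the vision sensor
covPlotter = coverageAreaPlotter(bep,DisplayName="Vision coverage");
plotCoverageArea(covPlotter,sensor.SensorLocation,sensor.MaxRange, ...
    sensor.Yaw,sensor.FieldOfView(1))
% Plotters for ground-truth lane boundaries and vision detections
lbPlotter = laneBoundaryPlotter(bep,DisplayName="Lane boundaries");
detPlotter = detectionPlotter(bep,DisplayName="Vision detections");
end
```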

Input Arguments


sensorSim
Sensor simulation object, specified as a SensorSimulation object. The sensor simulation object must correspond to the desired scenario created in RoadRunner Scenario.

sensorID
Unique index of the sensor in the RoadRunner scenario, specified as a positive integer. This value must be the same as the SensorIndex property of the detection generator object that corresponds to the desired sensor.

Output Arguments


poses
Target poses, in ego vehicle coordinates, returned as a structure or as an array of structures. This output does not include the pose of the host vehicle actor.

A target pose defines the position, velocity, and orientation of a target in ego vehicle coordinates. Target poses also include the rates of change in actor position and orientation.

Each pose structure has these fields.

ActorID
Scenario-defined actor identifier, returned as a positive integer.

ClassID
Classification identifier, returned as a nonnegative integer. 0 represents an object of an unknown or unassigned class.

Position
Position of the actor, returned as a real-valued vector of the form [x y z]. Units are in meters.

Velocity
Velocity (v) of the actor in the x-, y-, and z-directions, returned as a real-valued vector of the form [vx vy vz]. Units are in meters per second.

Roll
Roll angle of the actor, returned as a real scalar. Units are in degrees.

Pitch
Pitch angle of the actor, returned as a real scalar. Units are in degrees.

Yaw
Yaw angle of the actor, returned as a real scalar. Units are in degrees.

AngularVelocity
Angular velocity (ω) of the actor in the x-, y-, and z-directions, returned as a real-valued vector of the form [ωx ωy ωz]. Units are in degrees per second.

For full definitions of these structure fields, see the actor and vehicle functions.
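For illustration, a loop like this reads the pose fields to report the range and heading of each target. It assumes the sensorSim and sensorID variables from the example above and an active simulation:

```matlab
% Report the range and heading of each target in sensor range,
% in host vehicle coordinates
poses = targetPoses(sensorSim,sensorID);
for i = 1:numel(poses)
    range = norm(poses(i).Position);   % meters from the host vehicle
    fprintf("Actor %d: range %.1f m, yaw %.1f deg\n", ...
        poses(i).ActorID,range,poses(i).Yaw)
end
```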

Version History

Introduced in R2023a