Markov jump linear systems (MJLS) simulation
Hi, I'm looking for a Matlab script/code to simulate a Markovian jump linear system
Answers (1)
Arnav
on 22 Aug 2024
To simulate an MJLS, we can implement a function that iteratively updates the system state according to the current Markov mode, applying that mode's state-space matrices at each step. A transition matrix governs the probabilities of moving between modes.
function [x, y, u_values, r_values] = mjls_simulate(P, A, B, C, R_w, R_v, U, r_0, x_0, N)
    % Create Markov chain object from the transition matrix
    mc = dtmc(P);

    % Initial state distribution: all probability mass on the initial mode r_0
    X0 = zeros(1, size(P, 1));
    X0(r_0) = 1;

    % Simulate the Markov chain (returns N+1 states, including the initial one)
    r_values = simulate(mc, N, 'X0', X0)';

    % Preallocate state, output, and control input
    n = length(x_0);    % State dimension
    p = size(C{1}, 1);  % Output dimension
    x = zeros(n, N + 1);
    y = zeros(p, N);
    u_values = zeros(1, N);

    % Initial condition
    x(:, 1) = x_0;

    % Precompute Cholesky factors so correlated noise can be drawn cheaply
    chol_Rw = chol(R_w, 'lower');
    chol_Rv = chol(R_v, 'lower');

    for k = 1:N
        % Draw process and measurement noise
        w = chol_Rw * randn(n, 1);
        v = chol_Rv * randn(p, 1);

        % Current Markov mode
        r = r_values(k);

        % Compute control input (at k = 1 the previous output is still zero)
        u = U(r, x(:, k), y(:, max(k - 1, 1)));
        u_values(k) = u;

        % Update state and output with the mode-dependent matrices
        x(:, k + 1) = A{r} * x(:, k) + B{r} * u + w;
        y(:, k) = C{r} * x(:, k) + v;
    end

    % Drop the final state so x has N columns, matching y and u_values
    x = x(:, 1:end-1);
end
Here the process and measurement noise terms w_k and v_k are taken to be zero-mean Gaussian (normally distributed) with covariances R_w and R_v.
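In equation form, the update loop above implements the standard discrete-time MJLS model:

```latex
x_{k+1} = A_{r_k} x_k + B_{r_k} u_k + w_k, \qquad w_k \sim \mathcal{N}(0, R_w),
\\
y_k = C_{r_k} x_k + v_k, \qquad v_k \sim \mathcal{N}(0, R_v),
```

where $r_k$ is the mode of the Markov chain at step $k$, evolving according to the transition matrix $P$.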
Let's take an example of a system involving a car on a road that can be made of gravel or asphalt. The car's dynamics change depending on the type of road surface, which is modelled as different states in the Markov chain.
Parameters:
- A, B, C: Cell arrays of state-space matrices, one set per road condition (asphalt and gravel). These define how the system state evolves and how the output is generated.
- P: Transition matrix of the Markov chain, giving the probabilities of switching between asphalt and gravel.
- R_w, R_v: Covariance matrices of the process and measurement noise, respectively.
- U: Control function defining how the car's velocity is adjusted.
- r_0, x_0: Initial Markov state and initial state vector.
- N: Number of simulation steps.
Example Simulation:
% Define system matrices for each terrain
v_max = 20; % Max speed (for reference; not enforced by the simple controller below)
a_asphalt = 0.1; % Acceleration gain on asphalt
a_gravel = 0.05; % Acceleration gain on gravel
A1 = [1, 1; 0, 1];
B1 = [0; a_asphalt];
C1 = [1, 0];
A2 = [1, 1; 0, 1];
B2 = [0; a_gravel];
C2 = [1, 0];
A = {A1, A2};
B = {B1, B2};
C = {C1, C2};
% Transition matrix: each surface persists with probability 0.7 per step
P = [0.7, 0.3; 0.3, 0.7];
% Initial conditions
r_0 = 1; % Start on asphalt
x_0 = [0; 0]; % Initial position and velocity
% Covariance matrices
R_w = 0.01 * eye(2); % Small process noise
R_v = 0.01; % Small measurement noise
% Control function: Simple speed control
desired_speed = 15; % Desired cruising speed
Kp = 0.5; % Proportional gain for control
U = @(r, x, y) Kp * (desired_speed - x(2)); % Ensure u is a scalar
% Number of simulation steps
N = 500;
% Run simulation
[x, y, u_values, r_values] = mjls_simulate(P, A, B, C, R_w, R_v, U, r_0, x_0, N);
% Plot results
figure;
% Plot position
subplot(3, 1, 1);
plot(1:N, x(1, :), 'b', 'DisplayName', 'Position');
title('Position over Time');
xlabel('Time step');
ylabel('Position');
% Plot velocity and control input on the same graph
subplot(3, 1, 2);
plot(1:N, x(2, :), 'r', 'DisplayName', 'Velocity');
hold on;
plot(1:N, u_values, 'g', 'DisplayName', 'Control Input');
hold off;
title('Velocity and Control Input over Time');
xlabel('Time step');
ylabel('Value');
legend('show');
% Plot Markov state
subplot(3, 1, 3);
stairs(0:N, r_values);
title('Markov State over Time');
xlabel('Time step');
ylabel('Markov State');
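As a quick sanity check (assuming the simulation above has been run), the empirical transition matrix estimated from r_values should be close to P for large N:

```matlab
% Estimate the transition matrix empirically from the simulated mode path
num_modes = size(P, 1);
P_hat = zeros(num_modes);
for k = 1:numel(r_values) - 1
    P_hat(r_values(k), r_values(k + 1)) = P_hat(r_values(k), r_values(k + 1)) + 1;
end
P_hat = P_hat ./ sum(P_hat, 2); % normalize each row to sum to 1
disp(P_hat)                     % should be close to P for N = 500
```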
I have used the simulate function to generate the Markov chain states according to the given transition matrix; the governing equations are taken from the link in the comment posted above.
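Note that dtmc and simulate require the Econometrics Toolbox. If it is not available, the chain can be sampled directly from the transition matrix — a minimal sketch, assuming the same P, r_0, and N as above:

```matlab
% Sample a Markov chain path of length N+1 without the Econometrics Toolbox
r_values = zeros(1, N + 1);
r_values(1) = r_0;
cumP = cumsum(P, 2); % cumulative transition probabilities, one row per mode
for k = 1:N
    % Next mode: first column whose cumulative probability exceeds a uniform draw
    r_values(k + 1) = find(rand <= cumP(r_values(k), :), 1);
end
```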
You can learn more about it here:
I hope it helps!