Markov Decision Processes (MDP) Toolbox

Functions for solving discrete-time Markov Decision Processes.
15.2K downloads
Updated 20 Jan 2015

View License

The MDP toolbox provides functions for solving discrete-time Markov Decision Processes: backwards induction, value iteration, policy iteration, and linear programming algorithms, with some variants.
The functions were developed in MATLAB (note that one of the functions requires the MathWorks Optimization Toolbox) by Iadine Chadès, Marie-Josée Cros, Frédérick Garcia, and Régis Sabbadin of the Biometry and Artificial Intelligence Unit of INRA Toulouse (France).
Toolbox page: http://www.inra.fr/mia/T/MDPtoolbox
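As a quick illustration of how the solvers are typically called, here is a minimal sketch that runs value iteration on a small, made-up two-state, two-action MDP. The transition array P, reward matrix R, and discount value are invented for the example, and the exact argument list and outputs of mdp_value_iteration are assumed from the toolbox documentation, so check the function help before relying on them.

% Minimal sketch (assumed calling convention): value iteration on a toy MDP.
% P(:,:,a) is the transition matrix of action a (each row sums to 1);
% R(s,a) is the immediate reward for taking action a in state s.
P = zeros(2, 2, 2);
P(:,:,1) = [0.8 0.2; 0.1 0.9];   % action 1
P(:,:,2) = [0.5 0.5; 0.4 0.6];   % action 2
R = [ 5  10;
     -1   2];
discount = 0.9;

% Assumed outputs: the optimal value function V and an optimal policy
% giving one action per state.
[V, policy] = mdp_value_iteration(P, R, discount);
disp(V)
disp(policy)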

Cite As

Marie-Josee Cros (2024). Markov Decision Processes (MDP) Toolbox (https://www.mathworks.com/matlabcentral/fileexchange/25786-markov-decision-processes-mdp-toolbox), MATLAB Central File Exchange. Retrieved .

MATLAB Release Compatibility
Created with R2014b
Compatible with any release
Platform Compatibility
Windows macOS Linux
Acknowledgements

Inspired: Betavol(x,R,fig)

Version and Release Notes

1.6
Add the possibility to download as a toolbox (.mltbx file).

1.5.0.0
Complete the Other Requirements section.

1.4.0.0
Mainly improve documentation (Jan. 2014).

1.3.0.0
Update the zip file.

1.2.0.0
Version 4.0 (October 2012) is entirely compatible with GNU Octave (version 3.6); the output of several functions (mdp_relative_value_iteration, mdp_value_iteration, and mdp_eval_policy_iterative) was modified.

1.1.0.0
Add all authors' names.

1.0.0.0