Create and Train DQN Agent with just a State Path and Not Action Path
Huzaifah Shamim
on 5 Jul 2020
Commented: Huzaifah Shamim on 6 Jul 2020
Every example I have seen of a DQN in MATLAB uses two input paths, one for the state and one for the action. However, DQN can also be set up with a single input (the state only), yet there are no examples for that case. How can that be done in MATLAB? My input would be a binary vector, and my output would be a choice between two actions.
Basically, I am trying to recreate this paper: http://cwnlab.eecs.ucf.edu/wp-content/uploads/2019/12/2019_MLSP_ANCS_NAZMUL.pdf
0 Comments
Accepted Answer
Emmanouil Tzorakoleftherakis
on 6 Jul 2020
Hello,
This page shows how this can be done in R2020a. We will have examples demonstrating this workflow in the next release.
Hope that helps.
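For anyone landing here later, a minimal sketch of the single-input (state-only) critic in the R2020a API follows. With `rlQValueRepresentation`, a network that takes only the observation and outputs one Q-value per discrete action gives you exactly this workflow. The observation size (a 4-element binary vector) and layer widths below are hypothetical placeholders, not values from the linked paper:

```matlab
% Hypothetical problem sizes: 4-element binary state, 2 discrete actions
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([1 2]);

% Single state path; the last layer outputs one Q-value per action
net = [
    imageInputLayer([4 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(24,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(2,'Name','qvals')];

% Vector Q-value critic: only the observation input is named
critic = rlQValueRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'state'});

% DQN agent built on the state-only critic
agent = rlDQNAgent(critic);
```

The key difference from the two-input examples is that no `'Action'` input name is passed to `rlQValueRepresentation`; the network head enumerates the Q-values for all actions instead.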
9 Comments
Emmanouil Tzorakoleftherakis
on 6 Jul 2020
This sounds doable. You may even be able to do this without custom training loops by using built-in agents (something like centralized multi-agent RL). You can use a single agent and, at each step, extract the appropriate action and apply it to the appropriate part of the environment. The tricky part, as is typical of multi-agent RL, is choosing the right set of observations to make sure your process is Markov. This will likely require observations from each 'subagent'.
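One way to read the centralized suggestion above is to give the single agent a joint action space and decode it inside the environment step. This is a hypothetical sketch (the subagent count, action counts, and decoding scheme are illustrative assumptions, not from the answer):

```matlab
% Hypothetical setup: one centralized agent controls two subagents,
% each with 3 discrete actions, via a 3*3 = 9-element joint action space
numA = 3;
actInfo = rlFiniteSetSpec(1:numA^2);

% Inside the environment step function, decode the joint action index
% into one action per subagent and apply each to its part of the plant
jointAction = 5;                           % example joint action index
[a1, a2] = ind2sub([numA numA], jointAction);
```

The agent then sees a single discrete action space, while the environment routes `a1` and `a2` to the respective subsystems.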
More Answers (0)