How can I optimize GPU usage while training multiple RL PPO Agents using multiple GPUs?
MathWorks Support Team on 6 Mar 2024
Answered: MathWorks Support Team on 18 Mar 2024
I wish to train multiple PPO agents asynchronously using multiple GPUs. What is the best way to allocate GPU and CPU resources to achieve this?
Accepted Answer
MathWorks Support Team on 6 Mar 2024
If the networks are small, the best approach is to train on the CPU in a parallel pool with an appropriate number of workers rather than on a GPU. PPO tends to benefit from large amounts of training data, and small actor and critic networks may not be large enough for GPU training to deliver a meaningful speedup over the CPU.
If you do train on GPUs, restrict the parallel pool worker count to the number of available GPUs, so that each worker has exclusive access to one GPU. For more information on training with multiple GPUs, please refer to the following page:
With reference to the information in the above link, please keep the following additional points in mind:
- In your "rlTrainingOptions" object, if "UseParallel" is set to true and the actor and critic are set to use the GPU, MATLAB automatically uses multiple GPUs for training. In this case, calling "train" inside a "parfor" loop or an "spmd" block is not supported.
- If "UseParallel" is set to false in the "rlTrainingOptions" object and the actor and critic are set to use the GPU, you may call "train" inside a "parfor" loop.
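The second case above can be sketched as follows. This is a minimal outline, not a complete script: "buildEnv", "buildPPOAgent", and "saveAgent" are hypothetical helper functions you would write for your application (the agent helper would create the actor and critic with UseDevice set to "gpu"), and the training option values are placeholders.

```matlab
% Sketch: one parallel worker per GPU, each training its own PPO agent.
% UseParallel must be false when "train" is called inside a parfor loop.
nGPU = gpuDeviceCount("available");
parpool(nGPU);                         % restrict pool size to the GPU count

trainOpts = rlTrainingOptions( ...
    UseParallel=false, ...             % required for train inside parfor
    MaxEpisodes=500);                  % placeholder stopping criterion

parfor k = 1:nGPU
    gpuDevice(k);                      % bind this worker to GPU k
    env   = buildEnv(k);               % hypothetical: environment for agent k
    agent = buildPPOAgent(env);        % hypothetical: actor/critic use the GPU
    stats = train(agent, env, trainOpts);
    saveAgent(agent, k);               % hypothetical: persist the trained agent
end
```

Because the pool size equals the GPU count, each worker runs exactly one loop iteration and therefore keeps exclusive use of one GPU for the duration of its training run.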