Speed up 'dlgradient' with parallelism?
6 views (last 30 days)
Hi all,
I am wondering if there is a way to speed up the 'dlgradient' function evaluation using parallelism or GPUs.
0 Comments
Answers (1)
Jon Cherrie
on 12 Apr 2021
You can use a GPU for the dlgradient computation by using a gpuArray with dlarray.
In this example, minibatchqueue puts the data onto the GPU, so the GPU is used for the rest of the computation: both the "forward" pass and the "backward" (gradient) pass.
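The example Jon refers to isn't reproduced here; below is a minimal sketch of the same idea, assuming Deep Learning Toolbox plus Parallel Computing Toolbox and a supported GPU. The function name modelGradient and the toy loss are placeholders for illustration, not from the original answer:

% Put the data on the GPU by wrapping a gpuArray in a dlarray
x = dlarray(gpuArray(single([1 2 3])));

% dlgradient must be called inside a function evaluated by dlfeval
[loss, grad] = dlfeval(@modelGradient, x);
gradValues = gather(extractdata(grad));   % bring the result back to the CPU

function [loss, grad] = modelGradient(x)
    loss = sum(x.^2, 'all');      % toy scalar "loss"
    grad = dlgradient(loss, x);   % computed on the GPU because x lives there
end

In a full training loop, minibatchqueue can do this placement automatically; its 'OutputEnvironment' name-value option controls whether each mini-batch is returned on the GPU or the CPU.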
0 Comments