Speed up 'dlgradient' with parallelism?
Jon Cherrie on 12 Apr 2021
You can use a GPU for the dlgradient computation by using a gpuArray with dlarray.
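For example, a minimal sketch of that pattern (the function sumOfSquares and the variable names here are illustrative, not from the original answer):

x = dlarray(gpuArray(rand(10,1,'single')));   % GPU-backed dlarray
[y,dydx] = dlfeval(@sumOfSquares,x);          % y and dydx stay on the GPU

function [y,dydx] = sumOfSquares(x)
    y = sum(x.^2);              % forward computation runs on the GPU
    dydx = dlgradient(y,x);     % gradient is computed on the GPU too
end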
In this example, the minibatchqueue puts the data onto the GPU, so the GPU is used for the rest of the computation: both the "forward" pass and the "backward" (gradient) pass. A sketch of that pattern follows:
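This is a minimal sketch of the training loop the answer describes; the datastore ds, the dlnetwork net, and the helper modelLoss are assumed placeholders, and the mini-batch size and formats are illustrative:

mbq = minibatchqueue(ds, ...
    "MiniBatchSize",128, ...
    "MiniBatchFormat",["SSCB","CB"], ...   % image batches and one-hot targets
    "OutputEnvironment","gpu");            % next() returns gpuArray-backed dlarray

while hasdata(mbq)
    [X,T] = next(mbq);                     % mini-batch is already on the GPU
    % dlfeval evaluates modelLoss with automatic differentiation enabled,
    % so both the forward pass and dlgradient run on the GPU.
    [loss,gradients] = dlfeval(@modelLoss,net,X,T);
    % ... update the learnable parameters here, e.g. with adamupdate ...
end

function [loss,gradients] = modelLoss(net,X,T)
    Y = forward(net,X);                          % "forward" pass
    loss = crossentropy(Y,T);
    gradients = dlgradient(loss,net.Learnables); % "backward" (gradient) pass
end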