Many small Eigenvalue Decompositions in parallel on GPU?

13 views (last 30 days)
I have some code that involves a couple billion 3x3 and 4x4 eigenvalue decompositions. I have run this code with parfors on the CPU and the runtime is just barely bearable, but I'd really like to speed this up.
I have a GTX 780 available. I realize that a GPU is generally better suited to a few large matrix operations than to a large number of small ones. I looked at pagefun, which looks like the best way MATLAB has to run many small matrix operations in parallel. However, the functions pagefun supports are almost all element-by-element operations, with a few exceptions such as mtimes, rdivide, and ldivide. Unfortunately eig is not one of those functions.
Is there any other way to run this code on the GPU?
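For context, the kind of paged call that pagefun does support looks like the sketch below (assuming Parallel Computing Toolbox; note the whole stack moves to and from the GPU in a single transfer each way):

```matlab
% Sketch: batched 3x3 multiplies with pagefun. mtimes is one of the few
% non-elementwise functions pagefun supports; eig is not among them.
N = 1e6;
A = gpuArray(rand(3, 3, N, 'single'));   % one host-to-device transfer
B = gpuArray(rand(3, 3, N, 'single'));
C = pagefun(@mtimes, A, B);              % C(:,:,i) = A(:,:,i)*B(:,:,i)
C = gather(C);                           % one device-to-host transfer
```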
  2 Comments
Matt J
Matt J on 16 Aug 2015
Edited: Matt J on 16 Aug 2015
Are you sure you mean "several thousand"? My old machine from 2008 can do 10000 such decompositions without breaking a sweat:
>> tic; for i=1:10000, eig(rand(4)); end; toc
Elapsed time is 0.196188 seconds.
ervinshiznit
ervinshiznit on 16 Aug 2015
Oops. I just said "several thousand" without actually looking at how many times I'm calling eig. Looking at it, it's actually 2,200,570,000 calls to eig.
I'll edit the original post.
Of course this code involves other calculations as well which contribute to the runtime, but eig is the slowest portion.

Sign in to comment.

Answers (3)

Brian Neiswander
Brian Neiswander on 18 Aug 2015
The "pagefun" function does not currently support the function "eig". However, note that the "eig" function will accept GPU arrays generated with the "gpuArray" function:
X = rand(1e3,1e3);
G = gpuArray(X);   % transfer the matrix to the GPU
Y = eig(G);        % computed on the GPU, one matrix at a time
Depending on your data, this can be faster than the non-GPU approach but it is not parallelized across the pages.
It is possible to implement your own CUDA kernel using the CUDAKernel object or MEX functions. This allows you to process custom functions using a distribution scheme of your choice. See the links below for more information:
  2 Comments
ervinshiznit
ervinshiznit on 19 Aug 2015
I already tried gpuArray. It's far too slow; the transfer times to and from the GPU kill me. It does provide a speedup for larger matrices, but not for 3x3 or 4x4.
CUDA kernels will not work for me because that's a lot of development time that I do not have. Looks like I'm just stuck with the runtimes.
Birk Andreas
Birk Andreas on 16 Jul 2019
So, it's already 2019 and some MAGMA eigenvalue functions are now implemented. However, there is still no eig for pagefun...
What prevents the progress?
Could you give an estimate, when it will be implemented?
It would really be very welcome!

Sign in to comment.


Joss Knight
Joss Knight on 21 Aug 2015
Edited: Joss Knight on 21 Aug 2015
Have you tried just concatenating your matrices in block-diagonal form and calling eig? You may then be limited by memory, but the eigenvalues and vectors of a block-diagonal system are just the union of the eigenvalues and vectors of the blocks:
N = 1000;
A = rand(3,3,N);
% Build a logical mask selecting the 3x3 diagonal blocks of a 3N-by-3N matrix
maskCell = mat2cell(ones(3,3,N),3,3,ones(1,N));
mask = logical(blkdiag(maskCell{:}));
% Scatter the pages of A onto the block diagonal, directly on the GPU
Ablk = gpuArray.zeros(3*[N,N]);
Ablk(mask) = A(:);
[Vblk,Dblk] = eig(Ablk);   % Ablk is already a gpuArray
% Unpack the per-block results back into 3x3xN arrays
V = reshape(Vblk(mask), [3 3 N]);
D = reshape(Dblk(mask), [3 3 N]);
You should then find that A(:,:,i)*V(:,:,i) equals V(:,:,i)*D(:,:,i) (up to floating-point tolerance), as required. Because of the way eigendecomposition works, I would expect the extra unnecessary zeros not to affect the performance much; the system should converge straightforwardly and parallelize well.
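A quick way to sanity-check the unpacked results on a few pages (a sketch; it gathers back to the host and compares within a tolerance, since exact equality rarely holds in floating point):

```matlab
% Verify A*V == V*D page by page, within floating-point tolerance.
% Assumes A, V, D from the block-diagonal snippet above.
Vh = gather(V);
Dh = gather(D);
for i = 1:10
    res = norm(A(:,:,i)*Vh(:,:,i) - Vh(:,:,i)*Dh(:,:,i));
    assert(res < 1e-10, 'page %d failed: residual %g', i, res);
end
```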
  5 Comments
Joss Knight
Joss Knight on 24 Aug 2015
Also, I see that the GTX 780 has terrible double-precision performance: 166 GFLOPS, versus 3977 GFLOPS in single precision. Try running your code in single precision.
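Switching to single precision is a small change (a sketch; check first that single precision is accurate enough for your application):

```matlab
% Double precision is ~24x slower than single on a GTX 780,
% so convert once on the host before transferring.
N = 1000;
A = rand(3, 3, N);           % original double-precision data
G = gpuArray(single(A));     % convert to single, then transfer
% ... batched GPU work on G ...
out = double(gather(G));     % back to double on the host if needed
```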
kunx
kunx on 22 Jan 2022
Thank you, your direction is very helpful.

Sign in to comment.


James Tursa
James Tursa on 20 Aug 2015
If you just need the eigenvalues, you might look at this FEX submission by Bruno Luong:
Maybe you can expand it for 4x4 as well.
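If the matrices happen to be symmetric, the 3x3 eigenvalues also have a well-known closed form (the trigonometric solution of the cubic characteristic polynomial) that vectorizes over all pages at once. A sketch, with `eig3x3sym` a hypothetical helper name (save as eig3x3sym.m); it assumes no matrix is an exact multiple of the identity:

```matlab
function E = eig3x3sym(A)
% Closed-form eigenvalues of real symmetric 3x3 matrices, vectorized
% over the pages of A (3x3xN). Returns an N-by-3 array, sorted
% descending. Only valid for symmetric input; p must be nonzero
% (i.e. no page is an exact multiple of the identity).
a11 = squeeze(A(1,1,:)); a22 = squeeze(A(2,2,:)); a33 = squeeze(A(3,3,:));
a12 = squeeze(A(1,2,:)); a13 = squeeze(A(1,3,:)); a23 = squeeze(A(2,3,:));
q  = (a11 + a22 + a33) / 3;                        % mean eigenvalue
p1 = a12.^2 + a13.^2 + a23.^2;
p2 = (a11-q).^2 + (a22-q).^2 + (a33-q).^2 + 2*p1;
p  = sqrt(p2 / 6);
% r = det((A - q*I)/p) / 2, written out for the 3x3 case
b11 = (a11-q)./p; b22 = (a22-q)./p; b33 = (a33-q)./p;
b12 = a12./p; b13 = a13./p; b23 = a23./p;
r = (b11.*(b22.*b33 - b23.^2) - b12.*(b12.*b33 - b23.*b13) ...
     + b13.*(b12.*b23 - b22.*b13)) / 2;
r = max(min(r, 1), -1);                            % clamp for acos
phi = acos(r) / 3;
l1 = q + 2*p.*cos(phi);                            % largest eigenvalue
l3 = q + 2*p.*cos(phi + 2*pi/3);                   % smallest eigenvalue
l2 = 3*q - l1 - l3;                                % trace(A) = l1+l2+l3
E  = [l1, l2, l3];
end
```

Because every step is elementwise, the same code runs unchanged on a gpuArray input, with no per-matrix kernel launches.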
  4 Comments
ervinshiznit
ervinshiznit on 21 Aug 2015
I know, but as I said in a comment on Brian's answer, the transfer times of 3x3 and 4x4 matrices to the GPU kill me. I was saying that maybe I should use an explicit formula on the CPU, not the GPU. But your answer of using a block-diagonal matrix might work out.
Joss Knight
Joss Knight on 24 Aug 2015
Edited: Joss Knight on 24 Aug 2015
Why do you need to transfer 3x3 and 4x4 matrices to the GPU independently? Just transfer it all as one 3D array. You have to anyway to use pagefun.

Sign in to comment.
