Computationally efficient Matrix-Vector multiplication
Hello All,
I have come across the problem of multiplying a matrix and a vector in the most efficient way possible. Instead of using N^2 operations, maybe something on the order of N log N operations could be used, or something like that. I have seen and worked with some loops for making it fast and efficient, which are in the link given below.
https://www.mathworks.com/matlabcentral/answers/7039-how-to-multiply-a-vector-with-each-column-of-a-matrix-most-efficiently
But I am looking for more options or ideas, if anyone has any, to make it more efficient and reduce the number of operations as much as possible. The above link only deals with making the element-by-element multiplication in a loop fast, but what would be the best way to reduce the number of operations? Ideas, suggestions, or any other help are welcome.
2 Comments
Adam
on 4 Nov 2016
What version of MATLAB are you using? If you have R2016b you can simply multiply them together:
result = myMatrix .* myVector;   % implicit expansion (R2016b and later)
In prior versions I would just use bsxfun, but if I were really bothered about speed I would run all the alternatives and time them, including repmat, a for loop, and any other ideas I might have.
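For what it's worth, here is a minimal timing sketch along those lines (the size, variable names, and use of timeit are assumptions for illustration; the first timed expression needs R2016b or later):
N = 4000;                 % assumed size for illustration
A = rand(N);              % N-by-N matrix
v = rand(N,1);            % N-by-1 column vector
tImplicit = timeit(@() A .* v);                  % implicit expansion (R2016b and later)
tBsxfun   = timeit(@() bsxfun(@times, A, v));    % works on older releases too
tRepmat   = timeit(@() A .* repmat(v, 1, N));    % explicit replication of the vector
fprintf('implicit: %.4f s   bsxfun: %.4f s   repmat: %.4f s\n', ...
    tImplicit, tBsxfun, tRepmat);
All three compute the same N^2 element-wise products; only the overhead differs.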
Answers (1)
James Tursa
on 4 Nov 2016
Edited: James Tursa
on 4 Nov 2016
I am not following your suggestion. The link in question refers to an element-wise multiplication, which produces N^2 unique individual multiplication results. You can't reduce that. For large sizes you can potentially speed up the overall process with multi-threading and by optimizing how you access memory, but you can't change the fact that you need to do N^2 individual multiplies to get the NxN result. E.g., one of these, depending on what you actually want:
result = bsxfun(@times,myMatrix,myVector);    % column vector: row i of myMatrix is scaled by myVector(i)
or
result = bsxfun(@times,myMatrix,myVector.');  % row vector: column j of myMatrix is scaled by myVector(j)
3 Comments
James Tursa
on 4 Nov 2016
Edited: James Tursa
on 4 Nov 2016
This article is almost certainly referring to a matrix multiply, not an element-by-element multiply. That is a completely different problem from the one in the link in your post. For a matrix multiply, yes, there are things you can do to speed it up by reordering the operations, multi-threading, etc. You can do this for a variety of linear algebra operations, not just matrix multiply. In fact, companies have developed highly optimized code libraries just for this purpose, called BLAS and LAPACK, and MATLAB uses them in the background for linear algebra operations. E.g.,
a = rand(1000); % arbitrary 1000x1000 matrix
b = rand(1000); % arbitrary 1000x1000 matrix
c = a * b; % matrix multiply
For the matrix multiply * operation, MATLAB actually calls a BLAS library function (dgemm) to do the work. Efficient memory access patterns and multi-threading are already built in to this library. This BLAS library also contains routines for matrix*vector multiplication (dgemv) ... again optimized for efficient memory access and multi-threaded.
So from a practical standpoint, you are not going to be able to write anything yourself that will beat this BLAS library for speed ... certainly not at the m-file level.
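As a rough illustration of that point (the size and variable names here are assumptions), you can time the BLAS-backed * operator against an equivalent hand-written m-file loop for a matrix*vector product:
N = 4000;                  % assumed problem size for illustration
A = rand(N);               % N-by-N matrix
v = rand(N,1);             % N-by-1 column vector
% Built-in matrix*vector multiply (BLAS dgemv under the hood)
tic; y1 = A*v; tBlas = toc;
% Hand-written loop: the same N^2 multiplies, but no BLAS optimizations
y2 = zeros(N,1);
tic;
for i = 1:N
    s = 0;
    for j = 1:N
        s = s + A(i,j)*v(j);
    end
    y2(i) = s;
end
tLoop = toc;
fprintf('BLAS A*v: %.4f s   m-file loop: %.4f s   max abs diff: %g\n', ...
    tBlas, tLoop, max(abs(y1-y2)));
Both versions perform the same number of multiplies; the difference is entirely in memory access patterns, multi-threading, and interpreter overhead.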