Slowdown when accessing arrays

Joseph Cullen on 25 May 2012
I have some code that really slows down when storing results. Code 2 runs orders of magnitude faster than Code 1, especially as k increases (e.g. k = 500). I need to store the results from each iteration i, but it seems very costly computationally. Why is this? Is there a better way to code it?
Code 1:
Q = zeros(k,k,N);
for i = 1:N
    Q(:,:,i) = Q(:,:,i) + e(i)^2*XtXt;
end
Code 2:
Q = zeros(k,k);
for i = 1:N
    Q = Q + e(i)^2*XtXt;
end
Here is the entire function if it is helpful. There are a lot of loops, but I haven't found a clever way to eliminate any of them.
function varBhat = NeweyWest(e, X, L, invXX)
% e     = (T x N)
% X     = (T x k)
% L     = scalar
% invXX = (k x k)
% Determine the size of the matrix of regressors
[T, k] = size(X);
% Determine the number of units
N = size(e,2);
% Calculate the Newey-West autocorrelation-consistent covariance
Q = zeros(k,k,N);
for l = 0:L
    w_l = 1 - l/(L+1);
    for t = l+1:T
        if (l == 0) % This calculates the S_0 portion
            XtXt = compute_XtXt(X,t); % (k x k)
            % Reuse the computed matrix for all N units
            for i = 1:N
                Q(:,:,i) = Q(:,:,i) + e(t,i)^2*XtXt;
            end
        else % This calculates the off-diagonal terms
            XtXl = compute_XtXl(X,t,l,w_l); % (k x k)
            % Reuse the computed matrix for all N units
            for i = 1:N
                Q(:,:,i) = Q(:,:,i) + e(t,i)*e(t-l,i)*XtXl;
            end
        end
    end
end
Q = 1/(T-k) * Q;
% Calculate Newey-West standard errors (loops over each unit)
varBhat = finalNW(T, X, Q, invXX, N);
end
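For what it's worth, the innermost unit loop in both branches can be collapsed into a single outer-product update per time step, since the same k x k matrix is scaled by a different scalar for each unit. A hedged sketch (compute_XtXt and compute_XtXl are the poster's own helpers, assumed to return k x k matrices; e is T x N as documented above):
if (l == 0)
    XtXt = compute_XtXt(X,t); % (k x k)
    Q = Q + reshape(XtXt(:) * (e(t,:).^2), [k, k, N]);
else
    XtXl = compute_XtXl(X,t,l,w_l); % (k x k)
    Q = Q + reshape(XtXl(:) * (e(t,:).*e(t-l,:)), [k, k, N]);
end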
2 Comments
Walter Roberson on 25 May 2012
Is XtXl a scalar, or is it a k x k array?
Joseph Cullen on 26 May 2012
Yeah, I wasn't specific:
XtXl is a k x k matrix
e(i) is a scalar
Q is a k x k matrix


Answers (2)

Nathaniel on 25 May 2012
How much memory is in your computer, and how big is N? If you run out of available physical memory, you will spend a lot of time waiting while your OS swaps to and from the hard disk.
1 Comment
Joseph Cullen on 26 May 2012
It is not an issue with running out of memory; the slowdown appears gradually as k grows.
I was testing it with k = 40 and N = 10. Importantly, this loop is nested inside another loop, so it is executed many, many times.
Eventually k and N will be large, 1000 and 300 respectively, but this shouldn't create memory problems.
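For reference, a quick footprint check for those eventual sizes (taken from the comment above; doubles are 8 bytes each):
bytesQ = 1000 * 1000 * 300 * 8; % k*k*N doubles = 2.4e9 bytes, about 2.4 GB for Q alone
So Q alone would need roughly 2.4 GB, which may or may not fit comfortably depending on the machine's RAM.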



Image Analyst on 26 May 2012
Well yeah! In the second case you're just overwriting the same k x k matrix N times, so you're reusing one small block of memory. In the first case you're assigning a different k x k slice of a 3-D array on every pass, and with k = 500 that means 250,000 times N distinct memory locations get touched. So I don't doubt that case 1 would run hundreds of times slower, simply because you're allocating and storing hundreds of thousands more elements. What is the value of N? If it's less than about 5, this should still finish in just a few seconds, but if N is also around 500 it could take a very long time, and you might even run out of memory.
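A minimal timing sketch to compare the two cases directly (hypothetical sizes and random data; absolute tic/toc timings will vary with the machine):
k = 500; N = 10;
e = randn(N,1); XtXt = randn(k);
Q = zeros(k,k,N);
tic
for i = 1:N
    Q(:,:,i) = Q(:,:,i) + e(i)^2*XtXt; % Code 1: touches a different 2 MB slice each pass
end
toc
Q2 = zeros(k,k);
tic
for i = 1:N
    Q2 = Q2 + e(i)^2*XtXt; % Code 2: reuses one cache-resident 2 MB block
end
toc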
5 Comments
Joseph Cullen on 30 May 2012
Thanks for the thoughts, guys.
I compiled this to a mex file (using MATLAB Coder) to speed up the looping. Interestingly, when it runs, total CPU usage sits at about 12% on a quad-core machine. It seems like I have a memory bottleneck rather than a CPU bottleneck.
I also defined this as a 2-D array rather than a 3-D array, but as expected it didn't make any difference.
Joseph Cullen on 11 Jul 2012
Not a memory bottleneck after all. The 12% usage comes from the mex file being serial code and Intel hyperthreading (HT) being enabled. If I disable HT, the mex code uses 25% of CPU resources, which is one full core maxed out.

