Multiple Linear Regression Trouble

3 views (last 30 days)
Jess on 3 Jul 2014
Commented: dpb on 8 Jul 2014
I'm using regression techniques to attempt spectrum unfolding, but the result MATLAB gives is wrong. I start with several response functions that have been modeled for mono-energetic values, as in IgorPlot.png. Each of these responses is multiplied by a unique coefficient "a(n)" and the summation is taken. Using regression techniques, the summation is then fitted to an example function, as in FitTest.png, to produce the coefficient matrix "a", which should indicate which modeled response functions are most prevalent in the example. This technique has been proven to work in similar applications, but I'm running into a problem where, as can be seen in FitTest, MATLAB favors the last response function above every other function used.
I've tried rewriting the code to include both fewer and more responses, and in both cases, regardless of the number of responses used, MATLAB consistently favors the last response listed in the summation equation. Am I going about this completely wrong, or is there something I'm missing here?
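The setup described above is a linear least-squares problem: stack the modeled responses as columns of a matrix and solve for the coefficient vector in one shot. Here is a minimal NumPy sketch of that idea using toy Gaussian responses as stand-ins for the real modeled functions (all names and shapes here are illustrative assumptions, not the original code):

```python
import numpy as np

# Hypothetical stand-ins for the modeled mono-energetic responses:
# each column of R is one response function sampled on the channel axis.
rng = np.random.default_rng(0)
n_channels, n_responses = 200, 13
channels = np.arange(n_channels)
centers = np.linspace(10.0, 190.0, n_responses)

# Toy Gaussian responses (the real ones would come from the physics model)
R = np.exp(-0.5 * ((channels[:, None] - centers[None, :]) / 5.0) ** 2)

# Synthetic "measured" spectrum: a known linear combination of the columns
a_true = rng.uniform(0.0, 1.0, n_responses)
y = R @ a_true

# Solve the least-squares problem y ~= R*a directly
# (the NumPy analogue of MATLAB's  a = R \ y)
a_fit, *_ = np.linalg.lstsq(R, y, rcond=None)
```

With well-separated responses this recovers `a_true` to floating-point precision; no single column is "favored" unless the columns are nearly collinear or the system is set up incorrectly.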
  6 Comments
dpb on 7 Jul 2014
...Image that didn't attach.
How in the world did you generate the wiggles in the "LSR fitted response"??? That would take a heckuva' lot more than 13 terms it would seem...
Jess on 8 Jul 2014
The "wiggles" as you call them are actually a result of something called the Bragg peak from nuclear physics and are present in each of the 13 base responses used. Although the angle may not be ideal in the IgorPlot file, you can still see these peaks to an extent.


Answers (1)

dpb on 8 Jul 2014
Edited: dpb on 8 Jul 2014
On reflection, I think the comment I made earlier about MATLAB building the X.'*X matrix actually is your problem. I just happened to still have the data in memory from another current thread, where a poor misguided soul is trying to fit a nonlinear model of incredible complexity to a set of data that is an essentially perfect quadratic. Anyway, having solved for that quadratic with polyfit, I demonstrate the same computation with mldivide --
>> x=[1:length(yh)].'; % I'd lost his x; just used 1:N instead for demo
>> b=polyfit(x,yh,2) % the polyfit results
b =
-4.1158e-05 0.0600 -22.9005
Now build the design matrix X for a quadratic...
>> X=[x.*x x ones(size(x))]; % the design matrix
>> best=X\yh % solve the least squares problem
best =
-0.0000
0.0600
-22.9005
>> b.'-best
ans =
1.0e-14 *
-0.0000
0.0028
-0.7105
>>
Note the results aren't identical, but they agree to within 1e-14 or so, which is down at the precision noise level.
Hence, NB: MATLAB forms the X'*X matrix internally in the process of solving via \; you do not need to create it yourself, and you'll get the wrong answer if you do. I'm guessing that's your source of confusion and difficulty in understanding your results.
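The same demonstration can be re-run in NumPy (the x and y data here are synthetic stand-ins I constructed from the quadratic coefficients shown above, not the original thread's data): a polynomial fit and a direct least-squares solve on the design matrix agree to floating-point noise.

```python
import numpy as np

# Synthetic stand-in for the thread's data: an essentially perfect quadratic
x = np.arange(1.0, 101.0)
yh = -4.1158e-05 * x**2 + 0.0600 * x - 22.9005

# Route 1: polyfit (analogue of MATLAB's polyfit(x, yh, 2))
b_polyfit = np.polyfit(x, yh, 2)

# Route 2: build the quadratic design matrix and solve least squares directly
# (analogue of X = [x.*x x ones(size(x))];  best = X \ yh)
X = np.column_stack([x * x, x, np.ones_like(x)])
b_lstsq, *_ = np.linalg.lstsq(X, yh, rcond=None)

# The two coefficient vectors differ only by floating-point noise
diff = np.max(np.abs(b_polyfit - b_lstsq))
```

Note that the least-squares routine is handed the design matrix X itself, never a pre-formed X'*X.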
  4 Comments
Jess on 8 Jul 2014
Thank you for clarifying that. I've done a shortened run to test it out, and letting MATLAB generate the X matrix appears to have solved part of the problem with the coefficients. I'll try running one of the larger codes and let you know how it goes.
dpb on 8 Jul 2014
For a linear model in N variables, the design matrix would be X=[x1 x2 x3 ... xN];...
NB: this is a zero-intercept model; your X_matrix including the term n indicates you probably have an intercept. The design matrix in that case includes a column of ones...
X=[ones(size(x1)) x1 x2 x3 ... xN];
for the model
y=a+b*x1+c*x2+...
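The intercept model above can be sketched in NumPy as follows (variable names and the synthetic data are mine, chosen only to illustrate the leading column of ones):

```python
import numpy as np

# Synthetic two-predictor data for the model  y = a + b*x1 + c*x2
rng = np.random.default_rng(1)
x1 = rng.uniform(0.0, 10.0, 50)
x2 = rng.uniform(0.0, 10.0, 50)
a, b, c = 2.0, -1.5, 0.75
y = a + b * x1 + c * x2

# Design matrix with a column of ones for the intercept,
# mirroring  X = [ones(size(x1)) x1 x2]  in MATLAB
X = np.column_stack([np.ones_like(x1), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[0] is the intercept a; coef[1:] are the slopes b and c
```

Dropping the ones column forces the fit through the origin, which biases every other coefficient when the true intercept is nonzero.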

