How can I solve a problem of lsqcurvefit?

Hello,
I've been fighting with a problem with lsqcurvefit and I couldn't solve it. I have this code:
gam=[580 580 580 1004 1004 1004]';
del=[0 240 120 210 150 90]';
tlag=[-100 -130 60 100 550 200]';
xdata12 = [del gam];
ydata=tlag;
x0=[100 5];
f = @(b,xdata12) xdata12(:,2).*cosd(xdata12(:,1)-b(1))/b(2);  % model: tlag = gam*cosd(del - b1)/b2
[x, resnorm] = lsqcurvefit(f, x0, xdata12, ydata)
But I got an error:
Local minimum possible. lsqcurvefit stopped because the size of the current step is less than the default value of the step size tolerance. <stopping criteria details>
I tried to change TotalFun, but I always got a new lower minimum.
I hope someone can help me.
Thanks, Francisco
2 Comments
John D'Errico on 25 Jan 2015
My guess is it was TolFun, not TotalFun you changed. But then what do I know? :)
Marc on 25 Jan 2015
As Star Strider points out, this works fine in R2014b, and it works fine in R2012b as well. The above is not an ERROR; it is a warning. You can turn these warnings off if you would like, using optimset.
As someone who does this constantly, I can say that sometimes the mathematical minimum does not make physical sense. Is this your problem? Is this "measured" data? In my experience, people take data from papers or books, use it to fit models, and get odd answers. Usually this happens because the data did not report the experimental error well. Once you start to account for experimental error in the data set, it becomes easier to see why math and reality don't always agree.
It sounds like you are getting plenty of answers, just maybe not ones you like.... Sounds like my research :)
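For example, a minimal sketch of passing options through optimset (using the classic option names; the particular tolerance values here are arbitrary, and f, x0, xdata12, ydata are the ones defined in the question):
opts = optimset('TolFun',1e-10, ...   % function tolerance (the "TolFun" John mentions)
                'TolX',1e-10, ...     % step size tolerance referred to in the exit message
                'Display','off');     % suppress the exit message entirely
[x, resnorm] = lsqcurvefit(f, x0, xdata12, ydata, [], [], opts);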


Accepted Answer

Star Strider on 25 Jan 2015
Nonlinear parameter estimation is sensitive to the initial parameter estimates. In R2014b, your code runs without errors or warnings, appears to converge successfully, and produces these parameter estimates:
x =
139.1046e+000 2.5285e+000
although starting with:
x0 = [-10 -10];
produces an equally acceptable fit (same ‘resnorm’) but with these parameter estimates:
x =
-40.8954e+000 -2.5285e+000
Note that this estimate for ‘b(1)’ differs from the first by 180°, with the sign of ‘b(2)’ flipped to compensate (since cosd(θ−180°) = −cosd(θ)). So, because ‘b(1)’ is an argument to cosd, there are infinitely many acceptable parameter estimates, all equivalent modulo 360°. I would experiment with several values for ‘x0’ if these are not acceptable, and choose those that make the most sense in the context of your problem.
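To see that degeneracy concretely, a quick sketch (reusing the f, xdata12 and ydata from the question) fits from both starting points and compares the residual norms:
x0a = [100 5];
x0b = [-10 -10];
[xa, resnorma] = lsqcurvefit(f, x0a, xdata12, ydata);
[xb, resnormb] = lsqcurvefit(f, x0b, xdata12, ydata);
[resnorma resnormb]   % essentially the same residual norm
[xa; xb]              % b(1) shifted by 180 deg, b(2) with its sign flipped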
The easiest way to plot it is to use stem3:
figure(1)
stem3(del, gam, tlag)
hold on
stem3(del, gam, f(x,xdata12))
hold off
grid on
It seems to produce a good fit. The message you got is not an error; lsqcurvefit issues it when it stops (here, because the step size fell below its tolerance) without being able to guarantee that the solution is a global, rather than merely a local, minimum. Given the large residual on so few points, that is not surprising, and I would not be concerned.
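If you would rather inspect why the solver stopped than silence the message, you can request the extra outputs (again just a sketch, using the same f, x0, xdata12 and ydata):
[x, resnorm, residual, exitflag, output] = lsqcurvefit(f, x0, xdata12, ydata);
exitflag   % a positive value means lsqcurvefit stopped at a (possibly local) minimum
output     % iteration count, first-order optimality, and the full stopping message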

More Answers (2)

John D'Errico on 25 Jan 2015
Edited: 25 Jan 2015
I'll look at your problem. But first, some comments about the data.
Six data points, with one of your independent variables taking on only two levels, is simply pleading to get crapola for results. So if you actually want some chance of estimating those parameters with any degree of confidence, next time get better data, and get more data.
Off my soapbox now; let's look at the model. Can we estimate those parameters with no iterative scheme needed at all?
Your model is
tlag = gam*cosd(del-b1)/b2;
Here b1 and b2 are unknowns. First, what can we do about b1? This basic identity applies, and it holds just as well in degrees as in radians:
cosd(u - v) = cosd(u)*cosd(v) + sind(u)*sind(v)
How does that help us?
tlag = gam*cosd(del)*cosd(b1)/b2 + gam*sind(del)*sind(b1)/b2
Now, we have no idea what the values of b1 and b2 are here. So let's try a simple transformation.
c1 = cosd(b1)/b2
c2 = sind(b1)/b2
Therefore
tlag = (gam*cosd(del))*c1 + (gam*sind(del))*c2
The nice thing about this model form is that c1 and c2 are TRIVIALLY estimable from the data, with no iterative scheme needed at all. No starting values are needed. Not even the Optimization Toolbox is needed.
gam=[580 580 580 1004 1004 1004]';
del=[0 240 120 210 150 90]';
tlag=[-100 -130 60 100 550 200]';
c1c2 = [gam.*cosd(del),gam.*sind(del)]\tlag
c1c2 =
-0.29895
0.25892
We can now recover the original b1 and b2 from your model, by undoing the transformation.
b1 = atan2d(c1c2(2),c1c2(1))
b1 =
139.1
Here b1 came from the ratio of c2 and c1, since we would have
c2/c1 = sind(b1)/cosd(b1) = tand(b1)
Note that I used the four-quadrant version of the inverse tangent (atan2d) there to recover b1, which also resolves the correct quadrant.
b2 = cosd(b1)/c1c2(1)
b2 =
2.5285
tlaghat = gam.*cosd(del-b1)/b2;
[gam,del,tlag,tlaghat]
ans =
580 0 -100 -173.39
580 240 -130 -43.357
580 120 60 216.75
1004 210 100 129.96
1004 150 550 389.91
1004 90 200 259.95
plot(del,tlag,'ro',del,tlaghat,'b+')
grid on
This is as good as you can get from the data. But again, see that NO iterative scheme was ever employed, nor were any starting values needed. The nice thing about this method is that it has no sensitivity at all to starting values. Just a few simple lines of code, and an understanding of basic trigonometry. Of course, model transformations like this are not always available, and some transformations that would seem to yield simple solutions can cause problems with the error structure of your data. In this case, there are no serious issues at all.
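As a cross-check, and only if you do have the Optimization Toolbox anyway, you could hand these closed-form values to lsqcurvefit as the starting point; since they already minimize the sum of squared residuals, the solver should stop almost immediately at essentially the same estimates. A sketch, assuming b1, b2, gam, del, tlag and tlaghat from above are in the workspace:
f = @(b,xdata12) xdata12(:,2).*cosd(xdata12(:,1)-b(1))/b(2);  % same model as in the question
xdata12 = [del gam];
[bfit, resnorm] = lsqcurvefit(f, [b1 b2], xdata12, tlag);
% bfit should agree with [b1 b2], and resnorm with sum((tlag - tlaghat).^2)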

Francisco on 25 Jan 2015
Thanks for all the answers. I was concerned because the residual gives very high values, and I thought that something was wrong. Thanks for the detail in your answers. I can't use more data, because this is all the data there is (just 6 points).
Thanks
