Hyperbolic Least Squares Interpolation

Hello everybody,
I have four data points from trials. They appear to follow a hyperbolic trend, so I want to find the least-squares regression of those values with a function of the form a/(b*x + c), where c is equal to zero.
Does MATLAB provide a standard function like polyfit for such a problem? Or is it possible to transform the data (a coordinate transformation) so that polyfit can be applied?
Thanks for your help! Georg

Accepted Answer

Star Strider on 10 Sep 2016
Edited: Star Strider on 10 Sep 2016
You can use core MATLAB functions to do the regression:
x = ...; % Independent Variable
y = ...; % Dependent Variable
fcn1 = @(b,x) b(1)./(b(2).*x + b(3)); % Objective Function #1
fcn2 = @(b,x) b(1)./(b(2).*x); % Objective Function #2
SSECF = @(b) sum((y - fcn2(b,x)).^2); % Sum-Squared-Error Cost Function (Use ‘fcn2’ Here)
B0 = [1; 1]; % Initial Parameter Estimates
[B,SSE] = fminsearch(SSECF, B0); % Estimate Parameters (starting from ‘B0’)
xv = linspace(min(x), max(x)); % High-Resolution Vector For Plotting The Fit
figure(1)
plot(x, y, 'bp') % Data
hold on
plot(xv, fcn2(B,xv), '-r') % Fitted Curve
hold off
grid
I tested this with random vectors and it ran without error.
EDIT Note that the two-parameter model you want really has only one identifiable parameter: since a/(b*x) = (a/b)/x, the data determine only the ratio a/b, not a and b individually. A simple ratio (or product) of parameters will not uniquely identify either of them, only the ratio (or product). The three-parameter model actually makes sense.
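To see that point concretely, here is a small sketch (the x and y values are made-up placeholders, not the original measurements) showing that ‘fcn2’ depends only on the ratio b(1)/b(2), and that the equivalent one-parameter model y = k/x can also be fitted by ordinary linear least squares, which also answers the polyfit/coordinate-transformation part of the question:
x = [0.5 1 2 4]; % Hypothetical example data (assumed, for illustration only)
y = [6.1 2.9 1.6 0.7];
fcn2 = @(b,x) b(1)./(b(2).*x); % Two-Parameter Model a/(b*x)
fcn2([3; 2], x) - fcn2([1.5; 1], x) % Identical outputs: only the ratio a/b = 1.5 is identified
k = (1./x(:)) \ y(:); % One-Parameter Model y = k/x, solved by linear least squares
% Equivalently, the transformation u = 1./x makes the model linear in u,
% so polyfit(1./x, y, 1) fits k plus an intercept (omit the intercept if c = 0 is required).
xv = linspace(min(x), max(x));
figure(2)
plot(x, y, 'bp', xv, k./xv, '-r') % Data and fitted one-parameter curve
grid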
  2 Comments
Georg Söllinger on 10 Sep 2016
Thanks a lot for your help, it works very well!! So this approach should work for any arbitrary function, won't it?
Star Strider on 10 Sep 2016
My pleasure!
It should work for any well-characterised objective function you give it. The ‘B0’ vector has to have one element for each parameter that you want to estimate. The closer the initial estimates are to the ‘best’ fit (in both magnitude and sign), the better.
The Nelder-Mead algorithm used in fminsearch works best when it is minimising at most 7 parameters. Since it is derivative-free, it is more likely to converge than those that use a Jacobian matrix.
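For example, a minimal sketch of the three-parameter version is the same pattern with ‘fcn1’ and a three-element ‘B0’ (the data values here are again assumed placeholders for illustration):
x = [0.5 1 2 4]; % Placeholder data (assumed, for illustration only)
y = [2.1 1.3 0.75 0.42];
fcn1 = @(b,x) b(1)./(b(2).*x + b(3)); % Three-Parameter Model a/(b*x + c)
SSECF = @(b) sum((y - fcn1(b,x)).^2); % Sum-Squared-Error Cost Function
B0 = [1; 1; 1]; % One Initial Estimate Per Parameter
[B,SSE] = fminsearch(SSECF, B0); % Estimate All Three Parameters
xv = linspace(min(x), max(x));
figure(3)
plot(x, y, 'bp', xv, fcn1(B,xv), '-r') % Data and fitted curve
grid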


More Answers (0)
