Need help to code the power law fit using least squares regression

4 views (last 30 days)
I have the following data:
X = [0.17,0.17,0.5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,3.5,3.5,4,4,4.5,6,6,10,10,10,12,15,18,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,37,49,72,72,72,96,96,120,192];
Y = [30,30,60,50,15,40,40,30,100,30,17,29,44,40,125,52,102,64,52,85,34,12.7,62.3,12.8,50,169,33.3,40,15,20,25,10.4,12.7,17.8,6.25,7.5,12.5,6.25,6.33,5.29,7.42,17.7,16.7,19.4,28.6,6.25,10.21,4.46,5.5,30,8.42,13.96,6.53,6.56,6.94,6.72,2.65,2.94,4.17,4.18];
I need code to derive the power law Y = 14.82*X^(-0.39) (which is of the form y = b*x.^m) using constrained least squares. Could you please help me with the code to get b and m as 14.82 and -0.39 respectively? I think we need to apply some optimization to get the exact values.
Any help will be much appreciated.

Accepted Answer

John D'Errico
John D'Errico on 2 Jan 2019
Edited: John D'Errico on 2 Jan 2019
Frustrating. You mention constrained least squares in your question, but nowhere did you EVER say what the constraint was. I had to dig way down into the comments to find anything.
Your model is:
Y = a*X.^b
The simple answer is to first plot EVERYTHING. And that model implies an important suggestion, if the model is even remotely valid. I.e., you probably want to log the model. That is, if we log things to get...
log(Y) = log(a) + log(X)*b
then we should see a roughly linear relationship, between log(X) and log(Y).
loglog(X,Y,'o')
grid on
Hmm. That does suggest a roughly linear model could be appropriate in log space. Of course if we try that, we will see:
BA = polyfit(log(X),log(Y),1)
BA =
-0.443558076798707 3.77850311606457
Admittedly, this is not a nonlinear regression, but it should give us some feeling for what is happening. And, we would need to exponentiate the constant term to get back to your original parameters. That would yield roughly 44.
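For concreteness, that exponentiation step is just this (a_unconstrained is an illustrative name of mine):
a_unconstrained = exp(BA(2))   % about 43.74 here
But remember this unconstrained fit passes through the middle of the data, not below it.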
So then I started reading through your comments. As well, I plotted the data, and the model you posed as "correct". The model did not come close to even passing through the data. Finally, I saw the only comment where you stated the blasted objective of the problem!!!!!!!!!!
You want to solve a least squares problem, subject to the constraint that a*X(i)^b <= Y(i) for every i. So your constraint is non-negativity of the residuals. Could you maybe have told us that in the first place?
The simplest way to solve this problem in a nonlinear least squares sense is to use fmincon. But we can use the log trick from above to convert the problem to a linear one, then use lsqlin. (Both of these tools are in the Optimization Toolbox, which I gather from another of your comments you do not have.) It does help us if you tell us information like that up front.
N = numel(X);
Amat = [log(X'),ones(N,1)];
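% (Comment added:) lsqlin(C,d,A,b) minimizes norm(C*x - d) subject to
% A*x <= b, so passing Amat and log(Y') again as the constraint pair
% forces log(a) + b*log(X) <= log(Y), i.e., no negative residual in
% log space.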
BA = lsqlin(Amat,log(Y'),Amat,log(Y'))
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
<stopping criteria details>
BA =
-0.381649116037055
2.70805020110219
exp(BA(2))
ans =
14.9999999999997
So, your model would be estimated as:
Yhat = 14.9999999999997 * X.^(-0.381649116037055)
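A quick sanity check (my addition, not part of the original answer) that the constraint really holds:
a = exp(BA(2));       % multiplier, about 15
b = BA(1);            % exponent, about -0.3816
min(Y - a*X.^b)       % should be >= 0, up to roundoff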
Again, that uses a linear regression, and it uses a tool from the Optimization Toolbox, which you don't have, so let me find a reasonable compromise.
We can always use a penalty constrained fminsearch. So, something like this, as an objective function:
function R = fmsconobj(ab,X,Y)
% Penalized objective: if any residual would go negative, the candidate
% violates the lower-bound constraint, so return an infinite penalty.
R = Y - ab(1) * X.^ab(2);
if any(R < 0)
    R = inf;
else
    R = norm(R);
end
end
Now, call fminsearch:
ab = fminsearch(@(ab) fmsconobj(ab,X,Y),[10, -.5])
ab =
14.9999818205206 -0.381653090395052
That does what you want, but you do need to be careful. Start from the wrong place, and fminsearch will fail.
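One reasonable way to pick a starting point (a sketch of my own, not from the original answer): take the exponent from the unconstrained log-log fit, then shrink the multiplier until the curve sits below every data point, so the search does not begin at an infinite penalty.
BA0 = polyfit(log(X), log(Y), 1);   % [slope, intercept] in log space
b0 = BA0(1);                        % starting exponent, about -0.44
a0 = min(Y ./ X.^b0);               % largest multiplier leaving all residuals >= 0
ab = fminsearch(@(ab) fmsconobj(ab,X,Y), [a0, b0])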
10 Comments
John D'Errico
John D'Errico on 4 Jan 2019
Let me explain how the fminbnd solution works, as it is considerably better. The solution I wrote is based on a technique called partitioned or separable least squares.
Suppose you had a value for b in this model. I do not require that it be optimal, only that you picked SOME value. Then it is pretty easy to show that
aest = min(Y ./ X.^b)
is optimal, in the sense that conditional on the value of b, this estimator for a minimizes the sum of squares of the residuals in this model, while still ensuring that NO residual is ever negative.
Consider the alternatives. If we were to increase aest by even a tiny amount, then at least ONE residual would be negative. Conversely, if we decrease aest by even a tiny amount, then the sum of squares of the residuals will be increased. Both of these claims can be proved fairly easily for this problem. In fact, we can think of a as estimated here as a FUNCTION of b, as well as the data.
Given that, the objective function I wrote computes the value of a for ANY b that is supplied. As well, it computes the norm of the residuals. Consider the first set of data you posed.
X = [0.17,0.17,0.5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,3.5,3.5,4,4,4.5,6,6,10,10,10,12,15,18,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,37,49,72,72,72,96,96,120,192];
Y = [30,30,60,50,15,40,40,30,100,30,17,29,44,40,125,52,102,64,52,85,34,12.7,62.3,12.8,50,169,33.3,40,15,20,25,10.4,12.7,17.8,6.25,7.5,12.5,6.25,6.33,5.29,7.42,17.7,16.7,19.4,28.6,6.25,10.21,4.46,5.5,30,8.42,13.96,6.53,6.56,6.94,6.72,2.65,2.94,4.17,4.18];
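The function fminbndobj itself was defined in an earlier comment that is not reproduced here; from the description above, it is presumably along these lines:
function [R,aest] = fminbndobj(b,X,Y)
% Separable objective: for a fixed exponent b, the largest multiplier
% that leaves no residual negative is min(Y ./ X.^b). R comes first
% because fminbnd minimizes the first output.
aest = min(Y ./ X.^b);
R = norm(Y - aest*X.^b);
end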
Pick any value of b.
[R,aest] = fminbndobj(-0.5,X,Y)
R =
292.088457214875
aest =
12.369316876853
[R,aest] = fminbndobj(-0.2,X,Y)
R =
310.044765623414
aest =
6.60237397996224
I've chosen two examples there. As you see, the objective function returns the norm of the residuals, as well as the value of a, conditionally estimated for the corresponding b.
So now we can just minimize the value of R, as a function of b. fminbnd will do that.
[best,fval] = fminbnd(@(b) fminbndobj(b,X,Y),-3,-.01)
best =
-0.381650656104941
fval =
280.567547771458
Now, recover the value of a, corresponding to best.
[R,aest] = fminbndobj(best,X,Y)
R =
280.567547771458
aest =
15
So the globally optimal set of parameters [aest,best] is here [15, -0.381650656104941].
Again, the general method is described in Seber & Wild's book on nonlinear regression as the method of separable nonlinear least squares. I've also seen it described under the name of partitioned nonlinear least squares.
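For completeness, the whole separable procedure condenses to a few lines (a sketch built on the fminbndobj above):
best = fminbnd(@(b) fminbndobj(b,X,Y), -3, -0.01);
[~, aest] = fminbndobj(best, X, Y);
fprintf('Y = %.4f * X^(%.4f)\n', aest, best)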


More Answers (2)

Michael Madelaire
Michael Madelaire on 2 Jan 2019
Here is my attempt. It does not use a nonlinear fit like the suggestion above; instead I fit in log space.
%% Init
clear all; close all; clc;
%% Data
X = [0.17,0.17,0.5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,3.5,3.5,4,4,4.5,6,6,10,10,10,12,15,18,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,37,49,72,72,72,96,96,120,192];
Y = [30,30,60,50,15,40,40,30,100,30,17,29,44,40,125,52,102,64,52,85,34,12.7,62.3,12.8,50,169,33.3,40,15,20,25,10.4,12.7,17.8,6.25,7.5,12.5,6.25,6.33,5.29,7.42,17.7,16.7,19.4,28.6,6.25,10.21,4.46,5.5,30,8.42,13.96,6.53,6.56,6.94,6.72,2.65,2.94,4.17,4.18];
y_log = log(Y);
x_log = log(X);
%% Plot data
figure;
subplot(2,1,1);
plot(X, Y, '.', 'MarkerSize', 10);
grid on;
title('True data');
subplot(2,1,2);
plot(x_log, y_log, '.', 'MarkerSize', 10);
grid on;
title('Log of data');
%% build data kernel
G = ones(length(X),2);
G(:,2) = x_log;
%% Solve LSQ
model = G\y_log';
%% Extract model coefficients
b = exp(model(1));
m = model(2); % the log-space slope is the exponent itself; do not exponentiate it
fprintf('\nThe fit to your function is Y=%.3f*X^{%.3f}\n',b,m)
%% Predict
x_pred = -2:0.01:6;
G_pred = ones(length(x_pred), 2);
G_pred(:, 2) = x_pred;
y_pred = G_pred*model;
%% Plot fit
figure;
subplot(2,1,1);
plot(X, Y, '.', 'MarkerSize', 10);
hold on;
plot(exp(x_pred), exp(y_pred));
grid on;
title('True data - x truncated at 200');
legend('Data', 'Fit');
xlim([0,200]);
subplot(2,1,2);
plot(x_log, y_log, '.', 'MarkerSize', 10);
hold on;
plot(x_pred, y_pred);
grid on;
title('Log of data');
legend('Data', 'Fit');
%% Residuals
G = ones(length(X),2);
G(:,2) = x_log;
r_my_model = exp(y_log' - G*model); % transpose needed so the subtraction stays a vector
your_model = [14.82; -.39];
r_your_model = Y-your_model(1)*X.^your_model(2);
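% Note (added): r_my_model above is the ratio Y./Yhat (the log-space
% residual, exponentiated), while r_your_model is the plain difference
% Y - Yhat, so the two histograms are not on directly comparable scales.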
figure;
subplot(2,1,1);
histogram(r_my_model, 'binWidth', 0.5);
grid on;
title('My model - residuals');
subplot(2,1,2);
histogram(r_your_model, 'binWidth', 0.5);
grid on;
title('Your model - residuals');
[Image: res.jpg — histograms of the residuals for the two models]
1 Comment
geethu th
geethu th on 2 Jan 2019
Thank you for your answer.
But I want the code to get the coefficients b and m as 14.82 and -0.39 respectively for the given X and Y. The fitted equation for the given X and Y is "Y = 14.82 X^(-0.39)", which is a lower bound for the data, obtained using constrained least squares.
The final plot will look like this (plotted using the loglog function):
[Image: caine.jpg — log-log plot of the data with the lower-bound fit]
Actual X-Y
X=[0.17,0.17,0.5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,3.5,3.5,4,4,4.5,6,6,10,10,10,12,15,18,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,37,49,72,72,72,96,96,120,192];
Y = [30,30,60,50,15,40,40,30,100,30,17,29,44,40,125,52,102,64,52,85,34,12.7,62.3,12.8,50,169,33.3,40,15,20,25,10.4,12.7,17.8,6.25,7.5,12.5,6.25,6.33,5.29,7.42,17.7,16.7,19.4,28.6,6.25,10.21,4.46,5.5,30,8.42,13.96,6.53,6.56,6.94,6.72,2.65,2.94,4.17,4.18];
Calculated Y (the blue line in the graph, based on the equation):
Y= [29.5782942,29.5782942,19.42003025,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,14.82,9.655414319,9.09204688,9.09204688,8.630675876,8.630675876,8.243190249,7.36833251,7.36833251,6.037375717,6.037375717,6.037375717,5.622992674,5.154334656,4.800560271,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,4.291072175,3.624511911,3.248416684,2.795686891,2.795686891,2.795686891,2.498977942,2.498977942,2.290696318,1.907044049];
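For reference, the calculated Y above is just the candidate model evaluated at the data points:
Ycalc = 14.82 * X.^(-0.39);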



Torsten
Torsten on 2 Jan 2019
function main
X = [0.17,0.17,0.5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,3.5,3.5,4,4,4.5,6,6,10,10,10,12,15,18,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,37,49,72,72,72,96,96,120,192];
Y = [30,30,60,50,15,40,40,30,100,30,17,29,44,40,125,52,102,64,52,85,34,12.7,62.3,12.8,50,169,33.3,40,15,20,25,10.4,12.7,17.8,6.25,7.5,12.5,6.25,6.33,5.29,7.42,17.7,16.7,19.4,28.6,6.25,10.21,4.46,5.5,30,8.42,13.96,6.53,6.56,6.94,6.72,2.65,2.94,4.17,4.18];
% Fit data
p0 = [14.82,-0.39];
p = fmincon(@(p)fun(p,X,Y),[],[],[],[],[],[],@(p)nonlcon(p,X,Y))
end
function obj = fun(p,X,Y)
obj = sum((p(1)*X.^p(2) - Y).^2);
end
function [c,ceq] = nonlcon(p,X,Y)
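% (Comment added:) fmincon enforces c <= 0 elementwise, so this
% constraint keeps the fitted curve at or below every data point.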
c = p(1)*X.^p(2) - Y;
ceq = [];
end
12 Comments
geethu th
geethu th on 4 Jan 2019
I am getting the following errors while running his code.
Error using fmincon (line 221)
FMINCON requires the following inputs to be of data type double: 'UB'.
Error in powerLowFit_V1 (line 6)
p = fmincon(@(p)fun(p,X,Y),[],[],[],[],[],[],@(p)nonlcon(p,X,Y));
Torsten
Torsten on 4 Jan 2019
Use
p = fmincon(@(p)fun(p,X,Y),p0,[],[],[],[],[],[],@(p)nonlcon(p,X,Y));
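That is because fmincon's argument order is fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon); with the starting point missing from the original call, the nonlcon handle slid into the ub slot, which is exactly what the 'UB' data-type error was complaining about.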

