Least absolute deviation regression for a given data set

12 views (last 30 days)
I have this data
x = (1:10)';
y = 10 - 2*x + randn(10,1);
y(10) = 0;
how can I use least absolute value regression?

Accepted Answer

Bjorn Gustavsson on 22 Aug 2020
You can do it rather straightforwardly with fminsearch (or with similar tools on the File Exchange: fminsearchbnd, minimize, etc.):
M = [ones(size(x)),x]; % Matrix for linear LSQ-regression, we could do centering and scaling etc...
p0 = M\y; % straight least-square fit - to get ourselves a sensible start-guess (hopefully)
errfcn = @(p,y,M) sum(abs(y-M*p)); % L1 error-function
p1 = fminsearch(@(p) errfcn(p,y,M),p0); % L1-optimization
subplot(2,1,1)
plot(x,y,'.-')
hold on
plot(x,M*p0)
plot(x,M*p1)
subplot(2,1,2)
plot(x,y*0,'.-')
hold on
plot(x,y-M*p0,'.-')
plot(x,y-M*p1,'.-')
% For my test I got L1-error-function-value for the least-square-fit p0:
% errfcn(p0,y,M)
% ans =
% 22.058
% and for the L1-optimal parameters:
% >> errfcn(p1,y,M)
% ans =
% 20.067
This would generalize to more interesting problems too. Also have a look at Huber norms, for an error norm that is intermediate between L1 and L2.
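The Huber norm mentioned above drops into the same fminsearch pattern. As a sketch, continuing with M, y and p0 from the code above (the transition point delta = 1 is an arbitrary choice here, not something from the original answer):

```matlab
% Huber error-function: quadratic for small residuals, linear for large ones.
% delta is the transition point between the two regimes (a tuning parameter).
huber = @(r,delta) sum((abs(r) <= delta).*(0.5*r.^2) + ...
                       (abs(r) >  delta).*(delta*(abs(r) - 0.5*delta)));
pH = fminsearch(@(p) huber(y - M*p, 1), p0); % Huber-optimal parameters
```

Small delta behaves like the L1 fit; large delta approaches ordinary least squares.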
HTH
  8 Comments
NA on 6 Sep 2020 (edited 11 Sep 2020)
Thank you for taking the time to answer my question. I used your code and compared it with 'robustfit'.
Why could I not get the same regression?
Bjorn Gustavsson on 7 Sep 2020
Because they use different algorithms; the robustfit documentation describes the weighting used for its different settings of wfun and tune. Do the regressions differ by much? How do they vary as you vary the different tuning parameters? When using robust fitting you should always check the residuals and their relative contributions to the total error function.
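To make the comparison concrete, here is a sketch (assuming the Statistics and Machine Learning Toolbox, and reusing x, y and the errfcn idea from the accepted answer) of why the two fits differ:

```matlab
% robustfit uses iteratively reweighted least squares with a 'bisquare'
% weight function by default - a different objective than the L1 fit,
% so the estimates will generally not coincide.
b = robustfit(x, y);              % [intercept; slope], default 'bisquare'
M = [ones(size(x)), x];
errfcn = @(p) sum(abs(y - M*p));  % L1 error, as in the answer above
errfcn(b)                         % compare against errfcn(p1) from above
```

Varying wfun and tune in robustfit moves the solution between L2-like and L1-like behaviour.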


More Answers (1)

Bruno Luong on 22 Aug 2020 (edited 22 Aug 2020)
% Test data
x = (1:10)';
y = 10 - 2*x + randn(10,1);
y(10) = 0;
order = 1; % polynomial order
M = x(:).^(0:order);
m = size(M,2);
n = length(x);
Aeq = [M, speye(n,n), -speye(n,n)]; % unknowns are [P; u; v]: M*P + u - v = y
beq = y(:);
c = [zeros(1,m) ones(1,2*n)]'; % cost: minimize sum(u) + sum(v)
%
LB = [-inf(1,m) zeros(1,2*n)]';
% no upper bounds at all.
UB = [];
sol = linprog(c, [], [], Aeq, beq, LB, UB);
Pest = sol(m:-1:1); % the polynomial, coefficients flipped for polyval
% Check
clf(figure(1));
plot(x, y, 'or', x, polyval(Pest,x), 'b');
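As a quick sanity check (a sketch reusing x, y, M and Pest from the code above), the LP solution's L1 error should not exceed that of a plain least-squares fit, since linprog minimizes exactly that error over the same model class:

```matlab
Pls  = flipud(M\y);                     % least-squares coefficients, polyval order
e_lp = sum(abs(y - polyval(Pest, x)));  % L1 error of the linprog fit
e_ls = sum(abs(y - polyval(Pls,  x)));  % L1 error of the least-squares fit
% expect e_lp <= e_ls (up to solver tolerance)
```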
  3 Commenti
Bruno Luong on 22 Dec 2020 (edited 22 Dec 2020)
"2) I am totally new to the ways of linear programming, so I am wondering how come you have no inequality constraints? I am guessing you are saying the solution must adhere to the objective function, precisely? "
Because I don't need them. I formulate the problem as
M*P + u - v = y
where u and v are extra variables constrained to be nonnegative. At the optimum, for each data point one of the two is zero:
u = max(y - M*P, 0), v = max(M*P - y, 0)
so
argmin sum(u + v) is argmin sum(abs(M*P - y)), i.e. the L1 norm of the fit.
I could formulate it with inequalities instead, but the formulations are equivalent. There is no unique way to write an LP, as long as it does what we want.
And as a comment: every LP can be shown to be equivalent to a "canonical form" where all inequalities are replaced by linear equalities plus positivity bounds
argmin f'*x
A*x = b
x >= 0
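As a sketch of that reduction on a toy problem (all data here is hypothetical, just to illustrate the mechanics): an inequality a'*x <= b becomes an equality by appending a nonnegative slack variable s with zero cost.

```matlab
% minimize f'*x  subject to  a'*x <= b, x >= 0,
% rewritten as an equality-constrained LP over [x; s] with s >= 0.
f   = [1; 2]; a = [1; 1]; b = 3;  % hypothetical data
fs  = [f; 0];                     % slack variable costs nothing
Aeq = [a' 1];                     % a'*x + s = b
sol = linprog(fs, [], [], Aeq, b, zeros(3,1), []);
x_opt = sol(1:2); s_opt = sol(3); % a'*x_opt + s_opt equals b
```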
Terry nichols on 22 Dec 2020 (edited 25 Dec 2020)
Thanks much for all of your help!

