Different results using curve fitting app

Hello,
I want to fit a function of the form y = a*x^b + c to the x and y arrays below. I am using the Curve Fitting app with the default options and a Custom Equation. The issue I encountered is that the app won't automatically give me the highest-R2 fit: at the beginning, the generated fit is bad, and I had to manually test values of b to find a high R2. Can the app give me the fit with the absolutely highest R2 for the equation I defined? Thanks.
x = [262088, 323208, 390728, 464648, 544968, 631688];
y = [0.629, 0.64, 0.93, 0.972, 1.355, 1.56];

Accepted Answer

Matt J on 20 Oct 2022
Edited: Matt J on 20 Oct 2022
The performance of a custom fit generally does depend on the StartPoint that you've selected.
You should probably use the non-custom 'power2' fit type:
x = [262088, 323208, 390728, 464648, 544968, 631688]';
y = [0.629, 0.64, 0.93, 0.972, 1.355, 1.56]';
[fobj,gof]=fit(x,y,'power2')
fobj =
     General model Power2:
     fobj(x) = a*x^b+c
     Coefficients (with 95% confidence bounds):
       a =  1.789e-11  (-6.681e-10, 7.039e-10)
       b =      1.868  (-0.9566, 4.692)
       c =      0.361  (-0.5579, 1.28)
gof =
  struct with fields:
           sse: 0.0220
       rsquare: 0.9691
           dfe: 3
    adjrsquare: 0.9485
          rmse: 0.0856
plot(fobj,x,y)
3 Comments
Matt J on 20 Oct 2022
Edited: Matt J on 20 Oct 2022
No, the StartPoint is an initial guess, supplied by you, for all of the unknown parameters. You can set it under Fit Options in the app.
It is usually less important when you are not doing a custom fit, which, as I said, you shouldn't be doing in this case.
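For readers working from the command line rather than the app, here is a minimal sketch of supplying a StartPoint to the same custom equation (the start values are illustrative guesses, not taken from the original post):
x = [262088, 323208, 390728, 464648, 544968, 631688]';
y = [0.629, 0.64, 0.93, 0.972, 1.355, 1.56]';
% Custom equation y = a*x^b + c with an explicit StartPoint for [a, b, c].
ft = fittype('a*x^b + c', 'independent', 'x', 'coefficients', {'a','b','c'});
[fobjCustom, gofCustom] = fit(x, y, ft, 'StartPoint', [1e-10, 2, 0.5]);
gofCustom.rsquare   % compare with the rsquare reported for the power2 fit above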
Jen W on 21 Oct 2022
The power2 solution you gave is very elegant. Thank you.

More Answers (2)

Torsten on 20 Oct 2022
Edited: Torsten on 20 Oct 2022
Scale your problem:
x = [262088, 323208, 390728, 464648, 544968, 631688];
y = [0.629, 0.64, 0.93, 0.972, 1.355, 1.56];
xscaled = 1e-6*x;
fun = @(p,xscaled) p(1)*xscaled.^p(2)+p(3);
fun1 = @(p) fun(p,xscaled)-y;
p0 = [1 1 1];
options = optimset('MaxFunEvals',10000,'MaxIter',10000);
p = lsqnonlin(fun1,p0,[],[],options);
Local minimum found. Optimization completed because the size of the gradient is less than the value of the optimality tolerance.
p(1) = p(1)*10^(-6*p(2));
a = p(1)
a = 1.7907e-11
b = p(2)
b = 1.8677
c = p(3)
c = 0.3610
hold on
plot(x,y,'o')
xx = 0:1000:x(end);
plot(xx,fun(p,xx))
xlim([0 6.5e5])
ylim([0 1.6])
grid on
hold off
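A note on the p(1) = p(1)*10^(-6*p(2)) line above: because the fit was done against xscaled = 1e-6*x, we have a_s*(1e-6*x)^b = (a_s*10^(-6*b))*x^b, so that multiplication converts the coefficient back to the original x scale. If you also want the R2 of this fit, here is a short sketch using the standard definition (not part of the original answer):
% R^2 for the lsqnonlin solution; assumes p, fun, x and y from the code above,
% with p(1) already rescaled to the original x units.
yFit = fun(p, x);
ssRes = sum((y - yFit).^2);        % residual sum of squares
ssTot = sum((y - mean(y)).^2);     % total sum of squares
rsq = 1 - ssRes/ssTot              % should be close to the rsquare in the accepted answer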

Image Analyst on 20 Oct 2022
Try fitnlm(). Full code is in test7 with your data. The other m-file is my old, existing demo.
% Uses fitnlm() to fit a non-linear model (a power law curve) through noisy data.
% Requires the Statistics and Machine Learning Toolbox, which is where fitnlm() is contained.
% Initialization steps.
clc; % Clear the command window.
close all; % Close all figures (except those of imtool.)
clear; % Erase all existing variables. Or clearvars if you want.
workspace; % Make sure the workspace panel is showing.
format long g;
format compact;
fontSize = 20;
% Create the X and Y coordinates.
X = [262088, 323208, 390728, 464648, 544968, 631688];
Y = [0.629, 0.64, 0.93, 0.972, 1.355, 1.56];
% Now we have noisy training data that we can send to fitnlm().
% Plot the noisy initial data.
plot(X, Y, 'b*', 'LineWidth', 2, 'MarkerSize', 20);
grid on;
% Convert X and Y into a table, which is the form fitnlm() likes the input data to be in.
tbl = table(X', Y');
% Define the model as Y = a * (x .^ b) + c
% Note how this "x" of modelfun is related to big X and big Y.
% x(:, 1) is actually X and x(:, 2) is actually Y - the first and second columns of the table.
modelfun = @(b,x) b(1) * x(:, 1) .^ b(2) + b(3);
beta0 = [.1, .4, -2]; % Guess values to start with. Just make your best guess. They don't have to match the [a,b,c] values from above because normally you would not know those.
% Now the next line is where the actual model computation is done.
mdl = fitnlm(tbl, modelfun, beta0);
% Now the model creation is done and the coefficients have been determined.
% YAY!!!!
% Extract the coefficient values from the model object.
% The actual coefficients are in the "Estimate" column of the "Coefficients" table that's part of the model.
coefficients = mdl.Coefficients{:, 'Estimate'}
% Create smoothed/regressed data using the model:
yFitted = coefficients(1) * X .^ coefficients(2) + coefficients(3);
% Evaluate the model at a lot more points, including points in between the training points.
X1000 = linspace(X(1), X(end), 1000);
yFitted1000 = coefficients(1) * X1000 .^ coefficients(2) + coefficients(3);
% Now we're done and we can plot the smooth model as a red line going through the noisy blue markers.
hold on;
% Plot red diamonds fitted values at the training X values.
plot(X, yFitted, 'rd', 'LineWidth', 2, 'MarkerSize', 10);
% Plot fitted values at all the 1000 X values with a red line.
plot(X1000, yFitted1000, 'r-', 'LineWidth', 2);
grid on;
title('Power Law Regression with fitnlm()', 'FontSize', fontSize);
xlabel('X', 'FontSize', fontSize);
ylabel('Y', 'FontSize', fontSize);
legendHandle = legend('Noisy Training Y', 'Fitted Y at training X', 'Fitted Y everywhere', 'Location', 'north');
legendHandle.FontSize = 25;
message = sprintf('Coefficients for Y = a * X ^ b + c:\n a = %8.5f\n b = %8.5f\n c = %8.5f',...
coefficients(1), coefficients(2), coefficients(3))
text(min(X), 1.2, message, 'FontSize', 23, 'Color', 'r', 'FontWeight', 'bold', 'Interpreter', 'none');
% Set up figure properties:
% Enlarge figure to full screen.
set(gcf, 'Units', 'Normalized', 'OuterPosition', [0, 0.04, 1, 0.96]);
% Get rid of tool bar and pulldown menus that are along top of figure.
% set(gcf, 'Toolbar', 'none', 'Menu', 'none');
% Give a name to the title bar.
set(gcf, 'Name', 'Demo by ImageAnalyst', 'NumberTitle', 'Off')
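Since R2 comes up in the comments below, note that the model object returned by fitnlm already stores it; a quick check, assuming the mdl variable from the script above:
% R^2 (ordinary and adjusted) are properties of the NonLinearModel returned by fitnlm.
mdl.Rsquared.Ordinary
mdl.Rsquared.Adjusted
% Fitted values and raw residuals are also available on the model object:
% mdl.Fitted, mdl.Residuals.Raw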
2 Comments
Jen W on 20 Oct 2022
Thanks for your response. Your b value is very different from the other two answers somehow. What is the R2 error?
Image Analyst on 21 Oct 2022
It's easier to look at the mean absolute error, MAE. For my coefficients:
>> mean(abs(yFitted - Y))
ans =
0.0715720322205556
What is it for the other approaches? If we look at Matt's numbers:
X = [262088, 323208, 390728, 464648, 544968, 631688];
Y = [0.629, 0.64, 0.93, 0.972, 1.355, 1.56];
a = 1.789e-11;
b = 1.868;
c = 0.361;
yFittedMatt = a * X.^b + c;
mean(abs(yFittedMatt - Y))
ans = 0.0569
If we look at Torsten's code
a = 1.7907e-11;
b = 1.8677;
c = 0.3610;
yFittedTorsten = a * X.^b + c;
mean(abs(yFittedTorsten - Y))
ans = 0.0566
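For the R2 that was asked about, it can be computed for any of these coefficient sets in the same spirit (a sketch reusing Y, yFittedMatt and yFittedTorsten from above; not part of the original comment):
% Standard coefficient of determination for a vector of fitted values.
rsq = @(yFit) 1 - sum((Y - yFit).^2) / sum((Y - mean(Y)).^2);
rsq(yFittedMatt)       % Matt's power2 coefficients
rsq(yFittedTorsten)    % Torsten's lsqnonlin coefficients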
