Why is there a large numerical error in simple arithmetic?

2 views (last 30 days)
x = rand(1,1);
x_1000 = x*1000;
errors_ = [];
for N = -20 : 20
errors_(end+1) = x*10^-N - x_1000 *10^-(N+3);
end
% plot errors
figure
plot(errors_, '.')
% display errors
unique(errors_)
ans = 1×21
1.0e+03 * -2.0480 -0.0020 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 -0.0000 0 0.0000
When I run this code, I see errors at some values of N.
Why is that? It is unexpected and breaks my code.
Interestingly, these three snippets give different results:
% returns ZERO error
error_ = []; for N = -20:20; error_(end+1) = (x * 10^N) - (x_1000 * 10^-3 * 10^N); end; figure; plot(error_, '.'); unique(error_)
% returns ZERO error
error_ = []; for N = -20:20; error_(end+1) = (x * 10^(-N)) - (x_1000 * 10^-3 * 10^(-N)); end; figure; plot(error_, '.'); unique(error_)
% returns ERROR!
error_ = []; for N = -20:20; error_(end+1) = (x * 10^(-N)) - (x_1000 * 10^(-N-3)); end; figure; plot(error_, '.'); unique(error_)

Answers (2)

Matt J
Matt J on 18 Jul 2023
Edited: Matt J on 18 Jul 2023
You need to look at the relative errors. You will see that they are super-small:
x = rand(1,1);
x_1000 = x*1000;
N = -20 : 20;
errors = abs(x*10.^-N - x_1000 *10.^-(N+3));
rel_errors=errors./(x*10.^-N);
plot(N,rel_errors, '.--')
  4 Comments
Steven Lord
Steven Lord on 18 Jul 2023
Look at the numbers that you're subtracting when N is -20. I ran this a couple of times to generate different values of x; in the previous run the difference was exactly 0.
x = rand(1,1);
x_1000 = x*1000;
N = -20;
format longg
firstTerm = x*10^-N
firstTerm =
7.31485847971567e+19
secondTerm = x_1000*10^-(N+3)
secondTerm =
7.31485847971567e+19
The error is around the order of machine epsilon.
difference = firstTerm-secondTerm
difference =
-8192
eps(firstTerm)
ans =
8192
In absolute terms a difference of 8192 is large. But compared to the numbers you're subtracting it's just barely non-negligible. It's like taking $100 from Scrooge McDuck's money bin; $100 is nothing to sneeze at, but Scrooge has plenty more where that came from.
If a difference around eps breaks your code, I'd be concerned about how robust your algorithm is.
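(As a quick cross-check, sketched here in Python, which uses the same IEEE 754 double precision as MATLAB: the spacing between adjacent doubles near 7.3e19 really is 8192, so the difference above is a single unit in the last place.)

```python
import math

# Spacing (ulp) of the double nearest to the firstTerm value above.
x = 7.31485847971567e19
print(math.ulp(x))       # -> 8192.0
print(math.ulp(x) / x)   # relative spacing, on the order of eps (~1e-16)
```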
Stephen23
Stephen23 on 18 Jul 2023
Edited: Stephen23 on 18 Jul 2023
"Is this behavior typical in Matlab?"
It is typical behavior of all numerical applications that use binary floating-point numbers, which is nearly all of them, because it lets them use the very fast floating-point operations built into your computer's hardware.
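A minimal illustration, in Python here, though the result is the same in any language that uses IEEE 754 binary doubles:

```python
# 0.1, 0.2 and 0.3 are each rounded to the nearest binary double,
# so the familiar decimal identity does not survive the rounding.
print(0.1 + 0.2 == 0.3)   # -> False
print(0.1 + 0.2)          # -> 0.30000000000000004
```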



Walter Roberson
Walter Roberson on 18 Jul 2023
10^N is not exactly representable in any binary floating point representation when N < 0 . For example, 10^-1 = 1/10 = 0.1 cannot be exactly represented as a finite-length binary fraction.
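One way to see this (sketched in Python, whose floats are the same IEEE 754 doubles) is to recover the exact rational value the hardware actually stores for 0.1:

```python
from fractions import Fraction

# The double nearest to 0.1, written as an exact fraction: the
# denominator is a power of two (2**55), so it cannot equal 1/10.
stored = Fraction(0.1)
print(stored)                     # -> 3602879701896397/36028797018963968
print(stored == Fraction(1, 10))  # -> False
```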
The reason is mathematical, and it is exactly the same reason that 1/3 cannot be exactly represented in a finite-length decimal representation. For example, with 20 decimal digits, 1/3 --> 0.33333333333333333333, and if you multiply that by 3 you get 0.99999999999999999999, not exactly 1. It would not matter whether you used 10 decimal digits or 10000 decimal digits: 1/3 cannot be represented with a finite number of decimal fraction digits. 1/7 also cannot be exactly represented as a finite decimal fraction, nor can 1/9 or 1/11 or 1/13 or 1/14 or 1/15 ...
Likewise, there are a lot of fractions that cannot be exactly represented in any finite-length binary fraction, not even if you used 10000 binary fraction digits.
For any integer base M, 1/(M+1) cannot be exactly represented in any finite number of fraction digits of the base -- if you were to use base 60 numbers then 1/61 would not be exactly representable in any finite number of base-60 fraction digits.
MATLAB uses hardware standard IEEE 754 Binary Double Precision Floating Point -- it relies on your floating point hardware in your CPU, and so is doing exactly as well as your hardware floating point supports.
Different programming languages might postpone the problem by using larger numbers of binary digits, but that typically requires implementing the floating point in software, which is much slower. Some Intel chips and some AMD chips support extended-precision (80-bit) floating point in hardware, but that has some oddities.
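This is also why the three snippets in the question behave differently: they group the powers of ten differently, and each intermediate product is rounded separately. A minimal Python sketch of the same grouping effect:

```python
# Floating-point arithmetic is not associative: each intermediate
# result is rounded, so regrouping can change the final double.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)   # -> False
print(a, b)     # -> 0.6000000000000001 0.6
```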

Release

R2022b