Different behaviour of 'Integer Rounding Mode' between R2019a and R2022b

Hello, everyone,
I am experiencing a rounding problem, and I have noticed that, with the same 'Integer Rounding Mode', two versions of Simulink behave differently. Let me explain.
My goal is to interpret a uint16 as a fixdt(0, 11, 0.0025, -1), and then requantize it in steps of 0.1, rounding up at the halfway point.
To do this I used a uint16 Constant block that feeds a 'Data Type Conversion (SI)' block to fixdt(0, 11, 0.0025, -1), which in turn feeds another 'Data Type Conversion' block to fixdt(0, 11, 0.1, 0). The 'Integer Rounding Mode' of the first converter is left at the default Floor (this should not matter, since no rounding occurs there), while for the second one I set 'Round', which, according to the documentation, I expect to round up in the positive range.
The problem is that in R2019a the value is rounded down, while in R2022b it is rounded up. Could I have done something wrong, or is there something I am not considering?
Attached are screenshots of the two simulations and the models used.
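For reference, a minimal command-line sketch of the same cast using fi objects rather than the actual blocks (the input value here is just an example: a stored integer of 1900 corresponds to a real-world value of 3.75, exactly halfway between two steps of 0.1):
% Sketch only, not the Simulink blocks themselves
Tin  = numerictype(0, 11, 0.0025, -1);     % input type: slope 0.0025, bias -1
Tout = numerictype(0, 11, 0.1, 0);         % output type: slope 0.1, bias 0
F    = fimath('RoundingMethod', 'Round');  % same 'Round' mode as the second converter
x = fi(3.75, Tin);                         % stored integer int(x) == 1900
y = fi(x, Tout, F)                         % ideal cast gives 3.8 (stored integer 38)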

Answers (1)

Will Walker on 11 Jul 2024
Hello Pasquale,
In R2020a, we introduced an improvement to how we quantize the net slope, specifically when using small (less than 16-bit) fixed-point types.
In R2019b and earlier, we would use the input wordlength to perform the quantization.
Let us look at an example quantization (which will be used below).
We wish to quantize the value 1.6 into a power-of-2 scaled fixed-point value (this is so we can emit code consisting of an integer multiplication followed by a right shift).
If we use the input wordlength (11-bits in your case), we get this:
>> a = fi(1.6,0,11)
a =
1.599609375
DataTypeMode: Fixed-point: binary point scaling
Signedness: Unsigned
WordLength: 11
FractionLength: 10
Instead, we may be able to use a larger wordlength for the quantization, which can yield a more accurate representation with only a small impact on code efficiency:
>> b = fi(1.6,0,20)
b =
1.60000038146973
DataTypeMode: Fixed-point: binary point scaling
Signedness: Unsigned
WordLength: 20
FractionLength: 19
Now, let's lay out the formula for a fixed-point cast, where we will solve for the output stored integer Qout:
Qin * 0.0025 - 1 = Qout * 0.1
We can normalize the slopes into a Slope Adjustment Factor and a Fixed Exponent:
Qin * 1.28 * 2^-9 - 1 = Qout * 1.6 * 2^-4
Solving for the output stored integer (Qout):
Qout = Qin * (1.28 / 1.6) * 2^-5 - 2^4 / 1.6
This can be reduced to:
Qout = Qin * 0.8 * 2^-5 - 10
We can now do another normalization:
Qout = Qin * 1.6 * 2^-6 - 10
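As a quick sanity check of these normalizations in ordinary MATLAB arithmetic (just the values from the text above):
1.28 * 2^-9        % = 0.0025, the input slope
1.6  * 2^-4        % = 0.1,    the output slope
1.28 / 1.6 * 2^-5  % = 0.8 * 2^-5 = 1.6 * 2^-6 = 0.025, the combined net slope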
Now, we need to decide how to handle the 1.6 term.
We cannot represent 1.6 exactly in a binary fixed-point type.
If we go with the smaller quantization (older releases), we get a less accurate value that is slightly under the ideal value of 1.6.
If we go with the larger quantization (newer releases), we get a more accurate value that is slightly above the ideal value of 1.6.
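In MATLAB terms, the sign of the quantization error for the two wordlengths (reusing the fi calls shown above):
1.6 - double(fi(1.6, 0, 11))   % positive (~3.9e-04): the 11-bit value sits below 1.6
1.6 - double(fi(1.6, 0, 20))   % negative (~-3.8e-07): the 20-bit value sits above 1.6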
If we plug in the value of Qin used in your model:
Qout = 1900 * 1.6 * 2^-6 - 10 = 37.5
However, we cannot use 1.6 exactly; we have to use a quantized value.
Smaller quantization (R2019b and earlier):
Qout = 1900 * 1.599609375 * 2^-6 - 10 = 37.4884033203125
Larger quantization (R2020a and later):
Qout = 1900 * 1.60000038146973 * 2^-6 - 10 = 37.5000113248826
With a "nearest" style of rounding, they will round to different integer values, since they reside on different sides of the "half mark".
