# MATLAB Code for IQ Samples Compression and Decompression Based on Block Floating Point

20 views (last 30 days)
Sukshith Shetty on 2 Jul 2021
Answered: Andy Bartlett on 2 Jul 2021
I require complete MATLAB code for bit-width compression from 16 bits down to 9, 12, or 14 bits, and decompression back to 16 bits. The compression and decompression algorithm should be based on the block floating point format.
It should compress the 16-bit (2-byte) data to 9, 12, or 14 bits based on the user's choice.
Can you please share either the MATLAB code or the methodology/algorithm used for this task? A block diagram with a good explanation would do the job for me.
Thank you

Andy Bartlett on 2 Jul 2021
I'll describe some concepts that will hopefully help you figure out the code needed for your specific case.
I'm assuming the input is 16-bit fixed point (and not 16-bit half-precision floating point).
Going from 16 bits down to, say, 12 involves dropping 4 bits.
When dropping those bits, to keep the error modest, you want to avoid overflows.
The simplest way to do that is to always discard the 4 least significant bits, using round-to-floor (fastest) or round-to-nearest for half the worst-case error. When decompressing from 12 back to 16, just stuff 0000 onto the least significant end.
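This simple fixed-shift scheme might be sketched as follows (an illustrative sketch of my own, not code from the original answer; variable names are arbitrary):

```matlab
% Sketch: compress int16 samples to nBits by discarding LSBs
% (round-to-nearest), then decompress by zero-stuffing.
x = int16([12345, -2048, 31, -32768]);   % example 16-bit samples
nBits = 12;                              % target width: 9, 12, or 14
nDrop = 16 - nBits;                      % number of LSBs to discard

% Compress: round to nearest by adding half an LSB before scaling down
xd   = double(x);
comp = floor(xd / 2^nDrop + 0.5);
comp = max(min(comp, 2^(nBits-1) - 1), -2^(nBits-1));  % saturate

% Decompress: stuff zeros back onto the least significant end
recon = int16(comp * 2^nDrop);
```

The saturation step matters because rounding up can push the largest positive codes past the 12-bit range (e.g. 32767 rounds to 2048, one above the 12-bit maximum of 2047).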
But that simple design is not optimal in terms of accuracy. You can increase the accuracy by testing which of the most significant bits are actually needed to prevent overflow. For example, if all the values are non-negative and the maximum value has this stored integer bit pattern
'0011111111111101'
then you can drop up to 2 most significant bits without overflow, and then only 2 bits of precision need to be discarded to reach 4 total bits dropped.
You can figure out the optimal number of range bits to drop with simple inequalities on the maximum input and, if signed, on the minimum input.
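Putting those ideas together gives the block floating point flavor the question asks about: per block of samples, count the redundant most significant (sign-extension) bits, drop only the remaining LSBs, and store one shared exponent per block so the decompressor can scale back. A rough sketch under those assumptions (my own names and block size, not from the original answer):

```matlab
% Sketch: block floating point compression of one block of int16 samples.
x     = int16([100, -250, 310, -7, 90, 12, -300, 45]);  % one block
nBits = 12;
nDrop = 16 - nBits;

% Count redundant MSBs: how many can be dropped with no overflow?
% A signed value fits in (16 - h) bits iff maxMag < 2^(15 - h).
xd      = double(x);
maxMag  = max(max(xd), -min(xd) - 1);
headroom = 0;
while headroom < nDrop && maxMag < 2^(14 - headroom)
    headroom = headroom + 1;             % this MSB is not needed
end

lsbDrop  = nDrop - headroom;             % precision bits to discard
blockExp = lsbDrop;                      % shared exponent, stored per block

% Compress: discard only lsbDrop LSBs (round to nearest), saturate
comp = floor(xd / 2^lsbDrop + 0.5);
comp = max(min(comp, 2^(nBits-1) - 1), -2^(nBits-1));

% Decompress: scale back by the stored block exponent
recon = int16(comp * 2^blockExp);
```

In this example every sample magnitude is below 2^11, so all 4 dropped bits come from the unused range bits and the round trip is lossless; a block containing large samples would instead lose some precision bits, which is the usual block floating point trade-off.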

