You have a vector of values. Calculate
bit_usage = ceil(log2(NumberOfPossibleStates)) * NumberOfElements;
If your output is a vector of values each of which is 0 or 1, or '0' or '1', then the number of states is 2, log2(2) is 1, and the calculation becomes identical to NumberOfElements.
If your output is a vector of uint8 and all values from 0 to 255 can occur, then that is 256 states, log2(256) is 8, and the number of bits is then 8 * NumberOfElements.
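As a sketch of that calculation (in Python here just for illustration; the function name is mine, not part of the original formula):

```python
import math

def bit_usage(num_possible_states, num_elements):
    # ceil(log2(states)) bits per element, times the element count
    return math.ceil(math.log2(num_possible_states)) * num_elements

# Binary output: 2 states -> log2(2) = 1 bit per element
print(bit_usage(2, 100))    # 100 bits for 100 elements

# uint8 output, all 256 values possible: 8 bits per element
print(bit_usage(256, 100))  # 800 bits for 100 elements
```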
This calculation is also valid on the input, but on the input side you need to think more about whether you want to calculate based upon possible states or upon representation. For example, if the input is in hexadecimal, then you have an 8-bit character that is restricted to 0 through 9 or A, B, C, D, E, F, which is 16 possible states, and log2(16) = 4. So you need to decide whether you want to count the 8 bits per character of input that was the external representation, or the 4 bits per character that is the information content.
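Counting the same hexadecimal input both ways makes the difference concrete (a Python sketch; the string is a made-up example):

```python
import math

hex_text = "DEADBEEF"  # hypothetical hexadecimal input, 8 characters

# Representation: each character is stored as an 8-bit byte
representation_bits = 8 * len(hex_text)

# Information content: 16 possible symbols -> log2(16) = 4 bits per character
information_bits = math.ceil(math.log2(16)) * len(hex_text)

print(representation_bits)  # 64
print(information_bits)     # 32
```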
When you are dealing with the theory of arithmetic encoding and other forms of compression, it is most common to be directly concerned with the information content rather than the file representation. In practice, though, people are more often concerned about file compression ratios than about information content.

When you do talk about file compression ratios, it is important that you include all file overheads, such as stored dictionaries and symbol tables, the "magic numbers" that indicate the file is compressed, and anything like archive comment fields and other metadata such as EXIF that people expect to have preserved even though it has nothing to do with the techniques for compressing information. For example, if you have an image with a bunch of EXIF headers, you could "compress" the image by creating a new version that skipped the EXIF headers, and people might naively think you did a good job of compression, but you would have lost the information...
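To make the ratio bookkeeping explicit, here is a sketch where every overhead byte is added to the compressed size before dividing (all of the byte counts below are invented for illustration):

```python
payload = 40_000            # the compressed data itself (hypothetical)
dictionary = 2_048          # stored dictionary / symbol tables
magic_and_header = 16       # "magic number" and format header
preserved_metadata = 4_096  # e.g., EXIF blocks carried through unchanged

original_size = 100_000     # hypothetical uncompressed file size

# The honest compressed size includes every overhead, not just the payload
compressed_total = payload + dictionary + magic_and_header + preserved_metadata

ratio = original_size / compressed_total
print(compressed_total)     # 46160
print(round(ratio, 2))      # 2.17
```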