- The target FPGA platform you selected (DLXCKU5PE) is not officially supported by the quantization workflow in MATLAB/HDL Coder/Deep Learning HDL Toolbox.
- The platform is either custom or not included in the list of supported boards for quantized deployment.
- Review the list of supported FPGA boards for quantized deep learning deployment in the documentation: https://in.mathworks.com/help/deep-learning-hdl/ug/supported-network-boards-and-tools.html
- If your board is not listed, quantized workflows (especially with int8) may not be possible out of the box.
- For custom boards, you may need to create a custom platform registration, typically by configuring a custom deep learning processor with the dlhdl.ProcessorConfig class and registering the board through HDL Coder (hdlcoder.Board and hdlcoder.ReferenceDesign); even then, quantization support may still be limited. A sketch of this configuration follows this list.
- If quantized (int8) deployment is not supported, you may still be able to deploy your network using single (floating-point) precision instead; see the second sketch below.
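
Below is a minimal sketch of a custom deep learning processor configuration. It assumes Deep Learning HDL Toolbox and HDL Coder (with Xilinx Vivado) are installed; the target frequency and data type are illustrative assumptions, not values verified for the DLXCKU5PE part, and the commented-out TargetPlatform name is a hypothetical placeholder for a board you have registered yourself.

```matlab
% Configure a custom deep learning processor (assumed values; adjust for your hardware)
hPC = dlhdl.ProcessorConfig;
hPC.SynthesisTool     = 'Xilinx Vivado';
hPC.TargetFrequency   = 200;      % MHz, assumed value
hPC.ProcessorDataType = 'int8';   % use 'single' if int8 is not feasible on your board

% For an unlisted board, the platform must first be registered with HDL Coder
% (hdlcoder.Board / hdlcoder.ReferenceDesign); the name below is hypothetical.
% hPC.TargetPlatform = 'My Custom Kintex UltraScale+ Board';

% Generate the deep learning processor IP core and bitstream (long-running step)
dlhdl.buildProcessor(hPC);
```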
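
If int8 is not an option, a single-precision deployment follows the standard dlhdl.Workflow pattern. This sketch assumes an Ethernet connection, a shipped supported-board bitstream name ('zcu102_single'), and a placeholder IP address; for a custom board you would instead point 'Bitstream' at the file produced by dlhdl.buildProcessor.

```matlab
% Single-precision (non-quantized) deployment sketch; names and addresses are placeholders
net = resnet18;   % any pretrained network supported by Deep Learning HDL Toolbox

hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet', 'IPAddress', '192.168.1.101');

hW = dlhdl.Workflow( ...
    'Network',   net, ...
    'Bitstream', 'zcu102_single', ...   % shipped single-precision bitstream (assumed board)
    'Target',    hTarget);

hW.compile;   % compile the network for the deep learning processor
hW.deploy;    % program the FPGA and load the network weights

% Run a profiled prediction on one sample image sized for the network input
img = imresize(imread('peppers.png'), [224 224]);
[prediction, speed] = hW.predict(single(img), 'Profile', 'on');
```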


