Deep Learning Toolbox Model Quantization Library
Quantize and compress deep learning models
2.2K downloads
Updated 16 Oct 2024
Deep Learning Toolbox Model Quantization Library enables quantization and compression of your deep learning models, reducing their memory footprint and computational requirements.
INT8 quantization is supported for CPUs, FPGAs, and NVIDIA GPUs, for supported layers. The library enables you to collect layer-level data on the weights, activations, and intermediate computations. Using this data, it quantizes your model and provides metrics to validate the accuracy of the quantized network against the single-precision baseline. This iterative workflow lets you optimize the quantization strategy.
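As an illustration, a typical calibrate-validate-quantize pass with the library's dlquantizer object might look like the following minimal sketch, assuming a pretrained network net and calibration/validation datastores calData and valData (placeholder names; the quantize step requires a recent release):

    % Create a quantizer object targeting GPU execution.
    quantObj = dlquantizer(net, 'ExecutionEnvironment', 'GPU');

    % Exercise the network on calibration data to collect the dynamic
    % ranges of weights, activations, and intermediate computations.
    calResults = calibrate(quantObj, calData);

    % Compare the quantized network against the single-precision
    % baseline and report accuracy metrics.
    valResults = validate(quantObj, valData);

    % Generate the quantized network object.
    qNet = quantize(quantObj);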
The library also supports structural compression of models through pruning and projection. Both techniques reduce the size of a deep neural network by removing the elements that have the least impact on inference accuracy.
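For example, projection-based compression can be sketched as follows, assuming a trained dlnetwork net and a minibatchqueue mbq of representative input data (both placeholder names; neuronPCA and compressNetworkUsingProjection ship with recent Deep Learning Toolbox releases):

    % Analyze the principal components of each layer's neuron activations.
    npca = neuronPCA(net, mbq);

    % Replace eligible layers with lower-rank projected equivalents,
    % discarding the components that matter least for accuracy.
    netProjected = compressNetworkUsingProjection(net, npca);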
Please refer to the documentation here: https://www.mathworks.com/help/deeplearning/quantization.html
For the Quantization Workflow Prerequisites, see the documentation linked above.
If you have download or installation problems, please contact Technical Support - www.mathworks.com/contact_ts
Additional Resources
- Learn more about MATLAB and Simulink for TinyML
- Quantization Aware Training (QAT) with MobileNet-v2 (Example, GitHub Repo)
- Overview Video - https://www.youtube.com/watch?v=jufOpBeSvHM
MATLAB Release Compatibility
Created with
R2020a
Compatible with R2020a to R2024b
Platform Compatibility
Windows, macOS (Apple silicon), macOS (Intel), Linux