GPU Coder vs. ONNXRuntime, is there a difference in inference speed?
Since I can export from MATLAB to ONNX format, why can't I just import my model into TensorRT and similar runtimes? Will I get significant speed increases, or is the benefit of GPU Coder more about being able to compile all my other MATLAB code into optimized CUDA?
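For context, the export path I mean is just the standard ONNX export call, roughly like this (a minimal sketch; the file and variable names are placeholders, and exportONNXNetwork needs the Deep Learning Toolbox Converter for ONNX Model Format support package):

    % Export a trained network to ONNX format
    S = load('myTrainedNet.mat');             % placeholder .mat file holding the network
    exportONNXNetwork(S.net, 'myModel.onnx')  % writes an .onnx file usable by other runtimes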
Thanks in advance.
Answers (1)
Joss Knight
on 2 Apr 2021
0 votes
You can compile your network for TensorRT using GPU Coder if that's your intended target, no need to go through ONNX.
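A minimal sketch of that workflow might look like the following (the network file, entry-point name, and input size are placeholder assumptions, not a definitive setup):

    % Entry-point function, saved as myPredict.m
    function out = myPredict(in)
        persistent net;
        if isempty(net)
            % Load the trained network once; file name is a placeholder
            net = coder.loadDeepLearningNetwork('myTrainedNet.mat');
        end
        out = predict(net, in);
    end

    % Generate CUDA code that runs the network layers through TensorRT
    cfg = coder.gpuConfig('mex');
    cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
    cfg.DeepLearningConfig.DataType = 'fp16';  % optional reduced precision
    codegen -config cfg myPredict -args {ones(224,224,3,'single')}

The generated MEX calls TensorRT directly for inference, so the ONNX detour isn't needed.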
I don't believe MathWorks have any published benchmarks against ONNX Runtime specifically. GPU Coder on the whole outperforms other frameworks, although it does depend on the network.
2 Comments
Matti Kaupenjohann
on 7 Jan 2022
Could you show or link the benchmark that compares the performance of GPU Coder against other frameworks (and which ones)?
Joss Knight
on 7 Jan 2022
Edited: Joss Knight on 7 Jan 2022
We don't publish the competitive benchmarks; you'll have to make a request through your sales agent. We can provide some numbers for MATLAB.