How can I utilize the GPU while running classifiers in the Classification Learner app?

I'm working with deep neural networks, which require a lot of computing power. I used a Tesla K40c and a GeForce GTX 1050 Ti for parallel feature extraction from different pretrained models, but at the classification stage (done in the Classification Learner app) neither GPU is utilized. I have configured MATLAB R2018a with CUDA Toolkit 9.2 and cuDNN 9.2. I have also tried other MATLAB versions with matching CUDA Toolkit and cuDNN versions, such as MATLAB R2017a with CUDA Toolkit 8.0 and cuDNN 8.0, to name a few.
My GPU is utilized when I use the MATLAB function "activations" to extract features, but GPU utilization drops to zero while the classifiers are being trained in the Classification Learner app.
So, I need to utilize my GPU while using the Classification Learner app to minimize execution time during testing.
I have installed all the required toolboxes, such as the Neural Network Toolbox, the Parallel Computing Toolbox, and the pretrained models.
Any help in solving this would be appreciated.
Thanks!

7 Comments

How have you enabled GPU support? Not every classifier supports GPU execution.
Actually, I'm not sure there is any way to use the GPU from the app. You have to use gpuArray directly from the command window, and then your options are SVM and k-NN via fitcsvm and knnsearch; you have to pass in your data as a gpuArray to use the GPU.
You should try making your data a gpuArray when you create a new session, and see if that triggers some GPU behaviour. However, I'm pretty sure you'll be limited to SVM.
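The gpuArray workflow described in the comments above can be sketched as follows. This is a minimal illustration, assuming a CUDA-capable GPU and the Parallel Computing Toolbox are available; the data and variable names are made up for the example:

```matlab
% Sketch: train an SVM on the GPU by passing gpuArray inputs to fitcsvm.
% Assumes a supported GPU and the Parallel Computing Toolbox.
X = gpuArray(randn(1000, 20));        % predictor matrix on the GPU
Y = randi([0 1], 1000, 1);            % binary class labels
mdl = fitcsvm(X, Y);                  % training consumes the gpuArray input
labels = predict(mdl, X);             % prediction also accepts gpuArray data
labels = gather(labels);              % copy results back to host memory
```

The key point is that the GPU is engaged by the data type you pass in, not by a setting in the app.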
I enabled GPU support by installing and configuring CUDA Toolkit 8.0 with the cuDNN libraries in MATLAB R2017a, followed by Microsoft Visual Studio 2015, setting the compiler to MSVS 2015 C++ with " mex -setup C++ ", and then running " vl_compilenn('enableGpu', true) " for the MatConvNet 1.0-beta25 release.
Can you list the classifiers that support GPU execution?
gpuArray is not an efficient method when you have to run lots of simulations with different classifiers, whereas the Classification Learner app provides a very friendly environment for testing many good classifiers with a range of hyperparameter settings.
Installing the CUDA toolkit, cuDNN, Visual Studio, and MatConvNet has nothing whatsoever to do with MATLAB or Classification Learner. To use the GPU in MATLAB you create gpuArray objects and pass them to supported functions. If you write your own MEX functions then the toolkit and cuDNN may become relevant, and if you install MatConvNet you have access to the supported tools within that third-party toolbox. But none of that is integrated with Classification Learner.
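The "create gpuArray objects and pass them to supported functions" pattern described above is the whole of MATLAB's built-in GPU story; a minimal sketch:

```matlab
% The basic MATLAB GPU workflow: move data to the device with gpuArray,
% call a gpuArray-enabled function, then bring results back with gather.
A = gpuArray(rand(2000));   % transfer a matrix to GPU memory
B = fft(A);                 % fft supports gpuArray, so this runs on the GPU
C = gather(B);              % copy the result back to host memory
```

No CUDA toolkit or compiler setup is needed for this path; only the Parallel Computing Toolbox and a supported GPU.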


Accepted Answer

This page lists all the functions that support gpuArray; so far just a couple of statistical and "classic" machine learning ones. But not all "classic" machine learning algorithms lend themselves to parallelization on a GPU.
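As an example of one of the "classic" machine learning functions with gpuArray support mentioned above, here is a hedged sketch of a nearest-neighbor search on the GPU (illustrative data, assuming the Parallel Computing Toolbox and a supported GPU):

```matlab
% Sketch: k-nearest-neighbor search on the GPU by passing gpuArray
% inputs to knnsearch.
X = gpuArray(randn(5000, 3));     % reference points on the GPU
Q = gpuArray(randn(100, 3));      % query points on the GPU
idx = knnsearch(X, Q, 'K', 5);    % indices of the 5 nearest neighbors
idx = gather(idx);                % copy indices back to the CPU
```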

3 Comments

So, don't you think the MATLAB community should add something in this regard to support GPU execution for all deep learning algorithms? I think so!
Because when testing code with Python in the Spyder shell on Anaconda, everything automatically utilizes the GPU, as it should.
If not, then GPUs are not nearly as worthwhile in MATLAB as they are when accessed and utilized from Python.
I need to get the most out of GPUs in MATLAB, as Python does.
So, what do you say about these points? If there is no option for this, then deep learning in Python is by far the best option.
Our GPU Coder will enable you to run optimized deep learning algorithms on GPUs, and according to our benchmarks we are significantly faster than Python-based deep learning frameworks. The benefit of GPUs for "classic" machine learning is less clear, which is why we haven't put the same effort into supporting classic machine learning on GPUs. That said, if your CPU is connected to a GPU and you have PCT, the "Use Parallel" button in the Classification Learner will cause model training to fan out processing to the GPU, though it does not yet leverage the GPU-specific CUDA or TensorRT acceleration libraries.
I agree with Mr. Junaid Lodhi.
MATLAB needs some improvement to make using the GPU clearer and easier, because right now only gpuArray is available, and it is not clear how to use it when writing a large program split across multiple files.
I hope that changes in the future.


More Answers (0)

