Running a function on two signals in parallel

5 views (last 30 days)
Hello
Recently, I wrote a function that finds the optimum value of a signal's parameter. More specifically, suppose my function is named Erf and takes a signal, x, as input: Erf(x). When I run Erf(x), it returns a value, say 0.64.
My question is: how can I run this function on different signals in parallel?
I mean, assume that I have two signals, x and y. I want to find Erf(x) and Erf(y) in parallel. So far, I have run the Erf function on x first and then on y to find the optimum parameter value for each signal.
Is there any method to do both computations simultaneously?
I should mention that I have an Nvidia GPU, so if there is any approach that can address the problem using the GPU, please mention it and provide some detail.
Many Thanks

Accepted Answer

Walter Roberson
Walter Roberson on 29 May 2021
Yes, it is possible. Much of the time it is not beneficial.
You need the Parallel Computing Toolbox. You would create a parpool with two workers. You would use one of:
  • spmd
  • parfor
  • parfeval or parfevalOnAll
If you use spmd, then the two workers can communicate with each other using labSend() and labReceive(), but they cannot communicate with the controller until the spmd block completely finishes.
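For example, a minimal spmd sketch, assuming your function really is named Erf and the signals x and y already exist on the client (the pool size of 2 is an assumption to match the two signals):

  parpool(2);                 % pool with two workers
  spmd
      if labindex == 1        % worker 1 handles x
          result = Erf(x);    % x and y are sent from the client automatically
      else                    % worker 2 handles y
          result = Erf(y);
      end
  end
  resultX = result{1};        % spmd results come back as a Composite
  resultY = result{2};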
If you use parfor, then the workers cannot directly communicate with each other. It is possible to create parallel data queues to send results back from a worker to the controller, and it is possible (but usually awkward) to have the workers create parallel data queues and send those back to the controller, which permits the controller to send data to the workers. The workers still cannot communicate directly: they would have to send data to the controller, and the controller would have to forward it to the other workers. Normal control over activity in the controller does not resume until the parfor completely finishes on all workers.
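A minimal parfor sketch under the same assumptions, with a DataQueue included only to illustrate sending intermediate results back to the controller (it is optional here):

  signals = {x, y};
  results = zeros(1, numel(signals));

  q = parallel.pool.DataQueue;    % worker-to-controller channel
  afterEach(q, @(v) fprintf('finished one signal, value = %g\n', v));

  parfor k = 1:numel(signals)
      v = Erf(signals{k});        % each iteration runs on some worker
      results(k) = v;
      send(q, v);                 % optional progress/intermediate report
  end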
If you use parfeval or parfevalOnAll, then you can continue on in the controller while a worker processes the task (asynchronous execution). However, it becomes awkward to communicate with the workers during execution.
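A minimal parfeval sketch, same assumptions; the controller queues both calls and only blocks when it actually needs the outputs:

  pool = gcp();                         % reuse or create the default pool
  fx = parfeval(pool, @Erf, 1, x);      % 1 = number of outputs requested
  fy = parfeval(pool, @Erf, 1, y);

  % ... the controller is free to do other work here ...

  resultX = fetchOutputs(fx);           % blocks until that future is done
  resultY = fetchOutputs(fy);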
A lot of the time, the overhead of sending data to the workers and getting results back, plus any communication in the meanwhile, adds up to make parallel processing slower overall. Also, if the individual tasks involve heavy mathematical calculation, then unless you allocate several cores to each worker, the workers default to running on a single core each, and so cannot take advantage of the high-performance multithreaded built-in operations such as matrix multiplication, which is tuned to be cache-friendly.
Only one worker at a time can use a GPU, and a worker can only use one GPU at a time. If you only have one GPU and both workers tried to access it, MATLAB would need to continually take it away from the other worker, forcing a full state synchronization each time, which is one of the most expensive GPU operations.
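If the single GPU is the resource you want to exploit, a hedged alternative sketch is to keep everything in one process and run the two signals one after the other on the GPU. This only works if Erf happens to be written from GPU-enabled built-in operations so that it accepts gpuArray inputs, which depends entirely on its internals:

  gx = gpuArray(x);             % move the signals to GPU memory
  gy = gpuArray(y);
  resultX = gather(Erf(gx));    % gather brings the result back to the CPU
  resultY = gather(Erf(gy));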
  5 Comments
Walter Roberson
Walter Roberson on 30 May 2021
What you describe for fft is vectorization. It is available for most built-in operations. Whether it works for your code depends upon how you write your code.
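A minimal vectorization sketch for the fft case mentioned above, assuming x and y are column vectors of the same length: fft applied to a matrix transforms each column independently, so both signals are processed in a single call.

  X  = fft([x, y]);     % column 1 is fft(x), column 2 is fft(y)
  Xx = X(:,1);
  Xy = X(:,2);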
Erfan Basiri
Erfan Basiri on 30 May 2021
Edited: Erfan Basiri on 30 May 2021
Thanks Walter.

More Answers (0)
