conv vs conv2?

35 views (last 30 days)
ABTJ
ABTJ on 1 May 2020
Commented: Ameer Hamza on 2 May 2020
What is the difference between these two commands, especially with regard to their applications?

Accepted Answer

Ameer Hamza
Ameer Hamza on 1 May 2020
The simple difference is that conv() operates on a 1D vector, while conv2() operates on a 2D matrix.
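For example, a quick illustration of the two call signatures (the inputs here are just small made-up arrays):
u = [1 2 3];  v = [1 1];
conv(u, v)       % 1-D convolution of two vectors -> [1 3 5 3]
A = magic(4);  K = [1 0; 0 -1];
conv2(A, K)      % 2-D convolution of two matrices -> a 5x5 matrix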
conv() is commonly used in linear system analysis and control theory. The most common application of conv() is to find the response of an LTI system: if the impulse response of an LTI system is given, then finding its response to any input is just a matter of computing the convolution of the input and the impulse response. See these links for more information.
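For example, a minimal sketch of finding an LTI response with conv (the input and impulse response below are just made up for illustration):
n = 0:49;
x = double(mod(n, 20) < 10);   % an example square-wave input
h = 0.2*ones(1, 5);            % impulse response of a 5-point moving average
y = conv(x, h);                % system output, length numel(x)+numel(h)-1
stem(0:numel(y)-1, y)          % plot the smoothed response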
conv2() is mostly used in image processing. It can do a lot of things with images, e.g., blurring, smoothing, and sharpening; it is impossible to summarize its applications in a single post, but the following links will be helpful.
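As a quick illustration, here is a simple blur, assuming the Image Processing Toolbox sample image cameraman.tif is on the path (any grayscale matrix would do):
I = double(imread('cameraman.tif'))/255;   % example grayscale image, scaled to [0,1]
K = ones(5)/25;                            % 5x5 averaging (box blur) kernel
B = conv2(I, K, 'same');                   % blurred image, same size as I
subplot(1,2,1), imagesc(I), axis image, colormap gray, title('original')
subplot(1,2,2), imagesc(B), axis image, colormap gray, title('blurred')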
4 Comments
John D'Errico
John D'Errico on 2 May 2020
The flaw with using conv in this way for large integer multiplies is that the time grows as O(N*M), where N and M are the number of digits in the members of the product. (Adds and subtracts are far more efficient, as those ops only grow as O(max(N,M)).) So a trick that HPF uses is to store the numbers in blocks of digits; I called them migits. You may, for example, use blocks of 5 digits. The same trick still works using conv and carries, but since you are using blocks of 5 digits, the conv-based multiply is now 25 times as fast as it would be for single digits.
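Here is a minimal sketch of that idea (not the actual HPF code), using single decimal digits stored most significant first:
a = [9 8 7 6 5];                 % digits of 98765
b = [4 3 2 1];                   % digits of 4321
p = [0, conv(a, b)];             % raw digit products, with room for a leading carry
for k = numel(p):-1:2            % propagate carries from the low end
    c = floor(p(k)/10);
    p(k) = p(k) - 10*c;
    p(k-1) = p(k-1) + c;
end
p = p(max([find(p, 1), 1]):end)  % digits of 98765*4321 = 426763565
isequal(polyval(p, 10), 98765*4321)   % sanity check against ordinary doubles
With 5 digit migits you do exactly the same thing, only carrying in base 100000 instead of base 10.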
The downside to that is that the intermediate products prior to carries will grow in maximum magnitude.
>> max(conv(repmat(9,1,100000),repmat(9,1,100000)))
ans =
8100000
>> max(conv(repmat(9,1,1000000),repmat(9,1,1000000)))
ans =
81000000
If you are using single digit integers stuffed into a double vector to store your digits, you can compute the maximum length of a number for which conv can be used to do multiplies before a double precision flint overflow occurs. You can go a bit further before overflow happens if you use int64 to store your numbers.
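For instance (my back-of-the-envelope numbers, assuming single decimal digits in a double vector, where each conv output term is at most 81*K for a pair of K-digit operands):
flintmax                                      % 2^53 = 9007199254740992
K_double = floor(flintmax/81)                 % max digits per operand stored in doubles
K_int64  = floor(double(intmax('int64'))/81)  % a bit further with int64 storage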
But what happens if you use migits as I have described? Consider a multiply between a pair of 100000 decimal digit numbers. The maximum of those intermediate products grows roughly as the square of the migit value (here 9, 99, 999, ...).
>> max(conv(repmat(9,1,100000),repmat(9,1,100000)))
ans =
8100000
>> max(conv(repmat(99,1,100000/2),repmat(99,1,100000/2)))
ans =
490050000
>> max(conv(repmat(999,1,ceil(100000/3)),repmat(999,1,ceil(100000/3))))
ans =
3.3267e+10
>> max(conv(repmat(9999,1,ceil(100000/4)),repmat(9999,1,ceil(100000/4))))
ans =
2.4995e+12
>> max(conv(repmat(99999,1,ceil(100000/5)),repmat(99999,1,ceil(100000/5))))
ans =
2e+14
>> max(conv(repmat(999999,1,ceil(100000/6)),repmat(999999,1,ceil(100000/6))))
ans =
1.6667e+16
>> log2(ans)
ans =
53.888
What does this mean? Finite precision implies a limit on the maximum number of digits you can work with when you use high order migits.
You can use 5 digit groups to multiply two numbers where each has a little more than 5e5 decimal digits at a maximum. However, 1e5/6 blocks of 6 digit migits will result in a double precision flint overflow, though you apparently could still use int64 to perform that computation.
My HPF code allows the user to specify the block size used, trading off the ultimate size of the numbers stored for speed of computation. 6 digit migits allow you to multiply 36 times as fast as single digit migits, and arithmetic in many thousands of decimal digits of floating point precision is still seriously big.
(When I was testing HPF, I used it to compute pi and e to 1e6 digits for code verification, all done using the maximum possible migit order of course.)
conv really does have amazingly many uses, though I admit signal processing is a big one.
Ameer Hamza
Ameer Hamza on 2 May 2020
That's an excellent discussion, John. Your answers and comments are always very informative and interesting to read.
Your comment made me think further about this problem. The limitation on the size of "migits" is caused by the finite precision of floating point or of the int64 data type. Using int64, we can get a gain of about 49 times (int64 should work up to migits of size 7). What if we do something like this: the method in your comment can handle variable precision multiplication using conv() and int64 (up to migits of size 7), so we wrap it into a function and use that function to create migits of arbitrary size too. For example, we write a function vpa_mul
function z = vpa_mul(x, y) % x, y and z are char vectors
% here we convert x and y to individual digits,
% partition them into migits, multiply using the
% migit blocks, and return z as a char array
end
and then I do the convolution using vpa_mul with migits of size 20 (a gain of 400 times):
conv(repmat({'99999999999999999999'}, 1, 100000/20), repmat({'99999999999999999999'}, 1, 100000/20))
Of course, here we need to write a custom convolution function (a rough sketch of what I mean is below), but the idea is the same. Maybe it is quite a naive idea, and the overall complexity is higher. Is there something obvious I am missing that will end up increasing the overall complexity of the whole process?
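Here is that rough sketch of the custom convolution; vpa_add is another hypothetical helper that adds two numbers stored as char vectors:
function z = vpa_conv(x, y)   % x, y, z are cell arrays of decimal char vectors
% convolution by definition: z{k} is the sum over i+j-1 == k of x{i}*y{j}
n = numel(x) + numel(y) - 1;
z = repmat({'0'}, 1, n);
for i = 1:numel(x)
    for j = 1:numel(y)
        z{i+j-1} = vpa_add(z{i+j-1}, vpa_mul(x{i}, y{j}));
    end
end
end
As far as I can tell, the double loop still makes numel(x)*numel(y) calls to vpa_mul, so the cost still grows as O(N*M); the speedup comes only from each call handling a bigger block of digits.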


More Answers (0)
