# How can I compute the derivative of a signal at a specific instant of time/sample?

190 views (last 30 days)
Miriam zinnanti on 13 Jun 2019
Edited: James Browne on 15 Jun 2019
Hi, everybody. I need help calculating the first derivative of a signal. In particular, I need the value that the first derivative takes at a specific instant in time (besides the signal's sample values, I also have the sampling frequency and a vector of the associated time instants). I tried the MATLAB function "diff", but since it returns a vector with one sample fewer than the original signal, I think the correspondence with the time vector is lost, so extracting the sample at the t-th time does not give the desired one. I also tried MATLAB's "designfilt('differentiatorfir')" function, but I am not sure it really computes the signal's derivative. Can anyone advise me on these functions, or suggest other ones?

James Browne on 15 Jun 2019
Edited: James Browne on 15 Jun 2019
Greetings,
Consider the general definition of the first derivative, dy/dx. In the continuous realm, dy and dx are infinitesimally small; this gives rise to the mathematical rules that let us use algebra to calculate derivatives of functions at specific points. In the discrete realm (real data), however, we can only approximate the first derivative, and the simplest way is:
dy/dx = (y2 - y1) / (x2 - x1)    (1)
So, if you think about it, when you calculate dy/dx from x1, x2, y1, and y2, you are not calculating the approximate derivative at either point 1 or point 2, but BETWEEN the points; furthermore, it is a linear approximation of what might be a nonlinear derivative curve.
Consider the following:
```matlab
dx = 1;                 % sample spacing (deliberately coarse)
x  = 0:dx:10;
L  = length(x);
y  = sin(x);
dy = diff(y);
dydx = dy/dx;           % n-1 derivative estimates for n samples
x2 = zeros(1, L-1);     % midpoints, where the estimates actually live
for i = 2:L
    x2(i-1) = x(i-1) + ( x(i) - x(i-1) )/2;
end
plot(x,y,'-bo',x2,dydx,':kd')
title('y = sin(x) vs dy/dx')
xlabel('x-values')
legend('y(x)','dy/dx','location','northeast')
ylim([-1,1.4])
```
The above example computes the approximate derivative and places the derivative values between the original data points, as the resulting figure shows. Note that there are n-1 derivative data points for n points of original data.
Now, this is a deliberately extreme case of the linear approximation of a nonlinear curve. Certainly, if we reduced the parameter dx, the linear approximation of the original signal (sin(x), in this case) would appear much smoother, as would its derivative. Why? Because, for a given range of x, there would be more points in the data set for y, and the points on either side of any given point would be closer in magnitude to it; the same goes for the points in the y-derivative data.
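That claim is easy to check numerically. Here is a quick sketch (my own, not from the thread) comparing the worst-case error of the midpoint difference estimate against the exact derivative, cos(x), for a coarse and a fine dx:

```matlab
% Sketch: effect of dx on the accuracy of the diff()-based derivative.
% cos(x) is the exact derivative of sin(x), used here as ground truth.
for dx = [1, 0.1]
    x    = 0:dx:10;
    xm   = x(1:end-1) + dx/2;          % midpoints where diff() estimates live
    dydx = diff(sin(x)) / dx;          % linear approximation of the derivative
    err  = max(abs(dydx - cos(xm)));   % worst-case error vs the true derivative
    fprintf('dx = %4.2f   max error = %.5f\n', dx, err)
end
```

Shrinking dx by a factor of ten shrinks the midpoint-difference error dramatically, which is why the coarse dx = 1 case above looks so ragged.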
So why look at the extreme case? To illustrate why Equation 1 is not always a good representation of the approximate derivative at any x-coordinate. Consider the approximate derivative at x = 6.5, shown in the figure above. The derivative is positive on the left but negative on the right. You could certainly calculate it by evaluating the derivative of sin(x) at x = 6.5, but how would you approximate it from the data? An average, perhaps? I am sure there are methods for doing so, but the accuracy of any one method would depend on the distance between the data points compared to the curvature of the continuous signal being approximated by the data collection. But then again, the derivative points don't lie ON any of the original data points; they are BETWEEN them.
So, all things considered, for the data points in x, excluding the first and last, a linear approximation of the derivative AT each x-value would be an interpolation between neighboring derivative data points. The accuracy of the linear approximation then depends on the distance between data points, the curvature of the actual (continuous) derivative, and the rate of change of that curvature between any two data points. The derivative at the end points of the original data follows a similar pattern.
For example, if I were to calculate the equation of the approximate derivative line between x = 0.5 and x = 1.5, then extend that line back to x = 0, would it be accurate? That would depend on the curvature of the actual (continuous) derivative of the real signal. If the curvature is low and the rate of change of the curvature is also low, then the linear extension of the approximate derivative to the end points would be fairly accurate. That is to say, the accuracy of the linear approximation/extension method depends on the second and third derivatives of the original signal!
If you want to play it fast and loose, though, interpolate between derivative-approximation points for all but the end points and use linear extension for the end points. There are probably more accurate methods out there, based on second and third derivative values, so you may want to look into those if you need better accuracy~
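As a concrete sketch of that suggestion, and of the original question (get the derivative at one specific instant t0): interp1 with the 'extrap' option does both the interior interpolation and the linear extension at the ends in one call. The signal, time vector, and query instant below are made-up illustrations:

```matlab
% Sketch: derivative of a sampled signal at a specific instant t0,
% via midpoint differences + linear interpolation/extrapolation.
t  = 0:0.01:10;                      % example time vector (fs = 100 Hz)
y  = sin(t);                         % example signal
t0 = 6.5;                            % instant where the derivative is wanted

dydt = diff(y) ./ diff(t);           % derivative estimates ...
tm   = t(1:end-1) + diff(t)/2;       % ... located BETWEEN the samples
% 'linear','extrap' interpolates for interior query times and linearly
% extends for query times near/outside the first and last midpoints.
d_at_t0 = interp1(tm, dydt, t0, 'linear', 'extrap');
```

For this example you can sanity-check the result against the exact value cos(6.5); with fs = 100 Hz they agree to several decimal places.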

Stephen23 on 15 Jun 2019
James Browne on 15 Jun 2019
Edited: James Browne on 15 Jun 2019
```matlab
dx = 1;
x  = 0:dx:10;
L  = length(x);
y  = sin(x);
dy = diff(y);
dydx = dy/dx;               % manual linear approximation of the derivative
x2 = zeros(1, L-1);         % midpoints between the samples
for i = 2:L
    x2(i-1) = x(i-1) + ( x(i) - x(i-1) )/2;
end
g = gradient(y);            % gradient() assumes unit spacing by default
plot(x,y,'-bo',x2,dydx,':kd',x,g,'-rs')
title('y = sin(x) vs dy/dx')
xlabel('x-values')
legend('y(x)','dy/dx','gradient(y)','location','northeast')
ylim([-1,1.4])
```
And... big surprise... gradient() simply interpolates between the linear-approximation points of the derivative and then does some funky magic at the end points. Oh, and by default it also assumes a distance of 1 between your independent-variable values, so it is a good thing that I used dx = 1 and did not have to preprocess my data~
Oh wait, if you look at the actual manual linear-approximation vector, dy/dx, and the gradient() vector, the end points are the same. So gradient() interpolates for the inner points and then just uses the original end points of the linear-approximation derivative data as the end points of the interpolated data. Interesting method.
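That behavior can be checked directly: for interior points gradient() returns the central difference (which is exactly the average of the two neighboring forward differences, i.e. the interpolated value), and at the ends it returns the one-sided differences. A quick sketch:

```matlab
% Sketch: verify gradient()'s interior vs end-point behavior.
dx = 1;
x  = 0:dx:10;
y  = sin(x);
g  = gradient(y, dx);       % pass dx so no unit-spacing assumption is needed

central = (y(3:end) - y(1:end-2)) / (2*dx);  % central difference, interior
fwd_end = (y(2) - y(1)) / dx;                % one-sided at the first point
bwd_end = (y(end) - y(end-1)) / dx;          % one-sided at the last point

max(abs(g(2:end-1) - central))      % ~0: interior points match central diffs
[g(1) - fwd_end, g(end) - bwd_end]  % ~0: end points match one-sided diffs
```

So the "funky magic" at the ends is just a plain one-sided difference, identical to the first and last entries of the manual dy/dx vector.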