Modulation Transfer function: From edge spread function to MTF

I have this image and I have to get the MTF, but I don't really know how to start. I know that I first have to get the edge spread function, then the line spread function, and then with a Fourier transform I get the MTF. But my question is: how do I do all of this? I'm not good at programming, which is why I'm asking if someone could help me with this.

Answers (2)

I've seen definitions of the MTF as the magnitude of the Fourier transform of the line-spread function, or as the magnitude of the Fourier transform of the point-spread function.
For the first definition of the MTF you should notice that the line-spread function is the gradient of the edge-spread function perpendicular to the edge. Since your edge is rotated relative to the detector you have to correct for that (or do some additional work to squeeze out sub-pixel resolution for the ESF). Once that's done you should calculate the LSF from the ESF and then do a DFT of the LSF. Something like this:
D = imread('image.png'); % Just your illustration
D1 = D(50:160,50:200,:); % Crop to the relevant region
subplot(2,2,1)
imagesc(D1)
axis xy
[xe,ye] = ginput(2); % Select 2 points along the edge
phi_rot = -atan2(diff(xe),diff(ye)); % Either this angle or multiply it with -1
[x,y] = meshgrid(1:size(D1,2),1:size(D1,1));
x = x-mean(x(:));
y = y-mean(y(:));
X = cos(phi_rot)*x + sin(-phi_rot)*y;
Y = cos(phi_rot)*y + sin(phi_rot)*x;
D2 = interp2(x,y,double(D1(:,:,1)),X,Y); % Resample image to align edge vertically, also: imrotate
subplot(2,2,2)
imagesc(D2) % Check that rotating image worked OK,
esf = abs(diff(D2(10:90,76+[-30:30])')); % Differentiating the ESF across the edge gives the LSF
subplot(2,2,3)
plot(esf)
subplot(2,2,4)
plot(mean(esf'))
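To go the last step from the LSF to the MTF, a minimal sketch (assuming the variable esf from the code above, which, despite its name, holds the derivative of the ESF, i.e. the LSF) could look like this:

```matlab
% Sketch: MTF as the normalized magnitude of the DFT of the mean LSF.
% Assumes "esf" from the code above (one LSF profile per column).
lsf = mean(esf,2);            % average the profiles -> 1-D LSF
lsf = lsf - min(lsf);         % remove any baseline offset
mtf = abs(fft(lsf));
mtf = mtf/mtf(1);             % normalize so that MTF(0) = 1
n = numel(lsf);
f = (0:n-1)/n;                % spatial frequency in cycles/pixel
figure
plot(f(1:floor(n/2)), mtf(1:floor(n/2)))
xlabel('Spatial frequency (cycles/pixel)')
ylabel('MTF')
```

Only the first half of the spectrum is plotted, since frequencies above Nyquist (0.5 cycles/pixel) are aliases.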
The MTF is a complete waste of time; I've worked with image-data analysis for ~25 years and no one has managed to explain the benefit of using the MTF over the PSF for evaluating the resolution-related characteristics of an imaging system. Since the MTF is based on the FT of the PSF, an implicit assumption is that the PSF is shift-invariant - which is certainly not always the case (aberration-limited optical systems, etc.) - and a naively calculated MTF would misinform the user. A far more direct approach is to estimate the PSF and look at that to get a direct image of how much discrete structures in the scene are blurred in the image. (This is my "Carthage must be destroyed".) To do something that is trivially useful, instead of Fourier-obfuscating the PSF, I would use the deconvblind function like this:
xG = -11:11;
[xG,yG] = meshgrid(xG,xG);
psf0 = exp(-xG.^2-yG.^2);
subplot(2,2,1)
imagesc(D1(:,:,1))
axis([56 108 4 105])
ax1 = axis;
[J,PSF] = deconvblind({double(D1(:,:,1))},{psf0},1);
imagesc(J{2}),axis(ax1)
subplot(2,2,4)
imagesc(PSF{2})
[J,PSF] = deconvblind(J,PSF,1); % Repeat these steps
imagesc(J{2}),axis(ax1) % until you get as
subplot(2,2,4) % sharp an edge as possible
imagesc(PSF{2}) % without getting a Gibbs-type edge-enhancement
psf_est = PSF{2}; % This is your estimate of the point-spread-function
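The manual "repeat these steps" can also be wrapped in a loop. A minimal sketch (the number of rounds is an assumption - tune it by eye, stopping before Gibbs-type ringing appears at the edge):

```matlab
% Sketch: iterate deconvblind a fixed number of rounds, inspecting the
% deblurred image and PSF estimate after each round.
n_rounds = 5;                       % assumption: adjust by inspection
[J,PSF] = deconvblind({double(D1(:,:,1))},{psf0},1);
for k = 1:n_rounds
    [J,PSF] = deconvblind(J,PSF,1); % restart from previous state
    subplot(2,2,3), imagesc(J{2}), axis(ax1)
    subplot(2,2,4), imagesc(PSF{2})
    drawnow
end
psf_est = PSF{2};                   % final PSF estimate
```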
HTH

8 Comments

Hi, what do you mean by Gibbs-type edge enhancement? Thanks
The Gibbs phenomenon (see the Wikipedia explanation) is something that typically appears when deconvolving images with sharp edges in intensity - because the operation is a high-frequency enhancement operation.
Hi Bjorn,
in this case PSF does not include any input information from the ESF measurement. Is it possible to include the edge in the initial guess for the blind deconvolution?
Thanks,
Mirta
Sure, you can work your way from the ESF towards an estimate of the PSF.
The LSF is just a convolution of the PSF with a line,
the ESF is just a convolution of the LSF with a Heaviside step function.
If you walk backwards you get:
LSF(x) = d(ESF(x))/dx
and assuming you have a rotationally symmetric PSF you get:
PSF(r) = -(1/pi) * integral from r to infinity of (dLSF(x)/dx) / sqrt(x^2 - r^2) dx
i.e. the PSF is the inverse Abel transform of the LSF. Have a look on the File Exchange for Abel-transform contributions. This rigorous approach might be overzealous - but it is a good exercise - since the PSF typically has a width of a couple of pixels, the extra "accuracy" of this approach is ground to dust on the pixel-limited resolution. Therefore you might just as well try a 2-D Gaussian with a slightly smaller width than the LSF.
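The relations above can be checked on synthetic data. A minimal sketch (assuming a 1-D Gaussian LSF, which is what a rotationally symmetric Gaussian PSF produces): integrating the LSF gives the ESF, and differentiating the ESF recovers the LSF.

```matlab
% Sketch: build a Gaussian LSF, integrate it to an ESF, then "walk
% backwards" by differentiating the ESF and compare with the original.
sigma = 2;                            % assumption: LSF width in pixels
x = -15:15;
lsf_true = exp(-x.^2/(2*sigma^2));
lsf_true = lsf_true/sum(lsf_true);    % normalize to unit area
esf = cumsum(lsf_true);               % ESF = LSF convolved with a step
lsf_rec = gradient(esf);              % d(ESF)/dx recovers the LSF
plot(x, lsf_true, 'o-', x, lsf_rec, 'x-')
legend('true LSF','gradient of ESF')
```

Apart from endpoint effects of the finite-difference gradient, the two curves coincide.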
Thanks,
In my case I have a 100-micron-resolution imaging machine and the edge measurement (3 samples) as attached. For now we have had a measurement of the PSF from a similar machine, which was non-symmetric, that we used as an input to the blind algorithm. I'm working on, as you described, getting the PSF estimate from the edge measurement. If you have any code to share, I would be happy to see it.
I disagree with your notion of "absolutely useless". Being an industry-standard metric (for as long as I can recall), the MTF is used to design optical systems. It is the best way to link real-world imaging requirements to a mathematical metric. Having frequency-domain requirements clarifies to both the customer and the designer what the image quality of an optical system should and will be. Subsequently the system, once produced, can be measured on an MTF bench and traced to requirements. It is a quality metric that has been used in the optics industry for decades. So maybe not worthless.
@Thomas Carter, sure, it is a widely used standard and I deliberately used harsh wording. Since the MTF and the PSF are a Fourier-transform pair (when the conditions apply), both are equally suitable to describe the imaging quality, and to get at the blurring effect of an imaging system I, for one, have a far easier task looking at the PSF. Since the shift-invariance condition typically breaks down for wide-field-of-view imaging systems, the MTF becomes ill-defined. For such imaging systems an MTF description is plain wrong, while a sequence of PSFs over the image will still be descriptive. In my (very stubborn) opinion the industry chose the wrong standard for this, but I'm not naive enough to imagine that I will change that standard.
Since I have only worked in image data analysis, I am very intrigued as to why the frequency-domain requirements are so preferable to the spatial-domain counterpart in system design and evaluation.
Bjorn,
There are a few reasons I can think of right off the top of my head.
Preamble: The edge spread function is essentially how optical systems were tested originally. The signal-to-noise ratio is significantly higher than with a pinhole or a slit. With the large performance improvement of CMOS technology, pinhole targets are now more common for visible optical systems; these get you both transverse and sagittal MTF at once. Pinholes are not as convenient in the SWIR, MWIR & LWIR, as the signal-to-noise ratio is quite a bit lower.
  1. Defense & commercial imaging project requirements are often called out in terms of MTF; subsequently, testing is performed to that end.
  2. At its heart, though, the performance of an imaging system with digital image output is more aptly described in terms of frequency. The system cutoff and, more appropriately, the Nyquist frequency are calculated from the sensor pixel size. This provides a specific frequency that can be tied to system performance - say, 40% modulation at 100 cyc/mm (5 um pixels).
  3. The informed customer may have a specific object critical dimension (CD) they require to be detected. This may come from some studied algorithmic performance, or it may simply be the Vernier acuity of the human eye. The system magnification can then be established such that the CD is imaged onto the sensor over, say, three 5 um pixels. The sinusoidal modulation at this frequency, 1/(2*(3*0.005)) = 33.3 cyc/mm, should then be, say, 50%, which becomes the system image-quality metric.
The customer will have a hard time defining what the appropriate PSF should be to see their object CD. Also, the Strehl ratio is a more widely used image-quality metric when we get into use cases such as astronomical telescopes, i.e. pinholes in the sky.
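The arithmetic in points 2 and 3 can be sketched directly (the pixel pitch and the three-pixel CD sampling are the values assumed in those points):

```matlab
% Sketch: Nyquist frequency and critical-dimension (CD) frequency
% for a sensor with 5 um pixels, as in points 2 and 3 above.
pixel_mm = 0.005;                    % 5 um pixel pitch, in mm
f_nyquist = 1/(2*pixel_mm);         % Nyquist: 100 cyc/mm
n_pix_cd = 3;                        % image the CD over three pixels
f_cd = 1/(2*(n_pix_cd*pixel_mm));   % CD frequency: 33.3 cyc/mm
fprintf('Nyquist: %.1f cyc/mm, CD frequency: %.1f cyc/mm\n', ...
        f_nyquist, f_cd)
```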
Regards.


The Image Processing Toolbox has a function called measureSharpness that measures the Spatial Frequency Response (SFR) and MTF from slanted edges. It was initially supported only for Imatest eSFR charts.
Starting in R2024a, this function can be used on any image that has a slanted edge. The example in the documentation shows how this can be done:
For accurate MTF measurement, the slanted edges must not have a slope of more than 5 degrees from the X or Y axis.
Hope this helps!

Asked: 13 May 2020

Answered: 17 Nov 2025
