(Rephrased) I am starting to play with the Deep Learning Toolbox and deepNetworkDesigner. As an example of what I'd like to do: take a CNN classifier that has already been trained on 30x30 input images, and use it to classify every 30x30 sub-block of a 400x400 image A.
The naive way to do this would be to loop over the sub-blocks of A and feed them to the CNN one at a time, but that is inefficient. The more efficient technique I have seen recommended is to convert the CNN to a fully convolutional network, which means adjusting the final layers as follows:
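To make the inefficiency concrete, the naive loop can be sketched in plain Python with scaled-down sizes (an 8x8 image and 3x3 blocks instead of 400x400 and 30x30); `score_block` is a made-up stand-in for the trained CNN, not anything from the toolbox:

```python
def score_block(block):
    # Dummy "classifier": returns the mean pixel value of the block.
    return sum(sum(row) for row in block) / (len(block) * len(block[0]))

H, W, k = 8, 8, 3  # image height/width and block size (scaled down)
image = [[(r * W + c) % 7 for c in range(W)] for r in range(H)]

scores = []
for r in range(H - k + 1):          # top-left row of each sub-block
    row_scores = []
    for c in range(W - k + 1):      # top-left column of each sub-block
        block = [image[r + i][c:c + k] for i in range(k)]
        row_scores.append(score_block(block))
    scores.append(row_scores)

# One classifier call per sub-block. For a 400x400 image and 30x30
# blocks this loop would run (400 - 30 + 1)**2 = 137,641 times.
print(len(scores), len(scores[0]))
```

Each of those calls redoes convolution work that overlapping sub-blocks share, which is what the fully convolutional conversion below avoids.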
(1) Converting the fully-connected layer to a convolutional layer whose weights form a 30x30 filter, with stride 1 and padding [0,0].
(2) Applying the softmax operation pixel-wise to the output of (1).
After doing the above, every layer in the network is a shift-invariant operation, so the network should be able to process input images of any size. If I input a 400x400 image A, the output of the network should be an N-channel image of size 371x371 (since 400 - 30 + 1 = 371), where each pixel contains the N class probabilities of a particular 30x30 sub-block.
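The equivalence behind step (1) can be checked numerically in a small pure-Python sketch (again with made-up weights and scaled-down sizes, k=3 and a 6x6 image): a fully-connected layer over a flattened k x k block, and the same weights reshaped into a k x k convolution filter slid over the image, produce identical values at every position.

```python
k, H = 3, 6  # filter size and image side (scaled down from 30 and 400)

# Fully-connected layer for one output class: a length-k*k weight
# vector plus a bias (values are arbitrary, for illustration only).
w_fc = [0.1 * i for i in range(k * k)]
b = 0.5
# The same weights reshaped into a k x k convolution filter.
w_conv = [[w_fc[i * k + j] for j in range(k)] for i in range(k)]

image = [[((r * H + c) * 7) % 5 for c in range(H)] for r in range(H)]

def fc(block):
    # FC layer: flatten the block and take a dot product with w_fc.
    v = [x for row in block for x in row]
    return b + sum(wi * xi for wi, xi in zip(w_fc, v))

def conv_at(r, c):
    # Convolution output at position (r, c), stride 1, no padding.
    return b + sum(w_conv[i][j] * image[r + i][c + j]
                   for i in range(k) for j in range(k))

conv_out = [[conv_at(r, c) for c in range(H - k + 1)]
            for r in range(H - k + 1)]

# Each output pixel equals the FC layer applied to that sub-block.
for r in range(H - k + 1):
    for c in range(H - k + 1):
        block = [image[r + i][c:c + k] for i in range(k)]
        assert abs(conv_out[r][c] - fc(block)) < 1e-12

# Output is (H - k + 1) x (H - k + 1): 4x4 here, 371x371 for
# a 400x400 input with k = 30.
print(len(conv_out), len(conv_out[0]))
```

A softmax applied per output pixel (step 2) would then turn each of those values into class probabilities, matching the per-block softmax of the original classifier.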
I am wondering: is it possible to make the kind of adjustments described above to an already-trained CNN?