How to convert 2D layer to 1D and from 1D to 2D?

16 views (last 30 days)
Grzegorz Klosowski
Grzegorz Klosowski on 4 Sep 2022
Answered: Mandar on 13 Dec 2022
I want to create an autoencoder architecture with a 2D (matrix) input and output, but inside I need a 1D fullyConnectedLayer as the latent layer. How can I do this? I cannot find suitable "bricks" for it in the Layer Library of Deep Network Designer.

Answers (2)

David Willingham
David Willingham on 6 Sep 2022
Can you describe a little more about your application? I.e., what is your input: a matrix of signals, or an image?
  2 Comments
Grzegorz Klosowski
Grzegorz Klosowski on 6 Sep 2022
The input and output are each a 48x48 matrix: a single-channel image consisting of real numbers.
Grzegorz Klosowski
Grzegorz Klosowski il 6 Set 2022
Edited: Grzegorz Klosowski on 6 Sep 2022
To be precise: the same 48x48 matrix at the input and output, with a 1D layer in the middle. I need to compress the image this way, i.e., with an autoencoder. In the latent layer I want a vector of, say, 256 neurons. How do I change the dimensionality from 2D to 1D, and back from 1D to 2D, inside a neural network (an autoencoder)?
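One way to build this in code (rather than in Deep Network Designer) is a layer array: fullyConnectedLayer flattens a 2D image input automatically, and a functionLayer (R2021b and later) can reshape the 1D output back to 2D. This is a minimal sketch under those assumptions; the layer names and the use of functionLayer as an ad-hoc reshape are illustrative, not from the original thread:

```matlab
% Sketch of a fully connected autoencoder over 48x48 single-channel images.
% Assumes Deep Learning Toolbox, R2021b+ (for functionLayer); names illustrative.
inputSize  = [48 48 1];
latentSize = 256;

layers = [
    imageInputLayer(inputSize, "Normalization", "none", "Name", "in")

    % Encoder: fullyConnectedLayer flattens the 48x48x1 input to a vector
    fullyConnectedLayer(latentSize, "Name", "latent")   % 1D latent layer
    reluLayer("Name", "latent_relu")

    % Decoder: expand back to 48*48 = 2304 values ...
    fullyConnectedLayer(prod(inputSize), "Name", "expand")

    % ... then reshape the 2304-by-(batch) output back to 48x48x1x(batch)
    functionLayer( ...
        @(X) dlarray(reshape(stripdims(X), 48, 48, 1, []), "SSCB"), ...
        "Formattable", true, "Name", "to2d")

    regressionLayer("Name", "out")
];
```

Trained as image-to-image regression with the input as its own target, this gives a 2D-in, 2D-out network with a 256-neuron 1D bottleneck.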


Mandar on 13 Dec 2022
I understand that you want to make an autoencoder network with a specific hidden (latent) layer size.
You may refer to the following documentation, which shows how to create an autoencoder network with a specific hidden layer size, learn latent features, and stack such latent layers to build a stacked autoencoder with multiple hidden layers that learns effective latent features.
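For example, with the trainAutoencoder function (which that documentation covers), the 2D-to-1D step is done by flattening each image into a column vector before training, and the 1D-to-2D step by reshaping the reconstruction back afterwards. A minimal sketch with made-up data:

```matlab
% Sketch using trainAutoencoder from Deep Learning Toolbox; data is synthetic.
X     = rand(48, 48, 1000);        % 1000 illustrative 48x48 "images"
Xcols = reshape(X, 48*48, []);     % flatten: 2304-by-1000, one column per image

hiddenSize = 256;                  % size of the latent layer
autoenc = trainAutoencoder(Xcols, hiddenSize);

Z    = encode(autoenc, Xcols);     % 256-by-1000 latent codes
Xrec = decode(autoenc, Z);         % 2304-by-1000 reconstructions
Ximg = reshape(Xrec, 48, 48, []);  % back to 48x48 images
```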



