Output and Input of MLP neural network

6 views (last 30 days)
fatemeh hosseini on 11 Jan 2021
Answered: Aastha on 19 Jun 2025
I am using an MLP neural network. I have 200 experimental samples for both the inputs and the outputs. In most examples I have found, the input-layer variables are one-dimensional, for example:
input = x
output = y
x = [1 2 3 4 5 6 7 8 9 10]
y = [100 200 300 400 500 600 700 800 900 1000]
My case is different, and I do not know how to describe my data as inputs and outputs. In one experimental data set, the inputs are 2 scalars (T, P) and the outputs are 3 mole-fraction vectors (a 10x3 matrix) plus 4 scalar numbers. That is:
Input = [T P]
Output = [x y u] [L V S R]
T, P, L, S, V, and R are scalars, but x, y, and u are 10x1 vectors.
How can I describe the input and output?
I appreciate any guidance.

Answers (1)

Aastha
Aastha on 19 Jun 2025
It is my understanding that you would like to define an MLP neural network that performs regression to predict the output variables "x", "y", "u", "L", "S", "V" and "R" from the input variables "T" and "P". This can be implemented using the dlnetwork object in MATLAB, which allows you to specify a custom neural network architecture. The dlnetwork object is constructed using a layers array that defines the sequence of network layers.
According to the information provided in the question, the input is a 2x1 vector consisting of the variables "T" and "P", and the output is a 34x1 vector formed by concatenating "x", "y", and "u" (10 elements each) with the scalars "L", "V", "S", and "R" (3*10 + 4 = 34).
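For a single experimental sample, the seven outputs can be stacked into one 34x1 target vector as sketched below (the variable names follow the question; the numeric values are placeholders):

```matlab
% One sample's outputs (placeholder values)
x = rand(10,1); y = rand(10,1); u = rand(10,1);  % 10x1 mole-fraction vectors
L = 0.4; V = 0.6; S = 1.2; R = 2.5;              % scalars

% Stack into a single 34x1 target vector
target = [x; y; u; L; V; S; R];
size(target)   % returns [34 1]
```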
You may refer to the below MATLAB code snippet to construct the "layers" array:
% Define the layers array
layers = [
    featureInputLayer(2, 'Name', 'input')      % 2 input features: T and P
    fullyConnectedLayer(64, 'Name', 'fc1')     % Hidden layer 1
    reluLayer('Name', 'relu1')
    fullyConnectedLayer(64, 'Name', 'fc2')     % Hidden layer 2
    reluLayer('Name', 'relu2')
    fullyConnectedLayer(34, 'Name', 'fc_out')  % Output layer with 34 outputs
];

% Create the dlnetwork object
net = dlnetwork(layers);
This MATLAB code snippet defines a simple feedforward MLP with two hidden layers, each containing 64 neurons followed by ReLU activation functions. The final fully connected layer maps to the 34 output variables. You can modify the number of layers or neurons to suit the complexity of your task and the characteristics of your dataset.
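Once the network is defined, the 200 samples can be arranged as a 200x2 predictor matrix and a 200x34 target matrix and trained with the trainnet function using a mean-squared-error loss. The sketch below is illustrative: "X" and "Y" stand in for your experimental data (random values here), and the training options are assumptions to tune for your problem.

```matlab
% Assemble the dataset (replace the random values with your 200 samples;
% rows are observations)
numSamples = 200;
X = rand(numSamples, 2);   % 200x2 matrix of [T P] inputs
Y = rand(numSamples, 34);  % 200x34 matrix of concatenated outputs

% Training options (values are assumptions; tune for your data)
options = trainingOptions('adam', ...
    'MaxEpochs', 200, ...
    'MiniBatchSize', 32, ...
    'Verbose', false);

% Train with mean-squared-error loss for regression
net = trainnet(X, Y, net, 'mse', options);

% Predict for one new (T, P) pair; the values 300 and 1.5 are placeholders
dlX = dlarray([300; 1.5], "CB");  % channel (feature) x batch
yPred = extractdata(predict(net, dlX));  % 34x1 vector

% Split the 34 outputs back into the original variables
x = yPred(1:10);  y = yPred(11:20);  u = yPred(21:30);
L = yPred(31); V = yPred(32); S = yPred(33); R = yPred(34);
```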
For more details on the dlnetwork object and the types of layers it supports, please refer to the MathWorks documentation for dlnetwork and the Deep Learning Toolbox layer list.
I hope this is helpful!
