- "adam" optimizer
- "MiniBatchSize", 256
- "ExecutionEnvironment", "auto" or "gpu" if available
Build time-delay neural network using Deep Learning Toolbox
I want to build my own deep neural network that accepts two inputs and estimates two outputs. The input signals of the first layer are formed by tapped delay lines to capture the memory effect. I can design such a network as a time-delay neural network ("timedelaynet"), but in that type of network I cannot find the leaky rectified linear unit (ReLU) activation function or use Adam as the optimization algorithm.
Now I want to use the Deep Learning Toolbox to design this model with the MSE loss function and Adam as the optimizer with a mini-batch size of 256. The activation function is the leaky ReLU with a slope of 0.01 for negative inputs. Any recommendations on how to build such a model?

Answers (1)
Sameer
on 4 Jun 2025
To build a deep neural network that handles two inputs and two outputs with memory (via tapped delay lines), and that also supports leaky ReLU activations and the Adam optimizer, the Deep Learning Toolbox offers the needed flexibility through its layer-array and "dlnetwork" workflows.
Instead of using the built-in "timedelaynet", which has limited support for custom activation functions and optimizers, the network can be designed using the "layerGraph" or "dlnetwork" approach with these steps:
1. Prepare the input: Build tapped-delay (lagged) versions of the two input signals (the I and Q components) manually, e.g., with "buffer" or simple indexing, as sketched after this list.
2. Create input layers: Use two separate "sequenceInputLayer" objects (merged inside the network with a "concatenationLayer") or a single "featureInputLayer" that takes the concatenated lagged inputs, depending on how the data is structured.
3. Design hidden layers: Use "fullyConnectedLayer" followed by "leakyReluLayer". The negative slope is passed directly as the first argument, so "leakyReluLayer(0.01)" gives the requested 0.01 scale for negative inputs; no custom layer is needed.
4. Output layer: Use a "fullyConnectedLayer" with 2 outputs followed by a "regressionLayer" to minimize MSE.
5. Training options: Use "trainingOptions" with:
- "adam" optimizer
- "MiniBatchSize", 256
- "ExecutionEnvironment", "auto" (or "gpu" if available)
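As a sketch of step 1 (variable names here are illustrative, not from the original post; "buffer" from the Signal Processing Toolbox is an alternative), lagged features can be built with plain indexing:
numTaps = 4;                               % assumed delay-line length per input
N = numel(I);                              % I and Q are column vectors of samples
X = zeros(N - numTaps + 1, 2*numTaps);     % one row per time step, 8 lagged features
for k = 1:numTaps
    X(:, k)           = I(numTaps-k+1 : N-k+1);   % current and past I samples
    X(:, numTaps + k) = Q(numTaps-k+1 : N-k+1);   % current and past Q samples
end
Y = targets(numTaps:end, :);               % align the two-column target matrix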
Example sketch:
layers = [
    featureInputLayer(8)        % 4-tap delay line for each of the I and Q inputs
    fullyConnectedLayer(10)
    leakyReluLayer(0.01)        % leaky ReLU with a 0.01 negative slope
    fullyConnectedLayer(10)
    leakyReluLayer(0.01)
    fullyConnectedLayer(2)      % two outputs
    regressionLayer             % MSE loss for regression
    ];
options = trainingOptions('adam', ...
    'MiniBatchSize', 256, ...
    'MaxEpochs', 50, ...
    'Shuffle', 'every-epoch', ...
    'ExecutionEnvironment', 'auto', ...   % uses a GPU automatically when available
    'Plots', 'training-progress', ...
    'Verbose', false);
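With the lagged feature matrix and two-column targets from the earlier sketch (the illustrative X and Y), training reduces to a single call, since "trainNetwork" accepts numeric feature data directly when the network starts with a "featureInputLayer":
net = trainNetwork(X, Y, layers, options);
Ypred = predict(net, Xnew);   % Xnew must be built with the same tapped-delay layout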
This approach gives full control over architecture, activation, and training behavior, unlike traditional time-delay networks.
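If the two inputs should instead stay separate (the two-input option in step 2), a "dlnetwork" with a "concatenationLayer" can merge them inside the network. A minimal sketch with illustrative layer names; note that a "dlnetwork" has no "regressionLayer", so it is trained with a custom loop or with "trainnet" in newer releases:
lg = layerGraph();
lg = addLayers(lg, featureInputLayer(4, 'Name', 'inI'));   % 4 taps of I
lg = addLayers(lg, featureInputLayer(4, 'Name', 'inQ'));   % 4 taps of Q
lg = addLayers(lg, [
    concatenationLayer(1, 2, 'Name', 'cat')                % merge along the channel dimension
    fullyConnectedLayer(10)
    leakyReluLayer(0.01)
    fullyConnectedLayer(2, 'Name', 'out')]);               % two regression outputs
lg = connectLayers(lg, 'inI', 'cat/in1');
lg = connectLayers(lg, 'inQ', 'cat/in2');
net = dlnetwork(lg);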
For more details, please refer to the MathWorks documentation for Deep Learning Toolbox.
Hope this helps!