Best Way to Save Neural Network States as it Trains?

I'm trying to find the optimal number of training iterations for my neural network. I'm already varying the number of fully connected layers and hidden layers. I would also like to analyze how performance changes as the number of training iterations increases, but I don't want to overfit, and I don't want to have to go back and re-train the network each time just to test more iterations.
I don't want to rely only on validation accuracy; I also want to look at test accuracy against the last 15% of my dataset, which I currently evaluate only after the network has finished training.
The data is currently split [70, 15, 15] (train/validation/test).
So, for example, I might check against my test set every 1,000 iterations and save the net in that state.
I'm thinking the best way to do this may be to save with checkpoints, with each checkpoint file named as a function of the iteration/time so it doesn't overwrite the previous one. I can think of ways to do this, but I would prefer a very straightforward approach. Or maybe there is already a standard practice for this? Something like the sketch below is what I have in mind.
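This is a rough sketch using the 'CheckpointPath' option in trainingOptions (XTrain, YTrain, XVal, YVal, layers, and the folder name are placeholders from my setup). As far as I can tell, trainNetwork names each checkpoint file with the iteration number and a timestamp, so it never overwrites an earlier one; I believe checkpoints are written once per epoch by default, and I think newer releases also have a 'CheckpointFrequency' option, but I haven't confirmed that.

% Placeholder names: XTrain/YTrain (70% train), XVal/YVal (15% validation), layers
checkpointDir = fullfile(pwd, 'checkpoints');
if ~exist(checkpointDir, 'dir')
    mkdir(checkpointDir);
end

options = trainingOptions('sgdm', ...              % solver (see the edit note below about 'adam')
    'MaxEpochs', 50, ...
    'ValidationData', {XVal, YVal}, ...
    'CheckpointPath', checkpointDir, ...            % writes net_checkpoint__<iteration>__<timestamp>.mat
    'Verbose', true);

net = trainNetwork(XTrain, YTrain, layers, options);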
The checkpoint feature I'm referring to is documented here: Matlab Checkpoints
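Then, once training has finished, I would load each saved checkpoint and score it against the held-out test set, something like this (checkpointDir, XTest, and YTest are the same kind of placeholders as above, and I'm assuming a classification network; if I'm reading the docs right, each checkpoint .mat file stores the network in a variable called net):

files   = dir(fullfile(checkpointDir, 'net_checkpoint__*.mat'));
testAcc = zeros(numel(files), 1);
iters   = zeros(numel(files), 1);
for k = 1:numel(files)
    % each checkpoint file stores the network in a variable called net
    data = load(fullfile(checkpointDir, files(k).name), 'net');
    YPred = classify(data.net, XTest);
    testAcc(k) = mean(YPred == YTest);
    % the first number in the file name is the iteration the checkpoint was saved at
    iters(k) = str2double(regexp(files(k).name, '\d+', 'match', 'once'));
end
[iters, order] = sort(iters);
testAcc = testAcc(order);
plot(iters, testAcc, '-o'); xlabel('Iteration'); ylabel('Test accuracy');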
EDIT NOTE: After attempting to use checkpoints, I found that networks trained with the 'adam' solver cannot use checkpoints.
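The workaround I'm now leaning towards is a custom training loop with a dlnetwork and adamupdate, where I can save a snapshot at whatever iteration interval I want. This is only a very rough sketch; dlnet, mbq (a minibatchqueue over the training data), numEpochs, checkpointDir, and the modelLoss helper are all placeholders I would still have to fill in:

averageGrad   = [];
averageSqGrad = [];
iteration     = 0;

for epoch = 1:numEpochs
    shuffle(mbq);                                  % mbq = minibatchqueue over the training data
    while hasdata(mbq)
        iteration = iteration + 1;
        [dlX, dlT] = next(mbq);

        % modelLoss would return the loss and the gradients w.r.t. dlnet.Learnables
        [loss, gradients] = dlfeval(@modelLoss, dlnet, dlX, dlT);

        % manual Adam step
        [dlnet, averageGrad, averageSqGrad] = adamupdate(dlnet, gradients, ...
            averageGrad, averageSqGrad, iteration);

        % snapshot every 1000 iterations, named by iteration so nothing is overwritten
        if mod(iteration, 1000) == 0
            save(fullfile(checkpointDir, sprintf('snapshot_iter_%07d.mat', iteration)), 'dlnet');
        end
    end
end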

Answers (0)
