How can I convert 1D sparse data into learnable format for machine learning?

I want to do sequence-to-sequence regression where the inputs are sparse 1D arrays and the outputs are 1D signals. I tried an LSTM network, but the training loss just fluctuated instead of decreasing. I suspect this might be because of the sparsity of the data. Is there a way to deal with this problem, e.g. by converting the sparse dataset into some format the network can actually learn from?
Thanks in advance.
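Since no answers have been posted, here is a minimal sketch (in Python/NumPy for illustration) of one common workaround for this kind of problem: spread each isolated spike in the sparse sequence over neighboring time steps with a small Gaussian kernel, so the network receives informative values at many more positions. The `densify` name, kernel width, and radius are illustrative assumptions, not something from this thread.

```python
import numpy as np

def densify(sparse_seq, sigma=2.0, radius=6):
    """Smooth a mostly-zero 1D sequence with a normalized Gaussian kernel.

    Each spike is spread over ~2*radius neighboring time steps, giving the
    model nonzero features (and gradients) at far more positions, while the
    total mass of the signal is preserved (kernel sums to 1).
    Sketch only: sigma/radius would need tuning for real data.
    """
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(sparse_seq, kernel, mode="same")

# Example: a 50-step sequence with two isolated spikes.
seq = np.zeros(50)
seq[10] = 1.0
seq[30] = -0.5
dense = densify(seq)
```

The same idea carries over directly to MATLAB (e.g. `conv(seq, kernel, 'same')` before building the training sequences). Alternatives worth trying if smoothing is not appropriate: normalize inputs per sequence, or re-encode each time step as a small feature vector (value plus a binary "event present" flag) so zeros and missing events are distinguishable.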

Answers (0)



