Imbalanced Audio Dataset for Deep Learning Classification
Hi, I am trying to use audio data from interviews for binary classification by converting my dataset into spectrograms before feeding them into a CNN. Firstly, the recordings have different durations (roughly 7-30 min), and the dataset is imbalanced. I am aware of techniques such as SMOTE and oversampling of minority classes, but I am lost on how to oversample my minority class. Should I convert to spectrograms before oversampling, and are there any recommended ways to do it? Thanks!
Vineet Joshi on 30 Jul 2021
The following documentation discusses data augmentation for audio data. It includes examples of building custom augmentation pipelines and applying transformations such as pitch shifting, time shifting, and time stretching. A common approach is to augment the minority class at the waveform level first, then convert every clip (original and augmented) to a spectrogram, so the augmentations produce genuinely different spectrograms rather than duplicates.
Hope this helps you.
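To make the idea concrete, here is a minimal waveform-level sketch in Python using only NumPy (the MATLAB workflow would typically use `audioDataAugmenter` from Audio Toolbox instead). The function names (`time_shift`, `time_stretch`, `oversample_minority`) and the shift/stretch ranges are illustrative assumptions, not part of any library API; the stretch is a naive resampling that does not preserve pitch. Spectrogram conversion would happen after this step.

```python
import numpy as np

def time_shift(wave, max_shift, rng):
    """Circularly shift the waveform by a random number of samples."""
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(wave, shift)

def time_stretch(wave, rate):
    """Naive time stretch via linear-interpolation resampling.

    rate > 1 shortens the clip, rate < 1 lengthens it. This changes
    pitch too; a real pipeline would use a phase-vocoder stretch.
    """
    n_out = int(len(wave) / rate)
    old_idx = np.arange(len(wave))
    new_idx = np.linspace(0, len(wave) - 1, n_out)
    return np.interp(new_idx, old_idx, wave)

def oversample_minority(waves, labels, minority_label,
                        max_shift=1600, seed=0):
    """Append augmented copies of minority-class clips until the
    two classes have equal counts (binary classification assumed)."""
    waves, labels = list(waves), list(labels)
    rng = np.random.default_rng(seed)
    minority = [w for w, l in zip(waves, labels) if l == minority_label]
    n_minor = len(minority)
    n_major = len(labels) - n_minor
    while n_minor < n_major:
        # Pick a random minority clip and apply random augmentations.
        src = minority[int(rng.integers(len(minority)))]
        aug = time_shift(src, max_shift, rng)
        aug = time_stretch(aug, rate=float(rng.uniform(0.9, 1.1)))
        waves.append(aug)
        labels.append(minority_label)
        n_minor += 1
    return waves, labels
```

After balancing, each clip (original or augmented) is converted to a spectrogram independently, so the CNN sees distinct inputs for every augmented copy rather than exact duplicates of the minority samples.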