This folder contains some popular recipes for the WSJ0-Mix task (2/3 sources).
- This recipe supports training several source separation models on WSJ0-2Mix, including SepFormer, RE-SepFormer, DPRNN, Conv-TasNet, and DPTNet.
A web demo is integrated into Hugging Face Spaces with Gradio; see the Speech Separation demo.
Before proceeding, ensure you have installed the necessary additional dependencies by running the following command in your terminal:

```
pip install -r ../extra_requirements.txt
```
To run the recipe:

```
python train.py hparams/sepformer.yaml --data_folder yourpath/wsj0-mix/2speakers
```

Note that during training we print the negative SI-SNR, as we treat this value as the loss.
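For reference, the scale-invariant SNR behind that loss can be sketched as below. This is a minimal NumPy sketch, not the recipe's actual implementation, and the toy signals are purely illustrative:

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant SNR in dB between a 1-D estimate and a 1-D target."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target to get the scale-invariant reference
    s_target = np.dot(estimate, target) * target / (np.dot(target, target) + eps)
    e_noise = estimate - s_target
    return 10 * np.log10(np.sum(s_target ** 2) / (np.sum(e_noise ** 2) + eps))

t = np.sin(np.linspace(0, 100, 8000))        # toy "clean" source
noisy = t + 0.1 * np.random.randn(8000)      # toy separator output
loss = -si_snr(noisy, t)                     # training minimizes the negative SI-SNR
```

Because the target is rescaled by the projection, the metric is invariant to the overall gain of the estimate, which is why it is preferred over plain SNR for separation.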
If you want to run it on the test sets only, add the `--test_only` flag:

```
python train.py hparams/sepformer.yaml --data_folder yourpath/wsj0-mix/2speakers --test_only
```

- The best way to create the datasets is using the original MATLAB script. This script and the associated metadata can be obtained through the following link.
- The dataset creation script assumes that the original WSJ0 files in the sphere format have already been converted to .wav.
- This recipe supports dynamic mixing, where the training data is created on the fly in order to obtain new utterance combinations during training. For this, you need the original WSJ0 dataset (available through LDC at https://catalog.ldc.upenn.edu/LDC93S6A).
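The idea behind dynamic mixing can be sketched as follows. This is a hypothetical minimal sketch, not the recipe's implementation: `sources` stands in for loaded WSJ0 si_tr_s waveforms, and the gain range and target length are illustrative:

```python
import random
import numpy as np

def dynamic_mix(sources, num_spks=2, target_len=32000):
    """Build a fresh mixture from randomly chosen, randomly scaled sources."""
    picked = random.sample(sources, num_spks)
    scaled = []
    for s in picked:
        if len(s) < target_len:                        # pad short utterances
            s = np.pad(s, (0, target_len - len(s)))
        else:                                          # random crop of long ones
            start = random.randint(0, len(s) - target_len)
            s = s[start:start + target_len]
        gain = 10 ** (random.uniform(-5.0, 5.0) / 20)  # random relative level (dB)
        scaled.append(gain * s)
    targets = np.stack(scaled)       # (num_spks, target_len) separation targets
    mixture = targets.sum(axis=0)    # the input fed to the separator
    return mixture, targets
```

Since speakers, segments, and gains are resampled at every call, the model never sees the exact same mixture twice, which is what gives dynamic mixing its augmentation effect.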
- You can listen to example results on the test set of WSJ0-2/3Mix with SepFormer through this page.
- Here are the SI-SNRi results (in dB) on the test set of WSJ0-2/3Mix with SepFormer:

| SepFormer, WSJ0-2Mix | SI-SNRi (dB) |
|---|---|
| NoAugment | 20.4 |
| DynamicMixing | 22.4 |

| SepFormer, WSJ0-3Mix | SI-SNRi (dB) |
|---|---|
| NoAugment | 17.6 |
| DynamicMixing | 19.8 |

| RE-SepFormer, WSJ0-2Mix | SI-SNRi (dB) |
|---|---|
| DynamicMixing | 18.6 |

| SkiM, WSJ0-2Mix | SI-SNRi (dB) |
|---|---|
| DynamicMixing | 18.1 |
Each epoch takes about 2 hours for WSJ0-2Mix and WSJ0-3Mix (DynamicMixing) on an NVIDIA V100 (32GB).
Pretrained models for SepFormer on the WSJ0-2Mix, WSJ0-3Mix, and WHAM! datasets can be found on HuggingFace:
- The output folder (with logs and checkpoints) for SepFormer (hparams/sepformer.yaml) can be found here.
- The output folder (with logs and checkpoints) for RE-SepFormer (hparams/resepformer.yaml) can be found here.
- The output folder (with logs and checkpoints) for Conv-TasNet (hparams/convtasnet.yaml) can be found here.
- The output folder (with logs and checkpoints) for dual-path RNN (hparams/dprnn.yaml) can be found here.
- The output folder (with logs and checkpoints) for SkiM (hparams/skim.yaml) can be found here.
- The output folder (with logs and checkpoints) for SepFormer with a Conformer block as the intra model (hparams/sepformer-conformerintra.yaml) can be found here.
- WSJ0-2Mix training without dynamic mixing:

```
python train.py hparams/sepformer.yaml --data_folder yourpath/wsj0-mix/2speakers
```

- WSJ0-2Mix training with dynamic mixing:

```
python train.py hparams/sepformer.yaml --data_folder yourpath/wsj0-mix/2speakers --base_folder_dm yourpath/wsj0/si_tr_s --dynamic_mixing True
```

- WSJ0-3Mix training without dynamic mixing:

```
python train.py hparams/sepformer.yaml --data_folder yourpath/wsj0-mix/3speakers --num_spks 3
```

- WSJ0-3Mix training with dynamic mixing:

```
python train.py hparams/sepformer.yaml --data_folder yourpath/wsj0-mix/3speakers --num_spks 3 --base_folder_dm yourpath/wsj0/si_tr_s --dynamic_mixing True
```
You can run the following command to train the model using Distributed Data Parallel (DDP) with 2 GPUs:

```
torchrun --nproc_per_node=2 train.py hparams/sepformer.yaml --data_folder /yourdatapath
```

You can add the other runtime options as appropriate. For more complete information on multi-GPU usage, take a look at our documentation.
Please cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrainV1,
  title={Open-Source Conversational AI with SpeechBrain 1.0},
  author={Mirco Ravanelli and Titouan Parcollet and Adel Moumen and Sylvain de Langen and Cem Subakan and Peter Plantinga and Yingzhi Wang and Pooneh Mousavi and Luca Della Libera and Artem Ploujnikov and Francesco Paissan and Davide Borra and Salah Zaiem and Zeyu Zhao and Shucong Zhang and Georgios Karakasidis and Sung-Lin Yeh and Pierre Champion and Aku Rouhe and Rudolf Braun and Florian Mai and Juan Zuluaga-Gomez and Seyed Mahed Mousavi and Andreas Nautsch and Xuechen Liu and Sangeet Sagar and Jarod Duret and Salima Mdhaffar and Gaelle Laperriere and Mickael Rouvier and Renato De Mori and Yannick Esteve},
  year={2024},
  eprint={2407.00463},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2407.00463},
}

@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```