Light-weight speech separation based on dual-path attention and recurrent neural network
Abstract
A light-weight speech separation algorithm based on a dual-path attention mechanism and a recurrent neural network is proposed. First, optional branch structures built on the dual-path attention mechanism and a dual-path recurrent network are used to model the speech signals, which facilitates the extraction of deep feature information and reduces the number of training parameters. Second, a sub-band processing approach is introduced to alleviate the computational burden. Experimental results on the LibriCSS dataset show that the proposed algorithm achieves an average word error rate of 8.6% with only 0.15 MiB of training parameters and a computation cost of 15.2 G/6 s; compared with other mainstream approaches, the parameter count is 3.3-391.3 times smaller and the computation cost 1.1-3.2 times smaller. This demonstrates that the proposed algorithm effectively reduces training parameters and computation cost while maintaining high speech separation performance.
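The dual-path structure mentioned above can be sketched as follows: a long feature sequence is segmented into overlapping chunks, then processed alternately along the intra-chunk (local) and inter-chunk (long-range) axes. This is a minimal illustrative sketch, not the paper's implementation; the function names, chunk parameters, and the placeholder mean-removal (standing in for the actual attention/recurrent layers) are all assumptions.

```python
import numpy as np

def segment(features, chunk_len, hop):
    """Split a [T, F] feature sequence into overlapping chunks -> [N, chunk_len, F]."""
    T, F = features.shape
    n = 1 + max(0, (T - chunk_len + hop - 1) // hop)
    pad = (n - 1) * hop + chunk_len - T
    x = np.pad(features, ((0, pad), (0, 0)))
    return np.stack([x[i * hop : i * hop + chunk_len] for i in range(n)])

def dual_path_block(chunks):
    """One hypothetical dual-path pass: intra-chunk, then inter-chunk processing.
    A real model applies attention or an RNN on each axis; mean-removal is used
    here only as a placeholder so the two data-flow directions are visible."""
    intra = chunks - chunks.mean(axis=1, keepdims=True)  # along time within each chunk
    inter = intra - intra.mean(axis=0, keepdims=True)    # across chunks (long-range)
    return inter

feats = np.random.randn(100, 8)              # 100 frames, 8-dim features (toy sizes)
chunks = segment(feats, chunk_len=20, hop=10)
out = dual_path_block(chunks)
print(chunks.shape, out.shape)               # both (9, 20, 8)
```

Because each path only ever operates over one short axis at a time, the per-layer cost scales with the chunk length rather than the full sequence length, which is the source of the parameter and computation savings the abstract reports.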