
Original scientific paper

https://doi.org/10.32985/ijeces.14.7.5

Microphone Array Speech Enhancement Via Beamforming Based Deep Learning Network

Jeyasingh Pathrose *
M. Mohamed Ismail; Professor and Dean (Academic Affairs), B.S. Abdur Rahman Crescent Institute of Science & Technology, Chennai 600048, India.
Madhan Mohan; Jasmin Infotech Pvt Ltd, Chennai 600100, India.

* Corresponding author.


Full text: English PDF, 2.388 Kb

Pages: 781-790



Abstract

In-car speech enhancement is an application of microphone array speech enhancement to a particular acoustic environment. Enhancing speech inside a moving car remains an active research topic, with ongoing work on modules that improve both the quality and the intelligibility of speech in cars. Passenger dialogue inside the car, the sound of other equipment, and a wide range of interference effects are major challenges for speech separation in the in-car environment. To address these challenges, a novel Beamforming-based Deep Learning Network (Bf-DLN) is proposed for speech enhancement. First, the captured microphone array signals are pre-processed with an adaptive beamforming technique, the Linearly Constrained Minimum Variance (LCMV) beamformer. The proposed method then uses a time-frequency representation to transform the pre-processed data into an image: the smoothed pseudo-Wigner-Ville distribution (SPWVD) converts the time-domain speech inputs into images. A convolutional deep belief network (CDBN) extracts the most pertinent features from these transformed images, and the Enhanced Elephant Herding Algorithm (EEHA) selects the desired source while suppressing the interfering source. Experimental results demonstrate the effectiveness of the proposed strategy in removing background noise from the original speech signal, and the proposed strategy outperforms existing methods in terms of PESQ, STOI, SSNRI, and SNR. The proposed Bf-DLN achieves a maximum PESQ of 1.98, whereas the existing two-stage Bi-LSTM, DNN-C, and GCN models achieve 1.82, 1.75, and 1.68, respectively. The PESQ of the proposed method is 1.75%, 3.15%, and 4.22% better than that of the existing GCN, DNN-C, and Bi-LSTM techniques. The efficacy of the proposed method is further validated by experiments.
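
For illustration, below is a minimal NumPy sketch of the LCMV pre-processing step described in the abstract. The paper's array geometry, constraint set, and covariance estimation are not given on this page, so the four-microphone array, the 10-degree steering vector, and the helper name lcmv_weights are illustrative assumptions, not the authors' configuration.

import numpy as np

def lcmv_weights(R, C, f):
    # LCMV weights: w = R^{-1} C (C^H R^{-1} C)^{-1} f
    # R : (M, M) spatial covariance of the array signals
    # C : (M, K) constraint matrix (e.g. steering vectors)
    # f : (K,)   desired response for each constraint
    Rinv_C = np.linalg.solve(R, C)                    # R^{-1} C
    gains = np.linalg.solve(C.conj().T @ Rinv_C, f)   # (C^H R^{-1} C)^{-1} f
    return Rinv_C @ gains                             # (M,) complex weights

# Illustrative usage on a single narrowband frequency bin (assumed setup).
M = 4                                                 # assumed 4-microphone uniform linear array
rng = np.random.default_rng(0)
d = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.deg2rad(10)))   # steering vector toward a 10-degree source
s = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)    # desired source snapshots
N = 0.3 * (rng.standard_normal((M, 1000)) + 1j * rng.standard_normal((M, 1000)))  # diffuse noise
X = d[:, None] * s + N                                # received array snapshots
R = (X @ X.conj().T) / X.shape[1]                     # sample spatial covariance
w = lcmv_weights(R, d[:, None], np.array([1.0 + 0j])) # unit gain toward the desired source
y = np.conj(w) @ X                                    # beamformer output (pre-processed signal)

In the pipeline sketched by the abstract, an output such as y would then be mapped to an SPWVD image and passed to the CDBN feature-extraction stage.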

Keywords

Speech Enhancement; Microphone; Deep Learning; Beamforming; Noise Reduction

Hrčak ID:

307904

URI

https://hrcak.srce.hr/307904

Publication date:

11.9.2023.
