
Original scientific paper

https://doi.org/10.32985/ijeces.15.1.5

Multimodal emotion recognition based on the fusion of vision, EEG, ECG, and EMG signals

Shripad Bhatlawande ; Dept. of E&TC, VIT, Pune, India
Swati Shilaskar (ORCID: 0000-0002-1450-2939); Dept. of E&TC, VIT, Pune, India
Sourjadip Pramanik (ORCID: 0009-0009-5969-8873); Dept. of E&TC, VIT, Pune, India *
Swarali Sole (ORCID: 0009-0001-6204-1540); Dept. of E&TC, VIT, Pune, India

* Corresponding author.


Full text: English, PDF, 2,904 KB

pp. 41-58

Downloads: 631



Abstract

This paper presents a novel approach to emotion recognition (ER) based on Electroencephalogram (EEG), Electromyogram (EMG), Electrocardiogram (ECG), and computer vision. The proposed system comprises two models, one for physiological signals and one for facial expressions, deployed in a real-time embedded system. A custom dataset of EEG, ECG, EMG, and facial-expression recordings was collected from 10 participants using an Affective Video Response System. Time-, frequency-, and wavelet-domain features were extracted and optimized based on visualizations from Exploratory Data Analysis (EDA) and Principal Component Analysis (PCA). Local Binary Patterns (LBP), Local Ternary Patterns (LTP), Histogram of Oriented Gradients (HOG), and Gabor descriptors were used to differentiate facial emotions. Classification models, namely decision tree, random forest, and optimized variants thereof, were trained on these features. For the physiological-signal model, the optimized random forest achieved an accuracy of 84% and the optimized decision tree 76%. The facial emotion recognition (FER) model attained accuracies of 84.6%, 74.3%, 67%, and 64.5% using K-Nearest Neighbors (KNN), random forest, decision tree, and XGBoost, respectively. Performance metrics, including Area Under the Curve (AUC), F1 score, and the Receiver Operating Characteristic (ROC) curve, were computed to evaluate the models. The outputs of the two models, i.e., the bio-signal and facial-emotion analyses, are fed to a voting classifier to obtain the final emotion. Based on the resulting emotion, a comprehensive report is generated using the Generative Pre-trained Transformer (GPT) language model; the overall system achieves an accuracy of 87.5%. The model was implemented and deployed on a Jetson Nano, and the results demonstrate its relevance to ER. It has applications in enhancing prosthetic systems and in other medical fields such as psychological therapy, rehabilitation, assistance for individuals with neurological disorders, mental health monitoring, and biometric security.
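
To illustrate the fusion step described above, the following is a minimal sketch of combining a physiological-signal classifier and a facial-expression classifier through soft voting, assuming scikit-learn. The feature dimensions (24 PCA components, 128-dimensional facial descriptor), the four-emotion label set, the hyperparameters, and the synthetic placeholder data are all illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical late-fusion sketch of the two-model voting scheme described
# in the abstract. Feature shapes, label set, and hyperparameters are
# assumptions; the paper's preprocessing is not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

# Synthetic placeholder data: 200 samples of physiological features
# (time/frequency/wavelet statistics after PCA) and facial descriptors
# (LBP/LTP/HOG/Gabor histograms), with shared emotion labels.
rng = np.random.default_rng(0)
X_bio = rng.normal(size=(200, 24))    # assumed 24 PCA components
X_face = rng.normal(size=(200, 128))  # assumed 128-dim descriptor vector
y = rng.integers(0, len(EMOTIONS), size=200)

# Two independent models, mirroring the paper's best performers:
# random forest for bio-signals, KNN for facial features.
bio_model = RandomForestClassifier(n_estimators=100).fit(X_bio, y)
face_model = KNeighborsClassifier(n_neighbors=5).fit(X_face, y)

def fuse_predict(x_bio, x_face):
    """Soft-voting fusion: average the two models' class probabilities."""
    p_bio = bio_model.predict_proba(x_bio.reshape(1, -1))
    p_face = face_model.predict_proba(x_face.reshape(1, -1))
    return EMOTIONS[int(np.argmax((p_bio + p_face) / 2))]

print(fuse_predict(X_bio[0], X_face[0]))
```

Soft voting is used here because both classifiers expose class probabilities; the paper's voting classifier could equally be realized as hard (majority) voting over the two predicted labels.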

Keywords

Emotion recognition; Analysis of mental health; Feature fusion; Machine learning; Computer vision; Physiological signals

Hrčak ID:

313456

URI

https://hrcak.srce.hr/313456

Publication date:

19.1.2024.

Visits: 1,947