Show simple item record

dc.contributor.author: Paramasivam, Ramya
dc.contributor.author: Lavanya, K.
dc.contributor.author: Divakarachari, Parameshachari Bidare
dc.contributor.author: Camacho, David
dc.date: 2025-09-01
dc.date.accessioned: 2026-03-10T13:02:22Z
dc.date.available: 2026-03-10T13:02:22Z
dc.identifier.citation: R. Paramasivam, K. Lavanya, P. B. Divakarachari, D. Camacho. A Robust Framework for Speech Emotion Recognition Using Attention Based Convolutional Peephole LSTM, International Journal of Interactive Multimedia and Artificial Intelligence, vol. 9, no. 4, pp. 45-58, 2025, http://dx.doi.org/10.9781/ijimai.2025.02.002
dc.identifier.uri: https://reunir.unir.net/handle/123456789/19192
dc.description.abstract: Speech Emotion Recognition (SER) plays an important role in affective computing and is widely used in applications ranging from medicine to entertainment. Emotional understanding improves user-machine interaction by making systems more responsive. The main issues faced in SER are the selection of relevant features and the increased complexity of analyzing huge datasets. Therefore, this research introduces a well-organized framework that uses the Improved Jellyfish Optimization Algorithm (IJOA) for feature selection and performs classification with a Convolutional Peephole Long Short-Term Memory (CP-LSTM) network with an attention mechanism. Raw data are acquired from five datasets: EMO-DB, IEMOCAP, RAVDESS, Surrey Audio-Visual Expressed Emotion (SAVEE), and the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D). Undesired partitions are removed from the audio signal during pre-processing, and the result is fed into the feature-selection phase using IJOA. Finally, CP-LSTM with an attention mechanism is used for emotion classification. Experimental results clearly show that the proposed CP-LSTM with attention mechanism is more efficient in terms of accuracy than the existing DNN-DHO, DH-AS, D-CNN, and CEOAS methods. The classification accuracies of the proposed CP-LSTM with attention mechanism on the EMO-DB, IEMOCAP, RAVDESS, and SAVEE datasets are 99.59%, 99.88%, 99.54%, and 98.89%, respectively, which is comparably higher than other existing techniques.
dc.language.iso: eng
dc.publisher: UNIR
dc.relation.uri: https://www.ijimai.org/index.php/ijimai/article/view/821
dc.rights: openAccess
dc.subject: Attention Mechanisms
dc.subject: Convolutional Peephole Long Short-Term Memory
dc.subject: Feature Selection
dc.subject: Improved Jellyfish Optimization Algorithm
dc.subject: Speech Emotion Recognition
dc.title: A Robust Framework for Speech Emotion Recognition Using Attention Based Convolutional Peephole LSTM
dc.type: article
reunir.tag: ~IJIMAI
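The abstract describes a classification stage built from a peephole LSTM whose hidden states are pooled by an attention mechanism before the emotion logits are produced. The paper's actual architecture is not reproduced here; the following is only a minimal numpy sketch of those two ingredients (a peephole LSTM cell and additive attention pooling), with all dimensions, weights, and the toy input being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PeepholeLSTMCell:
    """LSTM cell with peephole connections: the cell state feeds the gates."""
    def __init__(self, in_dim, hid_dim):
        s = lambda *shape: rng.normal(0.0, 0.1, shape)
        self.W = s(4 * hid_dim, in_dim)    # input weights for i, f, c, o gates
        self.U = s(4 * hid_dim, hid_dim)   # recurrent weights
        self.p_i, self.p_f, self.p_o = s(hid_dim), s(hid_dim), s(hid_dim)
        self.b = np.zeros(4 * hid_dim)

    def step(self, x, h, c):
        z = self.W @ x + self.U @ h + self.b
        zi, zf, zc, zo = np.split(z, 4)
        i = sigmoid(zi + self.p_i * c)      # input gate peeks at c_{t-1}
        f = sigmoid(zf + self.p_f * c)      # forget gate peeks at c_{t-1}
        c_new = f * c + i * np.tanh(zc)
        o = sigmoid(zo + self.p_o * c_new)  # output gate peeks at c_t
        return o * np.tanh(c_new), c_new

def attention_pool(H, W_a, v):
    """Additive attention over time: weight each frame, return one context vector."""
    scores = np.tanh(H @ W_a.T) @ v        # (T,) unnormalized frame scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                   # softmax over time steps
    return alpha @ H, alpha                # weighted context vector, weights

# Toy run: 20 frames of 40-dim acoustic features -> 4 emotion logits
T, F, Hd, n_classes = 20, 40, 32, 4
cell = PeepholeLSTMCell(F, Hd)
x_seq = rng.normal(size=(T, F))
h, c = np.zeros(Hd), np.zeros(Hd)
H = np.empty((T, Hd))
for t in range(T):
    h, c = cell.step(x_seq[t], h, c)
    H[t] = h
W_a, v = rng.normal(0.0, 0.1, (Hd, Hd)), rng.normal(0.0, 0.1, Hd)
context, alpha = attention_pool(H, W_a, v)
logits = rng.normal(0.0, 0.1, (n_classes, Hd)) @ context
print(context.shape, alpha.shape, logits.shape)
```

The attention weights `alpha` sum to one, so the context vector is a convex combination of the per-frame hidden states; in the paper's framework this pooled representation (after the convolutional front end and IJOA feature selection, both omitted here) would feed the final emotion classifier.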


Files in this item


This item appears in the following collection(s)
