Show simple item record

dc.contributor.author: Debnath, Saswati
dc.contributor.author: Roy, Pinki
dc.date: 2021-12
dc.date.accessioned: 2022-05-10T10:09:43Z
dc.date.available: 2022-05-10T10:09:43Z
dc.identifier.issn: 1989-1660
dc.identifier.uri: https://reunir.unir.net/handle/123456789/13055
dc.description.abstract: Audio-Visual Automatic Speech Recognition (AV-ASR) has become one of the most promising research areas for situations where the audio signal is corrupted by noise. The main objective of this paper is to select important and discriminative audio and visual speech features to recognize audio-visual speech. This paper proposes the Pseudo Zernike Moment (PZM) and a feature selection method for audio-visual speech recognition. Visual information is captured from the lip contour, and moments are computed for lip reading. We extract 19th-order Mel Frequency Cepstral Coefficients (MFCC) as speech features from the audio. Since all 19 speech features are not equally important, feature selection algorithms are used to select the most efficient ones. Various statistical algorithms, such as Analysis of Variance (ANOVA), the Kruskal-Wallis test, and the Friedman test, are employed to analyze the significance of the features, along with the Incremental Feature Selection (IFS) technique. Statistical analysis is used to assess the statistical significance of the speech features, after which IFS is used to select the speech feature subset. Furthermore, multiclass Support Vector Machine (SVM), Artificial Neural Network (ANN), and Naive Bayes (NB) machine learning techniques are used to recognize speech in both the audio and visual modalities. Based on the recognition rates, a combined decision is taken from the two individual recognition systems. This paper compares the results achieved by the proposed model and existing models for both audio and visual speech recognition. The Zernike Moment (ZM) is compared with the PZM, showing that the proposed model using PZM extracts more discriminative features for visual speech recognition. This study also shows that audio feature selection using statistical analysis outperforms methods without any feature selection technique.
dc.language.iso: eng
dc.publisher: International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI)
dc.relation.ispartofseries: vol. 7, nº 2
dc.relation.uri: https://www.ijimai.org/journal/bibcite/reference/3012
dc.rights: openAccess
dc.subject: audio-visual speech recognition
dc.subject: lip tracking
dc.subject: pseudo zernike moment
dc.subject: mel frequency cepstral coefficients (MFCC)
dc.subject: incremental feature selection (IFS)
dc.subject: statistical analysis
dc.subject: IJIMAI
dc.title: Audio-Visual Automatic Speech Recognition Using PZM, MFCC and Statistical Analysis
dc.type: article
reunir.tag: ~IJIMAI
dc.identifier.doi: https://doi.org/10.9781/ijimai.2021.09.001
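
The abstract above outlines an audio feature-selection pipeline: rank the 19 MFCC coefficients by per-feature statistical significance (ANOVA, Kruskal-Wallis, or Friedman), then apply Incremental Feature Selection with one of the classifiers (SVM, ANN, or NB). The record gives no implementation details, so the following is only a minimal Python sketch of that idea: the feature matrix X (one row per utterance, 19 MFCC columns), the label vector y, and the use of scipy.stats and scikit-learn are all assumptions, not the authors' actual code.

    # Sketch: significance ranking of 19 MFCC features followed by
    # Incremental Feature Selection (IFS) with a multiclass SVM, per the
    # abstract. MFCC extraction and the visual (PZM) pipeline are omitted;
    # X (n_samples x 19 features) and y (class labels) are assumed given.
    import numpy as np
    from scipy.stats import f_oneway, kruskal  # ANOVA and Kruskal-Wallis
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def rank_features_by_significance(X, y, test=kruskal):
        """Rank feature indices by statistical significance.

        For each MFCC coefficient, group samples by class and run a
        one-way test (kruskal here; pass f_oneway for ANOVA). Smaller
        p-values mean more discriminative features.
        """
        classes = np.unique(y)
        p_values = []
        for j in range(X.shape[1]):
            groups = [X[y == c, j] for c in classes]
            _, p = test(*groups)
            p_values.append(p)
        return np.argsort(p_values)  # most significant first

    def incremental_feature_selection(X, y, ranked, cv=5):
        """Grow the subset in rank order; keep the best-scoring size."""
        best_score, best_subset = -np.inf, ranked[:1]
        for k in range(1, len(ranked) + 1):
            subset = ranked[:k]
            score = cross_val_score(
                SVC(kernel="rbf"), X[:, subset], y, cv=cv
            ).mean()
            if score > best_score:
                best_score, best_subset = score, subset
        return best_subset, best_score

    # Example usage:
    # ranked = rank_features_by_significance(X, y)
    # selected, acc = incremental_feature_selection(X, y, ranked)

The SVM here is just one of the three classifiers named in the abstract; the same IFS loop works with any scikit-learn estimator, and the abstract's final audio-visual decision would then combine the audio and visual classifiers' outputs according to their recognition rates.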

