Show simple item record

DC field                     Value                                              Language
dc.contributor.author        Kuang, Yuxiang
dc.contributor.author        Wu, Qun
dc.contributor.author        Wang, Ying
dc.contributor.author        Dey, Nilanjan
dc.contributor.author        Shi, Fuqian
dc.contributor.author        González-Crespo, Rubén
dc.contributor.author        Simon Sherratt, R.
dc.date                      2020-12
dc.date.accessioned          2021-04-19T13:57:17Z
dc.date.available            2021-04-19T13:57:17Z
dc.identifier.issn           1872-9681
dc.identifier.uri            https://reunir.unir.net/handle/123456789/11197
dc.description.abstract      Facial expressions, verbal and behavioral cues such as limb movements, and physiological features are vital channels of affective human interaction. Over the past decades, researchers have given machines the ability to recognize affective communication through these modalities. In addition to facial expressions, changes in sound level, strength, weakness, and turbulence also convey affect. Extracting affective feature parameters from acoustic signals has been widely applied in customer service, education, and the medical field. In this research, an improved AlexNet-based deep convolutional neural network (A-DCNN) is presented for acoustic signal recognition. First, the signals were preprocessed using simplified inverse filter tracking (SIFT) and the short-time Fourier transform (STFT); Mel-frequency cepstral coefficients (MFCC) and waveform-based segmentation were then deployed to create the input for the deep neural network (DNN), a preprocessing approach widely used with most neural networks. Second, acoustic signals were acquired from the public Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) affective speech audio set. Using acoustic signal preprocessing tools, the basic features of these sound signals were calculated and extracted. The proposed DNN based on the improved AlexNet achieves 95.88% accuracy in classifying the eight affective classes of the acoustic signals. Compared with linear classifiers such as the decision table (DT) and Bayesian inference (BI), and with other deep neural networks such as AlexNet+SVM and the recurrent convolutional neural network (R-CNN), the proposed method achieves high effectiveness on accuracy (A), sensitivity (S1), positive predictive value (PP), and F1-score (F1). Affective recognition and classification of acoustic signals can potentially be applied in industrial product design by measuring consumers' affective responses to products: collecting relevant affective sound data reveals a product's popularity and, furthermore, helps improve the product design and increase market responsiveness. (A code sketch of this pipeline follows the record below.)    es_ES
dc.language.iso              eng                                                es_ES
dc.publisher                 Applied Soft Computing                             es_ES
dc.relation.ispartofseries   ;vol. 97, sub. A
dc.relation.uri              https://www.sciencedirect.com/science/article/abs/pii/S1568494620307134?via%3Dihub    es_ES
dc.rights                    restrictedAccess                                   es_ES
dc.subject                   AlexNet                                            es_ES
dc.subject                   deep convolutional neural network                  es_ES
dc.subject                   acoustic signals                                   es_ES
dc.subject                   affective computing                                es_ES
dc.subject                   short time Fourier transform                       es_ES
dc.subject                   JCR                                                es_ES
dc.subject                   Scopus                                             es_ES
dc.title                     Simplified inverse filter tracked affective acoustic signals classification incorporating deep convolutional neural networks    es_ES
dc.type                      Articulo Revista Indexada                          es_ES
reunir.tag                   ~ARI                                               es_ES
dc.identifier.doi            http://dx.doi.org/10.1016/j.asoc.2020.106775
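
The abstract above describes the processing chain: SIFT/STFT preprocessing, MFCC feature maps, and an AlexNet-style convolutional classifier over eight affective classes. Since the full text is restricted-access, the snippet below is only a minimal sketch of such a pipeline, assuming librosa for MFCC extraction and PyTorch for the network; the layer sizes, the 16 kHz sampling rate, and the names `mfcc_features` and `ADCNN` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, NOT the paper's implementation: layer sizes, sample rate,
# and network details are assumptions for illustration only.
import librosa
import torch
import torch.nn as nn

def mfcc_features(path: str, sr: int = 16000, n_mfcc: int = 40) -> torch.Tensor:
    """Load one audio clip and compute an MFCC 'image' for the CNN input."""
    y, _ = librosa.load(path, sr=sr)                        # waveform, resampled
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return torch.from_numpy(mfcc).float().unsqueeze(0)      # (1, n_mfcc, frames)

class ADCNN(nn.Module):
    """AlexNet-style convolutional classifier over MFCC maps (8 emotion classes)."""
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed size regardless of clip length
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(256 * 4 * 4, 512), nn.ReLU(),
            nn.Linear(512, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage (hypothetical file path): logits over the eight emotion classes.
# x = mfcc_features("ravdess/Actor_01/clip.wav").unsqueeze(0)  # add batch dim
# probs = ADCNN()(x).softmax(dim=-1)
```

For the reported metrics, sensitivity (S1) is per-class recall TP / (TP + FN), positive predictive value (PP) is precision TP / (TP + FP), and F1 is their harmonic mean; accuracy (A) is the fraction of correctly classified clips.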


Files in this item


There are no files associated with this item.

This item appears in the following collection(s)
