Show simple item record

dc.contributor.author: Planet, Santiago
dc.contributor.author: Iriondo, Ignasi
dc.date: 2012-09
dc.date.accessioned: 2019-12-03T10:36:45Z
dc.date.available: 2019-12-03T10:36:45Z
dc.identifier.issn: 1989-1660
dc.identifier.uri: https://reunir.unir.net/handle/123456789/9606
dc.description.abstract: The automatic analysis of speech to detect affective states may improve the way users interact with electronic devices. However, analysis at the acoustic level alone may not be enough to determine the emotion of a user in a realistic scenario. In this paper we analyzed the spontaneous speech recordings of the FAU Aibo Corpus at the acoustic and linguistic levels to extract two sets of features. The acoustic set was reduced by a greedy procedure that selects the most relevant features to optimize the learning stage. We compared two versions of this greedy selection algorithm, performing the search for the relevant features forwards and backwards. We experimented with three classification approaches: Naïve Bayes, a support vector machine and a logistic model tree, and with two fusion schemes: decision-level fusion, which merges the hard decisions of the acoustic and linguistic classifiers by means of a decision tree, and feature-level fusion, which concatenates both sets of features before the learning stage. Despite the low performance achieved by the linguistic data alone, combining it with the acoustic information yielded a dramatic improvement, surpassing the results achieved by the acoustic modality on its own. The classifiers trained on parameters merged at feature level outperformed the decision-level fusion scheme, despite the simplicity of the feature-level approach. Moreover, the extremely reduced set of acoustic features obtained by the greedy forward search improved on the results provided by the full set. [es_ES]
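
The abstract names two techniques worth unpacking: a greedy wrapper-style forward search over the acoustic feature set, and feature-level fusion by simple concatenation of feature vectors. The sketch below is a minimal illustration of both ideas in Python, assuming scikit-learn-style estimators and placeholder random data; the array names, feature counts, and the greedy_forward_selection helper are illustrative assumptions, not the authors' code or the actual FAU Aibo features.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    def greedy_forward_selection(X, y, estimator, cv=5):
        # Greedy forward search: start from the empty feature set and
        # repeatedly add the single feature that most improves the
        # cross-validated accuracy; stop when no candidate improves it.
        selected, best_score = [], 0.0
        remaining = list(range(X.shape[1]))
        while remaining:
            score, feat = max(
                (np.mean(cross_val_score(estimator, X[:, selected + [f]], y, cv=cv)), f)
                for f in remaining
            )
            if score <= best_score:
                break  # no remaining feature improves the score
            best_score, selected = score, selected + [feat]
            remaining.remove(feat)
        return selected, best_score

    # Placeholder data: the sizes and the five-class label set are
    # assumptions for illustration, not the real FAU Aibo feature sets.
    rng = np.random.default_rng(0)
    X_acoustic = rng.normal(size=(200, 20))    # hypothetical acoustic features
    X_linguistic = rng.normal(size=(200, 5))   # hypothetical linguistic features
    y = rng.integers(0, 5, size=200)           # hypothetical five emotion classes

    # Greedy forward search on the acoustic set, here wrapped around a
    # Gaussian Naive Bayes classifier.
    selected, score = greedy_forward_selection(X_acoustic, y, GaussianNB())

    # Feature-level fusion: concatenate both feature sets per utterance
    # before the learning stage, then train one classifier on the result.
    X_fused = np.hstack([X_acoustic[:, selected], X_linguistic])
    fused_score = np.mean(cross_val_score(GaussianNB(), X_fused, y, cv=5))
    print(f"selected acoustic features: {selected}, fused CV accuracy: {fused_score:.3f}")

The backward variant described in the abstract would instead start from the full feature set and greedily drop features; decision-level fusion would train one classifier per modality and combine their hard decisions with a decision tree rather than concatenating features.
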
dc.language.iso: spa [es_ES]
dc.publisher: International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI) [es_ES]
dc.relation.ispartofseries: vol. 01, nº 06
dc.relation.uri: https://www.ijimai.org/journal/node/277 [es_ES]
dc.rights: openAccess [es_ES]
dc.subject: acoustic and linguistic features [es_ES]
dc.subject: decision-level and feature-level fusion [es_ES]
dc.subject: emotion recognition [es_ES]
dc.subject: spontaneous speech [es_ES]
dc.subject: IJIMAI [es_ES]
dc.title: Comparative Study on Feature Selection and Fusion Schemes for Emotion Recognition from Speech [es_ES]
dc.type: article [es_ES]
reunir.tag: ~IJIMAI [es_ES]
dc.identifier.doi: http://dx.doi.org/10.9781/ijimai.2012.166

