Real Time Facial Expression Recognition Using Webcam and SDK Affectiva
Author: Magdin, Martin; Prikler, F
Date: 06/2018
Keyword:
Journal / publisher: International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI)
Item type: article
Web address: https://ijimai.org/journal/bibcite/reference/2644
Abstract:
Facial expression is an essential part of communication. For this reason, the evaluation of human emotions by computer is a very interesting topic that has gained more and more attention in recent years, mainly because facial expression recognition can be applied in many fields such as HCI, video games, virtual reality, and the analysis of customer satisfaction. Emotion determination (the recognition process) is usually performed in three basic phases: face detection, facial feature extraction, and, as the last stage, expression classification. The most common scheme is Ekman's classification of six emotional expressions (or seven, including the neutral expression), but other classifications also exist, such as the Russell circumplex model, which contains up to 24 emotions, and Plutchik's Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years; new methods and algorithms have also emerged that can detect faces with greater accuracy and lower computational demands than the classic Viola-Jones detector. As a result, various solutions are currently available in the form of a Software Development Kit (SDK). In this publication, we describe the design and creation of our system for real-time emotion classification. Our intention was to create a system that covers all three phases of the recognition process and works quickly and stably in real time. That is why we decided to take advantage of the existing Affectiva SDK. Using a classic webcam, we detect facial landmarks in the image automatically with the Affectiva SDK. A geometric feature-based approach is used for feature extraction: the distances between landmarks serve as features, and a brute-force method is used to select an optimal feature set. The proposed system uses a neural network algorithm for classification and recognizes six (respectively seven) facial expressions, namely anger, disgust, fear, happiness, sadness, surprise, and neutral. We do not want to report only the success rate of our solution; we also want to show how we carried out these measurements, what results we achieved, and how those results have significantly influenced the direction of our future research.
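The pipeline the abstract describes (pairwise landmark distances as features, brute-force feature selection, neural-network classification) can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes landmark coordinates have already been obtained (for example, from the Affectiva SDK), and pairwise_distances, brute_force_select, the synthetic data shapes, and the scikit-learn MLP are all illustrative choices.

```python
# Hypothetical sketch (not the paper's code) of a distance-based expression
# classifier. Landmark coordinates are assumed to come from a face tracker
# such as the Affectiva SDK; here they are just (n_points, 2) arrays.
from itertools import combinations

import numpy as np
from sklearn.neural_network import MLPClassifier

EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "sadness", "surprise", "neutral"]

def pairwise_distances(landmarks):
    """Euclidean distance between every pair of facial landmarks.
    landmarks: (n_points, 2) array of (x, y) pixel coordinates."""
    idx = combinations(range(len(landmarks)), 2)
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in idx])

def brute_force_select(X, y, candidates, k, make_clf):
    """Score every k-sized subset of candidate feature columns and keep the
    best one - the 'brute force' selection the abstract mentions. This is
    tractable only for small candidate pools."""
    best_score, best_cols = -1.0, None
    for subset in combinations(candidates, k):
        cols = list(subset)
        clf = make_clf().fit(X[:, cols], y)
        score = clf.score(X[:, cols], y)  # a held-out split would be used in practice
        if score > best_score:
            best_score, best_cols = score, cols
    return best_cols

# Synthetic stand-in data: 200 frames x 34 landmarks x (x, y).
rng = np.random.default_rng(0)
frames = rng.random((200, 34, 2))
X = np.stack([pairwise_distances(f) for f in frames])  # shape (200, 561)
y = rng.integers(0, len(EXPRESSIONS), size=200)        # labels 0..6

make_clf = lambda: MLPClassifier(hidden_layer_sizes=(32,), max_iter=200)
cols = brute_force_select(X, y, candidates=range(8), k=2, make_clf=make_clf)
model = make_clf().fit(X[:, cols], y)
print([EXPRESSIONS[c] for c in model.predict(X[:3, cols])])
```

In a real-time setting, the distance features would be recomputed per webcam frame from the tracker's landmarks and passed to the trained classifier, which is cheap enough to keep the loop interactive.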
Usage statistics
Year | Views | Downloads
2012 | 0 | 0
2013 | 0 | 0
2014 | 0 | 0
2015 | 0 | 0
2016 | 0 | 0
2017 | 0 | 0
2018 | 0 | 0
2019 | 0 | 0
2020 | 0 | 0
2021 | 0 | 0
2022 | 304 | 193
2023 | 345 | 178
2024 | 320 | 213
Related items
Showing items related by title, author or subject.
- Evaluating the Emotional State of a User Using a Webcam
  Magdin, Martin; Turcani, Milan; Hudec, Lukas (International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), 09/2016) In online learning it is more difficult for teachers to see how individual students behave. Students' emotions, such as self-esteem, motivation, commitment, and others that are believed to be determinant in a student's ...
- Voice Analysis Using PRAAT Software and Classification of User Emotional State
  Magdin, Martin; Sulka, T; Tomanová, J; Vozár, M (International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), 09/2019) During the last decades, the field of IT has seen incredibly rapid development. This development has shown that it is important not only to push performance and functional boundaries but also to adapt the way ...
- Simple MoCap System for Home Usage
  Magdin, Martin (International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), 06/2017) Nowadays many MoCap systems exist. Generating 3D facial animation of characters is currently realized by using motion capture (MoCap) data, obtained by tracking facial markers on an actor/actress. ...