Show simple item record

dc.contributor.author    Kamel Benamara, Nadir
dc.contributor.author    Zigh, Ehlem
dc.contributor.author    Boudghene Stambouli, Tarik
dc.contributor.author    Keche, Mokhtar
dc.date    2022-06
dc.date.accessioned    2022-10-10T10:48:59Z
dc.date.available    2022-10-10T10:48:59Z
dc.identifier.issn    1989-1660
dc.identifier.uri    https://reunir.unir.net/handle/123456789/13582
dc.description.abstract    Security is a sensitive area that concerns authorities around the world due to the emerging terrorism phenomenon. Contactless biometric technologies such as face recognition have grown in interest for their capacity to identify probe subjects without any human interaction. Since traditional face recognition systems use visible-spectrum sensors, their performance decreases rapidly when certain visible imaging phenomena occur, mainly illumination changes. Unlike the visible spectrum, infrared spectra are invariant to light changes, which makes them an alternative solution for face recognition. However, in infrared, the textural information is lost. In this paper, we aim to benefit from both the visible and thermal spectra by proposing a new heterogeneous face recognition approach. This approach includes four scientific contributions. The first is the annotation of a thermal face database, which has been shared with the scientific community via GitHub. The second is the proposal of a multi-sensor face detector model based on the latest YOLO v3 architecture, able to detect faces captured in visible and thermal images simultaneously. The third contribution takes up the challenge of reducing the modality gap between the visible and thermal spectra by applying a new CycleGAN structure, called TV-CycleGAN, which aims to synthesize visible-like face images from thermal face images. This new thermal-visible synthesis method covers extreme poses and facial expressions in color space. To show the efficacy and robustness of the proposed TV-CycleGAN, experiments have been conducted on three challenging benchmark databases covering different real-world scenarios: TUFTS and its aligned version, NVIE, and PUJ. The qualitative evaluation shows that our method generates more realistic faces. The quantitative evaluation demonstrates that the proposed TV-CycleGAN gives the best improvement in face recognition rates. Whereas direct matching from thermal to visible images yields a recognition rate of 47.06% on the TUFTS database, the proposed TV-CycleGAN achieves an accuracy of 57.56% on the same database. It contributes a rate enhancement of 29.16% and 15.71% for the NVIE and PUJ databases, respectively, and reaches an accuracy enhancement of 18.5% for the aligned TUFTS database. It also outperforms some recent state-of-the-art methods in terms of F1-score, AUC/EER, and other evaluation metrics. Furthermore, the visible synthesized face images obtained using the TV-CycleGAN method are very promising for thermal facial landmark detection, which constitutes the fourth contribution of this paper.    es_ES
dc.language.iso    eng    es_ES
dc.publisher    International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI)    es_ES
dc.relation.ispartofseries    vol. 7, nº 4
dc.relation.uri    https://www.ijimai.org/journal/bibcite/reference/3067    es_ES
dc.rights    openAccess    es_ES
dc.subject    deep learning    es_ES
dc.subject    generative adversarial network    es_ES
dc.subject    face detection    es_ES
dc.subject    thermal sensor    es_ES
dc.subject    IJIMAI    es_ES
dc.title    Towards a Robust Thermal-Visible Heterogeneous Face Recognition Approach Based on a Cycle Generative Adversarial Network    es_ES
dc.type    article    es_ES
reunir.tag    ~IJIMAI    es_ES
dc.identifier.doi    https://doi.org/10.9781/ijimai.2021.12.003
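The abstract above describes a TV-CycleGAN that synthesizes visible-like face images from thermal ones via cycle consistency. As an illustrative sketch only, assuming PyTorch, the snippet below shows the generic cycle-consistency objective such a model builds on; TinyGenerator, TinyDiscriminator, lambda_cyc, and the dummy input are hypothetical placeholders and do not reproduce the paper's TV-CycleGAN architecture or training procedure.

import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    # Placeholder image-to-image generator (3-channel in/out); real thermal
    # input is often single-channel, 3 channels are used here for simplicity.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    # Placeholder PatchGAN-style discriminator over the visible domain.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# G_tv: thermal -> visible, G_vt: visible -> thermal (the cycle pair).
G_tv, G_vt = TinyGenerator(), TinyGenerator()
D_v = TinyDiscriminator()          # judges realism of synthesized visible faces

adv_loss = nn.MSELoss()            # least-squares GAN loss, a common CycleGAN choice
cyc_loss = nn.L1Loss()             # cycle-consistency loss
lambda_cyc = 10.0                  # illustrative weight, not taken from the paper

thermal = torch.rand(1, 3, 128, 128)   # dummy thermal face batch
fake_visible = G_tv(thermal)           # synthesize a visible-like face
rec_thermal = G_vt(fake_visible)       # map back to thermal (the cycle)

pred = D_v(fake_visible)
g_loss = (adv_loss(pred, torch.ones_like(pred))           # fool the visible discriminator
          + lambda_cyc * cyc_loss(rec_thermal, thermal))  # preserve content via the cycle
g_loss.backward()                      # one illustrative generator update step

A complete CycleGAN additionally trains the reverse direction (visible -> thermal -> visible) against a thermal-domain discriminator and often adds an identity loss; only the thermal-to-visible half is sketched here.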

