Show simple item record

dc.contributor.author: Ortega, Alfonso
dc.contributor.author: Fierrez, Julian
dc.contributor.author: Morales, Aythami
dc.contributor.author: Wang, Zilong
dc.contributor.author: de la Cruz, Marina
dc.date: 2021
dc.date.accessioned: 2022-05-31T10:38:50Z
dc.date.available: 2022-05-31T10:38:50Z
dc.identifier.issn: 2073-431X
dc.identifier.uri: https://reunir.unir.net/handle/123456789/13207
dc.description.abstract: Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods become crucial. Inductive logic programming (ILP) is a subfield of symbolic AI aimed to automatically learn declarative theories about the processing of data. Learning from interpretation transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step to a general methodology to incorporate accurate declarative explanations to classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool generated with machine learning methods for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applicable to other domains. In order to check the ability to cope with other domains no matter the machine learning paradigm used, we have done a preliminary test of the expressiveness of LFIT, feeding it with a real dataset about adult incomes taken from the US census, in which we consider the income level as a function of the rest of attributes to verify if LFIT can provide logical theory to support and explain to what extent higher incomes are biased by gender and ethnicity. [es_ES]
dc.language.iso: eng [es_ES]
dc.publisher: MDPI [es_ES]
dc.relation.ispartofseries: vol. 10, nº 11
dc.relation.uri: https://www.mdpi.com/2073-431X/10/11/154 [es_ES]
dc.rights: openAccess [es_ES]
dc.subject: explainable artificial intelligence [es_ES]
dc.subject: fair income level [es_ES]
dc.subject: fair recruitment [es_ES]
dc.subject: inductive logic programming [es_ES]
dc.subject: propositional logic [es_ES]
dc.subject: Scopus [es_ES]
dc.subject: Emerging [es_ES]
dc.title: Symbolic AI for XAI: Evaluating LFIT inductive programming for explaining biases in machine learning [es_ES]
dc.type: article [es_ES]
reunir.tag: ~ARI [es_ES]
dc.identifier.doi: https://doi.org/10.3390/computers10110154
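The abstract above states that LFIT can learn a propositional logic theory equivalent to a given black-box system under certain conditions. As a loose illustration of that general idea only (this is not the authors' LFIT algorithm or code), the following Python sketch queries a toy black-box boolean classifier on every assignment of a few propositional features and reads off an equivalent propositional theory in disjunctive normal form; the feature names and the rule inside black_box are hypothetical, made up for the example.

    # Toy illustration (not the LFIT implementation used in the paper): given a
    # black-box boolean decision over a small set of propositional features,
    # enumerate all assignments and build an equivalent propositional theory (DNF).
    from itertools import product

    # Hypothetical feature names, chosen only for the example.
    FEATURES = ["high_education", "senior_role", "stem_degree"]

    def black_box(assignment: dict) -> bool:
        """Stand-in for an opaque ML model's decision (hypothetical rule)."""
        return assignment["high_education"] and (
            assignment["senior_role"] or assignment["stem_degree"]
        )

    def extract_dnf(model, features):
        """Query `model` on every boolean assignment and return an equivalent DNF."""
        terms = []
        for values in product([False, True], repeat=len(features)):
            assignment = dict(zip(features, values))
            if model(assignment):
                # One conjunctive term per accepting assignment.
                literals = [f if v else f"not {f}" for f, v in assignment.items()]
                terms.append("(" + " and ".join(literals) + ")")
        return " or ".join(terms) if terms else "False"

    if __name__ == "__main__":
        print(extract_dnf(black_box, FEATURES))

Running the sketch prints one conjunctive term per accepting assignment, i.e. the kind of human-readable, declarative description that the abstract refers to as a white-box explanation of a black-box classifier.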


Files in this item

Files    Size    Format    View

There are no files associated with this item.

This item appears in the following collection(s)
