Show simple item record
Symbolic AI for XAI: Evaluating LFIT inductive programming for explaining biases in machine learning
Metadata field | Value | Language |
---|---|---|
dc.contributor.author | Ortega, Alfonso | |
dc.contributor.author | Fierrez, Julian | |
dc.contributor.author | Morales, Aythami | |
dc.contributor.author | Wang, Zilong | |
dc.contributor.author | de la Cruz, Marina | |
dc.date | 2021 | |
dc.date.accessioned | 2022-05-31T10:38:50Z | |
dc.date.available | 2022-05-31T10:38:50Z | |
dc.identifier.issn | 2073-431X | |
dc.identifier.uri | https://reunir.unir.net/handle/123456789/13207 | |
dc.description.abstract | Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods become crucial. Inductive logic programming (ILP) is a subfield of symbolic AI aimed at automatically learning declarative theories about the processing of data. Learning from interpretation transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods, for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme applicable to other domains. To check its ability to cope with other domains regardless of the machine learning paradigm used, we performed a preliminary test of the expressiveness of LFIT, feeding it a real dataset about adult incomes taken from the US census, in which we consider the income level as a function of the remaining attributes to verify whether LFIT can provide a logical theory that supports and explains to what extent higher incomes are biased by gender and ethnicity. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | MDPI | es_ES |
dc.relation.ispartofseries | vol. 10, nº 11 | |
dc.relation.uri | https://www.mdpi.com/2073-431X/10/11/154 | es_ES |
dc.rights | openAccess | es_ES |
dc.subject | explainable artificial intelligence | es_ES |
dc.subject | fair income level | es_ES |
dc.subject | fair recruitment | es_ES |
dc.subject | inductive logic programming | es_ES |
dc.subject | propositional logic | es_ES |
dc.subject | Scopus | es_ES |
dc.subject | Emerging | es_ES |
dc.title | Symbolic AI for XAI: Evaluating LFIT inductive programming for explaining biases in machine learning | es_ES |
dc.type | article | es_ES |
reunir.tag | ~ARI | es_ES |
dc.identifier.doi | https://doi.org/10.3390/computers10110154 | |
Files in this item
Files | Size | Format | View |
---|---|---|---|
No files associated with this item. | | | |