Show simple item record

dc.contributor.author        Ambritta P., Nancy
dc.contributor.author        Mahalle, Parikshit N.
dc.contributor.author        Patil, Rajkumar V.
dc.contributor.author        Dey, Nilanjan
dc.contributor.author        González Crespo, Rubén
dc.contributor.author        Sherratt, R. Simon
dc.date                      2024
dc.date.accessioned          2024-08-21T10:57:18Z
dc.date.available            2024-08-21T10:57:18Z
dc.identifier.citation       N. Ambritta P., P. N. Mahalle, R. V. Patil, N. Dey, R. G. Crespo and R. S. Sherratt, "Explainable AI for Human-Centric Ethical IoT Systems," in IEEE Transactions on Computational Social Systems, vol. 11, no. 3, pp. 3407-3419, June 2024, doi: 10.1109/TCSS.2023.3330738.  es_ES
dc.identifier.issn           2329-924X
dc.identifier.issn           2373-7476
dc.identifier.uri            https://reunir.unir.net/handle/123456789/17294
dc.description.abstract      The current era witnesses the notable transition of society from an information-centric to a human-centric one aiming at striking a balance between economic advancements and upholding the societal and fundamental needs of humanity. It is undeniable that the Internet of Things (IoT) and artificial intelligence (AI) are the key players in realizing a human-centric society. However, for society and individuals to benefit from advanced technology, it is important to gain the trust of human users by guaranteeing the inclusion of ethical aspects such as safety, privacy, nondiscrimination, and legality of the system. Incorporating explainable AI (XAI) into the system to establish explainability and transparency supports the development of trust among stakeholders, including the developers of the system. This article presents the general class of vulnerabilities that affect IoT systems and directs the readers' attention toward intrusion detection systems (IDSs). The existing state-of-the-art IDS system is discussed. An attack model modeling the possible attacks is presented. Furthermore, since our focus is on providing explanations for the IDS predictions, we first present a consolidated study of the commonly used explanation methods along with their advantages and disadvantages. We then present a high-level human-inclusive XAI framework for the IoT that presents the participating components and roles. We also hint upon a few approaches to upholding safety and privacy using XAI that we will be taking up in our future work. An attack model based on the study of possible attacks on the system is also presented in the article. The article also presents guidelines to choose a suitable XAI method and a taxonomy of explanation evaluation mechanisms, which is an important yet less visited aspect of explainable AI.  es_ES
dc.language.iso              eng  es_ES
dc.publisher                 IEEE Transactions on Computational Social Systems  es_ES
dc.relation.ispartofseries   ;vol. 11, nº 3
dc.relation.uri              https://ieeexplore.ieee.org/document/10335691  es_ES
dc.rights                    restrictedAccess  es_ES
dc.subject                   Artificial Intelligence (AI)  es_ES
dc.subject                   Internet of Things  es_ES
dc.subject                   ethics  es_ES
dc.subject                   monitoring  es_ES
dc.subject                   malware  es_ES
dc.subject                   safety  es_ES
dc.subject                   intrusion detection  es_ES
dc.subject                   explainable artificial intelligence (XAI)  es_ES
dc.subject                   human-centric AI  es_ES
dc.subject                   interpretability  es_ES
dc.subject                   privacy  es_ES
dc.subject                   security  es_ES
dc.subject                   Society 5.0  es_ES
dc.subject                   WOS  es_ES
dc.title                     Explainable AI for Human-Centric Ethical IoT Systems  es_ES
dc.type                      Articulo Revista Indexada  es_ES
reunir.tag                   ~ARI  es_ES
dc.identifier.doi            https://doi.org/10.1109/TCSS.2023.3330738


Files in this item

Files    Size    Format    View

There are no files associated with this item.

This item appears in the following collection(s)
