Show simple item record
Explainable AI for Human-Centric Ethical IoT Systems
Metadata field | Value | Language |
---|---|---|
dc.contributor.author | Ambritta P., Nancy | |
dc.contributor.author | Mahalle, Parikshit N. | |
dc.contributor.author | Patil, Rajkumar V. | |
dc.contributor.author | Dey, Nilanjan | |
dc.contributor.author | González Crespo, Rubén | |
dc.contributor.author | Sherratt, R. Simon | |
dc.date | 2024 | |
dc.date.accessioned | 2024-08-21T10:57:18Z | |
dc.date.available | 2024-08-21T10:57:18Z | |
dc.identifier.citation | N. Ambritta P., P. N. Mahalle, R. V. Patil, N. Dey, R. G. Crespo and R. S. Sherratt, "Explainable AI for Human-Centric Ethical IoT Systems," in IEEE Transactions on Computational Social Systems, vol. 11, no. 3, pp. 3407-3419, June 2024, doi: 10.1109/TCSS.2023.3330738. | es_ES |
dc.identifier.issn | 2329-924X | |
dc.identifier.issn | 2373-7476 | |
dc.identifier.uri | https://reunir.unir.net/handle/123456789/17294 | |
dc.description.abstract | The current era witnesses the notable transition of society from an information-centric to a human-centric one, aiming to strike a balance between economic advancement and upholding the societal and fundamental needs of humanity. The Internet of Things (IoT) and artificial intelligence (AI) are undeniably the key players in realizing a human-centric society. However, for society and individuals to benefit from advanced technology, it is important to gain the trust of human users by guaranteeing the inclusion of ethical aspects such as safety, privacy, nondiscrimination, and legality of the system. Incorporating explainable AI (XAI) into the system to establish explainability and transparency supports the development of trust among stakeholders, including the developers of the system. This article presents the general classes of vulnerabilities that affect IoT systems and directs the reader’s attention toward intrusion detection systems (IDSs). Existing state-of-the-art IDSs are discussed, and an attack model based on a study of the possible attacks on the system is presented. Furthermore, since our focus is on providing explanations for IDS predictions, we first present a consolidated study of commonly used explanation methods along with their advantages and disadvantages. We then present a high-level human-inclusive XAI framework for the IoT that describes the participating components and their roles. We also outline a few approaches to upholding safety and privacy using XAI that we will take up in future work. The article also presents guidelines for choosing a suitable XAI method and a taxonomy of explanation evaluation mechanisms, an important yet less explored aspect of explainable AI. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | IEEE Transactions on Computational Social Systems | es_ES |
dc.relation.ispartofseries | vol. 11, no. 3 | |
dc.relation.uri | https://ieeexplore.ieee.org/document/10335691 | es_ES |
dc.rights | restrictedAccess | es_ES |
dc.subject | Artificial Intelligence (AI) | es_ES |
dc.subject | Internet of Things | es_ES |
dc.subject | ethics | es_ES |
dc.subject | monitoring | es_ES |
dc.subject | malware | es_ES |
dc.subject | safety | es_ES |
dc.subject | intrusion detection | es_ES |
dc.subject | explainable artificial intelligence (XAI) | es_ES |
dc.subject | human-centric AI | es_ES |
dc.subject | interpretability | es_ES |
dc.subject | privacy | es_ES |
dc.subject | security | es_ES |
dc.subject | Society 5.0 | es_ES |
dc.subject | WOS | es_ES |
dc.title | Explainable AI for Human-Centric Ethical IoT Systems | es_ES |
dc.type | Indexed journal article | es_ES |
reunir.tag | ~ARI | es_ES |
dc.identifier.doi | https://doi.org/10.1109/TCSS.2023.3330738 | |
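The abstract above centers on providing explanations for IDS predictions. As a purely illustrative sketch (not the framework from the article), the snippet below shows how a model-agnostic SHAP explainer can attribute an intrusion-detection alert to individual input features; the feature names, synthetic data, and classifier are all assumptions made for demonstration.

```python
# Illustrative sketch only: post-hoc explanation of a hypothetical IDS classifier
# with SHAP. The features, synthetic data, and model are assumptions for
# demonstration, not the system described in the article.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic network-flow features (hypothetical stand-ins for IDS telemetry).
X = pd.DataFrame({
    "packet_rate": rng.gamma(2.0, 50.0, n),
    "bytes_per_flow": rng.gamma(3.0, 200.0, n),
    "failed_logins": rng.poisson(0.5, n),
    "distinct_ports": rng.integers(1, 100, n),
})
# Toy labelling rule: bursts of failed logins or wide port scans count as attacks.
y = ((X["failed_logins"] > 2) | (X["distinct_ports"] > 80)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer attributes each prediction's log-odds to individual features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Explain one flagged flow: rank features by their contribution to the alert.
flagged = int(np.argmax(model.predict_proba(X_test)[:, 1]))
contrib = pd.Series(shap_values[flagged], index=X.columns)
print(contrib.sort_values(key=abs, ascending=False))
```

Per-feature attributions of this kind are one example of the post-hoc explanation methods whose advantages, disadvantages, and evaluation mechanisms the article surveys.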
Files in this item
Files | Size | Format | View |
---|---|---|---|
There are no files associated with this item.