Show simple item record

dc.contributor.author: García, Javier
dc.contributor.author: Sagredo-Olivenza, Ismael
dc.date: 2022
dc.date.accessioned: 2022-06-09T08:27:42Z
dc.date.available: 2022-06-09T08:27:42Z
dc.identifier.issn: 0952-1976
dc.identifier.uri: https://reunir.unir.net/handle/123456789/13264
dc.description.abstract: Deep Reinforcement Learning systems are now a hot topic in Machine Learning for their effectiveness in many complex tasks, but their application in safety-critical domains (e.g., robot control or autonomous driving) remains dangerous without mechanisms to detect and prevent risky situations. In Deep RL, such risk mostly takes the form of adversarial attacks, which introduce small perturbations into sensor inputs with the aim of changing the network-based decisions and thus causing catastrophic situations. In light of these dangers, a promising line of research is that of providing these Deep RL algorithms with suitable defenses, especially when they are deployed in real environments. This paper suggests that this line of research could be greatly improved by concepts from the existing research field of Safe Reinforcement Learning, which has been postulated as a family of RL algorithms capable of providing defenses against many forms of risk. However, the connections between Safe RL and the design of defenses against adversarial attacks in Deep RL remain largely unexplored. This paper seeks to explore precisely some of these connections. In particular, it proposes to reuse some of the concepts from existing Safe RL algorithms to create a novel and effective instance-based defense for the deployment stage of Deep RL policies. The proposed algorithm uses a risk function based on how far a state is from the state space known to the agent, which allows identifying and preventing adversarial situations. The success of the proposed defense has been evaluated on 4 Atari games.
dc.language.iso: eng
dc.publisher: Elsevier Ltd
dc.relation.ispartofseries: vol. 107
dc.relation.uri: https://www.sciencedirect.com/science/article/abs/pii/S0952197621003626?via%3Dihub
dc.rights: restrictedAccess
dc.subject: adversarial reinforcement learning
dc.subject: defense methods
dc.subject: reinforcement learning
dc.subject: Scopus
dc.subject: JCR
dc.title: Instance-based defense against adversarial attacks in Deep Reinforcement Learning
dc.type: article
reunir.tag: ~ARI
dc.identifier.doi: https://doi.org/10.1016/j.engappai.2021.104514
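
The abstract describes a risk function based on how far a state is from the state space known to the agent. Below is a minimal sketch of that instance-based idea, assuming states are fixed-size feature vectors and using a nearest-neighbour Euclidean distance against a buffer of states collected during training; the class and method names (InstanceBasedRiskDefense, risk, act) and the fixed-threshold fallback rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class InstanceBasedRiskDefense:
    """Illustrative sketch: fall back to a conservative action in states
    that lie too far from the state space seen during training."""

    def __init__(self, known_states, threshold):
        # known_states: (N, d) array of state representations collected
        # during training; threshold: maximum tolerated distance
        # (hypothetical design choice, not from the paper).
        self.known_states = np.asarray(known_states, dtype=float)
        self.threshold = float(threshold)

    def risk(self, state):
        # Distance from the queried state to its nearest known state;
        # larger values mean the state is farther from familiar territory.
        dists = np.linalg.norm(
            self.known_states - np.asarray(state, dtype=float), axis=1
        )
        return float(dists.min())

    def act(self, state, policy_action, fallback_action):
        # A state flagged as risky (possibly adversarially perturbed)
        # triggers the fallback instead of the policy's chosen action.
        if self.risk(state) > self.threshold:
            return fallback_action
        return policy_action

# Usage with toy data: 1000 known 8-dimensional states, distance cutoff 0.5.
defense = InstanceBasedRiskDefense(np.random.rand(1000, 8), threshold=0.5)
action = defense.act(np.random.rand(8), policy_action=2, fallback_action=0)
```

A brute-force nearest-neighbour search keeps the sketch short; a real deployment-stage defense over Atari-scale state spaces would need an approximate index or learned state embedding to make the distance query tractable.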


Files in this item

Files    Size    Format    View

There are no files associated with this item.

This item appears in the following collection(s)
