Show simple item record
‘AI lost the prompt!’ Replacing ‘AI hallucination’ to distinguish between mere errors and irregularities
| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Ariso, José María | |
| dc.contributor.author | Bannister, Peter | |
| dc.date | 2025 | |
| dc.date.accessioned | 2025-11-25T13:47:34Z | |
| dc.date.available | 2025-11-25T13:47:34Z | |
| dc.identifier.citation | Ariso, J., & Bannister, P. (2025). ‘AI lost the prompt!’ Replacing ‘AI hallucination’ to distinguish between mere errors and irregularities. AI & Society. https://doi.org/10.1007/s00146-025-02757-1 | es_ES |
| dc.identifier.issn | 1435-5655 | |
| dc.identifier.issn | 0951-5666 | |
| dc.identifier.uri | https://reunir.unir.net/handle/123456789/18454 | |
| dc.description.abstract | One of the principal areas of current AI research concerns what are termed “hallucinations”. Whilst hundreds of different definitions and classifications of “AI hallucination” have been published, none have yet considered the distinction between errors and irregularities in Wittgenstein’s sense. This article provides a straightforward explanation of this distinction, illustrated through examples of AI outputs drawn from various publications. We then examine the terms proposed as alternatives to “hallucination” and highlight both their strengths and weaknesses. Drawing upon this analysis, we establish criteria for proposing alternative terms that encompass both errors and irregularities in Wittgenstein’s sense. Our aim is not to definitively resolve the ongoing debate surrounding suitable replacements for “AI hallucination”, but rather to provide a comprehensive overview of the characteristics and nuances that this distinction brings to the discussion. For unlike errors, irregularities prove entirely incomprehensible to users, owing to the grammatical gap created when the fundamental certainties that underpin meaningful language use are violated. Precisely because irregularities are incomprehensible in this way, the most trustworthy AI systems may ultimately be those that recognise their own epistemic boundaries rather than those that produce seemingly perfect outputs. | es_ES |
| dc.language.iso | eng | es_ES |
| dc.publisher | AI & Society | es_ES |
| dc.relation.uri | https://link.springer.com/article/10.1007/s00146-025-02757-1 | es_ES |
| dc.rights | openAccess | es_ES |
| dc.subject | hallucination | es_ES |
| dc.subject | AI systems | es_ES |
| dc.subject | human-AI interaction | es_ES |
| dc.subject | certainty | es_ES |
| dc.subject | hinge | es_ES |
| dc.title | ‘AI lost the prompt!’ Replacing ‘AI hallucination’ to distinguish between mere errors and irregularities | es_ES |
| dc.type | article | es_ES |
| reunir.tag | ~OPU | es_ES |
| dc.identifier.doi | https://doi.org/10.1007/s00146-025-02757-1 | |
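
The record above is plain Dublin Core metadata, so the same fields can also be harvested programmatically rather than scraped from the item page. Below is a minimal sketch, assuming (not confirmed against the live repository) that reunir.unir.net exposes the standard DSpace OAI-PMH endpoint at `/oai/request` and that the item's OAI identifier follows the common `oai:<host>:<handle>` pattern; both the endpoint path and the identifier format are assumptions, while the handle `123456789/18454` and the DOI are taken from the record itself.

```python
# Sketch: harvest this item's Dublin Core record over OAI-PMH.
# ASSUMPTIONS: the endpoint path and the OAI identifier pattern below are
# guesses based on typical DSpace defaults, not confirmed for this repository.
import urllib.request
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://reunir.unir.net/oai/request"     # assumed DSpace OAI-PMH path
ITEM_ID = "oai:reunir.unir.net:123456789/18454"          # assumed identifier for handle 123456789/18454

url = (
    f"{OAI_ENDPOINT}?verb=GetRecord"
    f"&metadataPrefix=oai_dc&identifier={ITEM_ID}"
)

with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

# oai_dc responses wrap the record in unqualified Dublin Core elements.
DC = "{http://purl.org/dc/elements/1.1/}"
for elem in tree.iter():
    if elem.tag.startswith(DC):
        field = elem.tag[len(DC):]   # e.g. "title", "creator", "identifier"
        print(f"dc.{field}: {(elem.text or '').strip()}")
```

If the OAI route is not available, resolving the DOI listed above with an HTTP client and content negotiation (e.g. an `Accept: application/x-bibtex` header against doi.org) is a standard fallback for retrieving the citation data.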