Show simple item record

dc.contributor.author: Santas Ciavatta, José Armando
dc.contributor.author: Bermejo Higuera, Juan Ramón
dc.contributor.author: Bermejo Higuera, Javier
dc.contributor.author: Sicilia Montalvo, Juan Antonio
dc.contributor.author: Sureda Riera, Tomás
dc.contributor.author: Pérez Melero, Jesús
dc.date: 2026
dc.date.accessioned: 2026-04-20T10:43:57Z
dc.date.available: 2026-04-20T10:43:57Z
dc.identifier.citation: Santas Ciavatta, J.A., Bermejo Higuera, J.R., Bermejo Higuera, J., Sicilia Montalvo, J.A., Sureda Riera, T. et al. (2026). Integration of Large Language Models (LLMs) and Static Analysis for Improving the Efficacy of Security Vulnerability Detection in Source Code. Computers, Materials & Continua, 86(3), 11. https://doi.org/10.32604/cmc.2025.074566
dc.identifier.issn: 1546-2218
dc.identifier.issn: 1546-2226
dc.identifier.uri: https://reunir.unir.net/handle/123456789/19503
dc.description.abstract: Artificial intelligence (AI) continues to expand rapidly, particularly with the emergence of generative pre-trained transformers (GPT) built on the transformer architecture, which have revolutionized data processing and enabled significant improvements in many applications. This study investigates the detection of security vulnerabilities in source code using a range of large language models (LLMs). Our primary objective is to evaluate their effectiveness relative to Static Application Security Testing (SAST) by applying techniques such as persona prompting, structured outputs, and zero-shot prompting. The selected LLMs (CodeLlama 7B, DeepSeek Coder 7B, Gemini 1.5 Flash, Gemini 2.0 Flash, Mistral 7B Instruct, Phi 3 8B Mini 128K Instruct, Qwen 2.5 Coder, StarCoder 2 7B) are compared with, and combined with, Find Security Bugs. The evaluation uses a selected dataset containing known vulnerabilities, and the results provide insights for different scenarios according to software criticality (business critical, non-critical, minimum effort, best effort). In detail, the main objectives of this study are to investigate whether large language models match or exceed the capabilities of traditional static analysis tools, whether combining LLMs with SAST tools leads to an improvement, and whether local machine learning models running on an ordinary computer can produce reliable results. Summarizing the most important conclusions of the research: although results improve with the size of the LLM, for business-critical software the best results were obtained by SAST analysis alone. This differs in the "Non-Critical," "Best Effort," and "Minimum Effort" scenarios, where the combination of an LLM (Gemini) with SAST obtained better results.
dc.language.iso: eng
dc.publisher: Tech Science Press, Computers, Materials & Continua
dc.relation.ispartofseries: vol. 86, nº 3
dc.relation.uri: https://www.techscience.com/cmc/v86n3/65509
dc.rights: openAccess
dc.subject: AI + SAST
dc.subject: secure code
dc.subject: LLM
dc.subject: benchmarking LLM
dc.subject: vulnerability detection
dc.title: Integration of Large Language Models (LLMs) and Static Analysis for Improving the Efficacy of Security Vulnerability Detection in Source Code
dc.type: article
reunir.tag: ~OPU
dc.identifier.doi: https://doi.org/10.32604/cmc.2025.074566
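The abstract describes zero-shot prompting with a persona, structured outputs, and a combination of LLM and SAST verdicts. The sketch below illustrates one plausible shape of that pipeline; the persona wording, the JSON schema, and the OR-style combination rule are assumptions for illustration, not the paper's exact method, and a canned reply stands in for a real LLM call.

```python
import json

# Illustrative persona for zero-shot prompting (assumed wording, not the paper's).
PERSONA = "You are a senior application-security auditor."

def build_prompt(source_code: str) -> str:
    """Zero-shot prompt: persona + task + required JSON output structure."""
    return (
        f"{PERSONA}\n"
        "Analyze the following code for security vulnerabilities.\n"
        'Respond ONLY with JSON: {"vulnerable": bool, "cwe": str, "reason": str}\n\n'
        f"```\n{source_code}\n```"
    )

def parse_finding(raw_reply: str) -> dict:
    """Parse the model's structured reply; treat malformed output as no finding."""
    try:
        return json.loads(raw_reply)
    except json.JSONDecodeError:
        return {"vulnerable": False, "cwe": "", "reason": "unparseable reply"}

def combine(llm_vulnerable: bool, sast_vulnerable: bool) -> bool:
    """Naive union of LLM and SAST verdicts -- one possible reading of
    'LLM + SAST'; the paper may weight or filter the two sources differently."""
    return llm_vulnerable or sast_vulnerable

# Canned reply standing in for a real LLM call:
reply = '{"vulnerable": true, "cwe": "CWE-89", "reason": "string-built SQL query"}'
finding = parse_finding(reply)
print(combine(finding["vulnerable"], sast_vulnerable=False))  # → True
```

Under this union rule, a vulnerability flagged by either the model or Find Security Bugs counts as a finding, which raises recall at the cost of precision; that trade-off matches the criticality-dependent scenarios the abstract discusses.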

