Show simple item record

dc.contributor.author	Tomar, A.
dc.contributor.author	Kumar, S.
dc.contributor.author	Pant, B.
dc.date	2023-04
dc.date.accessioned	2023-05-03T09:09:31Z
dc.date.available	2023-05-03T09:09:31Z
dc.identifier.issn	1989-1660
dc.identifier.uri	https://reunir.unir.net/handle/123456789/14588
dc.description.abstract	Traditional crowd counting techniques (optical flow or feature matching) have been superseded by deep learning (DL) models because they lack automatic feature extraction and yield low-precision results. Most of these models were tested on surveillance-scene crowd datasets captured by stationary cameras. Counting people in videos shot with a head-mounted moving camera is very challenging, mainly because the temporal information of the moving crowd is mixed with the induced camera motion. This study proposes a transfer learning-based PeopleNet model to tackle this significant problem. To this end, we made significant changes to the standard VGG16 model, disabling its top convolutional blocks and replacing its standard fully connected layers with new fully connected and dense layers. The strong transfer-learning capability of the VGG16 network enables PeopleNet to produce good-quality density maps, resulting in highly accurate crowd estimates. The performance of the proposed model was tested on a self-generated image database prepared from moving-camera video clips, as no public benchmark dataset exists for this task. The proposed framework gives promising results on various crowd categories such as dense, sparse, and average. To ensure versatility, we performed self- and cross-evaluation on various crowd counting models and datasets, which demonstrates the value of the PeopleNet model for public-safety and defense applications.	es_ES
dc.language.iso	eng	es_ES
dc.publisher	International Journal of Interactive Multimedia and Artificial Intelligence	es_ES
dc.relation.ispartofseries	;In Press
dc.relation.uri	https://www.ijimai.org/journal/bibcite/reference/3297	es_ES
dc.rights	openAccess	es_ES
dc.subject	deep learning	es_ES
dc.subject	density map	es_ES
dc.subject	feature extraction	es_ES
dc.subject	moving camera videos	es_ES
dc.subject	counting individuals	es_ES
dc.subject	IJIMAI	es_ES
dc.title	PeopleNet: A Novel People Counting Framework for Head-Mounted Moving Camera Videos	es_ES
dc.type	article	es_ES
reunir.tag	~IJIMAI	es_ES
dc.identifier.doi	https://doi.org/10.9781/ijimai.2023.04.002
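The abstract frames counting as predicting a per-pixel density map whose integral is the head count. A minimal NumPy sketch of that idea (synthetic data and illustrative function names, not the authors' pipeline):

```python
import numpy as np

def gaussian_blob(shape, center, sigma=4.0):
    # 2-D Gaussian kernel normalized to sum to 1, i.e. "one person".
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def count_from_density_map(density_map):
    # Total crowd count is the sum (discrete integral) of the density map.
    return float(density_map.sum())

# Synthetic density map with three annotated head positions.
density = sum(gaussian_blob((64, 64), c) for c in [(10, 10), (30, 40), (50, 20)])
print(round(count_from_density_map(density)))  # 3
```

Because each blob is normalized to unit mass, the map's sum recovers the number of annotated heads exactly; a learned model such as PeopleNet approximates such a map from the image.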


Files in this item


This item appears in the following collection(s)
