<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<channel rdf:about="https://reunir.unir.net/handle/123456789/15694">
<title>Vol. 8, Nº 4, December 2023</title>
<link>https://reunir.unir.net/handle/123456789/15694</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15706"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15705"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15533"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15134"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15133"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15132"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15131"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15129"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14812"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14365"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14354"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14353"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14351"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14350"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14338"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14337"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14336"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14322"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14315"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14309"/>
</rdf:Seq>
</items>
<dc:date>2024-10-24T16:03:05Z</dc:date>
</channel>
<item rdf:about="https://reunir.unir.net/handle/123456789/15706">
<title>Editor's Note</title>
<link>https://reunir.unir.net/handle/123456789/15706</link>
<description>Editor's Note
Verdú, Elena
The International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI) provides an interdisciplinary forum in which scientists and professionals can share their research results and report new advances in Artificial Intelligence (AI) tools, or tools that use AI together with interactive multimedia techniques. The present regular issue covers topics such as generative AI, brain and mind inspired computing, bird species identification, spam detection, recommendation systems, synthetic aperture radar automatic target recognition, hand gesture recognition, anomaly detection for video surveillance systems, disease detection, social network analysis, and user experience. The collection of articles shows the wide use of deep learning techniques, although classical machine learning techniques, among others, are also present.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-12-12T14:09:23Z
No. of bitstreams: 1
ijimai8_4_0.pdf: 78433 bytes, checksum: 511972015e7b5e293f8991193fac549e (MD5); Made available in DSpace on 2023-12-12T14:09:23Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15705">
<title>Tests of Usability Guidelines About Response to User Actions. Importance, Compliance, and Application of the Guidelines</title>
<link>https://reunir.unir.net/handle/123456789/15705</link>
<description>Tests of Usability Guidelines About Response to User Actions. Importance, Compliance, and Application of the Guidelines
Alonso-Virgós, Lucía; Pascual-Espada, Jordán; Rossi, Gustavo
Usability is a quality that a web page has when it is simple to use. Many recommendations aim to improve the web user experience, but there is no standardization of them. This study is part of a series that aims to organize existing recommendations and guidelines by analyzing the behavior of 20 Information Technology (IT) developers. This publication analyzes the set of guidelines that govern "user responses" when users interact with a website. It is intended to group these guidelines and obtain data on the application of each of them. The test is carried out with 20 web developers without training or experience in web usability. The objective is to know whether there are "user response" guidelines that a developer with no usability training or experience applies innately. Since web developers are also users, it is believed that there may be innate behavior that is not necessarily learned. The purposes of the work are: 1) to enumerate the recommendations most often forgotten by web developers, which can help to weigh the importance of offering specific training in this field; and 2) to identify the most important recommendations and guidelines according to the web developers themselves. The investigation is carried out as follows. First, IT engineers were asked to develop a website. Second, user tests were performed and the most neglected and most applied guidelines were evaluated; the level of compliance was also analyzed, since developers who lack experience in web usability could be applying a guideline, but not correctly. Third, web developers were interviewed to find out which guidelines they consider necessary. The results are intended to help us understand whether a web developer without training or experience in web usability can innately apply guidelines on "user responses". The objective of the study is to determine that there are guidelines that are applied intuitively and others that are not, and to know the reason for each situation.
The results reveal that the guidelines considered essential and those that are most often applied innately share certain commonalities.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-12-12T14:04:13Z
No. of bitstreams: 1
ijimai8_4_19_0.pdf: 1544973 bytes, checksum: 7376bf3b12bf68a2c95d912cbcc589ec (MD5); Made available in DSpace on 2023-12-12T14:04:13Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15533">
<title>S-Divergence-Based Internal Clustering Validation Index</title>
<link>https://reunir.unir.net/handle/123456789/15533</link>
<description>S-Divergence-Based Internal Clustering Validation Index
Kumar Sharma, Krishna; Seal, Ayan; Yazidi, Anis; Krejcar, Ondrej
A clustering validation index (CVI) is employed to evaluate an algorithm's clustering results. Generally, CVI statistics can be split into three classes, namely internal, external, and relative cluster validations. Most of the existing internal CVIs were designed based on compactness (CM) and separation (SM). The distance between cluster centers is captured by SM, whereas CM measures the variance of the cluster. However, the SM between groups is not always captured accurately in highly overlapping classes. In this article, we devise a novel internal CVI that can be regarded as a complementary measure to the landscape of available internal CVIs. Initially, a database's clusters are modeled as non-parametric density functions estimated using kernel density estimation. Then the S-divergence (SD) and S-distance are introduced for measuring the SM and the CM, respectively. The SD is defined based on the concept of Hermitian positive definite matrices applied to density functions. The proposed internal CVI (PM) is the ratio of CM to SM. The PM outperforms the legacy measures presented in the literature on both synthetic and real databases in various scenarios, according to empirical results from four popular clustering algorithms, including fuzzy k-means, spectral clustering, density peak clustering, and density-based spatial clustering of applications with noise.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-11-02T17:22:50Z
No. of bitstreams: 1
ip2023_10_001.pdf: 4247803 bytes, checksum: 6cc0f78e54ba6ad1f97721e81afc2aa4 (MD5); Made available in DSpace on 2023-11-02T17:22:50Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15134">
<title>What Do We Mean by GenAI? A Systematic Mapping of The Evolution, Trends, and Techniques Involved in Generative AI</title>
<link>https://reunir.unir.net/handle/123456789/15134</link>
<description>What Do We Mean by GenAI? A Systematic Mapping of The Evolution, Trends, and Techniques Involved in Generative AI
García-Peñalvo, Francisco; Vázquez-Ingelmo, Andrea
Artificial Intelligence has become a focal point of interest across various sectors due to its ability to generate creative and realistic outputs. A specific subset, generative artificial intelligence, has seen significant growth, particularly in late 2022. Tools like ChatGPT, Dall-E, or Midjourney have democratized access to Large Language Models, enabling the creation of human-like content. However, the concept "Generative Artificial Intelligence" lacks a universally accepted definition, leading to potential misunderstandings. While a model that produces any output can technically be seen as generative, the Artificial Intelligence research community often reserves the term for complex models that generate high-quality, human-like material. This paper presents a literature mapping of AI-driven content generation, analyzing 631 solutions published over the last five years to better understand and characterize the Generative Artificial Intelligence landscape. Our findings suggest a dichotomy in the understanding and application of the term "Generative AI". While the broader public often interprets "Generative AI" as AI-driven creation of tangible content, the AI research community mainly discusses generative implementations with an emphasis on the models in use, without explicitly categorizing their work under the term "Generative AI".
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T12:17:56Z
No. of bitstreams: 1
ip2023_07_006.pdf: 2089221 bytes, checksum: 7d853997a9bf3895a42bfd07db9c8615 (MD5); Made available in DSpace on 2023-08-28T12:17:56Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15133">
<title>Explaining Query Answers in Probabilistic Databases</title>
<link>https://reunir.unir.net/handle/123456789/15133</link>
<description>Explaining Query Answers in Probabilistic Databases
Debbi, Hichem
Probabilistic databases have emerged as an extension of relational databases that can handle uncertain data under possible-worlds semantics. Although the problems of creating effective means of probabilistic data representation and of probabilistic query evaluation have been widely addressed, little attention has been given to query result explanation. While query answer explanation in relational databases tends to answer the question "why is this tuple in the query result?", in probabilistic databases we should ask an additional question: why does this tuple have such a probability? Due to the huge number of possible worlds of a probabilistic database, query explanation in probabilistic databases is a challenging task. In this paper, we propose a causal explanation technique for conjunctive queries in probabilistic databases. Based on the notions of causality, responsibility and blame, we are able to address explanations for tuple and attribute uncertainties in a complementary way. Through an experiment on a real IMDB dataset, we show that this framework is helpful for explaining the results of complex queries. Compared to existing explanation methods, our method can also be considered an aided-diagnosis method through computing the blame, which helps to understand the impact of uncertain attributes.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T12:06:08Z
No. of bitstreams: 1
ip2023_07_005.pdf: 2108435 bytes, checksum: 13c631e121d7879270ce87308c9a49aa (MD5); Made available in DSpace on 2023-08-28T12:06:08Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15132">
<title>Research on Brain and Mind Inspired Intelligence</title>
<link>https://reunir.unir.net/handle/123456789/15132</link>
<description>Research on Brain and Mind Inspired Intelligence
Liu, Yang; Wei, Jianshe
To address problems in the scientific theory, common technology and engineering application of multimedia and multimodal information computing, this paper focuses on the theoretical model, algorithm framework, and system architecture of brain and mind inspired intelligence (BMI), based on the structural mechanism simulation of the nervous system, the functional architecture emulation of the cognitive system and the complex behavior imitation of the natural system. Based on information theory, system theory, cybernetics and bionics, we define the related concepts and hypotheses of brain and mind inspired computing (BMC) and design a model and framework for frontier BMI theory. Research shows that BMC can effectively improve the performance of semantic processing of multimedia and cross-modal information, such as target detection, classification and recognition. Based on the brain mechanism and mind architecture, a semantic-oriented multimedia neural cognitive computing model is designed for multimedia semantic computing. Then a hierarchical cross-modal cognitive neural computing framework is proposed for cross-modal information processing. Furthermore, a cross-modal neural cognitive computing architecture is presented for a remote sensing intelligent information extraction platform and an unmanned autonomous system.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T12:00:15Z
No. of bitstreams: 1
ip2023_07_004.pdf: 5526925 bytes, checksum: 5223497ce60b15e0c4c36b946ecfa981 (MD5); Made available in DSpace on 2023-08-28T12:00:15Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15131">
<title>Deobfuscating Leetspeak With Deep Learning to Improve Spam Filtering</title>
<link>https://reunir.unir.net/handle/123456789/15131</link>
<description>Deobfuscating Leetspeak With Deep Learning to Improve Spam Filtering
Vélez de Mendizabal, Iñaki; Vidriales, Xabier; Basto-Fernandes, Vitor; Ezpeleta, Enaitz; Méndez, José Ramón; Zurutuza, Urko
The evolution of anti-spam filters has forced spammers to make greater efforts to bypass filters in order to distribute content over networks. The distribution of content encoded in images or the use of Leetspeak are concrete and clear examples of techniques currently used to bypass filters. Despite the importance of dealing with these problems, the number of studies to solve them is quite small, and the reported performance is very limited. This study reviews the work done so far (very rudimentary) for Leetspeak deobfuscation and proposes a new technique based on using neural networks for decoding purposes. In addition, we distribute an image database specifically created for training Leetspeak decoding models. We have also created and made available four different corpora to analyse the performance of Leetspeak decoding schemes. Using these corpora, we have experimentally evaluated our neural network approach for decoding Leetspeak. The results obtained have shown the usefulness of the proposed model for addressing the deobfuscation of Leetspeak character sequences.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T11:41:03Z
No. of bitstreams: 1
ip2023_07_003.pdf: 1374055 bytes, checksum: c27508a391c30289b9587215e328be22 (MD5); Made available in DSpace on 2023-08-28T11:41:03Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15129">
<title>IoT Detection System for Mildew Disease in Roses Using Neural Networks and Image Analysis</title>
<link>https://reunir.unir.net/handle/123456789/15129</link>
<description>IoT Detection System for Mildew Disease in Roses Using Neural Networks and Image Analysis
Torres, Laura; Romero, Luis; Aguirre, Edgar; Ferro Escobar, Roberto
Artificial intelligence offers different approaches; one of these is the use of neural network algorithms. A particular context is the farming sector, where these algorithms support the detection of diseases in flowers. This work presents a system to detect downy mildew disease in roses through the analysis of images with neural networks and the correlation of environmental variables, via an experiment in a controlled environment, for which an IoT platform that integrates an artificial intelligence module was developed. To verify the model, three different neural network models were experimentally compared in a controlled greenhouse, and a model was proposed and trained on two categories, healthy roses and diseased roses, with an 89%/11% split between the training and validation sets. It was determined that the relative humidity variable can influence the development and appearance of downy mildew disease when its value remains above 85% for a prolonged period.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T08:56:57Z
No. of bitstreams: 1
ip2023_07_001_0.pdf: 4767566 bytes, checksum: 0b015bacd69adb6ae18095b9b1b8b605 (MD5); Made available in DSpace on 2023-08-28T08:56:57Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14812">
<title>ConvGRU-CNN: Spatiotemporal Deep Learning for Real-World Anomaly Detection in Video Surveillance System</title>
<link>https://reunir.unir.net/handle/123456789/14812</link>
<description>ConvGRU-CNN: Spatiotemporal Deep Learning for Real-World Anomaly Detection in Video Surveillance System
Qasim Gandapur, Maryam; Verdú, Elena
Video surveillance for real-world anomaly detection and prevention using deep learning is an important and difficult research area. It is imperative to detect and prevent anomalies to develop a nonviolent society. Real-world video surveillance cameras automate the detection of anomalous activities and enable law enforcement systems to take steps toward public safety. However, a human-monitored surveillance system is vulnerable to overlooking anomalous activity. In this paper, an automated deep learning model is proposed in order to detect and prevent anomalous activities. The real-world video surveillance system is designed by implementing ResNet-50, a Convolutional Neural Network (CNN) model, to extract high-level features from input streams, whereas temporal features are extracted from the ResNet-50 features by a Convolutional GRU (ConvGRU) over the time-series dataset. The proposed deep learning video surveillance model (named ConvGRU-CNN) can efficiently detect anomalous activities. The UCF-Crime dataset is used to evaluate the proposed deep learning model. We classified normal and abnormal activities, thereby showing the ability of ConvGRU-CNN to find the correct category for each abnormal activity. With the UCF-Crime dataset for video surveillance-based anomaly detection, ConvGRU-CNN achieved 82.22% accuracy. In addition, the proposed model outperformed the related deep learning models.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-06-01T10:12:22Z
No. of bitstreams: 1
ip2023_05_006_0.pdf: 2055534 bytes, checksum: 070dd17271be2ce11fcd266d68791423 (MD5); Made available in DSpace on 2023-06-01T10:12:22Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14365">
<title>A Survey on Data-Driven Evaluation of Competencies and Capabilities Across Multimedia Environments</title>
<link>https://reunir.unir.net/handle/123456789/14365</link>
<description>A Survey on Data-Driven Evaluation of Competencies and Capabilities Across Multimedia Environments
Strukova, Sofia; Ruipérez-Valiente, José A.; Gómez Mármol, Félix
The rapid evolution of technology directly impacts the skills and jobs needed in the next decade. Users can, intentionally or unintentionally, develop different skills by creating, interacting with, and consuming the content from online environments and portals where informal learning can emerge. These environments generate large amounts of data; therefore, big data can have a significant impact on education. Moreover, the educational landscape has been shifting from a focus on contents to a focus on competencies and capabilities that will prepare our society for an unknown future during the 21st century. Therefore, the main goal of this literature survey is to examine diverse technology-mediated environments that can generate rich data sets through the users’ interaction and where data can be used to explicitly or implicitly perform a data-driven evaluation of different competencies and capabilities. We thoroughly and comprehensively surveyed the state of the art to identify and analyse digital environments, the data they are producing and the capabilities they can measure and/or develop. Our survey revealed four key multimedia environments that include sites for content sharing &amp; consumption, video games, online learning and social networks that fulfilled our goal. Moreover, different methods were used to measure a large array of diverse capabilities such as expertise, language proficiency and soft skills. Our results prove the potential of the data from diverse digital environments to support the development of lifelong and lifewide 21st-century capabilities for the future society.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-15T09:22:47Z
No. of bitstreams: 1
ip2022_10_004_0.pdf: 1157780 bytes, checksum: 5e9ac2341a6bba489dc90f7fe1b5b692 (MD5); Made available in DSpace on 2023-03-15T09:22:47Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14354">
<title>Attentive Flexible Translation Embedding in Top-N Sparse Sequential Recommendations</title>
<link>https://reunir.unir.net/handle/123456789/14354</link>
<description>Attentive Flexible Translation Embedding in Top-N Sparse Sequential Recommendations
Seo, Min-Ji; Kim, Myung-Ho
Sequential recommendation aims to predict the user's next action based on personal action sequences. The major challenge in this task is how to achieve high recommendation performance under the data sparsity problem. Translation-based recommendations, which learn distance metrics to capture interactions between users and items in sequential recommendations, are a promising method to overcome this issue. However, a disadvantage of translation-based recommendations is that they struggle to capture long-term preferences of the user and complex item transitions. In this paper, we propose attentive flexible translation for recommendations (AFTRec) to tackle the data sparsity problem by capturing a user's dynamic preferences and the complex interactions between items in the user's purchasing behavior. In particular, we first encode the semantic information of an item related to the user's purchasing behavior as user-specific item translation vectors. We also design a transition graph and encode complex item transitions as correlation-specific item translation vectors. Finally, we adopt a flexible distance metric that considers directions with respect to the translation vectors in the same space for predicting the next item. To evaluate the performance of our method, we conducted experiments on four sparse datasets and one dense dataset from different domains. The experimental results demonstrate that our proposed AFTRec outperforms the state-of-the-art baselines in terms of normalized discounted cumulative gain and hit rate on sparse datasets.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T10:28:46Z
No. of bitstreams: 1
ip2022_10_007.pdf: 2429918 bytes, checksum: 4407cce040a1d5838b26a588739dd81c (MD5); Made available in DSpace on 2023-03-14T10:28:46Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14353">
<title>An Empirical Evaluation of Machine Learning Techniques for Crop Prediction</title>
<link>https://reunir.unir.net/handle/123456789/14353</link>
<description>An Empirical Evaluation of Machine Learning Techniques for Crop Prediction
Mariammal, G.; Suruliandi, A.; Raja, S. P.; Poongothai, E.
Agriculture is the primary source driving the economic growth of every country worldwide. Crop prediction, which is critical to agriculture, depends on the soil and environment. Nutrient levels differ from area to area and greatly influence crop cultivation. Earlier, the tasks of crop forecasting and cultivation were undertaken by farmers themselves. Today, however, crop prediction is determined by climatic variations. This is where machine learning algorithms step in to identify the most relevant crop for cultivation. This research undertakes an empirical analysis using the bagging, random forest, support vector machine, decision tree, Naïve Bayes and k-nearest neighbor classifiers to predict the most appropriate cultivable crop for certain areas, based on environmental and soil traits. Further, the suitability of the classifiers is examined using a GitHub prisoners' dataset. The experimental results of all the classification techniques were assessed, showing that the ensemble classifiers outclassed the rest with respect to every performance metric.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T10:18:38Z
No. of bitstreams: 1
ip2022_12_004.pdf: 829089 bytes, checksum: 1a22e16ffed40d297c55796f0fecf0b9 (MD5); Made available in DSpace on 2023-03-14T10:18:38Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14351">
<title>Tourism-Related Placeness Feature Extraction from Social Media Data Using Machine Learning Models</title>
<link>https://reunir.unir.net/handle/123456789/14351</link>
<description>Tourism-Related Placeness Feature Extraction from Social Media Data Using Machine Learning Models
Muñoz, Pedro; Doñaque, E.; Larrañaga, A.; Martínez Torres, Javier
The study of placeness has been a focus for researchers trying to understand the impact of locations on their surroundings and on tourism, the loss of placeness through globalization and modernization and its effect on tourism, or the characterization of the activities that take place in such locations. Identifying places that have a high level of placeness can be very valuable when studying social trends and mobility in relation to the space in which the study takes place. Moreover, thanks to social media and modern machine learning and data mining methods, places can be enriched with dimensions such as the demographics of the individuals visiting them and the activities they carry out there. Such information can prove useful in fields such as urban planning or tourism, as a basis for analysis and decision-making or for the discovery of new social hotspots or sites rich in cultural heritage.
This manuscript focuses on the methodology to obtain such information, for which data from Instagram is used to feed a set of classification models that mine user demographics from graphic and textual profile data, gain insight into what users were doing in each of their posts, and classify that information into the categories discovered in this article. The goal of this methodology is to obtain, from social media data, characteristics of visitors to locations as a discovery tool for the tourism industry.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T09:46:13Z
No. of bitstreams: 1
ip2022_12_003.pdf: 1149017 bytes, checksum: f6c14a3431fb1cf416990f4d8443e4bb (MD5); Made available in DSpace on 2023-03-14T09:46:13Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14350">
<title>Deep Transfer Learning-Based Automated Identification of Bird Song</title>
<link>https://reunir.unir.net/handle/123456789/14350</link>
<description>Deep Transfer Learning-Based Automated Identification of Bird Song
Das, Nabanita; Padhy, Neelamadhab; Dey, Nilanjan; Bhattacharya, Sudipta; Tavares, Joao Manuel R. S.
Bird species identification is becoming increasingly crucial for avian biodiversity conservation and for assisting ornithologists in quantifying the presence of birds in a given area. Convolutional Neural Networks (CNNs) are advanced deep learning algorithms that have proven to perform well in speech classification. However, developing an accurate deep learning classifier requires a large amount of data, and such data on endemic or endangered creatures is frequently difficult to gather. In other fields as well, such as bioinformatics and robotics, the high cost of data collection and expensive annotation limit progress, so creating a large, well-annotated data set is also difficult. A transfer learning method can alleviate overfitting concerns in a deep learning model; this serves as the inspiration for transfer learning, which was created to deal with situations where the data are distributed across a variety of functional domains. In this study, the ability of deep transfer models such as VGG16, VGG19 and InceptionV3 to effectively extract and discriminate speech signals from different species of birds with high prediction accuracy is explored. The obtained accuracies using VGG16, VGG19 and InceptionV3 were 78%, 61.9% and 85%, respectively, which are very promising.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T09:30:41Z
No. of bitstreams: 1
ip2023_01_003_1.pdf: 3796248 bytes, checksum: 3384f7e7c2b7a69190885f76c549e697 (MD5); Made available in DSpace on 2023-03-14T09:30:41Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14338">
<title>Quantitative Measures for Medical Fundus and Mammography Images Enhancement</title>
<link>https://reunir.unir.net/handle/123456789/14338</link>
<description>Quantitative Measures for Medical Fundus and Mammography Images Enhancement
Intriago-Pazmiño, Monserrate; Ibarra-Fiallo, Julio; Guzmán-Castillo, Adán; Alonso-Calvo, Raúl; Crespo, José
Enhancing the visibility of medical images is part of the initial or preprocessing phase within a computer vision system. This image preparation is essential for subsequent system tasks such as segmentation or classification. Therefore, quantitative validation of medical image preprocessing is crucial. In this work, four metrics are studied: Contrast Improvement Index (CII), Enhancement Measurement Estimation (EME), Entropy EME (EMEE), and Entropy. The objective is to find the best parameters for each metric. The study is performed on five medical image datasets, three retinal fundus sets (DRIVE, ROPFI, HRF-POORQ), and two mammography image sets (MIAS, DDSM). Metrics are calculated using a binary mask image to discard the background.&#13;
Using the fundus and mask datasets, the best results were obtained with the EMEE and EME metrics, which achieved mean improvements of up to 186% and 75%, respectively. For mammography datasets and using masks of the region of interest, the two metrics with the highest percentage improvement were CII and EMEE, which obtained means of up to 396% and 129%, respectively. Based on the experimental results provided, we can conclude that the EMEE, EME, and CII metrics can achieve better enhancement assessment in this type of medical imaging.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T11:45:48Z&#13;
No. of bitstreams: 1&#13;
ip2022_12_002_0.pdf: 5667122 bytes, checksum: f9c96ad1df94e8ae5b1a983bd37fe2c1 (MD5); Made available in DSpace on 2023-03-13T11:45:48Z (GMT). No. of bitstreams: 1&#13;
ip2022_12_002_0.pdf: 5667122 bytes, checksum: f9c96ad1df94e8ae5b1a983bd37fe2c1 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14337">
<title>Synthetic Aperture Radar Automatic Target Recognition Based on a Simple Attention Mechanism</title>
<link>https://reunir.unir.net/handle/123456789/14337</link>
<description>Synthetic Aperture Radar Automatic Target Recognition Based on a Simple Attention Mechanism
Ukwuoma, Chiagoziem Chima; Zhiguang, Qin; Tienin, Bole W.; Yussif, Sophyani B.; Ejiyi, Chukwuebuka Joseph; Urama, Gilbert C.; Ukwuoma, Chibueze D.; Chikwendu, Ijeoma Amuche
A simple but effective channel attention module is proposed for Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR). The channel attention technique has shown recent success in improving Deep Convolutional Neural Networks (CNNs). The resolution of SAR images does not surpass that of optical images; thus, the information flow of SAR images becomes relatively poor when the network depth is raised blindly, leading to serious gradient explosion/vanishing. To resolve the trade-off between SAR image recognition efficiency and ambiguity, we introduce a simple Channel Attention module into the ResNet architecture as our network backbone, which uses few parameters yet yields a performance gain. Our simple attention module, which follows the implementation of Efficient Channel Attention, shows that avoiding dimensionality reduction is essential for learning, and that an appropriate cross-channel interaction can preserve performance while decreasing model complexity. We also explored the One Policy Learning Rate on the ResNet-50 architecture and compared it with the proposed attention-based ResNet-50 architecture. A thorough analysis of the MSTAR Dataset demonstrates the efficacy of the suggested strategy over the most recent findings. With the Attention-based model and the One Policy Learning Rate-based architecture, we obtained recognition rates of 100% and 99.8%, respectively.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T11:10:00Z&#13;
No. of bitstreams: 1&#13;
ip2023_02_004.pdf: 4381897 bytes, checksum: 240caf320d19eb5d4de43c7dd61fb2ea (MD5); Made available in DSpace on 2023-03-13T11:10:00Z (GMT). No. of bitstreams: 1&#13;
ip2023_02_004.pdf: 4381897 bytes, checksum: 240caf320d19eb5d4de43c7dd61fb2ea (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14336">
<title>Emotion-Aware Monitoring of Users’ Reaction With a Multi-Perspective Analysis of Long- and Short-Term Topics on Twitter</title>
<link>https://reunir.unir.net/handle/123456789/14336</link>
<description>Emotion-Aware Monitoring of Users’ Reaction With a Multi-Perspective Analysis of Long- and Short-Term Topics on Twitter
Cavaliere, Danilo; Fenza, Giuseppe; Loia, Vincenzo; Nota, Francesco
Social networks such as Twitter act as boosters of disinformation spread, giving individuals and organizations the chance to deliberately influence users’ beliefs through tweets, with destabilizing effects on the community. As a consequence, there is a need for solutions that analyse users’ reactions to topics debated in the community. To this purpose, state-of-the-art methods focus on selecting the most debated topics over time, ignoring less frequently discussed topics. In this paper, a framework for users’ reaction and topic analysis is introduced. First, the method extracts topics as frequent itemsets of named entities from the collected tweets; then support over time and RoBERTa-based sentiment analysis are applied to assess the current topic spread and emotional impact; next, a time-grid-based approach allows a granule-level analysis of the collected features that can be exploited for predicting future users’ reactions towards topics. Finally, a three-perspective score function is introduced to build comparative ranked lists of the most relevant topics according to topic sentiment, importance and spread. Experiments demonstrate the potential of the framework on the IEEE COVID-19 Tweets Dataset.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T11:00:19Z&#13;
No. of bitstreams: 1&#13;
ip2023_02_003.pdf: 1790012 bytes, checksum: 313fafa79d29a1bd740e0945384dcf6e (MD5); Made available in DSpace on 2023-03-13T11:00:19Z (GMT). No. of bitstreams: 1&#13;
ip2023_02_003.pdf: 1790012 bytes, checksum: 313fafa79d29a1bd740e0945384dcf6e (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14322">
<title>Results of a Study to Improve the Spanish Version of the User Experience Questionnaire (UEQ)</title>
<link>https://reunir.unir.net/handle/123456789/14322</link>
<description>Results of a Study to Improve the Spanish Version of the User Experience Questionnaire (UEQ)
Hernández-Campos, Mónica; Thomaschewski, Jörg; Law, Yuen C.
This paper analyses changes to some items of the User Experience Questionnaire (UEQ) for use in the context of Costa Rican culture. Although a Spanish version of the UEQ was created in 2012, we use a double-translation and reconciliation model to identify the words most appropriate for Costa Rican culture. This resulted in 7 new items that were added to the original Spanish version, for a total of 33 items in the resulting UEQ. 161 participants took part in a study that examined both the original items and the new ones. Statistical analyses (Cronbach's Alpha, mean, variance, and confidence interval) were performed to measure the differences between the scales of the original items and the new UEQ variant with the Costa Rican words. Finally, confidence intervals of the individual items and the average Cronbach’s Alpha coefficient of the affected scales were analysed. The results show, contrary to initial expectations, that the Costa Rican word version is neither better nor worse than the original Spanish version. However, this shows that the UEQ is very robust to some changes in the items.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T13:35:35Z&#13;
No. of bitstreams: 1&#13;
ip2022_11_003.pdf: 514919 bytes, checksum: 2bc2e2adaafe10ad962fdde1426f1df3 (MD5); Made available in DSpace on 2023-03-10T13:35:35Z (GMT). No. of bitstreams: 1&#13;
ip2022_11_003.pdf: 514919 bytes, checksum: 2bc2e2adaafe10ad962fdde1426f1df3 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14315">
<title>A Platform for Swimming Pool Detection and Legal Verification Using a Multi-Agent System and Remote Image Sensing</title>
<link>https://reunir.unir.net/handle/123456789/14315</link>
<description>A Platform for Swimming Pool Detection and Legal Verification Using a Multi-Agent System and Remote Image Sensing
Sánchez San Blas, Héctor; Carmona Balea, Antía; Sales, A.; Augusto Silva, Luís; Villarrubia González, Gabriel
Spain is the second country in Europe with the most swimming pools. However, the legal literature estimates that 20% of swimming pools are undeclared or irregular. The administration employs a corps of people who manually analyze satellite or drone images to detect illegal or irregular structures. This method is costly in terms of effort and time, and it also depends on the subjectivity of the person carrying it out. This proposal aims to design a platform that automatically detects irregular pools, using geographic information system (GIS) tools based on orthophotography combined with advanced machine learning techniques for object detection. Furthermore, a multi-agent architecture makes the system modular, allowing the different parts of the system to work together and balance the workload. The proposed system has been validated by testing it in different towns in Spain, where it has shown promising results, with an F1-Score of 97.1%.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-09T16:11:46Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_002.pdf: 15157584 bytes, checksum: a865b94d7d0ba34b25ee70e7ae98f5c5 (MD5); Made available in DSpace on 2023-03-09T16:11:46Z (GMT). No. of bitstreams: 1&#13;
ip2023_01_002.pdf: 15157584 bytes, checksum: a865b94d7d0ba34b25ee70e7ae98f5c5 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14309">
<title>Point Cloud Deep Learning Solution for Hand Gesture Recognition</title>
<link>https://reunir.unir.net/handle/123456789/14309</link>
<description>Point Cloud Deep Learning Solution for Hand Gesture Recognition
Osimani, César; Ojeda-Castelo, Juan Jesus; Piedra-Fernandez, Jose A.
In the last couple of years, there has been an increasing need for Human-Computer Interaction (HCI) systems that do not require touching the devices to control them, such as ATMs, self-service kiosks in airports, and terminals in public offices. The use of hand gestures offers a natural alternative for achieving control without touching the devices. This paper presents a solution that recognizes hand gestures by analyzing three-dimensional landmarks using deep learning. These landmarks are extracted, using a model created with machine learning techniques, from a single standard RGB camera in order to define the skeleton of the hand with 21 landmarks distributed as follows: one on the wrist and four on each finger. This study proposes a deep neural network trained on 9 gestures that receives the 21 points of the hand as input. One of the main contributions, which considerably improves performance, is a first layer that normalizes and transforms the landmarks. In our experimental analysis, we reach an accuracy of 99.87% recognizing 9 hand gestures.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-09T08:40:35Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_001.pdf: 3726735 bytes, checksum: 59ea71a7db13110cd8657efe5823c1ba (MD5); Made available in DSpace on 2023-03-09T08:40:35Z (GMT). No. of bitstreams: 1&#13;
ip2023_01_001.pdf: 3726735 bytes, checksum: 59ea71a7db13110cd8657efe5823c1ba (MD5)
</description>
</item>
</rdf:RDF>
