<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<channel rdf:about="https://reunir.unir.net/handle/123456789/14284">
<title>2023</title>
<link>https://reunir.unir.net/handle/123456789/14284</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15706"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15705"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15533"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15218"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15217"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15216"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15215"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15214"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15213"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15212"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15211"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15198"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15197"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15134"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15133"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15132"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15131"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/15129"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14832"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14831"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14830"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14812"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14593"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14592"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14587"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14368"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14367"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14366"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14365"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14356"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14355"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14354"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14353"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14352"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14351"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14350"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14349"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14338"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14337"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14336"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14335"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14334"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14333"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14327"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14326"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14325"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14324"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14323"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14322"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14321"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14315"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14312"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14310"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14309"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14305"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14304"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14303"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14295"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14294"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14293"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14292"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14291"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14290"/>
<rdf:li rdf:resource="https://reunir.unir.net/handle/123456789/14289"/>
</rdf:Seq>
</items>
<dc:date>2026-02-17T10:06:58Z</dc:date>
</channel>
<item rdf:about="https://reunir.unir.net/handle/123456789/15706">
<title>Editor's Note</title>
<link>https://reunir.unir.net/handle/123456789/15706</link>
<description>Editor's Note
Verdú, Elena
The International Journal of Interactive Multimedia and Artificial Intelligence – IJIMAI – provides an interdisciplinary forum in which scientists and professionals can share their research results and report new advances in Artificial Intelligence (AI) tools or tools that use AI with interactive multimedia techniques. The present regular issue covers topics such as generative AI, brain- and mind-inspired computing, bird species identification, spam detection, recommendation systems, synthetic aperture radar automatic target recognition, hand gesture recognition, anomaly detection for video surveillance systems, disease detection, social network analysis, and user experience. The collection of articles shows the wide use of deep learning techniques, although classical machine learning techniques, among others, are also present.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-12-12T14:09:23Z
No. of bitstreams: 1
ijimai8_4_0.pdf: 78433 bytes, checksum: 511972015e7b5e293f8991193fac549e (MD5); Made available in DSpace on 2023-12-12T14:09:23Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15705">
<title>Tests of Usability Guidelines About Response to User Actions. Importance, Compliance, and Application of the Guidelines</title>
<link>https://reunir.unir.net/handle/123456789/15705</link>
<description>Tests of Usability Guidelines About Response to User Actions. Importance, Compliance, and Application of the Guidelines
Alonso-Virgós, Lucía; Pascual-Espada, Jordán; Rossi, Gustavo
Usability is a quality of a web page that makes it simple to use. Many recommendations aim to improve the web user experience, but there is no standardization among them. This study is part of a series that aims to organize existing recommendations and guidelines by analyzing the behavior of 20 Information Technology (IT) developers. This publication analyzes the set of guidelines that determine "user responses" when users interact with a website. It is intended to group these guidelines and obtain data on the application of each of them. The test is carried out with 20 web developers without training or experience in web usability. The objective is to know whether there are "user response" guidelines that a developer with no training or usability experience applies innately. Since web developers are also users, it is believed that there may be innate behavior that is not necessarily learned. The purposes of the work are: 1) to enumerate the recommendations most often overlooked by web developers, which can help gauge the importance of offering specific training in this field; and 2) to identify the recommendations and guidelines that web developers themselves consider most important. The investigation is carried out as follows. First, IT engineers were asked to develop a website. Second, user tests were performed and the most neglected and most applied guidelines were evaluated; the level of compliance was also analyzed, as developers who lack experience in web usability could be applying a guideline, but not correctly. Third, web developers were interviewed to find out which guidelines they consider necessary. The results are intended to help us understand whether a web developer without training or experience in web usability can innately apply guidelines on "user responses". The objective of the study is to determine that there are guidelines that are applied intuitively and others that are not, and to understand the reason for each situation.
The results reveal that the guidelines considered essential and those that are most commonly applied innately share certain commonalities.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-12-12T14:04:13Z
No. of bitstreams: 1
ijimai8_4_19_0.pdf: 1544973 bytes, checksum: 7376bf3b12bf68a2c95d912cbcc589ec (MD5); Made available in DSpace on 2023-12-12T14:04:13Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15533">
<title>S-Divergence-Based Internal Clustering Validation Index</title>
<link>https://reunir.unir.net/handle/123456789/15533</link>
<description>S-Divergence-Based Internal Clustering Validation Index
Kumar Sharma, Krishna; Seal, Ayan; Yazidi, Anis; Krejcar, Ondrej
A clustering validation index (CVI) is employed to evaluate an algorithm’s clustering results. Generally, CVI statistics can be split into three classes, namely internal, external, and relative cluster validations. Most of the existing internal CVIs were designed based on compactness (CM) and separation (SM). SM is calculated from the distance between cluster centers, whereas CM measures the variance within a cluster. However, the SM between groups is not always captured accurately for highly overlapping classes. In this article, we devise a novel internal CVI that can be regarded as a complementary measure to the landscape of available internal CVIs. Initially, a database’s clusters are modeled as non-parametric density functions estimated using kernel density estimation. Then the S-divergence (SD) and S-distance are introduced for measuring the SM and the CM, respectively. The SD is defined based on the concept of Hermitian positive definite matrices applied to density functions. The proposed internal CVI (PM) is the ratio of CM to SM. The PM outperforms the legacy measures presented in the literature on both synthetic and real-world databases in various scenarios, according to empirical results from four popular clustering algorithms: fuzzy k-means, spectral clustering, density peak clustering, and density-based spatial clustering of applications with noise.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-11-02T17:22:50Z
No. of bitstreams: 1
ip2023_10_001.pdf: 4247803 bytes, checksum: 6cc0f78e54ba6ad1f97721e81afc2aa4 (MD5); Made available in DSpace on 2023-11-02T17:22:50Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15218">
<title>Editor’s Note</title>
<link>https://reunir.unir.net/handle/123456789/15218</link>
<description>Editor’s Note
Alonso, Ricardo S.; Chamoso, Pablo; Rodríguez-González, Sara; Novais, Paulo
Research in Agents and Multiagent Systems has matured significantly in recent years, representing one of the main branches of Artificial Intelligence. There are now numerous effective applications of these technologies combined with Deep Learning, Computer Vision or Natural Language Processing, in areas such as healthcare and Ambient Intelligence, smart cities and mobility, Industry 4.0, educational technology, and fintech, among many others. In this regard, the International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS) provides an international forum to present and discuss the latest scientific advances and their effective applications in different sectors, evaluate the impact of the approach, and facilitate technology transfer among different stakeholders. Currently, a series of co-located events specialized in different areas of research are held simultaneously with PAAMS, namely the International Congress on Blockchain and Applications (BLOCKCHAIN), the International Conference on Distributed Computing and Artificial Intelligence (DCAI), the International Conference on Decision Economics (DECON), the International Symposium on Ambient Intelligence (ISAmI), the International Conference on Methodologies and Intelligent Systems for Technology Enhanced Learning (MIS4TEL), and the International Conference on Practical Applications of Computational Biology &amp; Bioinformatics (PACBB). In this regard, the present Special Issue includes a selection of extended papers presented at the 20th International Conference PAAMS 22 and its co-located events, held in L’Aquila (Italy), July 13-15, 2022.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-06T08:32:18Z
No. of bitstreams: 1
ijimai8_3_0_0.pdf: 57556 bytes, checksum: f0b02c6bc1df1b55b4db509d8c856e50 (MD5); Made available in DSpace on 2023-09-06T08:32:18Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15217">
<title>Violence Detection in Audio: Evaluating the Effectiveness of Deep Learning Models and Data Augmentation</title>
<link>https://reunir.unir.net/handle/123456789/15217</link>
<description>Violence Detection in Audio: Evaluating the Effectiveness of Deep Learning Models and Data Augmentation
Durães, Dalila; Veloso, Bruno; Novais, Paulo
Human nature is inherently intertwined with violence, impacting the lives of numerous individuals. Various forms of violence pervade our society, with physical violence being the most prevalent in our daily lives. The study of human actions has gained significant attention in recent years, with audio (captured by microphones) and video (captured by cameras) being the primary means to record instances of violence. While video requires substantial processing capacity and hardware-software performance, audio presents itself as a viable alternative, offering several advantages beyond these technical considerations. Therefore, it is crucial to represent audio data in a manner conducive to accurate classification. In the context of violence in a car, specific datasets dedicated to this domain are not readily available. As a result, we had to create a custom dataset tailored to this particular scenario. The purpose of curating this dataset was to assess whether it could enhance the detection of violence in car-related situations. Due to the imbalanced nature of the dataset, data augmentation techniques were implemented. Existing literature reveals that Deep Learning (DL) algorithms can effectively classify audio, with a commonly used approach involving the conversion of audio into a mel spectrogram image. Based on the results obtained for this dataset, the EfficientNetB1 neural network demonstrated the highest accuracy (95.06%) in detecting violence in audio, closely followed by EfficientNetB0 (94.19%). Conversely, MobileNetV2 proved to be less capable of classifying instances of violence.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-06T08:26:38Z
No. of bitstreams: 1
ijimai8_3_7.pdf: 714559 bytes, checksum: a445d93fee207713f024d19b6bf6ab13 (MD5); Made available in DSpace on 2023-09-06T08:26:38Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15216">
<title>An Investigation Into Different Text Representations to Train an Artificial Immune Network for Clustering Texts</title>
<link>https://reunir.unir.net/handle/123456789/15216</link>
<description>An Investigation Into Different Text Representations to Train an Artificial Immune Network for Clustering Texts
Ferraria, Matheus A.; Ferraria, Vinicius A.; de Castro, Leandro N.
Extracting knowledge from text data is a complex task that is usually performed by first structuring the texts and then applying machine learning algorithms, or by using specific deep architectures capable of dealing directly with the raw text data. The traditional approach to structuring texts is called Bag of Words (BoW) and consists of transforming each word in a document into a dimension (variable) in the structured data. Another approach uses grammatical classes to categorize the words and, thus, limit the dimension of the structured data to the number of grammatical categories. Another form of structuring text data for analysis is by using a distributed representation of words, sentences, or documents with methods like Word2Vec, Doc2Vec, and SBERT. This paper investigates four classes of text structuring methods to prepare documents to be clustered by an artificial immune system called aiNet. The goal is to assess the influence of each structuring method on the quality of the clustering obtained by the system and how methods that belong to the same type of representation differ from each other; for example, both LIWC and MRC are considered grammar-based models, but each of them uses completely different dictionaries to generate its representation. Using internal clustering measures, our results showed that vector space models, on average, presented the best results for the datasets chosen, followed closely by the state-of-the-art SBERT model, while MRC had the overall worst performance. We also observed consistency in the number of clusters generated by each representation for each dataset, with SBERT being the model whose number of clusters was closest to the original number of classes in the data.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-06T08:20:25Z
No. of bitstreams: 1
ijimai8_3_5.pdf: 353722 bytes, checksum: 84e1c9e34aff07d64e231c000192cdfd (MD5); Made available in DSpace on 2023-09-06T08:20:25Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15215">
<title>Pollutant Time Series Analysis for Improving Air-Quality in Smart Cities</title>
<link>https://reunir.unir.net/handle/123456789/15215</link>
<description>Pollutant Time Series Analysis for Improving Air-Quality in Smart Cities
López-Blanco, Raúl; Chaveinte García, Miguel; Alonso, Ricardo S.; Prieto, Javier; Corchado, Juan M.
The evolution towards Smart Cities is the process that many urban centers are following in their quest for efficiency, resource optimization and sustainable growth. This step forward in the continuous improvement of cities is closely linked to the quality of life they want to offer their citizens. One of the key issues that can have the greatest impact on the quality of life of all city dwellers is the quality of the air they breathe, since pollutants in the air can lead to illness. The application of new technologies, such as the Internet of Things, Big Data and Artificial Intelligence, makes it possible to obtain increasingly abundant and accurate data on what is happening in cities, providing more information to take informed action based on scientific data. This article studies the evolution of pollutants in the main cities of Castilla y León, using Generalized Additive Models (GAM), which have proven to be the most efficient for making predictions with detailed historical data exhibiting very strong seasonalities. The results of this study conclude that during the COVID-19 pandemic containment period, there was an overall reduction in the concentration of pollutants.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-06T07:56:12Z
No. of bitstreams: 1
ijimai8_3_9.pdf: 2271859 bytes, checksum: d134b99201a4bd287f347a2fe723b4ea (MD5); Made available in DSpace on 2023-09-06T07:56:12Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15214">
<title>Consensus-Based Learning for MAS: Definition, Implementation and Integration in IVEs</title>
<link>https://reunir.unir.net/handle/123456789/15214</link>
<description>Consensus-Based Learning for MAS: Definition, Implementation and Integration in IVEs
Carrascosa, C.; Enguix, F.; Rebollo, M.; Rincon, J.
One of the main advancements in distributed learning may be the idea behind Google’s Federated Learning (FL) algorithm. It trains copies of artificial neural networks (ANN) in a distributed way and recombines the weights and biases obtained in a central server. Each unit maintains the privacy of the information since the training datasets are not shared. This idea perfectly fits a Multi-Agent System, where the units learning and sharing the model are agents. FL is a centralized approach, where a server is in charge of receiving, averaging and distributing back the models to the different units carrying out the learning process. In this work, we propose a truly distributed learning process where all the agents have the same role in the system. We suggest using a consensus-based learning algorithm that we call Co-Learning. It uses a consensus process to share the ANN models each agent learns from its private data and to calculate the aggregated model. Co-Learning, as a consensus-based algorithm, calculates the average of the ANN models shared by the agents with their local neighbors. This iterative process converges to the averaged ANN model just as a central server would. Apart from the definition of the Co-Learning algorithm, the paper presents its integration in SPADE agents, along with a framework called FIVE that allows developing Intelligent Virtual Environments for SPADE agents. This framework has been used to test the execution of SPADE agents using the Co-Learning algorithm in a simulation of an orange orchard field.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-06T07:34:30Z
No. of bitstreams: 1
ijimai8_3_2.pdf: 958756 bytes, checksum: 8d28398827dd5615483a56b7b414301b (MD5); Made available in DSpace on 2023-09-06T07:34:30Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15213">
<title>Development of an Intelligent Classifier Model for Denial of Service Attack Detection</title>
<link>https://reunir.unir.net/handle/123456789/15213</link>
<description>Development of an Intelligent Classifier Model for Denial of Service Attack Detection
Michelena, Álvaro; Aveleira-Mata, Jose; Jove, Esteban; Alaiz-Moretón, Héctor; Quintián, Héctor; Calvo-Rolle, José Luis
The prevalence of Internet of Things (IoT) systems deployment is increasing across various domains, from residential to industrial settings. These systems are typically characterized by their modest computational requirements and use of lightweight communication protocols, such as MQTT. However, the rising adoption of IoT technology has also led to the emergence of novel attacks, increasing the susceptibility of these systems to compromise. Among the different attacks that can affect the main IoT protocols are Denial of Service (DoS) attacks. In this scenario, this paper evaluates the performance of six supervised classification techniques (Decision Trees, Multi-layer Perceptron, Random Forest, Support Vector Machine, Fisher Linear Discriminant and Bernoulli and Gaussian Naive Bayes) combined with the Principal Component Analysis (PCA) feature extraction method for detecting DoS attacks in MQTT networks. For this purpose, a real dataset has been used, containing all of the traffic generated in the network as well as the many attacks executed. Several of the resulting models achieved performances above 99% AUC.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-06T07:20:37Z
No. of bitstreams: 1
ijimai8_3_3.pdf: 460755 bytes, checksum: 7a24d6bbdd35f105e746145de760bdc5 (MD5); Made available in DSpace on 2023-09-06T07:20:37Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15212">
<title>Automatic Cell Counting With YOLOv5: A Fluorescence Microscopy Approach</title>
<link>https://reunir.unir.net/handle/123456789/15212</link>
<description>Automatic Cell Counting With YOLOv5: A Fluorescence Microscopy Approach
López Flórez, Sebastián; González-Briones, Alfonso; Hernández, Guillermo; Ramos, Carlos; de la Prieta, Fernando
Counting cells in a Neubauer chamber on microbiological culture plates is a laborious task that depends on technical experience. As a result, efforts have been made to advance computer vision-based approaches, increasing efficiency and reliability through quantitative analysis of microorganisms and calculation of their characteristics, biomass concentration, and biological activity. However, the variability that still persists in these processes poses a challenge that is yet to be overcome. In this work, we propose a solution adopting a YOLOv5 network model for automatic cell recognition and counting in a case study for laboratory cell detection using images from a CytoSMART Exact FL microscope. In this context, a dataset of 21 expert-labeled cell images was created, along with an extra Sperm DetectionV dataset of 1024 images for transfer learning. The YOLOv5 model was pretrained on the Sperm DetectionV database and then trained on this dataset. A laboratory test was also performed to confirm the results’ viability. Compared to YOLOv4, the current YOLOv5 model had accuracy, precision, recall, and F1 scores of 92%, 84%, 91%, and 87%, respectively. The YOLOv5 algorithm was also used for cell counting and compared to the segmentation-based U-Net and OpenCV models currently implemented. In conclusion, the proposed model successfully recognizes and counts the different types of cells present in the laboratory.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-06T07:08:19Z
No. of bitstreams: 1
ijimai8_3_6.pdf: 697392 bytes, checksum: c1f8f47261458de62606dcb4d737418c (MD5); Made available in DSpace on 2023-09-06T07:08:19Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15211">
<title>A Survey on Demand-Responsive Transportation for Rural and Interurban Mobility</title>
<link>https://reunir.unir.net/handle/123456789/15211</link>
<description>A Survey on Demand-Responsive Transportation for Rural and Interurban Mobility
Martí, Pasqual; Jordán, Jaume; González Arrieta, María Angélica; Julian, Vicente
Rural areas have been marginalized when it comes to flexible, quality transportation research. This review article brings together papers that discuss, analyze, model, or experiment with demand-responsive transportation systems applied to rural settlements and interurban transportation, discussing their general feasibility as well as the most successful configurations. To that end, demand-responsive transportation is characterized and the techniques used for modeling and optimization are described. Then, a classification of the relevant publications is presented, splitting the contributions into analytical and experimental works. The results of the classification lead to a discussion that states open issues within the topic: replacement of public transportation with demand-responsive solutions, disconnection between theoretical and experimental works, user-centered design and its impact on adoption rate, and a lack of innovation regarding artificial intelligence implementation in the proposed systems.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-05T16:15:21Z
No. of bitstreams: 1
ijimai8_3_4.pdf: 412656 bytes, checksum: f18ea08635e89eb0ce496ee4f982c51f (MD5); Made available in DSpace on 2023-09-05T16:15:21Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15198">
<title>Using Large Language Models to Shape Social Robots’ Speech</title>
<link>https://reunir.unir.net/handle/123456789/15198</link>
<description>Using Large Language Models to Shape Social Robots’ Speech
Sevilla-Salcedo, Javier; Fernádez-Rodicio, Enrique; Martín-Galván, Laura; Castro-González, Álvaro; Castillo, José C.; Salichs, Miguel A.
Social robots are making their way into our lives in different scenarios in which humans and robots need to communicate. In these scenarios, verbal communication is an essential element of human-robot interaction. However, in most cases, social robots’ utterances are based on predefined texts, which can cause users to perceive the robots as repetitive and boring. Achieving natural and friendly communication is important for avoiding this scenario. To this end, we propose to apply state-of-the-art natural language generation models to provide our social robots with more diverse speech. In particular, we have implemented and evaluated two mechanisms: a paraphrasing module that transforms the robot’s utterances while keeping their original meaning, and a module to generate speech about a certain topic that adapts the content of this speech to the robot’s conversation partner. The results show that these models have great potential when applied to our social robots, but several limitations must be considered. These include the computational cost of the solutions presented, the latency that some of these models can introduce in the interaction, the use of proprietary models, and the lack of a subjective evaluation that complements the results of the tests conducted.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-04T16:02:13Z
No. of bitstreams: 1
ijimai8_3_1.pdf: 440548 bytes, checksum: 5eb947a869e7c9bb1f258fde803c3198 (MD5); Made available in DSpace on 2023-09-04T16:02:13Z (GMT).
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15197">
<title>Problem Detection in the Edge of IoT Applications</title>
<link>https://reunir.unir.net/handle/123456789/15197</link>
<description>Problem Detection in the Edge of IoT Applications
Bernabé-Sánchez, Iván; Fernández, Alberto; Billhardt, Holger; Ossowski, Sascha
Due to technological advances, Internet of Things (IoT) systems are becoming increasingly complex. They are characterized by being multi-device and geographically distributed, which increases the possibility of errors of different types. In such systems, errors can occur anywhere at any time, and fault tolerance becomes an essential characteristic to make them robust and reliable. This paper presents a framework to manage and detect errors and malfunctions of the devices that compose an IoT system. The proposed solution approach takes into account both simple devices, such as sensors or actuators, and computationally intensive devices that are distributed geographically. It uses knowledge graphs to model the devices, the system’s topology, the software deployed on each device and the relationships between the different elements. The proposed framework retrieves information from log messages and processes this information automatically to detect anomalous situations or malfunctions that may affect the IoT system. This work also presents the ECO ontology to organize the IoT system information.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-09-04T15:53:52Z
No. of bitstreams: 1
ijimai8_3_8_0.pdf: 896555 bytes, checksum: 147dbd5992157eb21e49bc88dde04a02 (MD5); Made available in DSpace on 2023-09-04T15:53:52Z (GMT). No. of bitstreams: 1
ijimai8_3_8_0.pdf: 896555 bytes, checksum: 147dbd5992157eb21e49bc88dde04a02 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15134">
<title>What Do We Mean by GenAI? A Systematic Mapping of The Evolution, Trends, and Techniques Involved in Generative AI</title>
<link>https://reunir.unir.net/handle/123456789/15134</link>
<description>What Do We Mean by GenAI? A Systematic Mapping of The Evolution, Trends, and Techniques Involved in Generative AI
García-Peñalvo, Francisco; Vázquez-Ingelmo, Andrea
Artificial Intelligence has become a focal point of interest across various sectors due to its ability to generate creative and realistic outputs. A specific subset, generative artificial intelligence, has seen significant growth, particularly in late 2022. Tools like ChatGPT, Dall-E, or Midjourney have democratized access to Large Language Models, enabling the creation of human-like content. However, the concept 'Generative Artificial Intelligence' lacks a universally accepted definition, leading to potential misunderstandings. While a model that produces any output can technically be seen as generative, the Artificial Intelligence research community often reserves the term for complex models that generate high-quality, human-like material. This paper presents a literature mapping of AI-driven content generation, analyzing 631 solutions published over the last five years to better understand and characterize the Generative Artificial Intelligence landscape. Our findings suggest a dichotomy in the understanding and application of the term "Generative AI". While the broader public often interprets "Generative AI" as AI-driven creation of tangible content, the AI research community mainly discusses generative implementations with an emphasis on the models in use, without explicitly categorizing their work under the term "Generative AI".
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T12:17:56Z&#13;
No. of bitstreams: 1&#13;
ip2023_07_006.pdf: 2089221 bytes, checksum: 7d853997a9bf3895a42bfd07db9c8615 (MD5); Made available in DSpace on 2023-08-28T12:17:56Z (GMT). No. of bitstreams: 1&#13;
ip2023_07_006.pdf: 2089221 bytes, checksum: 7d853997a9bf3895a42bfd07db9c8615 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15133">
<title>Explaining Query Answers in Probabilistic Databases</title>
<link>https://reunir.unir.net/handle/123456789/15133</link>
<description>Explaining Query Answers in Probabilistic Databases
Debbi, Hichem
Probabilistic databases have emerged as an extension of relational databases that can handle uncertain data under possible-worlds semantics. Although the problems of creating effective means of probabilistic data representation and of probabilistic query evaluation have been widely addressed, little attention has been given to query result explanation. While query answer explanation in relational databases tends to answer the question "why is this tuple in the query result?", in probabilistic databases we should ask an additional question: why does this tuple have such a probability? Due to the huge number of possible worlds of a probabilistic database, query explanation in probabilistic databases is a challenging task. In this paper, we propose a causal explanation technique for conjunctive queries in probabilistic databases. Based on the notions of causality, responsibility and blame, we are able to address explanations for tuple and attribute uncertainties in a complementary way. Through an experiment on a real dataset from IMDB, we show that this framework is helpful for explaining the results of complex queries. Compared to existing explanation methods, our method can also be considered an aided-diagnosis method through computing the blame, which helps to understand the impact of uncertain attributes.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T12:06:08Z&#13;
No. of bitstreams: 1&#13;
ip2023_07_005.pdf: 2108435 bytes, checksum: 13c631e121d7879270ce87308c9a49aa (MD5); Made available in DSpace on 2023-08-28T12:06:08Z (GMT). No. of bitstreams: 1&#13;
ip2023_07_005.pdf: 2108435 bytes, checksum: 13c631e121d7879270ce87308c9a49aa (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15132">
<title>Research on Brain and Mind Inspired Intelligence</title>
<link>https://reunir.unir.net/handle/123456789/15132</link>
<description>Research on Brain and Mind Inspired Intelligence
Liu, Yang; Wei, Jianshe
To address the problems of scientific theory, common technology and engineering application of multimedia and multimodal information computing, this paper focuses on the theoretical model, algorithm framework, and system architecture of brain and mind inspired intelligence (BMI), based on the structure mechanism simulation of the nervous system, the function architecture emulation of the cognitive system, and the complex behavior imitation of the natural system. Based on information theory, system theory, cybernetics and bionics, we define the related concepts and hypotheses of brain and mind inspired computing (BMC) and design a model and framework for frontier BMI theory. Research shows that BMC can effectively improve the performance of semantic processing of multimedia and cross-modal information, such as target detection, classification and recognition. Based on the brain mechanism and mind architecture, a semantic-oriented multimedia neural and cognitive computing model is designed for multimedia semantic computing. Then a hierarchical cross-modal cognitive neural computing framework is proposed for cross-modal information processing. Furthermore, a cross-modal neural and cognitive computing architecture is presented for a remote sensing intelligent information extraction platform and an unmanned autonomous system.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T12:00:15Z&#13;
No. of bitstreams: 1&#13;
ip2023_07_004.pdf: 5526925 bytes, checksum: 5223497ce60b15e0c4c36b946ecfa981 (MD5); Made available in DSpace on 2023-08-28T12:00:15Z (GMT). No. of bitstreams: 1&#13;
ip2023_07_004.pdf: 5526925 bytes, checksum: 5223497ce60b15e0c4c36b946ecfa981 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15131">
<title>Deobfuscating Leetspeak With Deep Learning to Improve Spam Filtering</title>
<link>https://reunir.unir.net/handle/123456789/15131</link>
<description>Deobfuscating Leetspeak With Deep Learning to Improve Spam Filtering
Vélez de Mendizabal, Iñaki; Vidriales, Xabier; Basto-Fernandes, Vitor; Ezpeleta, Enaitz; Méndez, José Ramón; Zurutuza, Urko
The evolution of anti-spam filters has forced spammers to make greater efforts to bypass filters in order to distribute content over networks. The distribution of content encoded in images or the use of Leetspeak are concrete and clear examples of techniques currently used to bypass filters. Despite the importance of dealing with these problems, the number of studies addressing them is quite small, and the reported performance is very limited. This study reviews the very rudimentary work done so far on Leetspeak deobfuscation and proposes a new technique based on neural networks for decoding purposes. In addition, we distribute an image database specifically created for training Leetspeak decoding models. We have also created and made available four different corpora to analyse the performance of Leetspeak decoding schemes. Using these corpora, we have experimentally evaluated our neural network approach for decoding Leetspeak. The results obtained show the usefulness of the proposed model for addressing the deobfuscation of Leetspeak character sequences.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T11:41:03Z&#13;
No. of bitstreams: 1&#13;
ip2023_07_003.pdf: 1374055 bytes, checksum: c27508a391c30289b9587215e328be22 (MD5); Made available in DSpace on 2023-08-28T11:41:03Z (GMT). No. of bitstreams: 1&#13;
ip2023_07_003.pdf: 1374055 bytes, checksum: c27508a391c30289b9587215e328be22 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/15129">
<title>IoT Detection System for Mildew Disease in Roses Using Neural Networks and Image Analysis</title>
<link>https://reunir.unir.net/handle/123456789/15129</link>
<description>IoT Detection System for Mildew Disease in Roses Using Neural Networks and Image Analysis
Torres, Laura; Romero, Luis; Aguirre, Edgar; Ferro Escobar, Roberto
Artificial intelligence offers different approaches, one of which is the use of neural network algorithms; in the farming sector, these algorithms support the detection of diseases in flowers. This work presents a system to detect downy mildew disease in roses through the analysis of images with neural networks and the correlation of environmental variables, by means of an experiment in a controlled environment, for which an IoT platform integrating an artificial intelligence module was developed. To verify the model, three different neural network models were experimentally compared in a controlled greenhouse, and a proposed model was obtained using training and validation sets of two categories, healthy roses and diseased roses, with 89% of the data used for training and 11% for validation. It was also determined that the relative humidity variable can influence the development and appearance of downy mildew disease when its value remains above 85% for a prolonged period.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-08-28T08:56:57Z&#13;
No. of bitstreams: 1&#13;
ip2023_07_001_0.pdf: 4767566 bytes, checksum: 0b015bacd69adb6ae18095b9b1b8b605 (MD5); Made available in DSpace on 2023-08-28T08:56:57Z (GMT). No. of bitstreams: 1&#13;
ip2023_07_001_0.pdf: 4767566 bytes, checksum: 0b015bacd69adb6ae18095b9b1b8b605 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14832">
<title>Editor’s Note</title>
<link>https://reunir.unir.net/handle/123456789/14832</link>
<description>Editor’s Note
Gaona-García, Paulo Alonso
Artificial Intelligence (AI) represents one of the fastest growing areas of knowledge, sectors and fields of action globally. This growth has given rise to different positions: the most favourable emphasize AI's unquestionable contribution to facilitating decision making in various fields of society, while others argue strongly that its use should be regulated and measured because of the scope and risks to which we are exposed. For this reason, increasingly rigorous methods are required for the design and development of AI-based computational models; methods that involve strict validation mechanisms, as well as the analysis of the possible risks and scope they may have in the field of application where they are deployed. Such considerations mark a valuable and relevant milestone in defining several possible paths, among which we can find two: 1) setting limits on the use of AI by establishing increasingly sophisticated regulatory frameworks in areas involving data protection and the regulated use of data, or 2) removing all barriers so that AI can be exploited openly, in all its dimensions, in any area of our society. Hence the importance of analysing the different risks and threats that AI may present within the particular context in which it is being applied.&#13;
Based on this panorama, this regular issue of the “International Journal of Interactive Multimedia and Artificial Intelligence” presents a series of papers whose proposals are oriented to different fields and sectors and make use of diverse AI-based approaches, methods, models and systems, giving a general idea of how these challenges are being addressed in some fields of our society. In particular, this regular issue collects research topics focusing on addressing the problems of evolving recommender systems, classification models, decision support systems, system modelling, data analytics, optimization algorithms, image retrieval, deep neural networks, social network analysis, and the relevance of the design of User Experience (UX) proposals.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-06-05T14:28:13Z
No. of bitstreams: 1
ijimai8_2_0_1.pdf: 95746 bytes, checksum: c911ddb3a3adcb6f9dd5511649802c70 (MD5); Made available in DSpace on 2023-06-05T14:28:13Z (GMT). No. of bitstreams: 1
ijimai8_2_0_1.pdf: 95746 bytes, checksum: c911ddb3a3adcb6f9dd5511649802c70 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14831">
<title>ResNet18 Supported Inspection of Tuberculosis in Chest Radiographs With Integrated Deep, LBP, and DWT Features</title>
<link>https://reunir.unir.net/handle/123456789/14831</link>
<description>ResNet18 Supported Inspection of Tuberculosis in Chest Radiographs With Integrated Deep, LBP, and DWT Features
Rajinikanth, Venkatesan; Kadry, Seifedine; Moreno-Ger, Pablo
The lung is a vital organ in human physiology, and lung disease causes various health issues. Acute lung disease is a medical emergency, and hence several methods have been developed and implemented to detect lung abnormalities. Tuberculosis (TB) is one of the common lung diseases, and early diagnosis and treatment are necessary to cure the disease with appropriate medication. Clinical assessment of TB is commonly performed with chest radiographs (X-rays), and the recorded images are then examined to identify TB and its severity. This research proposes a TB detection framework using integrated optimal deep and handcrafted features. The different stages of this work include (i) X-ray collection and processing, (ii) Pretrained Deep-Learning (PDL) scheme-based feature mining, (iii) feature extraction with Local Binary Pattern (LBP) and Discrete Wavelet Transform (DWT), (iv) feature optimization with the Firefly Algorithm, (v) feature ranking and serial concatenation, and (vi) classification by means of 5-fold cross validation. The results of this study validate that the ResNet18 scheme helps to achieve better accuracy with the SoftMax classifier (95.2%) and the Decision Tree classifier (99%) with deep and concatenated features, respectively. Further, the overall performance of the Decision Tree is better compared to other classifiers.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-06-05T14:02:05Z
No. of bitstreams: 1
ijimai8_2_4.pdf: 3550633 bytes, checksum: 4967a3d490556c67b7a1f4f2ef738197 (MD5); Made available in DSpace on 2023-06-05T14:02:05Z (GMT). No. of bitstreams: 1
ijimai8_2_4.pdf: 3550633 bytes, checksum: 4967a3d490556c67b7a1f4f2ef738197 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14830">
<title>Digit Recognition Using Composite Features With Decision Tree Strategy</title>
<link>https://reunir.unir.net/handle/123456789/14830</link>
<description>Digit Recognition Using Composite Features With Decision Tree Strategy
Chen, Chung-Hsing; Huang, Ko-Wei
At present, check transactions are one of the most common forms of money transfer in the market. The information for check exchange is printed using magnetic ink character recognition (MICR), widely used in the banking industry, primarily for processing check transactions. However, the magnetic ink card reader is specialized and expensive, leading general accounting departments or bookkeepers to use manual data registration instead. An organization that deals with parts or corporate services might have to process 300 to 400 checks each day, which would require a considerable amount of labor for the registration process. The cost of a single-sided scanner is only 1/10 that of an MICR reader; hence, using image recognition technology is an economical solution. In this study, we aim to use multiple features for character recognition of E13B, comprising ten numbers and four symbols. For the numeric part, we used statistical features such as image density features and geometric features, and simple decision trees for classification. The symbols of E13B are composed of three distinct rectangles, classified according to their size and relative position. Using the same sample set, MLP, LeNet-5, AlexNet, and a hybrid CNN-SVM were trained on the numerical part as experimental control groups to verify the accuracy and speed of the proposed method. The results of this study were used to verify the performance and usability of the proposed method. Our proposed method recognized all test samples correctly, with a recognition rate close to 100%. A prediction time of less than one millisecond per character, with an average of 0.03 ms, was achieved, over 50 times faster than state-of-the-art methods. The accuracy rate is also better than that of all comparative state-of-the-art methods. The proposed method was also applied to an embedded device to show that a CPU can be used for verification instead of a high-end GPU.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-06-05T13:55:48Z
No. of bitstreams: 1
ijimai8_2_10.pdf: 2043799 bytes, checksum: a6c642303d81bfc5179a8053e5e7e086 (MD5); Made available in DSpace on 2023-06-05T13:55:48Z (GMT). No. of bitstreams: 1
ijimai8_2_10.pdf: 2043799 bytes, checksum: a6c642303d81bfc5179a8053e5e7e086 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14812">
<title>ConvGRU-CNN: Spatiotemporal Deep Learning for Real-World Anomaly Detection in Video Surveillance System</title>
<link>https://reunir.unir.net/handle/123456789/14812</link>
<description>ConvGRU-CNN: Spatiotemporal Deep Learning for Real-World Anomaly Detection in Video Surveillance System
Qasim Gandapur, Maryam; Verdú, Elena
Video surveillance for real-world anomaly detection and prevention using deep learning is an important and difficult research area. It is imperative to detect and prevent anomalies to develop a nonviolent society. Real-world video surveillance cameras automate the detection of anomalous activities and enable law enforcement systems to take steps toward public safety. However, a human-monitored surveillance system is prone to overlooking anomalous activity. In this paper, an automated deep learning model is proposed to detect and prevent anomalous activities. The real-world video surveillance system is designed by implementing ResNet-50, a Convolutional Neural Network (CNN) model, to extract high-level features from input streams, whereas temporal features are extracted by a Convolutional GRU (ConvGRU) from the ResNet-50 features in the time-series dataset. The proposed deep learning video surveillance model (named ConvGRU-CNN) can efficiently detect anomalous activities. The UCF-Crime dataset is used to evaluate the proposed deep learning model. We classified normal and abnormal activities, thereby showing the ability of ConvGRU-CNN to find the correct category for each abnormal activity. With the UCF-Crime dataset for video surveillance-based anomaly detection, ConvGRU-CNN achieved 82.22% accuracy. In addition, the proposed model outperformed related deep learning models.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-06-01T10:12:22Z&#13;
No. of bitstreams: 1&#13;
ip2023_05_006_0.pdf: 2055534 bytes, checksum: 070dd17271be2ce11fcd266d68791423 (MD5); Made available in DSpace on 2023-06-01T10:12:22Z (GMT). No. of bitstreams: 1&#13;
ip2023_05_006_0.pdf: 2055534 bytes, checksum: 070dd17271be2ce11fcd266d68791423 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14593">
<title>Exploring ChatGPT's Potential for Consultation, Recommendations and Report Diagnosis: Gastric Cancer and Gastroscopy Reports’ Case</title>
<link>https://reunir.unir.net/handle/123456789/14593</link>
<description>Exploring ChatGPT's Potential for Consultation, Recommendations and Report Diagnosis: Gastric Cancer and Gastroscopy Reports’ Case
Zhou, Jiaming; Li, Tengyue; Fong, Simon James; Dey, Nilanjan; González-Crespo, Rubén
Artificial intelligence (AI) has shown its effectiveness in helping clinical users meet evolving challenges. Recently, ChatGPT, a newly launched AI chatbot with exceptional text comprehension capabilities, has triggered a global wave of AI popularization and application in seeking answers through human‒machine dialogues. Gastric cancer, as a globally prevalent disease, has a five-year survival rate of up to 90% when detected early and treated promptly. This research aims to explore ChatGPT's potential in disseminating gastric cancer knowledge, providing consultation recommendations, and interpreting endoscopy reports. Through experimentation, the GPT-4 model of ChatGPT achieved an appropriateness of 91.3% and a consistency of 95.7% in a gastric cancer knowledge test. Furthermore, GPT-4 has demonstrated considerable potential in consultation recommendations and endoscopy report analysis.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-05-03T10:25:57Z&#13;
No. of bitstreams: 1&#13;
ip2023_04_007.pdf: 918647 bytes, checksum: 4646c89b726a20dbc836aa8df0455beb (MD5); Made available in DSpace on 2023-05-03T10:25:57Z (GMT). No. of bitstreams: 1&#13;
ip2023_04_007.pdf: 918647 bytes, checksum: 4646c89b726a20dbc836aa8df0455beb (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14592">
<title>Adaptation of Applications to Compare Development Frameworks in Deep Learning for Decentralized Android Applications</title>
<link>https://reunir.unir.net/handle/123456789/14592</link>
<description>Adaptation of Applications to Compare Development Frameworks in Deep Learning for Decentralized Android Applications
Sainz-de-Abajo, Beatriz; Laso, Sergio; Garcia-Alonso, Jose
Not all frameworks used in machine learning and deep learning integrate with Android, which requires some prerequisites. The primary objective of this paper is to present the results of an analysis and comparison of deep learning development frameworks that can be adapted into fully decentralized Android apps from a cloud server. As a working methodology, we develop and/or modify the test applications that these frameworks offer a priori, in such a way that an equitable comparison of the analysed characteristics of interest is possible.&#13;
These parameters relate to attributes that a user would consider, such as (1) percentage of success; (2) battery consumption; and (3) power consumption of the processor. After analysing the numerical results, the framework that behaves best with respect to the analysed characteristics for the development of an Android application is TensorFlow, which obtained the best score against Caffe2 and Snapdragon NPE in percentage of correct answers, battery consumption, and device CPU power consumption. Data consumption was not considered because we focus on decentralized cloud storage applications in this study.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-05-03T10:16:58Z&#13;
No. of bitstreams: 1&#13;
ip2023_04_006.pdf: 2388930 bytes, checksum: 7da9701560ea4c79115755aca69cea4c (MD5); Made available in DSpace on 2023-05-03T10:16:58Z (GMT). No. of bitstreams: 1&#13;
ip2023_04_006.pdf: 2388930 bytes, checksum: 7da9701560ea4c79115755aca69cea4c (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14587">
<title>Development of a Shared UX Vision Based on UX Factors Ascertained Through Attribution</title>
<link>https://reunir.unir.net/handle/123456789/14587</link>
<description>Development of a Shared UX Vision Based on UX Factors Ascertained Through Attribution
Winter, Dominique; Hausmann, Carolin; Hinderks, Andreas; Thomaschewski, Jörg
User experience (UX) is an important quality for differentiating products. For a product team, it is a challenge to develop a good, positive user experience. A common UX vision supports the product team in making goal-oriented decisions regarding the user experience. This paper presents an approach to developing a shared UX vision. The UX vision is developed by the product team during a collaborative session. To validate our approach, we conducted a first validation study in which we held a collaborative session with two groups and a total of 37 participants. The group of participants comprised product managers, UX designers and comparable professional profiles. At the end of the collaborative session, participants had to fill out a questionnaire. Through questions and observations, we identified ten good practices and four bad practices in the application of our approach to developing a UX vision. The top 3 good practices mentioned by the&#13;
participants include the definition of decision-making procedures (G1), determining the UX vision with the team (G2), and using general factors of the UX as a basis (G3). The top 3 bad practices are: providing too little time for the development of the UX vision (B1), not providing clear cluster designations (B2) and working without user data (B3). The results show that the present approach for developing a UX vision helps to promote a shared understanding of the intended UX in a quick and simple way.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-05-03T08:51:40Z&#13;
No. of bitstreams: 1&#13;
ip2023_04_001.pdf: 1637462 bytes, checksum: f2830f59be2d0b581d0f5e756ad54183 (MD5); Made available in DSpace on 2023-05-03T08:51:41Z (GMT). No. of bitstreams: 1&#13;
ip2023_04_001.pdf: 1637462 bytes, checksum: f2830f59be2d0b581d0f5e756ad54183 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14368">
<title>On the Importance of UX Quality Aspects for Different Product Categories</title>
<link>https://reunir.unir.net/handle/123456789/14368</link>
<description>On the Importance of UX Quality Aspects for Different Product Categories
Schrepp, Martin; Kollmorgen, Jessica; Meiners, Anna-Lena; Hinderks, Andreas; Winter, Dominique; Santoso, Harry B.; Thomaschewski, Jörg
User experience (UX) is a holistic concept. We conceptualize UX as a set of semantically distinct quality aspects. These quality aspects relate subjectively perceived properties of the user interaction with a product to the psychological needs of users. Not all possible UX quality aspects are equally important for all products. The main use case of a product can determine the relative importance of UX aspects for the overall impression of the UX. In this paper, the authors present several studies that investigate this dependency between the product category and the importance of several well-known UX aspects. A method to measure the importance of such UX aspects is presented. In addition, the authors show that the observed importance ratings are stable, i.e., reproducible, and hardly influenced by demographic factors or cultural background. Thus, the ratings reported in our studies can be reused by UX professionals to find out which aspects of UX they should concentrate on in product design and evaluation.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-15T10:21:46Z&#13;
No. of bitstreams: 1&#13;
ip2023_03_001.pdf: 1323641 bytes, checksum: 56fa34ed7b6a0677146aedec4cd55891 (MD5); Made available in DSpace on 2023-03-15T10:21:46Z (GMT). No. of bitstreams: 1&#13;
ip2023_03_001.pdf: 1323641 bytes, checksum: 56fa34ed7b6a0677146aedec4cd55891 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14367">
<title>Rhetorical Pattern Finding</title>
<link>https://reunir.unir.net/handle/123456789/14367</link>
<description>Rhetorical Pattern Finding
Gómez, Francisco; Tizón Díaz, Manuel; Arronte Alvarez, Aitor; Padilla, Victor
In this paper, we study rhetorical patterns from a musicological and computational standpoint. First, a theoretical examination of what constitutes a rhetorical pattern is conducted. Out of that examination, which includes primary sources and the study of the main composers, a formal definition of rhetorical patterns is proposed. Among the rhetorical figures, a set of imitative rhetorical figures is selected for our study, namely, epizeuxis, palilogy, synonymia, and polyptoton. Next, we design a computational model of the selected rhetorical patterns to automatically find those patterns in a corpus consisting of masses by the Renaissance composer Tomás Luis de Victoria. In order to have a ground truth with which to test our model, a group of experts manually annotated the rhetorical patterns. To deal with the problem of reaching a consensus on the annotations, a four-round Delphi method was followed by the annotators. The rhetorical patterns found by the annotators and by the algorithm are compared and their differences discussed. The algorithm reports almost all the patterns annotated by the experts (recall: 98.11%) and some additional patterns (precision: 71.73%). These additional patterns correspond to rhetorical patterns within other rhetorical patterns, which were overlooked by the annotators on the basis of their contextual knowledge. These results pose issues as to how to integrate that contextual knowledge into the computational model.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-15T10:09:25Z&#13;
No. of bitstreams: 1&#13;
ip2022_10_002.pdf: 1408728 bytes, checksum: 5f3989a04ead9602a8cbf4325fca66c2 (MD5); Made available in DSpace on 2023-03-15T10:09:25Z (GMT). No. of bitstreams: 1&#13;
ip2022_10_002.pdf: 1408728 bytes, checksum: 5f3989a04ead9602a8cbf4325fca66c2 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14366">
<title>A Hybrid Secure Cloud Platform Maintenance Based on Improved Attribute-Based Encryption Strategies</title>
<link>https://reunir.unir.net/handle/123456789/14366</link>
<description>A Hybrid Secure Cloud Platform Maintenance Based on Improved Attribute-Based Encryption Strategies
Kumar, Abhishek; Kumar, Swarn Avinash; Dutt, Vishal; Dubey, A. K.; Narang, Sushil
In the modern era, cloud platforms are the most needed means of maintaining documents remotely under proper security norms. The concept of a cloud environment is similar to a network channel, but the cloud is considered the refined form of a network, in which data can easily be stored on the server without any range restrictions. Data maintained on a remote server needs strong security features, and high processing power is needed to retrieve the data back from the respective server. In the past, several security schemes have been available to protect the remote cloud server reasonably well; however, attack possibilities over the cloud platform remain, and researchers continuously work on this problem. This paper introduces a hybrid data security scheme called the Improved Attribute-Based Encryption Scheme (IABES). IABES combines two powerful data security algorithms: the Advanced Encryption Standard (AES) and the Attribute-Based Encryption (ABE) algorithm. These two algorithms are combined to provide massive support to the proposed approach of data maintenance over the remote cloud server with high-end security norms. Because of its robustness, this hybrid data security algorithm ensures that the data on the server cannot be attacked by an attacker or intruder in any case. The credential generation process generates a credential for each user; it cannot be identified or seen by anyone, and the generated credentials cannot be extracted even if the corresponding user forgets them. The only way to get the credential back is to reset it. The obtained results prove the accuracy level of the proposed cipher security scheme compared with a regular cloud security management scheme, and the proposed algorithm's credential generation process is unique: no one can guess or acquire it, not even the service provider or server administrator.
Overall, the proposed system ensures data maintenance over the cloud platform with a high level of security and robustness in Quality of Service.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-15T09:41:36Z&#13;
No. of bitstreams: 1&#13;
ip2021_11_004.pdf: 1309547 bytes, checksum: 411470b57830a77ebfaabbf6704f3cda (MD5); Made available in DSpace on 2023-03-15T09:41:36Z (GMT). No. of bitstreams: 1&#13;
ip2021_11_004.pdf: 1309547 bytes, checksum: 411470b57830a77ebfaabbf6704f3cda (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14365">
<title>A Survey on Data-Driven Evaluation of Competencies and Capabilities Across Multimedia Environments</title>
<link>https://reunir.unir.net/handle/123456789/14365</link>
<description>A Survey on Data-Driven Evaluation of Competencies and Capabilities Across Multimedia Environments
Strukova, Sofia; Ruipérez-Valiente, José A.; Gómez Mármol, Félix
The rapid evolution of technology directly impacts the skills and jobs needed in the next decade. Users can, intentionally or unintentionally, develop different skills by creating, interacting with, and consuming the content from online environments and portals where informal learning can emerge. These environments generate large amounts of data; therefore, big data can have a significant impact on education. Moreover, the educational landscape has been shifting from a focus on contents to a focus on competencies and capabilities that will prepare our society for an unknown future during the 21st century. Therefore, the main goal of this literature survey is to examine diverse technology-mediated environments that can generate rich data sets through the users’ interaction and where data can be used to explicitly or implicitly perform a data-driven evaluation of different competencies and capabilities. We thoroughly and comprehensively surveyed the state of the art to identify and analyse digital environments, the data they are producing and the capabilities they can measure and/or develop. Our survey revealed four key multimedia environments that include sites for content sharing &amp; consumption, video games, online learning and social networks that fulfilled our goal. Moreover, different methods were used to measure a large array of diverse capabilities such as expertise, language proficiency and soft skills. Our results prove the potential of the data from diverse digital environments to support the development of lifelong and lifewide 21st-century capabilities for the future society.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-15T09:22:47Z&#13;
No. of bitstreams: 1&#13;
ip2022_10_004_0.pdf: 1157780 bytes, checksum: 5e9ac2341a6bba489dc90f7fe1b5b692 (MD5); Made available in DSpace on 2023-03-15T09:22:47Z (GMT). No. of bitstreams: 1&#13;
ip2022_10_004_0.pdf: 1157780 bytes, checksum: 5e9ac2341a6bba489dc90f7fe1b5b692 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14356">
<title>A Greedy Randomized Adaptive Search With Probabilistic Learning for solving the Uncapacitated Plant Cycle Location Problem</title>
<link>https://reunir.unir.net/handle/123456789/14356</link>
<description>A Greedy Randomized Adaptive Search With Probabilistic Learning for solving the Uncapacitated Plant Cycle Location Problem
López-Plata, Israel; Expósito-Izquierdo, Christopher; Lalla-Ruiz, Eduardo; Melián-Batista, Belén; Moreno-Vega, J. Marcos
In this paper, we address the Uncapacitated Plant Cycle Location Problem. It is a location-routing problem aimed at determining a subset of locations to set up plants dedicated to serving customers. We propose a mathematical formulation to model the problem. The high computational burden required by the formulation when tackling large scenarios encourages us to develop a Greedy Randomized Adaptive Search Procedure with Probabilistic Learning Model. Its rationale is to divide the problem into two interconnected sub-problems.&#13;
The computational results indicate the high performance of our proposal in terms of solution quality and computational time. Specifically, it outperforms the best approach from the literature on a wide range of scenarios.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T10:51:43Z&#13;
No. of bitstreams: 1&#13;
ip2022_04_003.pdf: 1912008 bytes, checksum: 3a6b10eb9b2c0334e124757b3c6f94bb (MD5); Made available in DSpace on 2023-03-14T10:51:43Z (GMT). No. of bitstreams: 1&#13;
ip2022_04_003.pdf: 1912008 bytes, checksum: 3a6b10eb9b2c0334e124757b3c6f94bb (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14355">
<title>OntoInfoG++: A Knowledge Fusion Semantic Approach for Infographics Recommendation</title>
<link>https://reunir.unir.net/handle/123456789/14355</link>
<description>OntoInfoG++: A Knowledge Fusion Semantic Approach for Infographics Recommendation
Deepak, Gerard; Vibakar, Adithya; Santhanavijayan, A.
As humans tend to improvise and learn on a constant basis, the need for visualizing and recommending knowledge is increasing. Since the World Wide Web has exploded with multimedia content and a growing number of research papers, there is a clear need for inferential multimedia such as infographics, which can enable a new level of learning from the most viable information sources on the Web. The potential growth and future of technology call for a Web 3.0-compliant infographic recommendation system that supports aesthetic visualization, design and development, and the evolution of the Web demands better infographic recommendation in the pursuit of technological exploration. This paper proposes OntoInfoG++, a knowledge-centric recommendation approach for infographics that amalgamates metadata derived from multiple heterogeneous sources with crowdsourced ontologies to recommend infographics based on the user's topic of interest. User clicks are taken into consideration along with an ontology modeled from the titles and keywords extracted from a dataset of research papers. The approach models the user's topic of interest from the query words, current user clicks, and standard knowledge stores such as BibSonomy, DBpedia, Wikidata, the LOD Cloud, and crowdsourced ontologies. Semantic alignment is achieved using three distinct measures, namely Horn's index, the EnAPMI measure and information entropy. The resulting recommendation is produced by computing the semantic similarity between the enriched topics of interest and the infographic labels, and arranging the recommended infographics in increasing order of semantic similarity to yield a meaningful ordering.
OntoInfoG++ achieves an overall F-measure of 97.27%, which is best-in-class for an infographic recommendation system.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T10:34:38Z&#13;
No. of bitstreams: 1&#13;
ip2021_12_005.pdf: 1688814 bytes, checksum: 798e8a63663c6db51e90b023e2be4df0 (MD5); Made available in DSpace on 2023-03-14T10:34:38Z (GMT). No. of bitstreams: 1&#13;
ip2021_12_005.pdf: 1688814 bytes, checksum: 798e8a63663c6db51e90b023e2be4df0 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14354">
<title>Attentive Flexible Translation Embedding in Top-N Sparse Sequential Recommendations</title>
<link>https://reunir.unir.net/handle/123456789/14354</link>
<description>Attentive Flexible Translation Embedding in Top-N Sparse Sequential Recommendations
Seo, Min-Ji; Kim, Myung-Ho
Sequential recommendation aims to predict the user's next action based on personal action sequences. The major challenge in this task is achieving high recommendation performance under data sparsity. Translation-based recommendations, which learn distance metrics to capture interactions between users and items in sequential recommendation, are a promising way to overcome this issue. However, a disadvantage of translation-based recommendations is that they struggle to capture the long-term preferences of the user and complex item transitions. In this paper, we propose attentive flexible translation for recommendations (AFTRec) to tackle the data sparsity problem by capturing a user's dynamic preferences and the complex interactions between items in the user's purchasing behavior. In particular, we first encode the semantic information of an item related to the user's purchasing behavior as user-specific item translation vectors. We also design a transition graph and encode complex item transitions as correlation-specific item translation vectors. Finally, we adopt a flexible distance metric that considers directions with respect to the translation vectors in the same space for predicting the next item. To evaluate the performance of our method, we conducted experiments on four sparse datasets and one dense dataset from different domains. The experimental results demonstrate that our proposed AFTRec outperforms the state-of-the-art baselines in terms of normalized discounted cumulative gain and hit rate on sparse datasets.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T10:28:46Z&#13;
No. of bitstreams: 1&#13;
ip2022_10_007.pdf: 2429918 bytes, checksum: 4407cce040a1d5838b26a588739dd81c (MD5); Made available in DSpace on 2023-03-14T10:28:46Z (GMT). No. of bitstreams: 1&#13;
ip2022_10_007.pdf: 2429918 bytes, checksum: 4407cce040a1d5838b26a588739dd81c (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14353">
<title>An Empirical Evaluation of Machine Learning Techniques for Crop Prediction</title>
<link>https://reunir.unir.net/handle/123456789/14353</link>
<description>An Empirical Evaluation of Machine Learning Techniques for Crop Prediction
Mariammal, G.; Suruliandi, A.; Raja, S. P.; Poongothai, E.
Agriculture is a primary driver of economic growth in every country worldwide. Crop prediction, which is critical to agriculture, depends on the soil and environment. Nutrient levels differ from area to area and greatly influence crop cultivation. Earlier, the tasks of crop forecasting and cultivation were undertaken by farmers themselves; today, however, crop prediction is determined by climatic variations. This is where machine learning algorithms step in to identify the most suitable crop for cultivation. This research undertakes an empirical analysis using the bagging, random forest, support vector machine, decision tree, Naïve Bayes and k-nearest neighbor classifiers to predict the most appropriate cultivable crop for certain areas, based on environmental and soil traits. Further, the suitability of the classifiers is examined using a GitHub prisoners’ dataset. The experimental results of all the classification techniques show that the ensemble outclassed the rest with respect to every performance metric.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T10:18:38Z&#13;
No. of bitstreams: 1&#13;
ip2022_12_004.pdf: 829089 bytes, checksum: 1a22e16ffed40d297c55796f0fecf0b9 (MD5); Made available in DSpace on 2023-03-14T10:18:38Z (GMT). No. of bitstreams: 1&#13;
ip2022_12_004.pdf: 829089 bytes, checksum: 1a22e16ffed40d297c55796f0fecf0b9 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14352">
<title>A Comparative Evaluation of Bayesian Networks Structure Learning Using Falcon Optimization Algorithm</title>
<link>https://reunir.unir.net/handle/123456789/14352</link>
<description>A Comparative Evaluation of Bayesian Networks Structure Learning Using Falcon Optimization Algorithm
Qasim Awla, Hoshang; Wahhab Kareem, Shahab; Salih Mohammed, Amin
Bayesian networks are analytical models that can represent probabilistic dependency relations among variables and are useful in machine learning for generating knowledge structure. Due to the vastness of the solution space, learning Bayesian network (BN) structures from data is an NP-hard problem. The score-and-search technique is one Bayesian network structure learning strategy. The authors present and evaluate the Falcon Optimization Algorithm (FOA) for Bayesian network structure learning. Four operators, Inserting, Reversing, Moving, and Deleting, are used in the method to create the FOA for finding the best structural solution. The FOA is based on the falcon's searching technique during drought conditions. The suggested technique is compared, using a score metric function, with the Pigeon-Inspired search algorithm, Greedy Search, and the Antlion optimization search algorithm. The authors further evaluated the performance of these techniques in terms of confusion matrices using a variety of benchmark data sets. The Falcon Optimization Algorithm outperforms the previous algorithms and generates higher scores and accuracy values, as evidenced by the results of our experiments.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T10:05:41Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_004.pdf: 1954539 bytes, checksum: a8f6a836f3c25d18979bf2fa465f2a14 (MD5); Made available in DSpace on 2023-03-14T10:05:41Z (GMT). No. of bitstreams: 1&#13;
ip2023_01_004.pdf: 1954539 bytes, checksum: a8f6a836f3c25d18979bf2fa465f2a14 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14351">
<title>Tourism-Related Placeness Feature Extraction from Social Media Data Using Machine Learning Models</title>
<link>https://reunir.unir.net/handle/123456789/14351</link>
<description>Tourism-Related Placeness Feature Extraction from Social Media Data Using Machine Learning Models
Muñoz, Pedro; Doñaque, E.; Larrañaga, A.; Martínez Torres, Javier
The study of placeness has been a focus for researchers trying to understand the impact of locations on their surroundings and on tourism, the loss of placeness through globalization and modernization and its effect on tourism, and the characterization of the activities that take place in such locations. Identifying places with a high level of placeness can be very valuable when studying social trends and mobility in relation to the space in which the study takes place. Moreover, thanks to social media and modern machine learning and data mining methods, places can be enriched with dimensions such as the demographics of the individuals visiting them and the activities those individuals carry out there. Such information can prove useful in fields such as urban planning or tourism, as a basis for analysis and decision-making or for the discovery of new social hotspots or sites rich in cultural heritage.&#13;
This manuscript focuses on the methodology to obtain such information: data from Instagram is used to feed a set of classification models that mine user demographics from the graphic and textual data in their profiles, gain insight into what users were doing in each of their posts, and classify that information into the categories identified in this article. The goal of this methodology is to obtain, from social media data, characteristics of visitors to locations as a discovery tool for the tourism industry.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T09:46:13Z&#13;
No. of bitstreams: 1&#13;
ip2022_12_003.pdf: 1149017 bytes, checksum: f6c14a3431fb1cf416990f4d8443e4bb (MD5); Made available in DSpace on 2023-03-14T09:46:13Z (GMT). No. of bitstreams: 1&#13;
ip2022_12_003.pdf: 1149017 bytes, checksum: f6c14a3431fb1cf416990f4d8443e4bb (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14350">
<title>Deep Transfer Learning-Based Automated Identification of Bird Song</title>
<link>https://reunir.unir.net/handle/123456789/14350</link>
<description>Deep Transfer Learning-Based Automated Identification of Bird Song
Das, Nabanita; Padhy, Neelamadhab; Dey, Nilanjan; Bhattacharya, Sudipta; Tavares, Joao Manuel R. S.
Bird species identification is becoming increasingly crucial for avian biodiversity conservation and for assisting ornithologists in quantifying the presence of birds in a given area. Convolutional Neural Networks (CNNs) are advanced deep learning algorithms that have proven to perform well in speech classification. However, developing an accurate deep learning classifier requires a large amount of data, and such data on endemic or endangered creatures is frequently difficult to gather. In other fields as well, such as bioinformatics and robotics, the high cost of data collection and expensive annotation limit progress, so creating a large, well-annotated data set is also difficult. These limitations serve as the inspiration for transfer learning, which can alleviate overfitting concerns in a deep learning model and was created to deal with situations where the data are distributed across a variety of functional domains. In this study, the ability of deep transfer models such as VGG16, VGG19 and InceptionV3 to effectively extract and discriminate speech signals from different species of birds with high prediction accuracy is explored. The obtained accuracies using VGG16, VGG19 and InceptionV3 were 78%, 61.9% and 85%, respectively, which is very promising.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T09:30:41Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_003_1.pdf: 3796248 bytes, checksum: 3384f7e7c2b7a69190885f76c549e697 (MD5); Made available in DSpace on 2023-03-14T09:30:41Z (GMT). No. of bitstreams: 1&#13;
ip2023_01_003_1.pdf: 3796248 bytes, checksum: 3384f7e7c2b7a69190885f76c549e697 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14349">
<title>Resource and Process Management With a Decision Model Based on Fuzzy Logic</title>
<link>https://reunir.unir.net/handle/123456789/14349</link>
<description>Resource and Process Management With a Decision Model Based on Fuzzy Logic
Fornerón Martínez, J. T.; Agostini, F.; la Red, David L.
The allocation of shared resources in a distributed processing system needs to be coordinated through a mutual exclusion mechanism, which decides the order in which the shared resources are allocated to the processes that require them. This paper proposes an aggregation operator that can be used by a module managing the shared resources, whose function is to assign resources to processes according to their requirements (shared resources) and the status of the distributed nodes on which the processes run (computational load), using 2-tuples associated with linguistic labels.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-14T09:17:36Z&#13;
No. of bitstreams: 1&#13;
ip2023_02_009.pdf: 2277350 bytes, checksum: aca528a795efb6cf7d0b6eff1a46ce6d (MD5); Made available in DSpace on 2023-03-14T09:17:36Z (GMT). No. of bitstreams: 1&#13;
ip2023_02_009.pdf: 2277350 bytes, checksum: aca528a795efb6cf7d0b6eff1a46ce6d (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14338">
<title>Quantitative Measures for Medical Fundus and Mammography Images Enhancement</title>
<link>https://reunir.unir.net/handle/123456789/14338</link>
<description>Quantitative Measures for Medical Fundus and Mammography Images Enhancement
Intriago-Pazmiño, Monserrate; Ibarra-Fiallo, Julio; Guzmán-Castillo, Adán; Alonso-Calvo, Raúl; Crespo, José
Enhancing the visibility of medical images is part of the initial or preprocessing phase within a computer vision system. This image preparation is essential for subsequent system tasks such as segmentation or classification. Therefore, quantitative validation of medical image preprocessing is crucial. In this work, four metrics are studied: Contrast Improvement Index (CII), Enhancement Measurement Estimation (EME), Entropy EME (EMEE), and Entropy. The objective is to find the best parameters for each metric. The study is performed on five medical image datasets, three retinal fundus sets (DRIVE, ROPFI, HRF-POORQ), and two mammography image sets (MIAS, DDSM). Metrics are calculated using a binary mask image to discard the background.&#13;
Using the fundus and mask datasets, the best results were obtained with the EMEE and EME metrics, which achieved mean improvements of up to 186% and 75%, respectively. For the mammography datasets, using masks of the region of interest, the two metrics with the highest percentage improvement were CII and EMEE, with means of up to 396% and 129%, respectively. Based on the experimental results provided, we can conclude that the EMEE, EME, and CII metrics achieve better enhancement assessment in this type of medical imaging.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T11:45:48Z&#13;
No. of bitstreams: 1&#13;
ip2022_12_002_0.pdf: 5667122 bytes, checksum: f9c96ad1df94e8ae5b1a983bd37fe2c1 (MD5); Made available in DSpace on 2023-03-13T11:45:48Z (GMT). No. of bitstreams: 1&#13;
ip2022_12_002_0.pdf: 5667122 bytes, checksum: f9c96ad1df94e8ae5b1a983bd37fe2c1 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14337">
<title>Synthetic Aperture Radar Automatic Target Recognition Based on a Simple Attention Mechanism</title>
<link>https://reunir.unir.net/handle/123456789/14337</link>
<description>Synthetic Aperture Radar Automatic Target Recognition Based on a Simple Attention Mechanism
Ukwuoma, Chiagoziem Chima; Zhiguang, Qin; Tienin, Bole W.; Yussif, Sophyani B.; Ejiyi, Chukwuebuka Joseph; Urama, Gilbert C.; Ukwuoma, Chibueze D.; Chikwendu, Ijeoma Amuche
A simple but effective channel attention module is proposed for Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR). The channel attention technique has shown recent success in improving deep Convolutional Neural Networks (CNNs). Since the resolution of SAR images does not match that of optical images, the information flow of SAR images becomes relatively poor when the network depth is raised blindly, leading to serious gradient explosion/vanishing. To resolve the trade-off between SAR image recognition efficiency and ambiguity, we introduce a simple channel attention module into the ResNet architecture as our network backbone, which uses few parameters yet yields a performance gain. Our simple attention module, which follows the implementation of Efficient Channel Attention, shows that avoiding dimensionality reduction is essential for learning, and that appropriate cross-channel interaction can preserve performance while decreasing model complexity. We also explored the One Policy Learning Rate on the ResNet-50 architecture and compared it with the proposed attention-based ResNet-50 architecture. A thorough analysis on the MSTAR dataset demonstrates the efficacy of the suggested strategy over the most recent findings. With the attention-based model and the One Policy Learning Rate-based architecture, we obtained recognition rates of 100% and 99.8%, respectively.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T11:10:00Z&#13;
No. of bitstreams: 1&#13;
ip2023_02_004.pdf: 4381897 bytes, checksum: 240caf320d19eb5d4de43c7dd61fb2ea (MD5); Made available in DSpace on 2023-03-13T11:10:00Z (GMT). No. of bitstreams: 1&#13;
ip2023_02_004.pdf: 4381897 bytes, checksum: 240caf320d19eb5d4de43c7dd61fb2ea (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14336">
<title>Emotion-Aware Monitoring of Users’ Reaction With a Multi-Perspective Analysis of Long- and Short-Term Topics on Twitter</title>
<link>https://reunir.unir.net/handle/123456789/14336</link>
<description>Emotion-Aware Monitoring of Users’ Reaction With a Multi-Perspective Analysis of Long- and Short-Term Topics on Twitter
Cavaliere, Danilo; Fenza, Giuseppe; Loia, Vincenzo; Nota, Francesco
Social networks such as Twitter act as boosters of disinformation spread, giving individuals and organizations the chance to deliberately influence users’ beliefs through tweets, with destabilizing effects on the community. As a consequence, there is a need for solutions that analyse users’ reactions to the topics debated in the community. To this end, state-of-the-art methods focus on selecting the most debated topics over time, ignoring less frequently discussed topics. In this paper, a framework for analysing users’ reactions and topics is introduced. First, the method extracts topics as frequent itemsets of named entities from the collected tweets; then, support over time and RoBERTa-based sentiment analysis are applied to assess each topic’s current spread and emotional impact; next, a time-grid-based approach enables a granule-level analysis of the collected features that can be exploited for predicting users’ future reactions to topics. Finally, a three-perspective score function is introduced to build comparative ranked lists of the most relevant topics according to topic sentiment, importance and spread. Experiments demonstrate the potential of the framework on the IEEE COVID-19 Tweets Dataset.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T11:00:19Z&#13;
No. of bitstreams: 1&#13;
ip2023_02_003.pdf: 1790012 bytes, checksum: 313fafa79d29a1bd740e0945384dcf6e (MD5); Made available in DSpace on 2023-03-13T11:00:19Z (GMT). No. of bitstreams: 1&#13;
ip2023_02_003.pdf: 1790012 bytes, checksum: 313fafa79d29a1bd740e0945384dcf6e (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14335">
<title>Real World Anomalous Scene Detection and Classification using Multilayer Deep Neural Networks</title>
<link>https://reunir.unir.net/handle/123456789/14335</link>
<description>Real World Anomalous Scene Detection and Classification using Multilayer Deep Neural Networks
Jan, Atif; Khan, Gul Muhammad
Surveillance videos record malicious events in a locality, and various machine learning algorithms are used to detect them. Deep learning algorithms, the most prominent AI algorithms, are data-hungry as well as computationally expensive, and they perform better when trained over a diverse and huge set of examples. These modern AI methods have a dire need for human intelligence to frame the problem in such a way as to reduce the ultimate effort in terms of computational cost. In this research work, a novel training methodology termed Bag of Focus (BoF) is proposed. BoF is based on the concept of selecting motion-intensive blocks in a long video for training different deep neural networks (DNNs). The methodology reduced the computational overhead by 90% (ten times) compared with using full-length videos. It has been observed that networks trained using BoF are as effective, in terms of performance, as the same networks trained over the full-length dataset. In this research work, firstly, a fine-grained annotated dataset including instance and activity information has been developed for real-world volume crimes. Secondly, a BoF-based methodology has been introduced for effective training of state-of-the-art 3D and 2D Convolutional Neural Networks (CNNs). Lastly, a comparison between the state-of-the-art networks has been presented for malicious event recognition in videos. It has been observed that the 2D CNN, even with fewer parameters, achieved a promising classification accuracy of 98.7% and an Area Under the Curve (AUC) of 99.7%.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T10:46:39Z&#13;
No. of bitstreams: 1&#13;
ip2021_10_010_0.pdf: 3198406 bytes, checksum: 0f55f0dc5ce2de966e69cb4804ae19b1 (MD5); Made available in DSpace on 2023-03-13T10:46:39Z (GMT). No. of bitstreams: 1&#13;
ip2021_10_010_0.pdf: 3198406 bytes, checksum: 0f55f0dc5ce2de966e69cb4804ae19b1 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14334">
<title>A Hybrid Parallel Classification Model for the Diagnosis of Chronic Kidney Disease</title>
<link>https://reunir.unir.net/handle/123456789/14334</link>
<description>A Hybrid Parallel Classification Model for the Diagnosis of Chronic Kidney Disease
Singh, Vijendra; Jain, Divya
Chronic Kidney Disease (CKD) has become a prevalent disease nowadays, affecting people around the world. Accurate prediction of CKD progression over time is essential for reducing its associated mortality and morbidity rates. This paper proposes a fast, novel hybrid approach to diagnosing chronic renal disease. The proposed approach optimizes an SVM classifier with a hybridized dimensionality reduction approach to identify the most informative parameters for CKD diagnosis. It handles feature selection in two steps. The first is a filter-based approach using the ReliefF method to assign weights and ranks to each feature of the dataset. The second is dimensionality reduction of the best-selected subset by means of PCA, a feature extraction technique. For faster execution, the datasets are processed simultaneously on multiple processors. The proposed model achieved the highest prediction accuracy of 92.5% on the clinical CKD dataset compared to existing methods: ‘CFS + SVM’ (60.45%), ‘ReliefF + SVM’ (86%), ‘MIFS + SVM’ (56.72%), ‘ReliefF + CFS + SVM’ (54.37%). The proposed work was also examined on the benchmark Chronic Kidney Disease Dataset and achieved a classification accuracy of 98.5%, compared to ‘CFS + SVM’ (92.7%), ‘ReliefF + SVM’ (89.6%), and ‘MIFS + SVM’ (94.7%). The experimental outcomes demonstrate that the proposed hybridized model is effective for medical data classification tasks and is therefore a promising tool for the diagnosis of CKD patients. The proposed approach is statistically validated with the Friedman test, with significant results compared to other techniques. It also executes in the least time, with improved prediction accuracy, and competes with and even outperforms other methods in the literature.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T10:30:50Z&#13;
No. of bitstreams: 1&#13;
ip2021_10_008.pdf: 1663611 bytes, checksum: 010c00dc7292ae899ce2d0844508a619 (MD5); Made available in DSpace on 2023-03-13T10:30:50Z (GMT). No. of bitstreams: 1&#13;
ip2021_10_008.pdf: 1663611 bytes, checksum: 010c00dc7292ae899ce2d0844508a619 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14333">
<title>RGBeat: A Recoloring Algorithm for Deutan and Protan Dichromats</title>
<link>https://reunir.unir.net/handle/123456789/14333</link>
<description>RGBeat: A Recoloring Algorithm for Deutan and Protan Dichromats
Ribeiro, Madalena; Gomes, Abel
Deutan and protan dichromats see exactly two hues in the HSV color space, 240-blue (240°) and 60-yellow (60°). Consequently, they see both reds and greens as yellows and therefore cannot distinguish reds from greens very well. Thus, their color space is 2D and results from the intersection between the HSV color cone and the 60°-240° plane. The main contribution of the RGBeat recoloring algorithm is that it is the first recoloring algorithm that enhances the color perception of deutan and protan dichromats without compromising lifelong color perceptual learning. Also, as far as we know, this is the first HTML5-compliant web recoloring approach for dichromats that considers both text and image recoloring in an integrated manner.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T10:13:54Z&#13;
No. of bitstreams: 1&#13;
ip2022_01_003_0.pdf: 14595713 bytes, checksum: 66c1760f71dd542e2c028df01beab30b (MD5); Made available in DSpace on 2023-03-13T10:13:54Z (GMT). No. of bitstreams: 1&#13;
ip2022_01_003_0.pdf: 14595713 bytes, checksum: 66c1760f71dd542e2c028df01beab30b (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14327">
<title>RIADA: A Machine-Learning Based Infrastructure for Recognising the Emotions of Spotify Songs</title>
<link>https://reunir.unir.net/handle/123456789/14327</link>
<description>RIADA: A Machine-Learning Based Infrastructure for Recognising the Emotions of Spotify Songs
Álvarez, P.; García de Quirós, J.; Baldassarri, S.
Music emotions can help improve the personalization of services and contents offered by music streaming providers. Many research works based on machine learning techniques have addressed the problem of recognising music emotions in recent years. Nevertheless, the results obtained have only been applied to small music repositories and do not consider what users feel when they listen to the songs. These issues prevent the existing proposals from being integrated into the personalization mechanisms of online music providers. In this paper, we present the RIADA infrastructure, which is composed of a set of systems able to emotionally annotate the catalog of songs offered by Spotify based on the users’ perception. RIADA works with the Spotify playlist miner and data services to build emotion recognition models that can solve the open challenges previously mentioned. Machine learning algorithms, music information retrieval techniques, architectures for the parallelization of applications, and cloud computing have been combined to develop a complex engineering result able to integrate music emotions into Spotify-based applications.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T15:13:55Z&#13;
No. of bitstreams: 1&#13;
ip2022_04_02.pdf: 1334801 bytes, checksum: 5657c9b5a9277b31891d505065b116b1 (MD5); Made available in DSpace on 2023-03-10T15:13:55Z (GMT). No. of bitstreams: 1&#13;
ip2022_04_02.pdf: 1334801 bytes, checksum: 5657c9b5a9277b31891d505065b116b1 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14326">
<title>Cosine Similarity Based Hierarchical Skeleton and Cross Indexing for Large Scale Image Retrieval Using Mapreduce Framework</title>
<link>https://reunir.unir.net/handle/123456789/14326</link>
<description>Cosine Similarity Based Hierarchical Skeleton and Cross Indexing for Large Scale Image Retrieval Using Mapreduce Framework
Qianwen, Zhong
The imaging data in various fields such as industry, institutions, and medicine has grown exponentially in recent years. An innovative software solution is required for the efficient management of image data. The MapReduce framework is used for large-scale image data processing. Various cross-indexing techniques have been developed to transform images into binary sequences, but retrieving the image from the reducer based on the feature vector remains a major challenge. Image retrieval on large-scale image databases has attracted major attention, where cross-indexing plays a key role in the research community. Therefore, in this research, a new method for image retrieval, named Cosine Similarity-based hierarchical skeleton and cross-indexing, is proposed to perform the retrieval process effectively in the MapReduce framework. The feature vector of the images is converted to binary sequences. The Most Significant Bit (MSB) of the binary code is used to store the images in the mapper using the cross-indexing model. The image retrieval process is achieved through the reducer based on the Tanimoto similarity measure. The binary sequence for the query image is calculated based on the feature vector. The MSB of the binary code is matched with the MSB code of the images in the mapper to achieve the retrieval process. The proposed method effectively achieves better performance through the cross-indexing model with the usage of the feature vector. Its performance is compared with existing techniques using the UK Bench dataset. The proposed method attains values of 0.784, 0.729, 0.75, 31.23, and 17.84 sec for F1-score, precision, recall, computational cost, and computational time with query set-1 when considering four mappers.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T15:08:19Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_008.pdf: 2790409 bytes, checksum: bb52e6d3cb166c7d1e3675ea8a327baa (MD5); Made available in DSpace on 2023-03-10T15:08:19Z (GMT). No. of bitstreams: 1&#13;
ip2023_01_008.pdf: 2790409 bytes, checksum: bb52e6d3cb166c7d1e3675ea8a327baa (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14325">
<title>Multi-Agent and Fuzzy Inference-Based Framework for Traffic Light Optimization</title>
<link>https://reunir.unir.net/handle/123456789/14325</link>
<description>Multi-Agent and Fuzzy Inference-Based Framework for Traffic Light Optimization
Ikidid, Abdelouafi; Abdelaziz, El Fazziki; Sadgal, Mohammed
Although agent technologies have gained wide popularity in distributed systems, their potential for advanced management of vehicle traffic has not been sufficiently explored. This paper presents a traffic simulation framework based on agent technology and fuzzy logic. The objective of this framework is to act on the phase layouts, represented by their sequences and lengths, to maximize throughput and fluidize traffic at an isolated intersection and across the whole multi-intersection network, through both inter- and intra-intersection collaboration and coordination. Signal layouts are optimized in real time, based not only on local stream factors but also on traffic stream conditions at surrounding intersections. The system profits from agent communication, collaboration, and coordination features, along with a decentralized organization, to decompose traffic control optimization into subproblems and enable distributed resolution. Thus, the separate parts can be resolved rapidly by parallel tasking. It also uses fuzzy technology to handle the uncertainty of traffic conditions. An instance of the proposed framework was designed and validated in the ANYLOGIC simulator. The results and analysis indicate that the designed system can significantly improve efficiency at an individual intersection as well as in the multi-intersection network. It reduces the average travel delay and the time spent in the network compared to multi-agent-based adaptive signal control systems.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T15:01:38Z&#13;
No. of bitstreams: 1&#13;
ip2021_12_002.pdf: 1280024 bytes, checksum: af272fdf10087c5ff553bc37f4787e81 (MD5); Made available in DSpace on 2023-03-10T15:01:38Z (GMT). No. of bitstreams: 1&#13;
ip2021_12_002.pdf: 1280024 bytes, checksum: af272fdf10087c5ff553bc37f4787e81 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14324">
<title>Deep Learning Assisted Medical Insurance Data Analytics With Multimedia System</title>
<link>https://reunir.unir.net/handle/123456789/14324</link>
<description>Deep Learning Assisted Medical Insurance Data Analytics With Multimedia System
Zhang, Cheng; Vinodhini, B.; Muthu, Bala Anand
Big Data presents considerable challenges to deep learning for transforming complex, high-dimensional, and heterogeneous biomedical data into health care data. Various kinds of data are analyzed in recent biomedical research, including e-health records, medical imaging, text, and IoT sensor data, which are complex, poorly labeled, heterogeneous, and usually unstructured. Conventional statistical learning and data mining methods usually require first extracting features to acquire more robust and effective variables from those data. These features help build clustering or prediction models. The latest advancements in deep learning technologies provide useful new paradigms for obtaining end-to-end learning techniques from complex data. Multiple layers of deep learning represent abstractions of data for building computational models. Clinician performance is augmented by the potential of deep learning models in medical imaging interpretation, and automated segmentation is used to reduce the time to diagnosis. This work presents a convolutional neural network-based deep learning infrastructure that performs medical imaging data analysis in various pipeline stages, including data loading, data augmentation, network architectures, loss functions, and evaluation metrics. Our proposed deep learning approach supports both 2D and 3D medical image analysis. We evaluate the proposed system's performance using metrics such as sensitivity, specificity, accuracy, and precision over clinical data with and without augmentation.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T13:56:53Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_009.pdf: 5077977 bytes, checksum: 92b7117bfbea0e044a01e228b9a51ae4 (MD5); Made available in DSpace on 2023-03-10T13:56:53Z (GMT). No. of bitstreams: 1&#13;
ip2023_01_009.pdf: 5077977 bytes, checksum: 92b7117bfbea0e044a01e228b9a51ae4 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14323">
<title>HDDSS: An Enhanced Heart Disease Decision Support System using RFE-ABGNB Algorithm</title>
<link>https://reunir.unir.net/handle/123456789/14323</link>
<description>HDDSS: An Enhanced Heart Disease Decision Support System using RFE-ABGNB Algorithm
Dhilsath Fathima, M.; Justin Samuel, S.; Raja, S. P.
Heart disease is the leading cause of mortality globally. Heart disease refers to a range of disorders that affect the heart and blood vessels. The risk of developing heart disease is minimized if it is detected early. Previous studies have suggested many heart disease decision-support systems based on machine learning (ML) algorithms. However, low prediction accuracy is the main issue in these heart disease decision-support systems. The proposed work developed a heart disease decision-support system (HDDSS) that can predict whether or not a person has heart disease. The main goal of this research work is to use RFE-ABGNB to improve HDDSS prediction accuracy. The Cleveland heart disease dataset is used for training and validating the proposed HDDSS. The two significant stages of HDDSS are the feature selection stage and the classification modeling stage. The recursive feature elimination (RFE) technique is used in the first stage of HDDSS to select the relevant features of the heart disease dataset. In the second stage, the proposed Adaptive Boosted Gaussian Naïve Bayes (ABGNB) algorithm is used to construct a classification model for training and validating the heart disease decision-support system. The output of HDDSS is analyzed using various classification output measures. According to the results obtained, our proposed method attained a predictive performance of 92.87 percent. This HDDSS model performs well compared to other heart disease decision-support systems found in the literature. According to our experimental analysis, the RFE-ABGNB-based heart disease decision-support system is more appropriate for heart disease prediction.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T13:47:13Z&#13;
No. of bitstreams: 1&#13;
ip2021_10_003.pdf: 1485176 bytes, checksum: 6b7f0f1040d71b4c29b19d0c283d2772 (MD5); Made available in DSpace on 2023-03-10T13:47:13Z (GMT). No. of bitstreams: 1&#13;
ip2021_10_003.pdf: 1485176 bytes, checksum: 6b7f0f1040d71b4c29b19d0c283d2772 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14322">
<title>Results of a Study to Improve the Spanish Version of the User Experience Questionnaire (UEQ)</title>
<link>https://reunir.unir.net/handle/123456789/14322</link>
<description>Results of a Study to Improve the Spanish Version of the User Experience Questionnaire (UEQ)
Hernández-Campos, Mónica; Thomaschewski, Jörg; Law, Yuen C.
This paper analyses changes in some items of the User Experience Questionnaire (UEQ) for use in the context of Costa Rican culture. Although a Spanish version of the UEQ was created in 2012, we use a double-translation and reconciliation model to detect the words more appropriate for Costa Rican culture. This resulted in 7 new items that were added to the original Spanish version; in total, the resulting UEQ had 33 items. 161 participants took part in a study that examined both the original items and the new ones. Statistical analyses (Cronbach's Alpha, mean, variance, and confidence interval) were performed to measure the differences between the scales of the original items and the new UEQ variant with the Costa Rican words. Finally, confidence intervals of the individual items and the average Cronbach’s Alpha coefficient of the affected scales were analysed. The results show, contrary to initial expectations, that the Costa Rican word version is neither better nor worse than the original Spanish version. However, this shows that the UEQ is very robust to some changes in the items.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T13:35:35Z&#13;
No. of bitstreams: 1&#13;
ip2022_11_003.pdf: 514919 bytes, checksum: 2bc2e2adaafe10ad962fdde1426f1df3 (MD5); Made available in DSpace on 2023-03-10T13:35:35Z (GMT). No. of bitstreams: 1&#13;
ip2022_11_003.pdf: 514919 bytes, checksum: 2bc2e2adaafe10ad962fdde1426f1df3 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14321">
<title>Local Model-Agnostic Explanations for Black-box Recommender Systems Using Interaction Graphs and Link Prediction Techniques</title>
<link>https://reunir.unir.net/handle/123456789/14321</link>
<description>Local Model-Agnostic Explanations for Black-box Recommender Systems Using Interaction Graphs and Link Prediction Techniques
Caro-Martínez, Marta; Jiménez-Díaz, Guillermo; Recio-García, Juan A.
Explanations in recommender systems are a requirement to improve users’ trust and experience. Traditionally, explanations in recommender systems are derived from their internal data regarding ratings, item features, and user profiles. However, this information is not available in black-box recommender systems that lack sufficient data transparency. This work proposes a local model-agnostic, explanation-by-example method for recommender systems based on knowledge graphs that meets this knowledge requirement. It only requires information about the interactions between users and items. Through the proper transformation of these knowledge graphs into item-based and user-based structures, link prediction techniques are applied to find similarities between the nodes and to identify explanatory items for the user’s recommendation. Experimental evaluation demonstrates that these knowledge graphs are more effective than classical content-based explanation approaches while having lower information requirements, making them more suitable for black-box recommender systems.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T13:25:30Z&#13;
No. of bitstreams: 1&#13;
ip2021_12_001_0.pdf: 1615281 bytes, checksum: f98ee39ee64a909c80a9c3160689213e (MD5); Made available in DSpace on 2023-03-10T13:25:30Z (GMT). No. of bitstreams: 1&#13;
ip2021_12_001_0.pdf: 1615281 bytes, checksum: f98ee39ee64a909c80a9c3160689213e (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14315">
<title>A Platform for Swimming Pool Detection and Legal Verification Using a Multi-Agent System and Remote Image Sensing</title>
<link>https://reunir.unir.net/handle/123456789/14315</link>
<description>A Platform for Swimming Pool Detection and Legal Verification Using a Multi-Agent System and Remote Image Sensing
Sánchez San Blas, Héctor; Carmona Balea, Antía; Sales, A.; Augusto Silva, Luís; Villarrubia González, Gabriel
Spain is the second country in Europe with the most swimming pools. However, the legal literature estimates that 20% of swimming pools are undeclared or irregular. The administration has a corps of people who manually analyze satellite or drone images to detect illegal or irregular structures. This method is costly in terms of effort and time, and it also depends on the subjectivity of the person carrying it out. This proposal aims to design a platform that allows the automatic detection of irregular pools. This work combines geographic information system (GIS) tools based on orthophotography with advanced machine learning techniques for object detection. Furthermore, a multi-agent architecture makes the system modular, with the different parts of the system able to work together and balance the workload. The proposed system has been validated by testing it in different towns in Spain. The system has shown promising results in performing this task, with an F1-Score of 97.1%.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-09T16:11:46Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_002.pdf: 15157584 bytes, checksum: a865b94d7d0ba34b25ee70e7ae98f5c5 (MD5); Made available in DSpace on 2023-03-09T16:11:46Z (GMT). No. of bitstreams: 1&#13;
ip2023_01_002.pdf: 15157584 bytes, checksum: a865b94d7d0ba34b25ee70e7ae98f5c5 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14312">
<title>Validity and Intra Rater Reliability of a New Device for Tongue Force Measurement</title>
<link>https://reunir.unir.net/handle/123456789/14312</link>
<description>Validity and Intra Rater Reliability of a New Device for Tongue Force Measurement
Diaz-Saez, Marta Carlota; Beltran-Alacreu, Hector; Gil-Castillo, Javier; Navarro-Fernández, Gonzalo; Cebrian Carretero, Jose Luis; Gil-Martínez, Alfonso
Background. The tongue is made up of multiple muscles, both extrinsic and intrinsic. The hyoid, jaw, and maxillary complex contain the tongue, which hangs between these structures, forming an important biomechanical system. This organ has to work in coordination with craniofacial structures to ensure normal orofacial functioning. There are different devices on the market for tongue force measurement. However, they are not accessible for patients due to their size and very high prices. Likewise, other devices have not yet undergone validity and reliability studies. The purpose of this study was to validate a new device, proving that it is accurate compared to the algometer. Moreover, the study aimed to determine the intra-rater reliability of a protocol to assess maximum tongue force in asymptomatic subjects. Material and methods. This is an observational longitudinal study with repeated measurements. A prototype device was developed specifically for this study to measure tongue force through force-sensitive resistor sensors. The prototype system was equipped with a device to perform and transmit the measurement and C++ software on the computer to record data from the session. Different formulas were derived to calibrate the system. For validity, the force measured by the prototype was compared with that of the algometer. For intra-rater reliability, 29 asymptomatic Spanish subjects were recruited, and a standardized protocol was carried out for the tests. Results. Experiments to assess validity showed a strong correlation (r&gt;0.97) and excellent reliability (ICC&gt;0.90) between devices. On the other hand, the intra-rater reliability analysis showed an excellent ICC (0.93) with a 95% CI of 0.86 to 0.97 and an MDC90 of 6.26 N. Conclusion. We demonstrated good validity values and high intra-rater reliability for the prototype device for maximum tongue force.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-09T15:14:12Z&#13;
No. of bitstreams: 1&#13;
ip2022_02_001.pdf: 1108123 bytes, checksum: 83add9a60fbe02db172335dc2a1497f1 (MD5); Made available in DSpace on 2023-03-09T15:14:12Z (GMT). No. of bitstreams: 1&#13;
ip2022_02_001.pdf: 1108123 bytes, checksum: 83add9a60fbe02db172335dc2a1497f1 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14310">
<title>Mapping the Situation of Educational Technologies in the Spanish University System Using Social Network Analysis and Visualization</title>
<link>https://reunir.unir.net/handle/123456789/14310</link>
<description>Mapping the Situation of Educational Technologies in the Spanish University System Using Social Network Analysis and Visualization
Vargas Quesada, B.; Zarco, Carmen; Cordón, Oscar
Educational Technologies (EdTech) are based on the use of Information and Communication Technologies (ICT) to improve the quality of teaching and learning. EdTech is experiencing great development at different educational levels worldwide, especially since the appearance of Covid-19. The recent publication of a study by the ICT Sectorial of CRUE Universidades Españolas, the Spanish University Association, is the first report on the implementation of such technologies within Spain's University System. This paper presents two different maps based on the data from that report. Together, they illustrate the penetration of different types of EdTech in our university system and shed light on the strategic interest behind their adoption. Our goal is to produce self-explanatory maps that can be easily and directly interpreted. The first map reflects wide granularity in terms of the global importance of technologies, while the second points to relevant conclusions given the spatial position of Spain's universities, the size of the nodes that represent them (directly related to their strategic interests in EdTech), and the local relationships existing among them (identifying similarities in those strategic interests).
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-09T08:55:53Z&#13;
No. of bitstreams: 1&#13;
ip2021_09_04.pdf: 1539656 bytes, checksum: 2f1c40a9c7eae6019e6ebdf25943ac5d (MD5); Made available in DSpace on 2023-03-09T08:55:53Z (GMT). No. of bitstreams: 1&#13;
ip2021_09_04.pdf: 1539656 bytes, checksum: 2f1c40a9c7eae6019e6ebdf25943ac5d (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14309">
<title>Point Cloud Deep Learning Solution for Hand Gesture Recognition</title>
<link>https://reunir.unir.net/handle/123456789/14309</link>
<description>Point Cloud Deep Learning Solution for Hand Gesture Recognition
Osimani, César; Ojeda-Castelo, Juan Jesus; Piedra-Fernandez, Jose A.
In the last couple of years, there has been an increasing need for Human-Computer Interaction (HCI) systems that do not require touching devices to control them, such as ATMs, self-service kiosks in airports, and terminals in public offices, among others. The use of hand gestures offers a natural alternative to achieve control without touching the devices. This paper presents a solution that allows the recognition of hand gestures by analyzing three-dimensional landmarks using deep learning. These landmarks are extracted using a model created with machine learning techniques from a single standard RGB camera in order to define the skeleton of the hand with 21 landmarks distributed as follows: one on the wrist and four on each finger. This study proposes a deep neural network that was trained on 9 gestures, receiving as input the 21 points of the hand. One of the main contributions, which considerably improves performance, is a first layer of normalization and transformation of the landmarks. In our experimental analysis, we reach an accuracy of 99.87% recognizing 9 hand gestures.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-09T08:40:35Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_001.pdf: 3726735 bytes, checksum: 59ea71a7db13110cd8657efe5823c1ba (MD5); Made available in DSpace on 2023-03-09T08:40:35Z (GMT). No. of bitstreams: 1&#13;
ip2023_01_001.pdf: 3726735 bytes, checksum: 59ea71a7db13110cd8657efe5823c1ba (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14305">
<title>Editor’s Note</title>
<link>https://reunir.unir.net/handle/123456789/14305</link>
<description>Editor’s Note
Yang, Jiachen; Song, Houbing; Khurram Khan, Muhammad
With the rapid development of information and communication technologies, artificial intelligence, and the IoT, more and more advanced technologies, such as machine learning, reinforcement learning, neural networks, and fuzzy systems, have been introduced into industrial practice. The application of advanced technologies has greatly promoted the industrial revolution. However, there is a big gap between controlled simulation and real evolving environments, which results in the unsatisfactory performance of typical algorithms in practical settings. For example, in the Underwater IoT, a dynamic and uncertain marine environment can cause equipment damage, resulting in huge financial losses. Therefore, improving the robustness and adaptability of algorithms and systems, and proposing new solutions in practical applications to meet the requirements of self-developing, self-organizing, and evolving systems, is essential to promote intelligent industrial applications.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-08T14:33:40Z
No. of bitstreams: 1
ijimai8_1_0_0.pdf: 60247 bytes, checksum: 479d5af4a15b484b21299a0d0b09d351 (MD5); Made available in DSpace on 2023-03-08T14:33:40Z (GMT). No. of bitstreams: 1
ijimai8_1_0_0.pdf: 60247 bytes, checksum: 479d5af4a15b484b21299a0d0b09d351 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14304">
<title>An Efficient Probabilistic Methodology to Evaluate Web Sources as Data Source for Warehousing</title>
<link>https://reunir.unir.net/handle/123456789/14304</link>
<description>An Efficient Probabilistic Methodology to Evaluate Web Sources as Data Source for Warehousing
Sharan Sinha, Hariom; Kumar Choudhary, Saket; Kumar Solanki, Vijender
The Internet is the largest source of data, and the requirements of data analytics have pushed the data warehouse to switch from the structured conventional Data Warehouse to the complex Web Data Warehouse. The dynamic and complex nature of the web poses various complexities during the synthesis of web data into a conventional warehouse. Multi-Criteria Decision Making (MCDM) is a prominent mechanism to select the best data for storing in the data warehouse. In this article, a method based on the probabilistic analysis of the SAW and TOPSIS methods is proposed to select web sources as data sources for the web data warehouse. This method deals more efficiently with the dynamic and complex nature of the web. Here, the selection employs the analysis of both methods (SAW and TOPSIS) to evaluate the probability of selection of each respective score (1-9) for each feature. With these probability values, the probability of selection of the next web sources is determined. Moreover, using the same probability values, the mean score and standard deviation of the scores of the respective features of the selected web sources are deduced, which are further used to fix a standard score for each feature for the selection of web sources. The standard score is a parameter of the proposed Mean-Standard-Deviation (MSD) method that checks the suitability of web sources individually, whereas other methods do so on a comparative basis. The proposed method cuts down the cost of repetitive comparison operations once the standard score has been computed using the mean and standard deviation of each individual feature. Here, the standard score of each feature is only compared with the score of the respective feature of the next web sources, so it reduces the cost of computation and selects web sources faster as well.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-08T14:29:34Z
No. of bitstreams: 1
ijimai8_1_9_0.pdf: 1385402 bytes, checksum: e91de83950e00b8d900611eb43d65c24 (MD5); Made available in DSpace on 2023-03-08T14:29:34Z (GMT). No. of bitstreams: 1
ijimai8_1_9_0.pdf: 1385402 bytes, checksum: e91de83950e00b8d900611eb43d65c24 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14303">
<title>A Spatio-Temporal Attention Graph Convolutional Networks for Sea Surface Temperature Prediction</title>
<link>https://reunir.unir.net/handle/123456789/14303</link>
<description>A Spatio-Temporal Attention Graph Convolutional Networks for Sea Surface Temperature Prediction
Chen, Desheng; Wen, Jiabao; Lv, Caiyun
Sea surface temperature (SST) is an important index for detecting ocean changes, predicting SST anomalies, and preventing natural disasters caused by abnormal changes; its dynamic variation has a profound impact on the whole marine ecosystem and the dynamic changes of climate. In order to better capture the dynamic changes of ocean temperature, it is essential to predict future SST. This paper proposes a new spatio-temporal attention graph convolutional network (STAGCN) for SST prediction, which captures spatial dependence and temporal correlation by integrating a gated recurrent unit (GRU) model with a graph convolutional network (GCN) and introducing an attention mechanism. The STAGCN model adopts the GCN to learn the topological structure between ocean location points, extracting spatial characteristics from the network of ocean position nodes. Besides, to capture temporal correlation by learning the dynamic variation of SST time series data, a GRU model is introduced into STAGCN to handle long time series prediction, taking as input the SST data with spatial characteristics. To capture the significance of SST information at different times and increase forecast accuracy, the attention mechanism is used to obtain the spatial and temporal characteristics globally. In this study, the proposed STAGCN model was trained and tested on the East China Sea. Experiments with different prediction lengths show that the model can capture the spatio-temporal correlation of regional-scale sea surface temperature series and almost uniformly outperforms other classical models across different sea areas and prediction levels; the root mean square error is reduced by about 0.2 compared with the LSTM model.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-08T14:10:36Z
No. of bitstreams: 1
ijimai8_1_6.pdf: 3816814 bytes, checksum: c979aaa13f7cc3fab1abd8850ed7409b (MD5); Made available in DSpace on 2023-03-08T14:10:36Z (GMT). No. of bitstreams: 1
ijimai8_1_6.pdf: 3816814 bytes, checksum: c979aaa13f7cc3fab1abd8850ed7409b (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14295">
<title>Using the Statistical Machine Learning Models ARIMA and SARIMA to Measure the Impact of Covid-19 on Official Provincial Sales of Cigarettes in Spain</title>
<link>https://reunir.unir.net/handle/123456789/14295</link>
<description>Using the Statistical Machine Learning Models ARIMA and SARIMA to Measure the Impact of Covid-19 on Official Provincial Sales of Cigarettes in Spain
Andueza, Andoni; Del Arco-Osuna, Miguel Ángel; Fornés, Bernat; González-Crespo, Rubén; Martín-Álvarez, Juan Manuel
From a public health perspective, tobacco use is addictive by nature and triggers several cancers, cardiovascular and respiratory diseases, reproductive disorders, and many other adverse health effects leading to many deaths. In this context, the need to eradicate tobacco-related health problems and the increasingly complex environments of tobacco research require sophisticated analytical methods to handle large amounts of data and perform highly specialized tasks. In this study, two time series models, the autoregressive integrated moving average (ARIMA) and the seasonal autoregressive integrated moving average (SARIMA), are used to forecast the impact of COVID-19 on cigarette sales in Spanish provinces. To find the optimal solution, the ARIMA model was automatically selected from initial combinations of model parameters, followed by optimizing the model parameters based on the best fit between the predictions and the test data. The analytical tools Autocorrelation Function (ACF), Partial Autocorrelation Function (PACF), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC) were used to assess the reliability of the models. The evaluation metrics used as criteria to select the best model are: mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), mean percentage error (MPE), mean error (ME), and mean absolute standardized error (MASE). The results show that the national average impact is slight. However, in provinces bordering France or with a high influx of tourists, a strong impact of COVID-19 on tobacco sales has been observed. In addition, the least impact has been observed in provinces bordering Gibraltar. Policymakers need to make the right decisions about the tobacco price differentials observed between neighboring European countries when there is constant and abundant cross-border human transit. To keep smoking under control, all countries must make harmonized decisions.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-07T14:25:59Z
No. of bitstreams: 1
ijimai8_1_7.pdf: 3695376 bytes, checksum: ad125ea27f7e74c81ad2d02f5b972af0 (MD5); Made available in DSpace on 2023-03-07T14:25:59Z (GMT). No. of bitstreams: 1
ijimai8_1_7.pdf: 3695376 bytes, checksum: ad125ea27f7e74c81ad2d02f5b972af0 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14294">
<title>COVID-19 Disease Prediction Using Weighted Ensemble Transfer Learning</title>
<link>https://reunir.unir.net/handle/123456789/14294</link>
<description>COVID-19 Disease Prediction Using Weighted Ensemble Transfer Learning
Kumar Roy, Pradeep; Singh, Ashish
Health experts use advanced technological equipment to detect and diagnose complex diseases, and medical imaging is now widely used to detect abnormalities in the human body. This research discusses the use of the Internet of Medical Things from the perspective of the COVID-19 crisis. The COVID-19 disease left an indelible mark on human memory: nothing like it had happened before, and people do not expect it to happen again. Medical experts are continuously working towards a solution for this deadly disease, and the pandemic has warned the healthcare system to find alternative ways to monitor infected persons remotely. The Internet of Medical Things can be helpful in such a pandemic scenario. This paper proposes an ensemble transfer learning framework to predict COVID-19 infection. The model uses a weighted transfer learning concept and predicted COVID-19 infected people with an F1-score of 0.997 in the best case on the test dataset.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-07T14:15:11Z
No. of bitstreams: 1
ijimai8_1_2.pdf: 3010491 bytes, checksum: 662915d100ed2fbf3de8e7dac2c15780 (MD5); Made available in DSpace on 2023-03-07T14:15:11Z (GMT). No. of bitstreams: 1
ijimai8_1_2.pdf: 3010491 bytes, checksum: 662915d100ed2fbf3de8e7dac2c15780 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14293">
<title>Sentiment Analysis and Classification of Hotel Opinions in Twitter With the Transformer Architecture</title>
<link>https://reunir.unir.net/handle/123456789/14293</link>
<description>Sentiment Analysis and Classification of Hotel Opinions in Twitter With the Transformer Architecture
Arroni, Sergio; Galán, Yerai; Guzmán-Guzmán, Xiomarah; Nuñez-Valdez, Edward Rolando; Gómez, Alberto
Sentiment analysis is of great importance to parties interested in analyzing public opinion in social networks. In recent years, deep learning, and particularly the attention-based architecture, has taken over the field, to the point where most research in Natural Language Processing (NLP) has shifted towards the development of ever bigger attention-based transformer models. However, those models are developed as all-purpose NLP models, so for a concrete, smaller problem, a reduced and specifically designed model can perform better. We propose a simpler attention-based model that makes use of the transformer architecture to predict the sentiment expressed in tweets about hotels in Las Vegas. Using the models' predicted performance, we compare the similarity of our ranking to the actual TripAdvisor ranking against the rankings obtained by more rudimentary sentiment analysis approaches, outperforming them with a 0.64121 Spearman correlation coefficient. We also compare our performance to DistilBERT, obtaining faster and more accurate results and showing that a model designed for a particular problem can outperform models with several million trainable parameters.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-07T13:58:12Z
No. of bitstreams: 1
ijimai8_1_5.pdf: 1019610 bytes, checksum: 0d504615f95590cf76ca3443c1d0c2a3 (MD5); Made available in DSpace on 2023-03-07T13:58:13Z (GMT). No. of bitstreams: 1
ijimai8_1_5.pdf: 1019610 bytes, checksum: 0d504615f95590cf76ca3443c1d0c2a3 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14292">
<title>Blockchain Based Cloud Management Architecture for Maximum Availability</title>
<link>https://reunir.unir.net/handle/123456789/14292</link>
<description>Blockchain Based Cloud Management Architecture for Maximum Availability
Arias Maestro, Alberto; Sanjuán Martínez, Óscar; Teredesai, Ankur M.; García-Díaz, Vicente
Contemporary cloud application and Edge computing orchestration systems rely on controller/worker design patterns to allocate, distribute, and manage resources. Standard solutions like Apache Mesos, Docker Swarm, and Kubernetes can span multiple zones at data centers, multiple global regions, and even consumer point of presence locations. Previous research has concluded that random network partitions cannot be avoided in these scenarios, leaving system designers to choose between consistency and availability, as defined by the CAP theorem. Controller/worker architectures guarantee configuration consistency via the employment of redundant storage systems, in most cases coordinated via consensus algorithms such as Paxos or Raft. These algorithms ensure information consistency against network failures while decreasing availability as network regions increase. Mainstream blockchain technology provides a solution to this compromise while decentralizing control via a fully distributed architecture coordinated through Byzantine-resistant consensus algorithms. This research proposes a blockchain-based decentralized architecture for cloud resource management systems. We analyze and compare the characteristics of the proposed architecture concerning the consistency, availability, and partition resistance of architectures that rely on Paxos/Raft distributed data stores. Our research demonstrates that the proposed blockchain-based decentralized architecture noticeably increases the system availability, including cases of network partitioning, without a significant impact on configuration consistency.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-07T13:45:41Z
No. of bitstreams: 1
ijimai8_1_8.pdf: 913686 bytes, checksum: e9f4dff7f3c87f658ee8363bca9a7573 (MD5); Made available in DSpace on 2023-03-07T13:45:41Z (GMT). No. of bitstreams: 1
ijimai8_1_8.pdf: 913686 bytes, checksum: e9f4dff7f3c87f658ee8363bca9a7573 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14291">
<title>An Efficient Bet-GCN Approach for Link Prediction</title>
<link>https://reunir.unir.net/handle/123456789/14291</link>
<description>An Efficient Bet-GCN Approach for Link Prediction
Saxena, Rahul; Pankaj Patil, Spandan; Kumar Verma, Atul; Jadeja, Mahipal; Vyas, Pranshu; Bhateja, Vikrant; Chun-Wei Lin, Jerry
The task of determining whether or not a link will exist between two entities, given the current state of the network, is called link prediction. The study of predicting and analyzing links between entities in a network is emerging as one of the most interesting research areas to explore. In the field of social network analysis, finding mutual friends, predicting the friendship status between two individuals in the near future, etc., contribute significantly to a better understanding of the underlying network dynamics. The concept has many applications in biological networks, such as finding possible connections (possible interactions) between genes and predicting protein-protein interactions, and in many other areas of network science. Exploration based on Graph Neural Networks (GNNs) to accomplish such tasks is another focus that is attracting a lot of attention these days. These approaches leverage the strength of the structural information of the network along with the properties of the nodes to make efficient predictions and classifications. In this work, we propose a network centrality based approach combined with Graph Convolutional Networks (GCNs) to predict the connections between network nodes. We propose selecting training nodes for the model based on high edge betweenness centrality, which improves the prediction accuracy of the model. The study was conducted using three benchmark networks: CORA, Citeseer, and PubMed, with prediction accuracies of 95.08%, 95.07%, and 95.3%, respectively. The performance of the model is comprehensive and comparable to prior state-of-the-art methods and studies. Moreover, the model achieves 90.13% on WikiCS and 87.7% on the Amazon Product network, showing its generalizability. The paper discusses in detail, both theoretically and experimentally, the reasons for the improved predictive ability of the model.
Our results are generalizable, and our model has the potential to provide good results for link prediction tasks in any domain.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-07T13:26:55Z
No. of bitstreams: 1
ijimai8_1_4.pdf: 3433075 bytes, checksum: 3bb4103136d7d2931ab87df24ce17f51 (MD5); Made available in DSpace on 2023-03-07T13:26:55Z (GMT). No. of bitstreams: 1
ijimai8_1_4.pdf: 3433075 bytes, checksum: 3bb4103136d7d2931ab87df24ce17f51 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14290">
<title>Dataset and Baselines for IID and OOD Image Classification Considering Data Quality and Evolving Environments</title>
<link>https://reunir.unir.net/handle/123456789/14290</link>
<description>Dataset and Baselines for IID and OOD Image Classification Considering Data Quality and Evolving Environments
Zhang, Zhuo; Li, Yang; Gong, Yicheng; Yang, Yue; Ma, Shukun; Guo, Xiaolan; Ercisli, Sezai
At present, artificial intelligence is in a period of rapid development, and deep learning is being applied in many fields. Data are a key part of deep learning: their efficiency and stability directly affect the performance of the model, so they are highly valued. To make datasets efficient, many active learning methods have been proposed that reduce a dataset containing independent and identically distributed (IID) samples while maintaining excellent performance; to make datasets more stable, the problem of models encountering out-of-distribution (OOD) samples must be solved in order to improve generalization performance. However, current active learning method design and the methods for adding OOD samples lack guidance: it is unclear which samples should be selected, and which OOD samples should be added, to better improve generalization performance. In this paper, we propose a dataset with Complete Sample Elements (CSE), which carries labels such as rotation angle and distance in addition to the common classification labels. These labels help analyze the distribution characteristics of each element of an efficient dataset, thereby inspiring new active learning methods. We also construct a corresponding OOD test set, which can not only measure the generalization performance of the model but also helps explore metrics between OOD samples and the existing dataset to guide the selection of OOD samples, so that generalization can be improved efficiently. In this paper, we explore the distribution characteristics of efficient datasets in terms of the angle element and confirm that an efficient dataset tends to contain samples with different appearances. At the same time, experiments demonstrate the positive influence of adding OOD samples on the generalization performance of the dataset.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-07T13:03:45Z
No. of bitstreams: 1
ijimai8_1_1.pdf: 6576201 bytes, checksum: 4ee9b392f5fdb25f490e679ed2f10c21 (MD5); Made available in DSpace on 2023-03-07T13:03:45Z (GMT). No. of bitstreams: 1
ijimai8_1_1.pdf: 6576201 bytes, checksum: 4ee9b392f5fdb25f490e679ed2f10c21 (MD5)
</description>
</item>
<item rdf:about="https://reunir.unir.net/handle/123456789/14289">
<title>Human Activity Recognition From Sensorised Patient's Data in Healthcare: A Streaming Deep Learning-Based Approach</title>
<link>https://reunir.unir.net/handle/123456789/14289</link>
<description>Human Activity Recognition From Sensorised Patient's Data in Healthcare: A Streaming Deep Learning-Based Approach
Hurtado, Sandro; García-Nieto, José; Popov, Anton; Navas-Delgado, Ismael
Physical inactivity is one of the main risk factors for mortality, and its relationship with the main chronic diseases has been the subject of intensive medical research. A well-known method for assessing people’s activity is the use of accelerometers embedded in wearables and mobile phones. However, in the healthcare context, several critical issues arise related to the limited amount of labelled data available to build a classification model. Moreover, discriminating between activities is often challenging, since the variety of movement patterns in a particular group of patients (e.g. obese or geriatric patients) is limited over time. Consequently, the proposed work presents a novel approach to Human Activity Recognition (HAR) in healthcare that avoids this problem. The proposal is based on semi-supervised classification with Encoder-Decoder Convolutional Neural Networks (CNNs) using a combination strategy of public labelled and private unlabelled raw sensor data. In this way, the model can take advantage of the large amount of unlabelled data available by extracting relevant characteristics from these data, which increases the knowledge in the innermost layers. Hence, the trained model can generalize well when used in real-world use cases. Additionally, real-time patient monitoring is provided by Apache Spark streaming processing with sliding windows. For testing purposes, a real-world case study was conducted with a group of overweight patients in the healthcare system of Andalusia (Spain), classifying close to 30 TB of accelerometer sensor-based data. The proposed HAR streaming deep learning approach properly classifies movement patterns in real-time conditions, which is crucial for long-term daily patient monitoring.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-07T12:39:36Z
No. of bitstreams: 1
ijimai8_1_3.pdf: 3518563 bytes, checksum: 6573b5bd2a0673fa99edd01c4851aa6c (MD5); Made available in DSpace on 2023-03-07T12:39:36Z (GMT). No. of bitstreams: 1
ijimai8_1_3.pdf: 3518563 bytes, checksum: 6573b5bd2a0673fa99edd01c4851aa6c (MD5)
</description>
</item>
</rdf:RDF>
