<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://www.w3.org/2005/Atom">
<title>vol. 8, nº 2, june 2023</title>
<link href="https://reunir.unir.net/handle/123456789/14815" rel="alternate"/>
<subtitle/>
<id>https://reunir.unir.net/handle/123456789/14815</id>
<updated>2024-11-06T21:24:34Z</updated>
<dc:date>2024-11-06T21:24:34Z</dc:date>
<entry>
<title>Editor’s Note</title>
<link href="https://reunir.unir.net/handle/123456789/14832" rel="alternate"/>
<author>
<name>Gaona-García, Paulo Alonso</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14832</id>
<updated>2023-06-05T14:28:13Z</updated>
<summary type="text">Editor’s Note
Gaona-García, Paulo Alonso
Artificial Intelligence (AI) is one of the fastest-growing areas of knowledge, spanning sectors and fields of action worldwide. This growth has given rise to different positions: the most favorable point to AI's unquestionable contribution to facilitating decision making in many areas of society, while others hold firmly that its use should be regulated and measured, given the scope and risks to which we are exposed. For this reason, rigorous methods are increasingly required for the design and development of AI-based computational models; methods that involve strict validation mechanisms, as well as analysis of the possible risks and reach these models may have in their field of application. Such considerations would mark a valuable and relevant milestone in defining several paths forward, among which two stand out: 1) setting limits on the use of AI through increasingly sophisticated regulatory frameworks covering data protection and the regulated use of data, or 2) removing all barriers so that AI can be exploited openly, in all its dimensions, in any area of our society. Hence the importance of analysing the risks and threats that AI may present within the particular context in which it is applied.
Against this background, this regular edition of the “International Journal of Interactive Multimedia and Artificial Intelligence” presents a series of papers whose proposals are oriented to different fields and sectors and make use of diverse AI-based approaches, methods, models and systems, giving a general idea of how these challenges are being addressed in some fields of our society. In particular, this regular issue collects research focused on evolving recommender systems, classification models, decision support systems, system modelling, data analytics, optimization algorithms, image retrieval, deep neural networks, social network analysis, and the relevance of the design of User Experience (UX) proposals.
</summary>
</entry>
<entry>
<title>ResNet18 Supported Inspection of Tuberculosis in Chest Radiographs With Integrated Deep, LBP, and DWT Features</title>
<link href="https://reunir.unir.net/handle/123456789/14831" rel="alternate"/>
<author>
<name>Rajinikanth, Venkatesan</name>
</author>
<author>
<name>Kadry, Seifedine</name>
</author>
<author>
<name>Moreno-Ger, Pablo</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14831</id>
<updated>2023-11-22T08:22:52Z</updated>
<summary type="text">ResNet18 Supported Inspection of Tuberculosis in Chest Radiographs With Integrated Deep, LBP, and DWT Features
Rajinikanth, Venkatesan; Kadry, Seifedine; Moreno-Ger, Pablo
The lung is a vital organ in human physiology, and lung disease causes various health problems. Acute lung disease is a medical emergency, and several methods have therefore been developed and implemented to detect lung abnormalities. Tuberculosis (TB) is one of the most common lung diseases, and early diagnosis and treatment are necessary to cure it with appropriate medication. Clinical assessment of TB is commonly performed with chest radiographs (X-rays), and the recorded images are then examined to identify TB and its severity. This research proposes a TB detection framework using integrated optimal deep and handcrafted features. The stages of this work include (i) X-ray collection and processing, (ii) Pretrained Deep-Learning (PDL) scheme-based feature mining, (iii) feature extraction with Local Binary Pattern (LBP) and Discrete Wavelet Transform (DWT), (iv) feature optimization with the Firefly Algorithm, (v) feature ranking and serial concatenation, and (vi) classification with 5-fold cross-validation. The results of this study confirm that the ResNet18 scheme achieves better accuracy with the SoftMax classifier (95.2%) on deep features and with the Decision Tree classifier (99%) on concatenated features. Overall, the Decision Tree performs better than the other classifiers.
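As a hedged illustration of one handcrafted-feature stage named above, the sketch below computes a Local Binary Pattern (LBP) code map and a normalised histogram for a tiny made-up grayscale grid; the neighbourhood ordering and the toy image are illustrative assumptions, not the authors' exact LBP configuration or the chest X-ray data.

```python
def lbp_codes(img):
    """8-bit LBP code for every interior pixel of a 2D grayscale grid."""
    # Clockwise neighbour offsets starting at the top-left neighbour.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:   # neighbour at least as bright
                    code += 2 ** bit
            codes.append(code)
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes: a texture feature vector."""
    codes = lbp_codes(img)
    hist = [0.0] * bins
    for c in codes:
        hist[c] += 1.0
    total = len(codes) or 1
    return [v / total for v in hist]

img = [[10, 10, 10, 10],
       [10, 50, 50, 10],
       [10, 50, 50, 10],
       [10, 10, 10, 10]]
features = lbp_histogram(img)
```

In a pipeline like the one described, a histogram such as `features` would be concatenated with deep and DWT features before ranking and classification.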
</summary>
</entry>
<entry>
<title>Digit Recognition Using Composite Features With Decision Tree Strategy</title>
<link href="https://reunir.unir.net/handle/123456789/14830" rel="alternate"/>
<author>
<name>Chen, Chung-Hsing</name>
</author>
<author>
<name>Huang, Ko-Wei</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14830</id>
<updated>2023-06-05T13:55:48Z</updated>
<summary type="text">Digit Recognition Using Composite Features With Decision Tree Strategy
Chen, Chung-Hsing; Huang, Ko-Wei
At present, check transactions are among the most common forms of money transfer. The information for check exchange is printed using magnetic ink character recognition (MICR), widely used in the banking industry, primarily for processing check transactions. However, magnetic ink card readers are specialized and expensive, so general accounting departments or bookkeepers often fall back on manual data registration. An organization dealing with parts or corporate services might have to process 300 to 400 checks each day, which would require considerable labor for registration. The cost of a single-sided scanner is only 1/10 that of a MICR reader; hence, image recognition technology is an economical solution. In this study, we aim to use multiple features for character recognition of E13B, which comprises ten numerals and four symbols. For the numeric part, we used statistical features such as image density features and geometric features, with simple decision trees for classification. The symbols of E13B are composed of three distinct rectangles, classified according to their size and relative position. Using the same sample set, MLP, LeNet-5, AlexNet, and a hybrid CNN-SVM were trained on the numerical part as the experimental control group to verify the accuracy and speed of the proposed method. Our proposed method recognized all test samples correctly, with a recognition rate close to 100%, and achieved a prediction time of less than one millisecond per character (0.03 ms on average), over 50 times faster than state-of-the-art methods. Its accuracy is also better than all compared state-of-the-art methods. The proposed method was also deployed on an embedded device to confirm that a CPU, rather than a high-end GPU, suffices for verification.
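To make the "statistical features plus simple decision tree" idea concrete, here is a hypothetical miniature: zone-density features over a binary glyph and a hand-written two-level decision tree. The glyph shapes, zone layout and thresholds are illustrative assumptions, not the paper's actual E13B features.

```python
def zone_densities(glyph):
    """Split a binary glyph into 2x2 zones and return the ink density of each."""
    h, w = len(glyph), len(glyph[0])
    hh, hw = h // 2, w // 2
    zones = []
    for y0, y1 in ((0, hh), (hh, h)):
        for x0, x1 in ((0, hw), (hw, w)):
            ink = sum(glyph[y][x] for y in range(y0, y1) for x in range(x0, x1))
            zones.append(ink / ((y1 - y0) * (x1 - x0)))
    return zones   # top-left, top-right, bottom-left, bottom-right

def classify(glyph):
    """Hand-built two-level decision tree over the zone densities."""
    tl, tr, bl, br = zone_densities(glyph)
    if tl + tr + bl + br == 0:
        return "blank"     # no ink at all
    if abs(tl - tr) + abs(bl - br) >= 0.1:
        return "1"         # ink concentrated on one side: stroke-like glyph
    return "0"             # symmetric ink distribution: ring-like glyph

ring = [[0, 1, 1, 0],
        [1, 0, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 0]]
stroke = [[0, 0, 1, 0],
          [0, 0, 1, 0],
          [0, 0, 1, 0],
          [0, 0, 1, 0]]
```

Such rule-based trees over cheap features explain how sub-millisecond per-character prediction is plausible on a CPU-only embedded device.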
</summary>
</entry>
<entry>
<title>Exploring ChatGPT's Potential for Consultation, Recommendations and Report Diagnosis: Gastric Cancer and Gastroscopy Reports’ Case</title>
<link href="https://reunir.unir.net/handle/123456789/14593" rel="alternate"/>
<author>
<name>Zhou, Jiaming</name>
</author>
<author>
<name>Li, Tengyue</name>
</author>
<author>
<name>Fong, Simon James</name>
</author>
<author>
<name>Dey, Nilanjan</name>
</author>
<author>
<name>González-Crespo, Rubén</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14593</id>
<updated>2024-03-13T15:43:38Z</updated>
<summary type="text">Exploring ChatGPT's Potential for Consultation, Recommendations and Report Diagnosis: Gastric Cancer and Gastroscopy Reports’ Case
Zhou, Jiaming; Li, Tengyue; Fong, Simon James; Dey, Nilanjan; González-Crespo, Rubén
Artificial intelligence (AI) has shown its effectiveness in helping clinical users meet evolving challenges. Recently, ChatGPT, a newly launched AI chatbot with exceptional text comprehension capabilities, has triggered a global wave of AI popularization and application in seeking answers through human‒machine dialogues. Gastric cancer, as a globally prevalent disease, has a five-year survival rate of up to 90% when detected early and treated promptly. This research aims to explore ChatGPT's potential in disseminating gastric cancer knowledge, providing consultation recommendations, and interpreting endoscopy reports. Through experimentation, the GPT-4 model of ChatGPT achieved an appropriateness of 91.3% and a consistency of 95.7% in a gastric cancer knowledge test. Furthermore, GPT-4 has demonstrated considerable potential in consultation recommendations and endoscopy report analysis.
</summary>
</entry>
<entry>
<title>Adaptation of Applications to Compare Development Frameworks in Deep Learning for Decentralized Android Applications</title>
<link href="https://reunir.unir.net/handle/123456789/14592" rel="alternate"/>
<author>
<name>Sainz-de-Abajo, Beatriz</name>
</author>
<author>
<name>Laso, Sergio</name>
</author>
<author>
<name>Garcia-Alonso, Jose</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14592</id>
<updated>2023-06-05T13:32:44Z</updated>
<summary type="text">Adaptation of Applications to Compare Development Frameworks in Deep Learning for Decentralized Android Applications
Sainz-de-Abajo, Beatriz; Laso, Sergio; Garcia-Alonso, Jose
Not all machine learning and deep learning frameworks integrate with Android, and doing so requires certain prerequisites. The primary objective of this paper is to present the results of an analysis and comparison of deep learning development frameworks that can be adapted into fully decentralized Android apps from a cloud server. As a working methodology, we develop and/or modify the test applications that these frameworks offer a priori, in such a way as to allow an equitable comparison of the characteristics of interest. These parameters relate to attributes a user would consider: (1) success rate; (2) battery consumption; and (3) power consumption of the processor. After analysing the numerical results, the framework that behaves best with respect to the analysed characteristics for the development of an Android application is TensorFlow, which obtained the best scores against Caffe2 and Snapdragon NPE in percentage of correct answers, battery consumption, and device CPU power consumption. Data consumption was not considered because this study focuses on decentralized cloud storage applications.
</summary>
</entry>
<entry>
<title>Development of a Shared UX Vision Based on UX Factors Ascertained Through Attribution</title>
<link href="https://reunir.unir.net/handle/123456789/14587" rel="alternate"/>
<author>
<name>Winter, Dominique</name>
</author>
<author>
<name>Hausmann, Carolin</name>
</author>
<author>
<name>Hinderks, Andreas</name>
</author>
<author>
<name>Thomaschewski, Jörg</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14587</id>
<updated>2023-06-26T09:29:22Z</updated>
<summary type="text">Development of a Shared UX Vision Based on UX Factors Ascertained Through Attribution
Winter, Dominique; Hausmann, Carolin; Hinderks, Andreas; Thomaschewski, Jörg
User experience (UX) is an important quality in differentiating products, and developing a good user experience is a challenge for a product team. A common UX vision supports the team in making goal-oriented decisions regarding the user experience. This paper presents an approach to developing a shared UX vision, built by the product team during a collaborative session. To validate our approach, we conducted a first validation study in which we ran a collaborative session with two groups and a total of 37 participants, comprising product managers, UX designers and comparable professional profiles. At the end of the collaborative session, participants filled out a questionnaire. Through questions and observations, we identified ten good practices and four bad practices in the application of our approach. The top 3 good practices mentioned by the participants include the definition of decision-making procedures (G1), determining the UX vision with the team (G2), and using general UX factors as a basis (G3). The top 3 bad practices are: providing too little time for the development of the UX vision (B1), not providing clear cluster designations (B2) and working without user data (B3). The results show that the present approach for developing a UX vision helps to promote a shared understanding of the intended UX quickly and simply.
</summary>
</entry>
<entry>
<title>On the Importance of UX Quality Aspects for Different Product Categories</title>
<link href="https://reunir.unir.net/handle/123456789/14368" rel="alternate"/>
<author>
<name>Schrepp, Martin</name>
</author>
<author>
<name>Kollmorgen, Jessica</name>
</author>
<author>
<name>Meiners, Anna-Lena</name>
</author>
<author>
<name>Hinderks, Andreas</name>
</author>
<author>
<name>Winter, Dominique</name>
</author>
<author>
<name>Santoso, Harry B.</name>
</author>
<author>
<name>Thomaschewski, Jörg</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14368</id>
<updated>2023-06-05T13:26:38Z</updated>
<summary type="text">On the Importance of UX Quality Aspects for Different Product Categories
Schrepp, Martin; Kollmorgen, Jessica; Meiners, Anna-Lena; Hinderks, Andreas; Winter, Dominique; Santoso, Harry B.; Thomaschewski, Jörg
User experience (UX) is a holistic concept. We conceptualize UX as a set of semantically distinct quality aspects. These quality aspects relate subjectively perceived properties of the user interaction with a product to the psychological needs of users. Not all possible UX quality aspects are equally important for all products. The main use case of a product can determine the relative importance of UX aspects for the overall impression of the UX. In this paper, the authors present several studies that investigate this dependency between the product category and the importance of several well-known UX aspects. A method to measure the importance of such UX aspects is presented. In addition, the authors show that the observed importance ratings are stable, i.e., reproducible, and hardly influenced by demographic factors or cultural background. Thus, the ratings reported in our studies can be reused by UX professionals to find out which aspects of UX they should concentrate on in product design and evaluation.
</summary>
</entry>
<entry>
<title>Rhetorical Pattern Finding</title>
<link href="https://reunir.unir.net/handle/123456789/14367" rel="alternate"/>
<author>
<name>Gómez, Francisco</name>
</author>
<author>
<name>Tizón Díaz, Manuel</name>
</author>
<author>
<name>Arronte Alvarez, Aitor</name>
</author>
<author>
<name>Padilla, Victor</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14367</id>
<updated>2023-11-21T15:44:25Z</updated>
<summary type="text">Rhetorical Pattern Finding
Gómez, Francisco; Tizón Díaz, Manuel; Arronte Alvarez, Aitor; Padilla, Victor
In this paper, we study rhetorical patterns from a musicological and computational standpoint. First, a theoretical examination of what constitutes a rhetorical pattern is conducted. From that examination, which draws on primary sources and the study of the main composers, a formal definition of rhetorical patterns is proposed. Among the rhetorical figures, a set of imitative figures is selected for our study, namely epizeuxis, palilogy, synonymia, and polyptoton. Next, we design a computational model of the selected rhetorical patterns to find them automatically in a corpus of masses by the Renaissance composer Tomás Luis de Victoria. To obtain a ground truth against which to test our model, a group of experts manually annotated the rhetorical patterns; to reach a consensus on the annotations, the annotators followed a four-round Delphi method. The rhetorical patterns found by the annotators and by the algorithm are compared and their differences discussed. The algorithm reports almost all the patterns annotated by the experts (recall: 98.11%) and some additional patterns (precision: 71.73%). These additional patterns correspond to rhetorical patterns within other rhetorical patterns, which the annotators overlooked on the basis of their contextual knowledge. These results pose the question of how to integrate that contextual knowledge into the computational model.
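As a minimal sketch of detecting one of the imitative figures studied above, the snippet below finds epizeuxis (the immediate repetition of a motif) in a symbolic note sequence; the fixed motif length and the toy melody are illustrative assumptions, not the paper's actual model.

```python
def find_epizeuxis(notes, motif_len=2):
    """Return (start, motif) pairs where a motif is immediately repeated."""
    hits = []
    for i in range(len(notes) - 2 * motif_len + 1):
        motif = notes[i:i + motif_len]
        # Epizeuxis: the same motif restated directly after itself.
        if notes[i + motif_len:i + 2 * motif_len] == motif:
            hits.append((i, tuple(motif)))
    return hits

melody = ["D", "A", "F", "A", "F", "E", "D"]
hits = find_epizeuxis(melody)
```

A full system would additionally need transposition-tolerant matching and voice separation, which is where the contextual knowledge discussed above comes in.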
</summary>
</entry>
<entry>
<title>A Hybrid Secure Cloud Platform Maintenance Based on Improved Attribute-Based Encryption Strategies</title>
<link href="https://reunir.unir.net/handle/123456789/14366" rel="alternate"/>
<author>
<name>Kumar, Abhishek</name>
</author>
<author>
<name>Kumar, Swarn Avinash</name>
</author>
<author>
<name>Dutt, Vishal</name>
</author>
<author>
<name>Dubey, A. K.</name>
</author>
<author>
<name>Narang, Sushil</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14366</id>
<updated>2024-02-06T09:38:30Z</updated>
<summary type="text">A Hybrid Secure Cloud Platform Maintenance Based on Improved Attribute-Based Encryption Strategies
Kumar, Abhishek; Kumar, Swarn Avinash; Dutt, Vishal; Dubey, A. K.; Narang, Sushil
In the modern era, cloud platforms are the most widely used means of maintaining documents remotely under proper security norms. The concept of a cloud environment is similar to a network channel, but the cloud can be considered a refined form of network in which data can be stored on a server without range restrictions. Data maintained on a remote server needs strong security features, and high processing power is needed to retrieve the data from the server. In the past, several security schemes have been proposed to protect remote cloud servers reasonably well; however, attack possibilities over the cloud platform remain, and researchers continue to work on this problem. This paper introduces a hybrid data security scheme called the Improved Attribute-Based Encryption Scheme (IABES), which combines two powerful data security algorithms: the Advanced Encryption Standard (AES) and Attribute-Based Encryption (ABE). These two algorithms are combined to support the proposed approach to data maintenance on a remote cloud server with high-end security norms. Owing to its robustness, this hybrid scheme ensures that the data stored on the server cannot be attacked by an intruder. The key generation process generates a credential for each user; it is not visible to anyone, and the generated certificates cannot be extracted even if the user forgets the credentials. The only way to recover the certificate is to reset the credential. The obtained results demonstrate the accuracy of the proposed cipher security scheme compared with a regular cloud security management scheme, and the key generated by the proposed algorithm is unique: no one can guess or acquire it, not even the service provider or server administrator. Overall, the proposed system assures data maintenance on the cloud platform with a high level of security and robustness in Quality of Service.
</summary>
</entry>
<entry>
<title>A Greedy Randomized Adaptive Search With Probabilistic Learning for solving the Uncapacitated Plant Cycle Location Problem</title>
<link href="https://reunir.unir.net/handle/123456789/14356" rel="alternate"/>
<author>
<name>López-Plata, Israel</name>
</author>
<author>
<name>Expósito-Izquierdo, Christopher</name>
</author>
<author>
<name>Lalla-Ruiz, Eduardo</name>
</author>
<author>
<name>Melián-Batista, Belén</name>
</author>
<author>
<name>Moreno-Vega, J. Marcos</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14356</id>
<updated>2023-06-05T11:00:56Z</updated>
<summary type="text">A Greedy Randomized Adaptive Search With Probabilistic Learning for solving the Uncapacitated Plant Cycle Location Problem
López-Plata, Israel; Expósito-Izquierdo, Christopher; Lalla-Ruiz, Eduardo; Melián-Batista, Belén; Moreno-Vega, J. Marcos
In this paper, we address the Uncapacitated Plant Cycle Location Problem, a location-routing problem aimed at determining a subset of locations at which to set up plants dedicated to serving customers. We propose a mathematical formulation to model the problem. The high computational burden the formulation requires on large scenarios led us to develop a Greedy Randomized Adaptive Search Procedure with a Probabilistic Learning Model, whose rationale is to divide the problem into two interconnected sub-problems. The computational results indicate the high performance of our proposal in terms of solution quality and computational time. Specifically, we outperform the best approach from the literature on a wide range of scenarios.
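A schematic sketch of the kind of GRASP-with-learning loop described above, on a toy uncapacitated location instance: the distance matrix, the restricted-candidate-list rule, and the weight-reinforcement update are illustrative assumptions, not the authors' exact model.

```python
import random

# Service cost of each of three customers from each candidate plant.
DIST = {"p1": [2, 9, 8], "p2": [7, 3, 9], "p3": [8, 8, 2], "p4": [6, 6, 6]}
PLANTS, K = list(DIST), 2   # open exactly K plants

def cost(open_plants):
    """Each customer is served by its cheapest open plant."""
    return sum(min(DIST[p][c] for p in open_plants) for c in range(3))

def construct(weights, alpha=0.5):
    """Greedy randomized construction with a restricted candidate list (RCL)."""
    sol = []
    for _ in range(K):
        cand = [p for p in PLANTS if p not in sol]
        scores = {p: cost(sol + [p]) for p in cand}
        lo, hi = min(scores.values()), max(scores.values())
        cutoff = lo + alpha * (hi - lo)
        rcl = [p for p in cand if cutoff >= scores[p]]
        # Probabilistic learning: bias the pick by weights reinforced
        # from good solutions seen so far.
        sol.append(random.choices(rcl, [weights[p] for p in rcl])[0])
    return sol

def grasp(iterations=30, seed=1):
    random.seed(seed)
    weights = {p: 1.0 for p in PLANTS}
    best = None
    for _ in range(iterations):
        sol = construct(weights)
        if best is None or cost(best) > cost(sol):
            best = sol
        for p in best:            # reinforce plants used in the incumbent
            weights[p] += 1.0
    return sorted(best), cost(best)

solution, value = grasp()
```

The learning step here is a simple frequency reinforcement; the paper's probabilistic learning model and its two-sub-problem decomposition are more elaborate.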
</summary>
</entry>
<entry>
<title>OntoInfoG++: A Knowledge Fusion Semantic Approach for Infographics Recommendation</title>
<link href="https://reunir.unir.net/handle/123456789/14355" rel="alternate"/>
<author>
<name>Deepak, Gerard</name>
</author>
<author>
<name>Vibakar, Adithya</name>
</author>
<author>
<name>Santhanavijayan, A.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14355</id>
<updated>2023-06-05T10:30:03Z</updated>
<summary type="text">OntoInfoG++: A Knowledge Fusion Semantic Approach for Infographics Recommendation
Deepak, Gerard; Vibakar, Adithya; Santhanavijayan, A.
As humans constantly improvise and learn, the need for visualizing and recommending knowledge is increasing. Since the World Wide Web has exploded with multimedia content and a growing number of research papers, there is a potential need for inferential multimedia such as infographics, which can enable a new level of learning from the most viable information sources on the Web. The potential growth and future of technology call for a Web 3.0-compliant infographic recommendation system that supports aesthetic visualization, design and development. This paper proposes OntoInfoG++, a knowledge-centric recommendation approach for infographics that amalgamates metadata derived from multiple heterogeneous sources with crowd-sourced ontologies to recommend infographics based on the user's topic of interest. User clicks are taken into consideration along with an ontology modeled using the titles and keywords extracted from a dataset of research papers. The approach models the user's topic of interest from query words, current user clicks, and standard knowledge stores such as BibSonomy, DBpedia, Wikidata, the LOD Cloud, and crowd-sourced ontologies. Semantic alignment is achieved using three distinct measures, namely Horn's index, the EnAPMI measure, and information entropy. The final recommendation is obtained by computing the semantic similarity between the enriched topics of interest and infographic labels and ordering the recommended infographics by semantic similarity to yield a meaningful arrangement. OntoInfoG++ achieved an overall F-measure of 97.27%, which is best in class for an infographic recommendation system.
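A hedged sketch of the final ranking step described above: scoring candidate infographic labels against an enriched topic of interest with a plain bag-of-words cosine similarity. This is a stand-in for the paper's semantic-alignment measures (Horn's index, EnAPMI, entropy), which are not reproduced here; the topic and labels are made up.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Bag-of-words cosine similarity between two short texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(topic, labels):
    """Labels ranked by descending similarity to the enriched topic."""
    return sorted(labels, key=lambda lab: cosine(topic, lab), reverse=True)

topic = "deep learning for image classification"
labels = ["image classification with deep networks",
          "history of the printing press",
          "learning classification models"]
ranked = recommend(topic, labels)
```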
</summary>
</entry>
<entry>
<title>A Comparative Evaluation of Bayesian Networks Structure Learning Using Falcon Optimization Algorithm</title>
<link href="https://reunir.unir.net/handle/123456789/14352" rel="alternate"/>
<author>
<name>Qasim Awla, Hoshang</name>
</author>
<author>
<name>Wahhab Kareem, Shahab</name>
</author>
<author>
<name>Salih Mohammed, Amin</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14352</id>
<updated>2023-06-05T11:39:44Z</updated>
<summary type="text">A Comparative Evaluation of Bayesian Networks Structure Learning Using Falcon Optimization Algorithm
Qasim Awla, Hoshang; Wahhab Kareem, Shahab; Salih Mohammed, Amin
Bayesian networks are analytical models that represent probabilistic dependence relations among variables and are useful in machine learning for generating knowledge structure. Due to the vastness of the solution space, learning Bayesian network (BN) structures from data is an NP-hard problem. Score-and-search is one BN structure learning strategy. The authors present and evaluate the Falcon Optimization Algorithm (FOA) for Bayesian network structure learning. The method uses Inserting, Reversing, Moving, and Deleting operations to build the FOA search for the best structural solution, and the algorithm itself is inspired by the falcon's hunting technique during drought conditions. The suggested technique is compared, using a score metric function, with the Pigeon-Inspired search algorithm, Greedy Search, and the Antlion optimization search algorithm. The authors further evaluated the performance of these techniques in terms of confusion matrices on a variety of benchmark data sets. As the experiments show, the Falcon Optimization Algorithm outperforms the previous algorithms, generating higher scores and accuracy values.
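A schematic sketch of the score-and-search idea above: greedy hill climbing over DAG edge sets using insert, delete, and reverse edge moves as the neighbourhood. The toy scoring function is an illustrative assumption standing in for a real BN score such as BIC or BDeu, and the greedy loop stands in for the falcon-inspired metaheuristic.

```python
from itertools import permutations

NODES = ["A", "B", "C"]
GOOD = {("A", "B"), ("B", "C")}   # edges the toy score rewards

def is_dag(edges):
    """True when the directed edge set contains no cycle (DFS check)."""
    graph = {n: [v for (u, v) in edges if u == n] for n in NODES}
    def acyclic_from(n, stack):
        if n in stack:
            return False
        return all(acyclic_from(m, stack | {n}) for m in graph[n])
    return all(acyclic_from(n, frozenset()) for n in NODES)

def score(edges):
    # Toy stand-in for a real network score: reward listed edges,
    # penalise spurious ones.
    return sum(1 if e in GOOD else -1 for e in edges)

def neighbours(edges):
    """Apply the insert / delete / reverse edge moves."""
    for u, v in permutations(NODES, 2):
        e = (u, v)
        if e in edges:
            yield edges - {e}                  # delete
            yield (edges - {e}) | {(v, u)}     # reverse
        else:
            yield edges | {e}                  # insert

def hill_climb():
    current = frozenset()
    while True:
        best = max((n for n in neighbours(current) if is_dag(n)), key=score)
        if score(current) >= score(best):
            return current
        current = best

learned = hill_climb()
```

On this toy score the climb deterministically recovers the two rewarded edges while the acyclicity check filters out the cyclic candidates.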
</summary>
</entry>
<entry>
<title>Resource and Process Management With a Decision Model Based on Fuzzy Logic</title>
<link href="https://reunir.unir.net/handle/123456789/14349" rel="alternate"/>
<author>
<name>Fornerón Martínez, J. T.</name>
</author>
<author>
<name>Agostini, F.</name>
</author>
<author>
<name>la Red, David L.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14349</id>
<updated>2023-06-05T13:22:59Z</updated>
<summary type="text">Resource and Process Management With a Decision Model Based on Fuzzy Logic
Fornerón Martínez, J. T.; Agostini, F.; la Red, David L.
The allocation of resources shared in a distributed processing system needs to be coordinated through a mutual exclusion mechanism, which decides the order in which the shared resources are allocated to the processes that require them. This paper proposes an aggregation operator that can be used by a module managing the shared resources, whose function is to assign resources to processes according to their requirements (shared resources) and the status of the distributed nodes in which the processes run (computational load), using 2-tuples associated with linguistic labels.
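A small sketch of the 2-tuple linguistic representation such an operator relies on: a numeric value in [0, g] is mapped to a pair (label index, symbolic translation alpha), and aggregation averages the recovered numeric values. The label names and the mean operator are illustrative assumptions; the paper's operator is more specific.

```python
LABELS = ["none", "low", "medium", "high", "perfect"]   # s0 .. s4

def delta(beta):
    """Map a value beta in [0, g] to a 2-tuple (label index, alpha)."""
    i = round(beta)
    return i, beta - i

def delta_inv(i, alpha):
    """Inverse mapping: recover the numeric value of a 2-tuple."""
    return i + alpha

def aggregate(tuples):
    """Arithmetic-mean aggregation of 2-tuples, one simple such operator."""
    beta = sum(delta_inv(i, a) for i, a in tuples) / len(tuples)
    return delta(beta)

votes = [delta(3.0), delta(1.4), delta(2.2)]
idx, alpha = aggregate(votes)
label = LABELS[idx]
```

The alpha component keeps the information a plain rounding to labels would lose, which is the point of the 2-tuple model.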
</summary>
</entry>
<entry>
<title>Real World Anomalous Scene Detection and Classification using Multilayer Deep Neural Networks</title>
<link href="https://reunir.unir.net/handle/123456789/14335" rel="alternate"/>
<author>
<name>Jan, Atif</name>
</author>
<author>
<name>Khan, Gul Muhammad</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14335</id>
<updated>2023-06-05T09:56:41Z</updated>
<summary type="text">Real World Anomalous Scene Detection and Classification using Multilayer Deep Neural Networks
Jan, Atif; Khan, Gul Muhammad
Surveillance videos record malicious events in a locality, and various machine learning algorithms are used to detect them. Deep-learning algorithms, the most prominent AI algorithms, are data-hungry as well as computationally expensive; they perform better when trained on a diverse and huge set of examples. These modern AI methods have a dire need for human intelligence to frame the problem in a way that reduces the ultimate effort in terms of computational cost. In this research work, a novel training methodology termed Bag of Focus (BoF) has been proposed. BoF is based on the concept of selecting motion-intensive blocks in a long video for training different deep neural networks (DNNs). The methodology reduced the computational overhead by 90% (ten times) in comparison to using full-length videos. Networks trained using BoF have been observed to be equally effective in terms of performance as the same networks trained on the full-length dataset. In this research work, firstly, a fine-grained annotated dataset including instance and activity information has been developed for real-world volume crimes. Secondly, a BoF-based methodology has been introduced for effective training of state-of-the-art 3D and 2D Convolutional Neural Networks (CNNs). Lastly, a comparison between the state-of-the-art networks has been presented for malicious event recognition in videos. It has been observed that the 2D CNN, even with fewer parameters, achieved a promising classification accuracy of 98.7% and an Area Under the Curve (AUC) of 99.7%.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T10:46:39Z&#13;
No. of bitstreams: 1&#13;
ip2021_10_010_0.pdf: 3198406 bytes, checksum: 0f55f0dc5ce2de966e69cb4804ae19b1 (MD5); Made available in DSpace on 2023-03-13T10:46:39Z (GMT)
</summary>
</entry>
<entry>
<title>A Hybrid Parallel Classification Model for the Diagnosis of Chronic Kidney Disease</title>
<link href="https://reunir.unir.net/handle/123456789/14334" rel="alternate"/>
<author>
<name>Singh, Vijendra</name>
</author>
<author>
<name>Jain, Divya</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14334</id>
<updated>2023-06-05T09:42:52Z</updated>
<summary type="text">A Hybrid Parallel Classification Model for the Diagnosis of Chronic Kidney Disease
Singh, Vijendra; Jain, Divya
Chronic Kidney Disease (CKD) has become a prevalent disease, affecting people around the world. Accurate prediction of CKD progression over time is essential for reducing its associated mortality and morbidity rates. This paper proposes a fast, novel hybrid approach to diagnosing chronic renal disease. The proposed approach is based on optimizing an SVM classifier with a hybridized dimensionality reduction approach to identify the most informative parameters for CKD diagnosis. It handles feature selection in two steps. The first is a filter-based approach using the ReliefF method to assign weights and ranks to each feature of the dataset. The second is dimensionality reduction of the best-selected subset by means of PCA, a feature extraction technique. For faster execution, the datasets are processed simultaneously on multiple processors. The proposed model achieved the highest prediction accuracy of 92.5% on the clinical CKD dataset compared to existing methods: ‘CFS + SVM’ (60.45%), ‘ReliefF + SVM’ (86%), ‘MIFS + SVM’ (56.72%), and ‘ReliefF + CFS + SVM’ (54.37%). The proposed work is also examined on the benchmark Chronic Kidney Disease Dataset, achieving a classification accuracy of 98.5% compared to ‘CFS + SVM’ (92.7%), ‘ReliefF + SVM’ (89.6%), and ‘MIFS + SVM’ (94.7%). The experimental outcomes demonstrate that the proposed hybridized model is effective in medical data classification tasks and is, therefore, a promising tool for the diagnosis of CKD patients. The proposed approach is statistically validated with the Friedman test, with significant results compared to other techniques. It also executes in the least time, with improved prediction accuracy, and competes with and even outperforms other methods in the literature.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T10:30:50Z&#13;
No. of bitstreams: 1&#13;
ip2021_10_008.pdf: 1663611 bytes, checksum: 010c00dc7292ae899ce2d0844508a619 (MD5); Made available in DSpace on 2023-03-13T10:30:50Z (GMT)
</summary>
</entry>
<entry>
<title>RGBeat: A Recoloring Algorithm for Deutan and Protan Dichromats</title>
<link href="https://reunir.unir.net/handle/123456789/14333" rel="alternate"/>
<author>
<name>Ribeiro, Madalena</name>
</author>
<author>
<name>Gomes, Abel</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14333</id>
<updated>2023-06-05T10:33:14Z</updated>
<summary type="text">RGBeat: A Recoloring Algorithm for Deutan and Protan Dichromats
Ribeiro, Madalena; Gomes, Abel
Deutan and protan dichromats see exactly two hues in the HSV color space: 240-blue (240°) and 60-yellow (60°). Consequently, they see both reds and greens as yellows and therefore cannot distinguish reds from greens very well. Thus, their color space is 2D and results from the intersection between the HSV color cone and the 60°-240° plane. The main contribution of the RGBeat recoloring algorithm is that it is the first recoloring algorithm that enhances the color perception of deutan and protan dichromats without compromising lifelong color perceptual learning. Also, as far as we know, this is the first HTML5-compliant web recoloring approach for dichromats that considers both text and image recoloring in an integrated manner.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-13T10:13:54Z&#13;
No. of bitstreams: 1&#13;
ip2022_01_003_0.pdf: 14595713 bytes, checksum: 66c1760f71dd542e2c028df01beab30b (MD5); Made available in DSpace on 2023-03-13T10:13:54Z (GMT)
</summary>
</entry>
<entry>
<title>RIADA: A Machine-Learning Based Infrastructure for Recognising the Emotions of Spotify Songs</title>
<link href="https://reunir.unir.net/handle/123456789/14327" rel="alternate"/>
<author>
<name>Álvarez, P.</name>
</author>
<author>
<name>García de Quirós, J.</name>
</author>
<author>
<name>Baldassarri, S.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14327</id>
<updated>2023-06-05T10:57:46Z</updated>
<summary type="text">RIADA: A Machine-Learning Based Infrastructure for Recognising the Emotions of Spotify Songs
Álvarez, P.; García de Quirós, J.; Baldassarri, S.
Music emotions can help to improve the personalization of the services and contents offered by music streaming providers. Many research works based on machine learning techniques have addressed the problem of recognising music emotions in recent years. Nevertheless, the results obtained apply only to small music repositories and do not consider what users feel when they listen to the songs. These issues prevent the existing proposals from being integrated into the personalization mechanisms of online music providers. In this paper, we present the RIADA infrastructure, composed of a set of systems able to emotionally annotate the catalog of songs offered by Spotify based on the users’ perception. RIADA works with the Spotify playlist miner and data services to build emotion recognition models that address the open challenges previously mentioned. Machine learning algorithms, music information retrieval techniques, architectures for application parallelization, and cloud computing have been combined into a complex engineering solution able to integrate music emotions into Spotify-based applications.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T15:13:55Z&#13;
No. of bitstreams: 1&#13;
ip2022_04_02.pdf: 1334801 bytes, checksum: 5657c9b5a9277b31891d505065b116b1 (MD5); Made available in DSpace on 2023-03-10T15:13:55Z (GMT)
</summary>
</entry>
<entry>
<title>Cosine Similarity Based Hierarchical Skeleton and Cross Indexing for Large Scale Image Retrieval Using Mapreduce Framework</title>
<link href="https://reunir.unir.net/handle/123456789/14326" rel="alternate"/>
<author>
<name>Qianwen, Zhong</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14326</id>
<updated>2023-06-05T13:17:26Z</updated>
<summary type="text">Cosine Similarity Based Hierarchical Skeleton and Cross Indexing for Large Scale Image Retrieval Using Mapreduce Framework
Qianwen, Zhong
The imaging data in various fields, such as industry, institutions, and medicine, has grown exponentially in recent years. An innovative software solution is required for the efficient management of image data. The MapReduce framework is used for large-scale image data processing. Various cross-indexing techniques have been developed to transform images into binary sequences, but retrieving an image from the reducer based on the feature vector remains a major challenge. Image retrieval over large-scale image databases has attracted major attention, and cross-indexing plays a key role in the research community. Therefore, in this research, a new method for image retrieval, named Cosine Similarity-based hierarchical skeleton and cross-indexing, is proposed to perform the retrieval process effectively in the MapReduce framework. The feature vectors of the images are converted to binary sequences. The Most Significant Bit (MSB) of the binary code is used to store the images in the mapper using the cross-indexing model. The image retrieval process is achieved through the reducer based on the Tanimoto similarity measure. The binary sequence for the query image is calculated based on its feature vector, and the MSB of its binary code is matched with the MSB codes of the images in the mapper to achieve retrieval. The proposed method effectively achieved better performance through the cross-indexing model with the usage of the feature vector. Its performance is compared with existing techniques using the UKBench dataset. The proposed method attains values of 0.784, 0.729, 0.75, 31.23, and 17.84 sec for F1-score, precision, recall, computational cost, and computational time on query set-1 with four mappers.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T15:08:19Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_008.pdf: 2790409 bytes, checksum: bb52e6d3cb166c7d1e3675ea8a327baa (MD5); Made available in DSpace on 2023-03-10T15:08:19Z (GMT)
</summary>
</entry>
<entry>
<title>Multi-Agent and Fuzzy Inference-Based Framework for Traffic Light Optimization</title>
<link href="https://reunir.unir.net/handle/123456789/14325" rel="alternate"/>
<author>
<name>Ikidid, Abdelouafi</name>
</author>
<author>
<name>Abdelaziz, El Fazziki</name>
</author>
<author>
<name>Sadgal, Mohammed</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14325</id>
<updated>2023-06-05T10:21:45Z</updated>
<summary type="text">Multi-Agent and Fuzzy Inference-Based Framework for Traffic Light Optimization
Ikidid, Abdelouafi; Abdelaziz, El Fazziki; Sadgal, Mohammed
Although agent technologies have gained wide popularity in distributed systems, their potential for advanced management of vehicle traffic has not been sufficiently explored. This paper presents a traffic simulation framework based on agent technology and fuzzy logic. The objective of this framework is to act on the phase layouts, represented by their sequences and lengths, to maximize throughput and improve traffic flow at an isolated intersection and across the whole multi-intersection network, through both inter- and intra-intersection collaboration and coordination. Signal layouts are optimized in real time, based not only on local stream factors but also on traffic stream conditions at surrounding intersections. The system profits from agent communication, collaboration, and coordination features, along with a decentralized organization, to decompose traffic control optimization into subproblems and enable distributed resolution; the separate parts can thus be resolved rapidly in parallel. It also uses fuzzy technology to handle the uncertainty of traffic conditions. An instance of the proposed framework was designed and validated in the ANYLOGIC simulator. The results and analysis indicate that the designed system can significantly improve efficiency at an individual intersection as well as in the multi-intersection network. It reduces the average travel delay and the time spent in the network compared to multi-agent-based adaptive signal control systems.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T15:01:38Z&#13;
No. of bitstreams: 1&#13;
ip2021_12_002.pdf: 1280024 bytes, checksum: af272fdf10087c5ff553bc37f4787e81 (MD5); Made available in DSpace on 2023-03-10T15:01:38Z (GMT)
</summary>
</entry>
<entry>
<title>Deep Learning Assisted Medical Insurance Data Analytics With Multimedia System</title>
<link href="https://reunir.unir.net/handle/123456789/14324" rel="alternate"/>
<author>
<name>Zhang, Cheng</name>
</author>
<author>
<name>Vinodhini, B.</name>
</author>
<author>
<name>Muthu, Bala Anand</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14324</id>
<updated>2023-06-05T13:14:02Z</updated>
<summary type="text">Deep Learning Assisted Medical Insurance Data Analytics With Multimedia System
Zhang, Cheng; Vinodhini, B.; Muthu, Bala Anand
Big Data presents considerable challenges to deep learning for transforming complex, high-dimensional, and heterogeneous biomedical data into health care data. Various kinds of data are analyzed in recent biomedical research, including e-health records, medical imaging, text, and IoT sensor data, which are complex, poorly labeled, heterogeneous, and usually unstructured. Conventional statistical learning and data mining methods usually require feature extraction first to acquire more robust and effective variables from these data; the extracted features then help build clustering or prediction models. The latest advancements in deep learning technologies provide new, useful paradigms for obtaining end-to-end learning from complex data. Deep learning builds computational models in which multiple layers represent abstractions of the data. The prospect of deep learning models in medical imaging interpretation augments clinician performance, and automated segmentation reduces diagnosis time. This work presents a convolutional neural network-based deep learning infrastructure that performs medical imaging data analysis in various pipeline stages, including data loading, data augmentation, network architectures, loss functions, and evaluation metrics. Our proposed deep learning approach supports both 2D and 3D medical image analysis. We evaluate the proposed system's performance using metrics such as sensitivity, specificity, accuracy, and precision over clinical data with and without augmentation.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T13:56:53Z&#13;
No. of bitstreams: 1&#13;
ip2023_01_009.pdf: 5077977 bytes, checksum: 92b7117bfbea0e044a01e228b9a51ae4 (MD5); Made available in DSpace on 2023-03-10T13:56:53Z (GMT)
</summary>
</entry>
<entry>
<title>HDDSS: An Enhanced Heart Disease Decision Support System using RFE-ABGNB Algorithm</title>
<link href="https://reunir.unir.net/handle/123456789/14323" rel="alternate"/>
<author>
<name>Dhilsath Fathima, M.</name>
</author>
<author>
<name>Justin Samuel, S.</name>
</author>
<author>
<name>Raja, S. P.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14323</id>
<updated>2023-06-05T09:31:27Z</updated>
<summary type="text">HDDSS: An Enhanced Heart Disease Decision Support System using RFE-ABGNB Algorithm
Dhilsath Fathima, M.; Justin Samuel, S.; Raja, S. P.
Heart disease is the leading cause of mortality globally. Heart disease refers to a range of disorders that affect the heart and blood vessels. The risks associated with heart disease are minimized if it is detected early. Previous studies have suggested many heart disease decision-support systems based on machine learning (ML) algorithms. However, low prediction accuracy is the main issue in these heart disease decision-support systems. The proposed work developed a heart disease decision-support system (HDDSS) that can predict whether or not a person has heart disease. The main goal of this research work is to use RFE-ABGNB to improve HDDSS prediction accuracy. The Cleveland heart disease dataset is used for training and validating the proposed HDDSS. The two significant stages of HDDSS are the feature selection stage and the classification modeling stage. The recursive feature elimination (RFE) technique is used in the first stage to select the relevant features of the heart disease dataset. In the second stage, the proposed Adaptive Boosted Gaussian Naïve Bayes (ABGNB) algorithm is used to construct a classification model for training and validating the heart disease decision-support system. The output of HDDSS is analyzed using various classification output measures. According to the results obtained, our proposed method attained a predictive performance of 92.87 percent. This HDDSS model performs well compared to other heart disease decision-support systems found in the literature. According to our experimental analysis, the RFE-ABGNB-based heart disease decision-support system is more appropriate for heart disease prediction.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T13:47:13Z&#13;
No. of bitstreams: 1&#13;
ip2021_10_003.pdf: 1485176 bytes, checksum: 6b7f0f1040d71b4c29b19d0c283d2772 (MD5); Made available in DSpace on 2023-03-10T13:47:13Z (GMT)
</summary>
</entry>
<entry>
<title>Local Model-Agnostic Explanations for Black-box Recommender Systems Using Interaction Graphs and Link Prediction Techniques</title>
<link href="https://reunir.unir.net/handle/123456789/14321" rel="alternate"/>
<author>
<name>Caro-Martínez, Marta</name>
</author>
<author>
<name>Jiménez-Díaz, Guillermo</name>
</author>
<author>
<name>Recio-García, Juan A.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14321</id>
<updated>2023-06-05T10:17:54Z</updated>
<summary type="text">Local Model-Agnostic Explanations for Black-box Recommender Systems Using Interaction Graphs and Link Prediction Techniques
Caro-Martínez, Marta; Jiménez-Díaz, Guillermo; Recio-García, Juan A.
Explanations in recommender systems are a requirement for improving users’ trust and experience. Traditionally, explanations in recommender systems are derived from internal data regarding ratings, item features, and user profiles. However, this information is not available in black-box recommender systems that lack sufficient data transparency. This work proposes a local model-agnostic, explanation-by-example method for recommender systems based on knowledge graphs that addresses this knowledge requirement: it only requires information about the interactions between users and items. Through the proper transformation of these knowledge graphs into item-based and user-based structures, link prediction techniques are applied to find similarities between the nodes and to identify explanatory items for the user’s recommendation. Experimental evaluation demonstrates that these knowledge graphs are more effective than classical content-based explanation approaches while having lower information requirements, making them more suitable for black-box recommender systems.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-10T13:25:30Z&#13;
No. of bitstreams: 1&#13;
ip2021_12_001_0.pdf: 1615281 bytes, checksum: f98ee39ee64a909c80a9c3160689213e (MD5); Made available in DSpace on 2023-03-10T13:25:30Z (GMT)
</summary>
</entry>
<entry>
<title>Validity and Intra Rater Reliability of a New Device for Tongue Force Measurement</title>
<link href="https://reunir.unir.net/handle/123456789/14312" rel="alternate"/>
<author>
<name>Diaz-Saez, Marta Carlota</name>
</author>
<author>
<name>Beltran-Alacreu, Hector</name>
</author>
<author>
<name>Gil-Castillo, Javier</name>
</author>
<author>
<name>Navarro-Fernández, Gonzalo</name>
</author>
<author>
<name>Cebrian Carretero, Jose Luis</name>
</author>
<author>
<name>Gil-Martínez, Alfonso</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14312</id>
<updated>2023-06-05T10:42:12Z</updated>
<summary type="text">Validity and Intra Rater Reliability of a New Device for Tongue Force Measurement
Diaz-Saez, Marta Carlota; Beltran-Alacreu, Hector; Gil-Castillo, Javier; Navarro-Fernández, Gonzalo; Cebrian Carretero, Jose Luis; Gil-Martínez, Alfonso
Background. The tongue is made up of multiple muscles, both extrinsic and intrinsic. The hyoid, jaw, and maxillary complex contain the tongue, which hangs between these structures, forming an important biomechanical system. This organ has to work in coordination with craniofacial structures to ensure normal orofacial functioning. There are different devices on the market for tongue force measurement. However, they are not accessible to patients due to their size and very high prices. Likewise, other devices have not yet undergone validity and reliability studies. The purpose of this study was to validate a new device by proving that it is accurate compared to an algometer, and to determine the intra-rater reliability of a protocol to assess maximum tongue force in asymptomatic subjects. Material and methods. This is an observational longitudinal study with repeated measurements. A prototype device was developed specifically for this study to measure tongue force through force-sensitive resistor sensors. The prototype system consisted of a device to perform and transmit the measurement and C++ software on a computer to record data from each session. Formulas were derived to calibrate the system. For validity, the force measured by the prototype was compared with that of the algometer. For intra-rater reliability, 29 asymptomatic Spanish subjects were recruited, and a standardized protocol was carried out for the tests. Results. Experiments to assess validity showed a strong correlation (r&gt;0.97) and excellent reliability (ICC&gt;0.90) between devices. The intra-rater reliability analysis showed an excellent ICC (0.93), with a 95% CI of 0.86 to 0.97 and an MDC90 of 6.26 N. Conclusion. We demonstrated good validity and high intra-rater reliability for the prototype device for measuring maximum tongue force.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-09T15:14:12Z&#13;
No. of bitstreams: 1&#13;
ip2022_02_001.pdf: 1108123 bytes, checksum: 83add9a60fbe02db172335dc2a1497f1 (MD5); Made available in DSpace on 2023-03-09T15:14:12Z (GMT)
</summary>
</entry>
<entry>
<title>Mapping the Situation of Educational Technologies in the Spanish University System Using Social Network Analysis and Visualization</title>
<link href="https://reunir.unir.net/handle/123456789/14310" rel="alternate"/>
<author>
<name>Vargas Quesada, B.</name>
</author>
<author>
<name>Zarco, Carmen</name>
</author>
<author>
<name>Cordón, Oscar</name>
</author>
<id>https://reunir.unir.net/handle/123456789/14310</id>
<updated>2023-11-22T08:36:11Z</updated>
<summary type="text">Mapping the Situation of Educational Technologies in the Spanish University System Using Social Network Analysis and Visualization
Vargas Quesada, B.; Zarco, Carmen; Cordón, Oscar
Educational Technologies (EdTech) are based on the use of Information and Communication Technologies (ICT) to improve the quality of teaching and learning. EdTech is experiencing great development at different educational levels worldwide, especially since the appearance of COVID-19. A study recently published by the ICT Sectorial of CRUE Universidades Españolas, the Spanish University Association, is the first report on the implementation of such technologies within Spain's University System. This paper presents two different maps based on the data from that report. Together, they illustrate the penetration of different types of EdTech in our university system and shed light on the strategic interest behind their adoption. Our goal is to produce self-explanatory maps that can be easily and directly interpreted. The first map reflects wide granularity in terms of the global importance of technologies, while the second draws relevant conclusions from the spatial position of Spain's universities, the size of the nodes that represent them (directly related to their strategic interest in EdTech), and the local relationships among them (identifying similarities in those strategic interests).
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2023-03-09T08:55:53Z&#13;
No. of bitstreams: 1&#13;
ip2021_09_04.pdf: 1539656 bytes, checksum: 2f1c40a9c7eae6019e6ebdf25943ac5d (MD5); Made available in DSpace on 2023-03-09T08:55:53Z (GMT)
</summary>
</entry>
</feed>
