<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://www.w3.org/2005/Atom">
<title>Vol. 7, Nº 2, December 2021</title>
<link href="https://reunir.unir.net/handle/123456789/13039" rel="alternate"/>
<subtitle/>
<id>https://reunir.unir.net/handle/123456789/13039</id>
<updated>2024-11-04T14:17:07Z</updated>
<dc:date>2024-11-04T14:17:07Z</dc:date>
<entry>
<title>Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations</title>
<link href="https://reunir.unir.net/handle/123456789/13074" rel="alternate"/>
<author>
<name>Hameed Abdulkareem, Karrar</name>
</author>
<author>
<name>Arbaiy, Nureize</name>
</author>
<author>
<name>Hussein Arif, Zainab</name>
</author>
<author>
<name>Nasser Al-Mhiqani, Mohammed</name>
</author>
<author>
<name>Abed Mohammed, Mazin</name>
</author>
<author>
<name>Kadry, Seifedine</name>
</author>
<author>
<name>Alkareem Alyasseri, Zaid Abdi</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13074</id>
<updated>2022-05-11T12:07:45Z</updated>
<summary type="text">Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations
Hameed Abdulkareem, Karrar; Arbaiy, Nureize; Hussein Arif, Zainab; Nasser Al-Mhiqani, Mohammed; Abed Mohammed, Mazin; Kadry, Seifedine; Alkareem Alyasseri, Zaid Abdi
Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects are covered to provide a broad understanding of the surveyed literature: the datasets used, the challenges other researchers have faced, their motivations, and recommendations for diminishing the obstacles reported in the literature. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search process covers three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), from 2008 to 2021. These indices are selected because they are sufficient in terms of coverage. After defining the inclusion and exclusion criteria, we include 152 articles in the final set. A total of 55 out of 152 articles focus on studies that conducted image dehazing, and 13 out of 152 are review papers based on scenarios and general overviews. Finally, most of the included articles (84/152) center on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique; it requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets based on different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We also conduct an experimental comparison of various image dehazing algorithms using objective image quality assessment. In conclusion, unlike other review papers, our study distinctly reflects different observations on the image dehazing area. 
We believe that the result of this study can serve as a useful guideline for practitioners who are looking for a comprehensive view on image dehazing.
</summary>
</entry>
<entry>
<title>Editor's Note</title>
<link href="https://reunir.unir.net/handle/123456789/13073" rel="alternate"/>
<author>
<name>Blanco Valencia, Xiomara Patricia</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13073</id>
<updated>2022-05-11T11:45:44Z</updated>
<summary type="text">Editor's Note
Blanco Valencia, Xiomara Patricia
The International Journal of Interactive Multimedia and Artificial Intelligence - IJIMAI - provides a space in which scientists and professionals can report about new advances in Artificial Intelligence (AI). On this occasion, for the last edition of the year, I am pleased to present a regular issue including different investigations covering aspects and problems in AI and its use in various fields such as medicine, education, image analysis, protection of data, among others.
</summary>
</entry>
<entry>
<title>A Study on RGB Image Multi-Thresholding using Kapur/Tsallis Entropy and Moth-Flame Algorithm</title>
<link href="https://reunir.unir.net/handle/123456789/13072" rel="alternate"/>
<author>
<name>Rajinikanth, V.</name>
</author>
<author>
<name>Kadry, Seifedine</name>
</author>
<author>
<name>González-Crespo, Rubén</name>
</author>
<author>
<name>Verdú, Elena</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13072</id>
<updated>2022-07-01T10:18:32Z</updated>
<summary type="text">A Study on RGB Image Multi-Thresholding using Kapur/Tsallis Entropy and Moth-Flame Algorithm
Rajinikanth, V.; Kadry, Seifedine; González-Crespo, Rubén; Verdú, Elena
In the literature, a considerable number of image processing and evaluation procedures have been proposed and implemented in various domains due to their practical importance. Thresholding is a pre-processing technique widely implemented to enhance the information in gray/RGB images. It enhances an image by grouping similar pixels based on the chosen thresholds. In this research, an entropy-assisted thresholding is implemented for benchmark RGB images. The aim of this work is to examine the thresholding performance of well-known entropy functions, such as Kapur’s and Tsallis, for a chosen image threshold. This work employs the Moth-Flame-Optimization (MFO) algorithm to support the automatic identification of the finest threshold (Th) on the benchmark RGB image for a chosen threshold value (Th=2,3,4,5). After obtaining the thresholded image, it is compared against the original picture, and the necessary Picture-Quality-Values (PQV) are computed to confirm the merit of the proposed work. The experimental investigation is demonstrated using benchmark images of various dimensions, and the outcome of this study confirms that the MFO helps to obtain a satisfactory result compared to the other heuristic algorithms considered in this study.
</summary>
</entry>
<entry>
<title>Local Technology to Enhance Data Privacy and Security in Educational Technology</title>
<link href="https://reunir.unir.net/handle/123456789/13071" rel="alternate"/>
<author>
<name>Amo, Daniel</name>
</author>
<author>
<name>Prinsloo, Paul</name>
</author>
<author>
<name>Alier, Marc</name>
</author>
<author>
<name>Fonseca, David</name>
</author>
<author>
<name>Torres Kompen, Ricardo</name>
</author>
<author>
<name>Canaleta, Xavier</name>
</author>
<author>
<name>Herrero-Martín, Javier</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13071</id>
<updated>2022-05-11T09:59:51Z</updated>
<summary type="text">Local Technology to Enhance Data Privacy and Security in Educational Technology
Amo, Daniel; Prinsloo, Paul; Alier, Marc; Fonseca, David; Torres Kompen, Ricardo; Canaleta, Xavier; Herrero-Martín, Javier
In educational environments, technological adoption over the last 10 years has enabled a data-driven, decision-making paradigm in organizations. The integration of cloud services in schools and universities is a positive shift in the field of learning, but it also presents threats to all academic roles that need to be discussed in terms of protection, privacy, and confidentiality. Cloud storage brings the ubiquity of data to this technical transition and an illusory opportunity for cost savings. In many cases, this means that certain actors, beyond the control of schools and colleges, collect, handle, and process educational data on private servers and data centers. This privatization enables the manipulation of stored records, leaks, and unauthorized access. In this article, we expose the possibilities that open up from the viewpoint of local technology adoption. We seek to reduce, or even completely avoid, the detrimental effects of cloud-based instructional and analytical technology by mixing in, or relying solely on, local technology. Technological methods that conform to this alternative viewpoint, as well as new lines of study, are also suggested and developed.
</summary>
</entry>
<entry>
<title>Cross-Lingual Neural Network Speech Synthesis Based on Multiple Embeddings</title>
<link href="https://reunir.unir.net/handle/123456789/13070" rel="alternate"/>
<author>
<name>Nosek, Tijana V.</name>
</author>
<author>
<name>Suzić, Siniša B.</name>
</author>
<author>
<name>Pekar, Darko J.</name>
</author>
<author>
<name>Obradović, Radovan J.</name>
</author>
<author>
<name>Sečujski, Milan S.</name>
</author>
<author>
<name>Delić, Vlado D.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13070</id>
<updated>2022-05-11T09:49:49Z</updated>
<summary type="text">Cross-Lingual Neural Network Speech Synthesis Based on Multiple Embeddings
Nosek, Tijana V.; Suzić, Siniša B.; Pekar, Darko J.; Obradović, Radovan J.; Sečujski, Milan S.; Delić, Vlado D.
The paper presents a novel architecture and method for speech synthesis in multiple languages, in the voices of multiple speakers, and in multiple speaking styles, even in cases when speech from a particular speaker in the target language was not present in the training data. The method is based on applying neural network embeddings to combinations of speaker and style IDs, but also to phones in particular phonetic contexts, without any prior linguistic knowledge of their phonetic properties. This enables the network not only to efficiently capture similarities and differences between speakers and speaking styles, but also to establish appropriate relationships between phones belonging to different languages, and ultimately to produce synthetic speech in the voice of a certain speaker in a language that he/she has never spoken. The validity of the proposed approach has been confirmed through experiments with models trained on speech corpora of American English and Mexican Spanish. It has also been shown that the proposed approach supports the use of neural vocoders, i.e., that they are able to produce synthesized speech of good quality even in languages that they were not trained on.
</summary>
</entry>
<entry>
<title>Learning Analytics to Detect Evidence of Fraudulent Behaviour in Online Examinations</title>
<link href="https://reunir.unir.net/handle/123456789/13060" rel="alternate"/>
<author>
<name>Balderas, Antonio</name>
</author>
<author>
<name>Palomo-Duarte, Manuel</name>
</author>
<author>
<name>Caballero-Hernández, Juan Antonio</name>
</author>
<author>
<name>Rodriguez-Garcia, Mercedes</name>
</author>
<author>
<name>Dodero, Juan Manuel</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13060</id>
<updated>2022-05-10T12:33:15Z</updated>
<summary type="text">Learning Analytics to Detect Evidence of Fraudulent Behaviour in Online Examinations
Balderas, Antonio; Palomo-Duarte, Manuel; Caballero-Hernández, Juan Antonio; Rodriguez-Garcia, Mercedes; Dodero, Juan Manuel
Lecturers are often reluctant to set examinations online because of the potential for fraudulent behaviour by their students. This concern has increased during the coronavirus pandemic because courses that were previously designed to be taken face-to-face have had to be conducted online. The courses have had to be redesigned, including seminars, laboratory sessions, and evaluation activities. This has brought lecturers and students into conflict because, according to the students, the activities and examinations that have been redesigned to avoid cheating are also harder. The lecturers’ concern is that students can collaborate in taking examinations that must be taken individually, without the lecturers being able to do anything to prevent it, i.e., fraudulent collaboration. This research proposes a process model to obtain evidence of students who attempt to collaborate fraudulently, based on the information in the learning environment logs. The model is automated in a software tool that checks how the students took the examinations and the grades that they obtained. It is applied in a case study with more than 100 undergraduate students. The results are positive, and the tool allowed lecturers to detect evidence of fraudulent collaboration by several clusters of students based on their submission timestamps and the grades obtained.
</summary>
</entry>
<entry>
<title>Optimized DWT Based Digital Image Watermarking and Extraction Using RNN-LSTM</title>
<link href="https://reunir.unir.net/handle/123456789/13059" rel="alternate"/>
<author>
<name>Kumari, R. Radha</name>
</author>
<author>
<name>Kumar, V. Vijaya</name>
</author>
<author>
<name>Naidu, K. Rama</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13059</id>
<updated>2022-05-10T11:58:38Z</updated>
<summary type="text">Optimized DWT Based Digital Image Watermarking and Extraction Using RNN-LSTM
Kumari, R. Radha; Kumar, V. Vijaya; Naidu, K. Rama
The rapid growth of the Internet and the fast emergence of multimedia applications over the past decades have led to new problems such as illegal copying, digital plagiarism, and the distribution and use of copyrighted digital data. Watermarking digital data for copyright protection is a current need of the community. Embedding watermarks in the media with robust algorithms will help resolve copyright infringements. Therefore, to enhance robustness, optimization techniques and deep neural network concepts are utilized. In this paper, an optimized Discrete Wavelet Transform (DWT) is utilized for embedding the watermark. The optimization algorithm is a combination of Simulated Annealing (SA) and the Tunicate Swarm Algorithm (TSA). After the embedding process, extraction is performed with a deep neural network concept, a Recurrent Neural Network based on Long Short-Term Memory (RNN-LSTM). Through the extraction process, the original image is recovered by this RNN-LSTM method. The experimental setup is carried out on the MATLAB platform. The performance metrics PSNR, NC, and SSIM are determined and compared with existing optimization and machine learning approaches. The results are obtained under various attacks to show the robustness of the proposed work.
</summary>
</entry>
<entry>
<title>Music Boundary Detection using Convolutional Neural Networks: A Comparative Analysis of Combined Input Features</title>
<link href="https://reunir.unir.net/handle/123456789/13058" rel="alternate"/>
<author>
<name>Hernandez-Olivan, Carlos</name>
</author>
<author>
<name>Beltran, Jose R.</name>
</author>
<author>
<name>Diaz-Guerra, David</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13058</id>
<updated>2022-05-10T11:49:12Z</updated>
<summary type="text">Music Boundary Detection using Convolutional Neural Networks: A Comparative Analysis of Combined Input Features
Hernandez-Olivan, Carlos; Beltran, Jose R.; Diaz-Guerra, David
The analysis of the structure of musical pieces is a task that remains a challenge for Artificial Intelligence, especially in the field of Deep Learning. It requires prior identification of the structural boundaries of the music pieces, a task that has recently been studied with unsupervised methods and with supervised neural networks trained on human annotations. The supervised neural networks used in previous studies are Convolutional Neural Networks (CNN) that take Mel-Scaled Log-magnitude Spectrogram features (MLS), Self-Similarity Matrices (SSM), or Self-Similarity Lag Matrices (SSLM) as inputs. In previously published studies, pre-processing is done in different ways using different distance metrics, and different audio features are used for computing the inputs, so a generalised pre-processing method for calculating model inputs is missing. The objective of this work is to establish a general method to pre-process these inputs by comparing the results obtained with inputs calculated from different pooling strategies, distance metrics, and audio characteristics, also taking into account the computing time needed to obtain them. We also establish the most effective combination of inputs to be delivered to the CNN, providing the most efficient way to extract the boundaries of the structure of the music pieces. With an adequate combination of input matrices and pooling strategies, we obtain an F1 accuracy of 0.411, which outperforms current work done under the same conditions (the same publicly available dataset for training and testing).
</summary>
</entry>
<entry>
<title>An Extensive Analysis of Machine Learning Based Boosting Algorithms for Software Maintainability Prediction</title>
<link href="https://reunir.unir.net/handle/123456789/13057" rel="alternate"/>
<author>
<name>Gupta, Shikha</name>
</author>
<author>
<name>Chug, Anuradha</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13057</id>
<updated>2022-05-10T11:27:53Z</updated>
<summary type="text">An Extensive Analysis of Machine Learning Based Boosting Algorithms for Software Maintainability Prediction
Gupta, Shikha; Chug, Anuradha
Software maintainability is an indispensable factor in the quality of particular software. It describes the ease of performing several maintenance activities to make software adaptable to a modified environment. The availability &amp; growing popularity of a wide range of Machine Learning (ML) algorithms for data analysis further provide the motivation for predicting this maintainability. However, an extensive analysis &amp; comparison of various ML based Boosting Algorithms (BAs) for Software Maintainability Prediction (SMP) has not been made yet. Therefore, the current study analyzes and compares five different BAs, i.e., AdaBoost, GBM, XGB, LightGBM, and CatBoost, for SMP using open-source datasets. The performance of the proposed prediction models has been evaluated using Root Mean Square Error (RMSE), Mean Magnitude of Relative Error (MMRE), Pred(0.25), Pred(0.30), &amp; Pred(0.75) as prediction accuracy measures, followed by a non-parametric statistical test and a post hoc analysis to account for the differences in the performances of various BAs. Based on the residual errors obtained, it was observed that GBM is the best performer for RMSE, followed by LightGBM, whereas, in the case of MMRE, XGB performed the best for six out of the seven datasets, i.e., for 85.71% of the total datasets, by providing minimum values for MMRE, ranging from 0.90 to 3.82. Further, on applying the statistical test and performing the post hoc analysis, it was found that significant differences exist in the performance of different BAs, and XGB and CatBoost outperformed all other BAs for MMRE. Lastly, a comparison of BAs with four other ML algorithms has also been made to bring out the BAs' superiority over other algorithms. This study would open new doors for software developers to carry out comparatively more precise predictions well in time and hence reduce the overall maintenance costs.
</summary>
</entry>
<entry>
<title>Extensive Classification of Visual Art Paintings for Enhancing Education System using Hybrid SVM-ANN with Sparse Metric Learning based on Kernel Regression</title>
<link href="https://reunir.unir.net/handle/123456789/13056" rel="alternate"/>
<author>
<name>Xu, Fei</name>
</author>
<author>
<name>Wu, Tong</name>
</author>
<author>
<name>Huang, Shali</name>
</author>
<author>
<name>Han, Kuntong</name>
</author>
<author>
<name>Lin, Wenwen</name>
</author>
<author>
<name>Wu, Shizhong</name>
</author>
<author>
<name>CB, Sivaparthipan</name>
</author>
<author>
<name>Dinesh Jackson, Samuel R</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13056</id>
<updated>2022-05-10T10:21:56Z</updated>
<summary type="text">Extensive Classification of Visual Art Paintings for Enhancing Education System using Hybrid SVM-ANN with Sparse Metric Learning based on Kernel Regression
Xu, Fei; Wu, Tong; Huang, Shali; Han, Kuntong; Lin, Wenwen; Wu, Shizhong; CB, Sivaparthipan; Dinesh Jackson, Samuel R
In recent decades, collections of visual art paintings have become large, digitized, and available for public use, and they are rapidly growing. The development of multimedia systems is needed to retrieve and archive this huge amount of digitized artwork. Such a multimedia system benefits from high-level tasks and includes, as an essential step, the measurement of visual similarity between artistic items. For modeling the similarities between artworks or paintings, it is essential to extract useful visual features of the paintings and propose the best approach for learning these similarity metrics. In the field of visual arts education, knowing the similarities and features makes education more attractive by enhancing cognitive development in students. In this paper, the detailed visual features are listed, and the similarity measurement between paintings is optimized by Sparse Metric Learning-based Kernel Regression (KR-SML). A classification model is developed using a hybrid SVM-ANN for semantic-level understanding to predict a painting’s genre, artist, and style. Furthermore, a Human-Computer Interaction (HCI) based formulation model is built to analyze the proposed technique. The simulation results show that the proposed model performs better than other existing techniques.
</summary>
</entry>
<entry>
<title>Audio-Visual Automatic Speech Recognition Using PZM, MFCC and Statistical Analysis</title>
<link href="https://reunir.unir.net/handle/123456789/13055" rel="alternate"/>
<author>
<name>Debnath, Saswati</name>
</author>
<author>
<name>Roy, Pinki</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13055</id>
<updated>2022-05-10T10:09:43Z</updated>
<summary type="text">Audio-Visual Automatic Speech Recognition Using PZM, MFCC and Statistical Analysis
Debnath, Saswati; Roy, Pinki
Audio-Visual Automatic Speech Recognition (AV-ASR) has become one of the most promising research areas for cases where the audio signal is corrupted by noise. The main objective of this paper is to select the important and discriminative audio and visual speech features to recognize audio-visual speech. This paper proposes Pseudo Zernike Moments (PZM) and a feature selection method for audio-visual speech recognition. Visual information is captured from the lip contour, and moments are computed for lip reading. We extract 19th-order Mel Frequency Cepstral Coefficients (MFCC) as speech features from audio. Since not all 19 speech features are equally important, feature selection algorithms are used to select the most efficient ones. Various statistical tests, such as Analysis of Variance (ANOVA), the Kruskal-Wallis test, and the Friedman test, are employed to analyze the significance of features, along with the Incremental Feature Selection (IFS) technique. Statistical analysis assesses the statistical significance of the speech features, after which IFS selects the speech feature subset. Furthermore, multiclass Support Vector Machine (SVM), Artificial Neural Network (ANN), and Naive Bayes (NB) machine learning techniques are used to recognize speech for both the audio and visual modalities. Based on the recognition rates, a combined decision is taken from the two individual recognition systems. This paper compares the results achieved by the proposed model and existing models for both audio and visual speech recognition. Zernike Moments (ZM) are compared with PZM, showing that our proposed model using PZM extracts more discriminative features for visual speech recognition. This study also shows that audio feature selection using statistical analysis outperforms methods without any feature selection technique.
</summary>
</entry>
<entry>
<title>Feasibility and Acceptability of a Mobile-Based Emotion Recognition Approach for Bipolar Disorder</title>
<link href="https://reunir.unir.net/handle/123456789/13054" rel="alternate"/>
<author>
<name>Daus, H.</name>
</author>
<author>
<name>Backenstrass, M.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13054</id>
<updated>2022-05-10T09:57:04Z</updated>
<summary type="text">Feasibility and Acceptability of a Mobile-Based Emotion Recognition Approach for Bipolar Disorder
Daus, H.; Backenstrass, M.
Over the past years, the mobile health approach has motivated research projects to develop mood monitoring systems for bipolar disorder. Whereas mobile-based approaches have examined self-assessment or sensor data, potentially important emotional aspects of this disease have so far been neglected. Thus, we developed an emotion-sensitive system that analyzes the verbal and facial expressions of bipolar patients with regard to their emotional cues. In this article, preliminary findings of a pilot study with five bipolar patients concerning the acceptability and feasibility of the new approach are presented and discussed. There were individual differences in the participants' usage frequency, and improvements regarding the system's handling were suggested. From a technical point of view, the video analysis was less dependable than the audio analysis and recognized almost exclusively the facial expression of happiness. However, the system was feasible and well accepted. The results indicate that further developments could facilitate the long-term analysis of expressed emotions in bipolar or other disorders without invading the privacy of patients.
</summary>
</entry>
<entry>
<title>Design of a Virtual Assistant to Improve Interaction Between the Audience and the Presenter</title>
<link href="https://reunir.unir.net/handle/123456789/13051" rel="alternate"/>
<author>
<name>Cobos-Guzman, S.</name>
</author>
<author>
<name>Nuere, S.</name>
</author>
<author>
<name>De Miguel, L.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13051</id>
<updated>2022-05-09T12:28:43Z</updated>
<summary type="text">Design of a Virtual Assistant to Improve Interaction Between the Audience and the Presenter
Cobos-Guzman, S.; Nuere, S.; De Miguel, L.
This article presents a novel design of a Virtual Assistant as part of a human-machine interaction system that improves communication and interaction between the presenter and the audience, and that can be used in education or in general presentations (e.g., in auditoriums with 200 people). The main goal of the proposed model is the design of an interaction framework that increases the audience's level of attention during key aspects of the presentation. In this manner, the collaboration between the presenter and the Virtual Assistant could improve the level of learning among the public. The design of the Virtual Assistant relies on non-anthropomorphic forms with ‘live’ characteristics, generating an intuitive and self-explanatory interface. A set of intuitive and useful virtual interactions to support the presenter was designed. This design was validated with various types of audiences through a psychological study based on a discrete emotions questionnaire, confirming the adequacy of the proposed solution. The human-machine interaction system supporting the Virtual Assistant should automatically recognize the attention level of the audience from audiovisual resources and synchronize the Virtual Assistant with the presentation. The system involves a complex artificial intelligence architecture embracing perception of high-level features from audio and video, knowledge representation and reasoning for pervasive and affective computing, and reinforcement learning to teach the intelligent agent to decide on the best strategy to increase the audience's level of attention.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T12:28:43Z
No. of bitstreams: 1
ijimai7_2_20_0.pdf: 957939 bytes, checksum: b1c73d047076f7adcd9f2a6149d79c46 (MD5); Made available in DSpace on 2022-05-09T12:28:43Z (GMT).
</summary>
</entry>
<entry>
<title>A Case-Based Reasoning Model Powered by Deep Learning for Radiology Report Recommendation</title>
<link href="https://reunir.unir.net/handle/123456789/13050" rel="alternate"/>
<author>
<name>Amador-Domínguez, Elvira</name>
</author>
<author>
<name>Serrano, Emilio</name>
</author>
<author>
<name>Manrique, Daniel</name>
</author>
<author>
<name>Bajo, Javier</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13050</id>
<updated>2022-05-09T12:13:39Z</updated>
<summary type="text">A Case-Based Reasoning Model Powered by Deep Learning for Radiology Report Recommendation
Amador-Domínguez, Elvira; Serrano, Emilio; Manrique, Daniel; Bajo, Javier
Case-Based Reasoning models are one of the most widely used reasoning paradigms in expert-knowledge-driven areas. One of the most prominent fields of use of these systems is the medical sector, where explainable models are required. However, these models are considerably reliant on user input and the introduction of relevant curated data. Deep learning approaches offer an analogous solution in which user input is not required. This paper proposes a hybrid Case-Based Reasoning and Deep Learning framework for medical-related applications, focusing on the generation of medical reports. The proposal combines the explainability and user-focused approach of case-based reasoning models with the performance of deep learning techniques. Moreover, the framework is fully modular to fit a wide variety of tasks and data, such as real-time sensor-captured data, images, or text, to name a few. An implementation of the proposed framework focusing on radiology report generation assistance is provided. This implementation is used to evaluate the proposal, showing that it can provide meaningful and accurate corrections, even when the amount of information available is minimal. Additional tests on the optimization degree of the case base are also performed, evidencing how the proposed framework can optimize this base to achieve optimal performance.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T12:13:39Z
No. of bitstreams: 1
ijimai7_2_2_0.pdf: 1951937 bytes, checksum: b7226a70b3bb8ff7393614bff7771290 (MD5); Made available in DSpace on 2022-05-09T12:13:39Z (GMT).
</summary>
</entry>
<entry>
<title>Acoustic Classification of Mosquitoes using Convolutional Neural Networks Combined with Activity Circadian Rhythm Information</title>
<link href="https://reunir.unir.net/handle/123456789/13049" rel="alternate"/>
<author>
<name>Kim, Jaehoon</name>
</author>
<author>
<name>Oh, Jeongkyu</name>
</author>
<author>
<name>Heo, Tae-Young</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13049</id>
<updated>2022-05-09T11:55:08Z</updated>
<summary type="text">Acoustic Classification of Mosquitoes using Convolutional Neural Networks Combined with Activity Circadian Rhythm Information
Kim, Jaehoon; Oh, Jeongkyu; Heo, Tae-Young
Many researchers have used sound sensors to record audio data from insects and have used these data as inputs to machine learning algorithms to classify insect species. In image classification, the convolutional neural network (CNN), a well-known deep learning algorithm, achieves better performance than any other machine learning algorithm. This performance is affected by the characteristics of the convolution filter (ConvFilter) learned inside the network. Furthermore, CNNs perform well in sound classification. Unlike image classification, however, there is little research on suitable ConvFilters for sound classification. Therefore, we compare the performance of three convolution filters, 1D-ConvFilter, 3×1 2D-ConvFilter, and 3×3 2D-ConvFilter, in two different network configurations when classifying mosquitoes using audio data. In insect sound classification, most machine learning researchers use only audio data as input. However, a classification model that combines other information, such as activity circadian rhythm, should intuitively yield improved classification results. To utilize such relevant additional information, we propose a method that defines this information as a priori probabilities and combines them with CNN outputs. Of the networks, VGG13 with a 3×3 2D-ConvFilter showed the best performance in classifying mosquito species, with an accuracy of 80.8%. Moreover, adding activity circadian rhythm information to the networks yielded an average performance improvement of 5.5%. The VGG13 network with a 1D-ConvFilter achieved the highest accuracy of 85.7% with the additional activity circadian rhythm information.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T11:55:08Z
No. of bitstreams: 1
ijimai7_2_6_0.pdf: 809756 bytes, checksum: c065207b8a017f7d4558654b23d8afb1 (MD5); Made available in DSpace on 2022-05-09T11:55:08Z (GMT).
</summary>
</entry>
<entry>
<title>Deep Multi-Model Fusion for Human Activity Recognition Using Evolutionary Algorithms</title>
<link href="https://reunir.unir.net/handle/123456789/13048" rel="alternate"/>
<author>
<name>Verma, Kamal Kant</name>
</author>
<author>
<name>Singh, Brij Mohan</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13048</id>
<updated>2022-05-09T11:48:54Z</updated>
<summary type="text">Deep Multi-Model Fusion for Human Activity Recognition Using Evolutionary Algorithms
Verma, Kamal Kant; Singh, Brij Mohan
Machine recognition of human activities is an active research area in computer vision. In previous studies, either one or two types of modalities have been used to handle this task. However, grouping the maximum amount of information improves the recognition accuracy of human activities. Therefore, this paper proposes an automatic human activity recognition system through deep fusion of multiple streams, along with decision-level score optimization using evolutionary algorithms, on RGB, depth maps, and 3D skeleton joint information. Our proposed approach works in three phases: 1) space-time activity learning from RGB, depth, and skeleton joint positions using two 3D Convolutional Neural Networks (3DCNN) and a Long Short-Term Memory (LSTM) network; 2) training of an SVM for each model using the activities learned in the previous phase, and score generation using the trained SVMs; 3) score fusion and optimization using two evolutionary algorithms, the Genetic Algorithm (GA) and the Particle Swarm Optimization (PSO) algorithm. The proposed approach is validated on two challenging 3D datasets, MSRDailyActivity3D and UTKinectAction3D. Experiments on these two datasets achieved 85.94% and 96.5% accuracy, respectively. The experimental results show the usefulness of the proposed representation. Furthermore, the fusion of different modalities improves recognition accuracy over using only one or two types of information and obtains state-of-the-art results.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T11:48:54Z
No. of bitstreams: 1
ijimai7_2_5_0.pdf: 1042722 bytes, checksum: 9b2093802be0e848d08c745da7e2c2f7 (MD5); Made available in DSpace on 2022-05-09T11:48:54Z (GMT).
</summary>
</entry>
<entry>
<title>Towards a Solution to Create, Test and Publish Mixed Reality Experiences for Occupational Safety and Health Learning: Training-MR</title>
<link href="https://reunir.unir.net/handle/123456789/13047" rel="alternate"/>
<author>
<name>Lopez, Miguel Angel</name>
</author>
<author>
<name>Terrón, Sara</name>
</author>
<author>
<name>Lombardo, Juan Manuel</name>
</author>
<author>
<name>González-Crespo, Rubén</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13047</id>
<updated>2022-05-09T11:06:25Z</updated>
<summary type="text">Towards a Solution to Create, Test and Publish Mixed Reality Experiences for Occupational Safety and Health Learning: Training-MR
Lopez, Miguel Angel; Terrón, Sara; Lombardo, Juan Manuel; González-Crespo, Rubén
Artificial intelligence, the Internet of Things, Human Augmentation, virtual reality, and mixed reality have been rapidly implemented in Industry 4.0, as they improve the productivity of workers. This productivity improvement comes largely from modernizing tools, improving training, and implementing safer working methods. Human Augmentation is helping to place workers in unique environments through virtual reality or mixed reality by applying them to training actions in a totally innovative way. Science still has to overcome several technological challenges to achieve widespread application of these tools. One of them is the democratisation of these experiences, for which it is essential to make them more accessible by reducing the cost of creation, which is the main barrier to entry. The cost of these mixed reality experiences lies in the effort required to design and build the training experiences. Nevertheless, the tool presented in this paper is a solution to these current limitations. A solution for designing, building, and publishing experiences is presented in this paper. With the solution, content creators will be able to create their own training experiences in a semi-assisted way and eventually publish them in the Cloud. Students will be able to access this training, offered as a service, using Microsoft HoloLens 2. In this paper, the reader will find technical details of Training-MR: its architecture, mode of operation, and communication.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T11:06:25Z
No. of bitstreams: 1
ijimai7_2_18_0.pdf: 576012 bytes, checksum: c6f28db410ee001b42b21c4253108fe8 (MD5); Made available in DSpace on 2022-05-09T11:06:25Z (GMT).
</summary>
</entry>
<entry>
<title>Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks</title>
<link href="https://reunir.unir.net/handle/123456789/13046" rel="alternate"/>
<author>
<name>Li, Yuanfeng</name>
</author>
<author>
<name>Deng, Jiangang</name>
</author>
<author>
<name>Wu, Qun</name>
</author>
<author>
<name>Wang, Ying</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13046</id>
<updated>2022-05-09T10:43:19Z</updated>
<summary type="text">Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks
Li, Yuanfeng; Deng, Jiangang; Wu, Qun; Wang, Ying
Utilizing biomedical signals as a basis to calculate human affective states is an essential issue in affective computing (AC). With in-depth research on affective signals, the combination of multi-model cognition and physiological indicators, the establishment of dynamic and complete databases, and the addition of high-tech innovative products have become recent trends in AC. This research aims to develop a deep gradient convolutional neural network (DGCNN) for classifying affection by using eye-tracking signals. General signal processing tools and pre-processing methods were applied first, such as the Kalman filter, windowing with Hamming, the short-time Fourier transform (STFT), and the fast Fourier transform (FFT). Secondly, the eye-moving and tracking signals were converted into images. A convolutional neural network-based training structure was subsequently applied; the experimental dataset was acquired with an eye-tracking device by assigning four affective stimuli (nervous, calm, happy, and sad) to 16 participants. Finally, the performance of the DGCNN was compared with a decision tree (DT), a Bayesian Gaussian model (BGM), and k-nearest neighbors (KNN) using the indices of true positive rate (TPR) and false positive rate (FPR). Customizing the mini-batch, loss, learning rate, and gradient definitions for the training structure of the deep neural network was also deployed. The predictive classification matrix showed the effectiveness of the proposed method for eye moving and tracking signals, which performs at more than 87.2% accuracy. This research provides a feasible way to achieve more natural human-computer interaction through eye moving and tracking signals and has potential application in the affective production design process.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T10:43:19Z
No. of bitstreams: 1
ijimai7_2_4_0.pdf: 1115306 bytes, checksum: 12738b8c684505e1608f4e5f72641b36 (MD5); Made available in DSpace on 2022-05-09T10:43:19Z (GMT).
</summary>
</entry>
<entry>
<title>Using Grip Strength as a Cardiovascular Risk Indicator Based on Hybrid Algorithms</title>
<link href="https://reunir.unir.net/handle/123456789/13045" rel="alternate"/>
<author>
<name>Bareño-Castellanos, E.F.</name>
</author>
<author>
<name>Gaona-García, Paulo Alonso</name>
</author>
<author>
<name>Ortiz-Guzmán, J.E.</name>
</author>
<author>
<name>Montenegro-Marin, Carlos Enrique</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13045</id>
<updated>2022-05-09T10:04:52Z</updated>
<summary type="text">Using Grip Strength as a Cardiovascular Risk Indicator Based on Hybrid Algorithms
Bareño-Castellanos, E.F.; Gaona-García, Paulo Alonso; Ortiz-Guzmán, J.E.; Montenegro-Marin, Carlos Enrique
This article shows the application and design of a hybrid algorithm capable of classifying people into risk groups using data such as prehensile strength, body mass index, and percentage of fat. The implementation was done in Python and proposes a tool to help make medical decisions regarding the cardiovascular health of patients. The data were collected in a systematic way, and the k-means and c-means algorithms were used for classification. For the prediction of new data, two support vector machines were used, one for k-means and the other for c-means, obtaining as a result 100% precision for the support vector machine with c-means and 92% for the one with k-means.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T10:04:52Z
No. of bitstreams: 1
ijimai7_2_3_0.pdf: 523245 bytes, checksum: a61323622496bfa5d731b2fe40c15ba6 (MD5); Made available in DSpace on 2022-05-09T10:04:52Z (GMT).
</summary>
</entry>
<entry>
<title>Neighborhood Structure-Based Model for Multilingual Arbitrarily-Oriented Text Localization in Images/Videos</title>
<link href="https://reunir.unir.net/handle/123456789/13044" rel="alternate"/>
<author>
<name>Basavaraju, H.T.</name>
</author>
<author>
<name>Manjunath Aradhya, V.N.</name>
</author>
<author>
<name>Guru, D.S.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13044</id>
<updated>2022-05-09T09:55:41Z</updated>
<summary type="text">Neighborhood Structure-Based Model for Multilingual Arbitrarily-Oriented Text Localization in Images/Videos
Basavaraju, H.T.; Manjunath Aradhya, V.N.; Guru, D.S.
The text matter in an image or a video provides important clues and semantic information about the particular event in the actual situation. Text localization remains an interesting and challenging research process in the area of image processing due to irregular alignments, brightness, degradation, and complex backgrounds. Multilingual textual information comes in different types of geometrical shapes, which makes it even more complex to locate the text information. In this work, an effective model is presented to locate multilingual arbitrarily-oriented text. The proposed method develops a neighborhood structure model to locate the text region. Initially, max-min clustering is applied along with a 3×3 sliding window to sharpen the text region. The neighborhood structure creates the boundary for every component using the normal deviation calculated from the sharpened image. Finally, the double stroke structure model is employed to locate the accurate text region. The presented model is analyzed on five standard datasets, namely NUS, arbitrarily-oriented text, Hua's, MRRC, and a real-time video dataset, with performance metrics such as recall, precision, and f-measure.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T09:55:41Z
No. of bitstreams: 1
ijimai7_2_12_0.pdf: 1719075 bytes, checksum: de7f8084c7e91f0a18dbbcd61eafbf71 (MD5); Made available in DSpace on 2022-05-09T09:55:41Z (GMT).
</summary>
</entry>
<entry>
<title>Deep Feature Representation and Similarity Matrix based Noise Label Refinement Method for Efficient Face Annotation</title>
<link href="https://reunir.unir.net/handle/123456789/13043" rel="alternate"/>
<author>
<name>Suruliandi, A.</name>
</author>
<author>
<name>Kasthuri, A.</name>
</author>
<author>
<name>Raja, S. P.</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13043</id>
<updated>2022-05-09T09:20:44Z</updated>
<summary type="text">Deep Feature Representation and Similarity Matrix based Noise Label Refinement Method for Efficient Face Annotation
Suruliandi, A.; Kasthuri, A.; Raja, S. P.
Face annotation is a naming procedure that assigns the correct name to a person appearing in an image. Faces that are manually annotated by people in online applications include incorrect labels, giving rise to the issue of label ambiguity. This may lead to mislabelling in face annotation. Consequently, an efficient method is still essential to enhance the reliability of face annotation. Hence, in this work, a novel method named Similarity Matrix-based Noise Label Refinement (SMNLR) is proposed, which effectively predicts the accurate label from noisy labelled facial images. To enhance the performance of the proposed method, the deep learning technique named Convolutional Neural Networks (CNN) is used for feature representation. Several experiments are conducted to evaluate the effectiveness of the proposed face annotation method using the LFW, IMFDB and Yahoo datasets. The experimental results clearly illustrate the robustness of the proposed SMNLR method in dealing with noisy labelled faces.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T09:20:44Z
No. of bitstreams: 1
ijimai7_2_7_0.pdf: 809172 bytes, checksum: 1e76e2d158efeed6ce1d2ca5a22e329b (MD5); Made available in DSpace on 2022-05-09T09:20:44Z (GMT).
</summary>
</entry>
<entry>
<title>Performance and Convergence Analysis of Modified C-Means Using Jeffreys-Divergence for Clustering</title>
<link href="https://reunir.unir.net/handle/123456789/13042" rel="alternate"/>
<author>
<name>Seal, Ayan</name>
</author>
<author>
<name>Karlekar, Aditya</name>
</author>
<author>
<name>Krejcar, Ondrej</name>
</author>
<author>
<name>Herrera-Viedma, Enrique</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13042</id>
<updated>2022-05-09T09:04:12Z</updated>
<summary type="text">Performance and Convergence Analysis of Modified C-Means Using Jeffreys-Divergence for Clustering
Seal, Ayan; Karlekar, Aditya; Krejcar, Ondrej; Herrera-Viedma, Enrique
The size of the data that we generate every day across the globe is undoubtedly astonishing due to the growth of the Internet of Things. So, it is common practice to unravel important hidden facts and understand massive data using clustering techniques. However, non-linear relations, which are essentially unexplored compared to linear correlations, are more widespread within high-throughput data. Often, non-linear links can model a large amount of data in a more precise fashion and highlight critical trends and patterns. Moreover, selecting an appropriate measure of similarity has been a well-known issue for many years when it comes to data clustering. In this work, a non-Euclidean similarity measure is proposed, which relies on the non-linear Jeffreys divergence (JS). We subsequently develop c-means using the proposed JS (J-c-means). The various properties of JS and J-c-means are discussed. All the analyses were carried out on a few real-life and synthetic databases. The obtained outcomes show that J-c-means empirically outperforms some cutting-edge c-means algorithms.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T09:04:12Z
No. of bitstreams: 1
ijimai7_2_13_0.pdf: 1127561 bytes, checksum: 81ce1c10d0a4d67f6f8b9358569277f2 (MD5); Made available in DSpace on 2022-05-09T09:04:12Z (GMT).
</summary>
</entry>
<entry>
<title>A Systematic Literature Review of Empirical Studies on Learning Analytics in Educational Games</title>
<link href="https://reunir.unir.net/handle/123456789/13041" rel="alternate"/>
<author>
<name>Tlili, Ahmed</name>
</author>
<author>
<name>Chang, Maiga</name>
</author>
<author>
<name>Moon, Jewoong</name>
</author>
<author>
<name>Liu, Zhichun</name>
</author>
<author>
<name>Burgos, Daniel</name>
</author>
<author>
<name>Chen, Nian-Shing</name>
</author>
<author>
<name>Kinshuk</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13041</id>
<updated>2023-03-23T10:53:25Z</updated>
<summary type="text">A Systematic Literature Review of Empirical Studies on Learning Analytics in Educational Games
Tlili, Ahmed; Chang, Maiga; Moon, Jewoong; Liu, Zhichun; Burgos, Daniel; Chen, Nian-Shing; Kinshuk
Learning analytics (LA) in educational games is considered an emerging practice due to its potential for enhancing the learning process. Growing research on formative assessment has shed light on the ways in which students' meaningful and in-situ learning experiences can be supported through educational games. To understand learners' playful experiences during gameplay, researchers have applied LA, which focuses on understanding students' in-game behaviour trajectories and personal learning needs during play. However, there is a lack of studies exploring how further research on LA in educational games can be conducted. Only a few analyses have discussed how LA has been designed, integrated, and implemented in educational games. Accordingly, this systematic literature review examined how LA in educational games has evolved. The study findings suggest that: (1) there is an increasing need to consider factors such as student modelling, iterative game design and personalisation when designing and implementing LA through educational games; and (2) the use of LA creates several challenges from technical, data management and ethical perspectives. In addition to outlining these findings, this article offers important notes for practitioners, and discusses the implications of the study’s results.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T07:55:16Z
No. of bitstreams: 1
ijimai7_2_22_0.pdf: 566366 bytes, checksum: 6ffdc8dea2da263c81f5f221c6dbb46e (MD5); Made available in DSpace on 2022-05-09T07:55:16Z (GMT).
</summary>
</entry>
<entry>
<title>Foundations for the Design of a Creative System Based on the Analysis of the Main Techniques that Stimulate Human Creativity</title>
<link href="https://reunir.unir.net/handle/123456789/13040" rel="alternate"/>
<author>
<name>De Garrido, L.</name>
</author>
<author>
<name>Gómez Sanz, J.J.</name>
</author>
<author>
<name>Pavón Mestras, Juan</name>
</author>
<id>https://reunir.unir.net/handle/123456789/13040</id>
<updated>2022-05-09T07:24:52Z</updated>
<summary type="text">Foundations for the Design of a Creative System Based on the Analysis of the Main Techniques that Stimulate Human Creativity
De Garrido, L.; Gómez Sanz, J.J.; Pavón Mestras, Juan
This work presents the design of a computational system with creative capacity, based on a synthesis of the main methods that stimulate human creativity. From the analysis of each method, a set of characteristics that the computer system must have in order to emulate creative capacity has been suggested. In this way, by integrating all the suggestions in a structured way, it is possible to design the general architecture and functioning strategy of a computer system that has the incremental creative capacity of well-known creative methods. This computational system is designed as a multi-agent system made up of two groups of agents: a problem-solving group and a creative group. The first explores and evaluates paths toward suitable solutions; the second implements creative methods to generate new paths that are provided to the first group.
Submitted by Susana Figueroa Navarro (susana.figueroa.n@unir.net) on 2022-05-09T07:24:52Z
No. of bitstreams: 1
ijimai7_2_17.pdf: 982798 bytes, checksum: 8de4157a28208ebfdd5829fc0d99cd16 (MD5); Made available in DSpace on 2022-05-09T07:24:52Z (GMT).
</summary>
</entry>
</feed>
