<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>Vol. 5, Nº 5, June 2019</title>
<link>https://reunir.unir.net/handle/123456789/12499</link>
<description/>
<pubDate>Fri, 08 Nov 2024 13:07:58 GMT</pubDate>
<dc:date>2024-11-08T13:07:58Z</dc:date>
<item>
<title>Contour Enhancement Algorithm for Improving Visual Perception of Deutan and Protan Dichromats</title>
<link>https://reunir.unir.net/handle/123456789/12534</link>
<description>Contour Enhancement Algorithm for Improving Visual Perception of Deutan and Protan Dichromats
Ribeiro, Madalena; Gomes, Abel
A variety of recoloring methods have been proposed in the literature to remedy the red-green color confusion faced by dichromats (as well as by other color-blind people). The common strategy to mitigate this problem is to remap colors to other colors. However, this guarantees neither the contrast necessary to distinguish the elements of an image nor the naturalness of colors learned from each individual’s past experience. In other words, the individual’s perceptual learning may not hold under color remapping. With this in mind, we introduce the first algorithm primarily focused on enhancing object contours in still images, instead of recoloring the pixels of the regions bounded by those contours. This is particularly well suited to increasing contrast in images containing adjacent regions that are indistinguishable by color from a dichromat’s point of view.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12534</guid>
</item>
<item>
<title>Editor’s Note</title>
<link>https://reunir.unir.net/handle/123456789/12533</link>
<description>Editor’s Note
Verdú, Elena
The International Journal of Interactive Multimedia and Artificial Intelligence - IJIMAI (ISSN 1989-1660) provides an interdisciplinary forum in which scientists and professionals can share their research results and report new advances in Artificial Intelligence (AI) tools or tools that use AI with interactive multimedia techniques. This regular issue presents research works based on different AI methods such as deep networks, genetic algorithms or classification-tree algorithms. These methods are applied in many different fields, such as video surveillance, forgery detection, facial recognition, activity recognition, handwritten character recognition, clinical decision support, marketing, renewable energy and social networking.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12533</guid>
</item>
<item>
<title>Marketing Intelligence and Big Data: Digital Marketing Techniques on their Way to Becoming Social Engineering Techniques in Marketing</title>
<link>https://reunir.unir.net/handle/123456789/12532</link>
<description>Marketing Intelligence and Big Data: Digital Marketing Techniques on their Way to Becoming Social Engineering Techniques in Marketing
Lies, Jan
This contribution reviews the vast scope of digital application areas that shape the digital marketing landscape and define the present term “marketing intelligence” from a marketing-technique point of view. Additionally, marketing intelligence techniques are described as social engineering techniques. The review ranges from digital IT and big data marketing to marketing 5.0 as digitalized trust marketing. The multiplicity of applications and interdependencies of the digital and social techniques reviewed shows that big data and marketing intelligence have already become a marketing reality. It becomes clear that marketing is witnessing a methodological, technical and cultural paradigm shift that augments and amplifies traditional outbound marketing with inbound marketing.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12532</guid>
</item>
<item>
<title>Forecasting the Behavior of Gas Furnace Multivariate Time Series Using Ridge Polynomial Based Neural Network Models</title>
<link>https://reunir.unir.net/handle/123456789/12531</link>
<description>Forecasting the Behavior of Gas Furnace Multivariate Time Series Using Ridge Polynomial Based Neural Network Models
Waheeb, Waddah; Ghazali, Rozaida
In this paper, a new application of ridge polynomial based neural network models to multivariate time series forecasting is presented. The existing ridge polynomial based neural network models can be divided into two groups. Group A consists of models that use only autoregressive inputs, whereas Group B consists of models that use autoregressive and moving-average (i.e., error feedback) inputs. The well-known Box-Jenkins gas furnace multivariate time series was used in the forecasting comparison between the two groups. Simulation results show that the models in Group B achieve significantly better forecasting performance than the models in Group A. Therefore, the Box-Jenkins gas furnace data can be modeled better using neural networks when error feedback is used.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12531</guid>
</item>
<item>
<title>A Recent Trend in Individual Counting Approach Using Deep Network</title>
<link>https://reunir.unir.net/handle/123456789/12530</link>
<description>A Recent Trend in Individual Counting Approach Using Deep Network
Ghazvini, Anahita; Abdullah, Siti Norul Huda Sheikh; Ayob, Masri
In video surveillance, counting individuals is regarded as a crucial task. Of all the individual counting techniques in existence, the regression technique can offer enhanced performance in overcrowded areas. However, this technique cannot provide the details of individual counting, such as locating each individual. On the contrary, the density map approach is very effective in overcoming counting problems in various situations such as heavy overlapping and low resolution. Nevertheless, this approach may break down when only the heads of individuals appear in video scenes, and it is also restricted by the types of features used. The popular technique for obtaining the pertinent information automatically is the Convolutional Neural Network (CNN). However, CNN-based counting schemes are unable to sufficiently tackle three difficulties, namely, non-uniform density distributions, scale changes and drastic scale variations. In this study, we provide a review of current counting techniques related to deep networks in different crowded-scene applications. The goal of this work is to assess the effectiveness of CNNs applied to popular individual counting approaches for attaining higher-precision results.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12530</guid>
</item>
<item>
<title>Optimal Performance of Doubly Fed Induction Generator Wind Farm Using Multi-Objective Genetic Algorithm</title>
<link>https://reunir.unir.net/handle/123456789/12529</link>
<description>Optimal Performance of Doubly Fed Induction Generator Wind Farm Using Multi-Objective Genetic Algorithm
Kamel, Salah; Jurado, Francisco; Elkasem, Ahmed; Rashad, Ahmed
The main purpose of this paper is to allow doubly fed induction generator (DFIG) wind farms connected to the power system to participate effectively in feeding electrical loads. Oscillation in the power system is one of the challenges of interconnecting wind farms with the grid. The DFIG model contains several gains that need to be set to optimal values. This can be accomplished using an optimization algorithm in order to obtain the best performance. A multi-objective optimization algorithm is used to determine the optimal control system gains under several objectives. In this paper, a multi-objective genetic algorithm is applied to the DFIG model to determine the optimal values of the gains of the DFIG control system. In order to point out the contribution of this work, the performance of the optimized DFIG model is compared with that of the non-optimized DFIG model. The results show that the optimized DFIG model performs better than the non-optimized one.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12529</guid>
</item>
<item>
<title>GIFT: Gesture-Based Interaction by Fingers Tracking, an Interaction Technique for Virtual Environment</title>
<link>https://reunir.unir.net/handle/123456789/12528</link>
<description>GIFT: Gesture-Based Interaction by Fingers Tracking, an Interaction Technique for Virtual Environment
Ullah, S; Raees, M
Three-dimensional (3D) interaction is the plausible form of human interaction inside a Virtual Environment (VE). The rise of Virtual Reality (VR) applications in various domains demands a feasible 3D interface. Ensuring immersivity in a virtual space, this paper presents an interaction technique where manipulation is performed by the perceptive gestures of the two dominant fingers: thumb and index. Two paper fingertip thimbles are used to trace the states and positions of the fingers with an ordinary camera. Based on the positions of the fingers, the basic interaction tasks (selection, scaling, rotation, translation and navigation) are performed by intuitive finger gestures. Without keeping a gestural database, the feature-free detection of the fingers guarantees speedier interactions. Moreover, the system is user-independent and depends neither on the size nor on the color of the users’ hands. The technique is implemented for evaluation in a case-study project, Interactions by the Gestures of Fingers (IGF). The IGF application traces finger gestures using the OpenCV libraries at the back end. At the front end, the objects of the VE are rendered accordingly using the Open Graphics Library (OpenGL). The system is assessed in moderate lighting conditions by a group of 15 users. Furthermore, the usability of the technique is investigated in games. Outcomes of the evaluations reveal that the approach is suitable for VR applications both in terms of cost and accuracy.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12528</guid>
</item>
<item>
<title>User Identification and Verification from a Pair of Simultaneous EEG Channels Using Transform Based Features</title>
<link>https://reunir.unir.net/handle/123456789/12527</link>
<description>User Identification and Verification from a Pair of Simultaneous EEG Channels Using Transform Based Features
George, Loay; Hadi, Hend
In this study, an approach that combines features from two simultaneous Electroencephalogram (EEG) channels, recorded while a user performs a certain mental task, is discussed in order to increase the degree of discrimination among subject classes, since the feasibility of using feature sets extracted from a single channel was investigated in previously published articles. The feature sets considered in previous studies are used to establish a combined set of features extracted from two channels. The first feature set is the energy density of the power spectra of the Discrete Fourier Transform (DFT) or Discrete Cosine Transform; the second is the set of statistical moments of the Discrete Wavelet Transform (DWT). The Euclidean distance metric is used to accomplish the feature-set matching task. The combinations of features from two EEG channels showed high accuracy for the identification system and competitive results for the verification system. The best achieved identification accuracy is 100% for all proposed feature sets. In verification mode, the best achieved Half Total Error Rate (HTER) is 0.88 with an accuracy of 99.12% on the Colorado State University (CSU) dataset, and 0.26 with an accuracy of 99.97% on the Motor Movement/Imagery (MMI) dataset.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12527</guid>
</item>
<item>
<title>Sentiment Analysis on IMDb Movie Reviews Using Hybrid Feature Extraction Method</title>
<link>https://reunir.unir.net/handle/123456789/12526</link>
<description>Sentiment Analysis on IMDb Movie Reviews Using Hybrid Feature Extraction Method
Harish, B S; Kumar, Keerthi; Darshan, H K
Social networking sites have become popular and common places for sharing a wide range of emotions through short texts. These emotions include happiness, sadness, anxiety, fear, etc. Analyzing short texts helps in identifying the sentiment expressed by the crowd. Sentiment analysis on IMDb movie reviews identifies the overall sentiment or opinion expressed by a reviewer towards a movie. Many researchers are working on refining sentiment analysis models that clearly identify and distinguish between a positive review and a negative review. In the proposed work, we show that the use of hybrid features obtained by concatenating machine learning features (TF, TF-IDF) with lexicon features (positive-negative word count, connotation) gives better results both in terms of accuracy and complexity when tested against classifiers like SVM, Naïve Bayes, KNN and Maximum Entropy. The proposed model clearly differentiates between a positive review and a negative review. Since understanding the context of the reviews plays an important role in classification, using hybrid features helps in capturing the context of the movie reviews and hence increases the accuracy of classification.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12526</guid>
</item>
<item>
<title>IMCAD: Computer Aided System for Breast Masses Detection based on Immune Recognition</title>
<link>https://reunir.unir.net/handle/123456789/12525</link>
<description>IMCAD: Computer Aided System for Breast Masses Detection based on Immune Recognition
Djamila, Hamdadou; Belkhodja, Leila
Computer Aided Detection (CAD) systems are very important tools which help radiologists, as a second reader, to detect early breast cancer in an efficient way, especially on screening mammograms. One of the challenging problems is the detection of masses, which are powerful signs of cancer, because of their poor appearance on mammograms. This paper investigates an automatic CAD system for the detection of breast masses in screening mammograms based on fuzzy segmentation and a bio-inspired method for pattern recognition: the Artificial Immune Recognition System. The proposed approach is applied to real clinical images from the full-field digital mammographic database Inbreast. In order to validate our proposition, we use the Receiver Operating Characteristic (ROC) curve to analyze our IMCAD classifier, which achieves a good area under the curve, with a sensitivity of 100% and a specificity of 95%. The recognition system based on artificial immunity has shown its efficiency in recognizing masses from a very restricted set of training regions.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12525</guid>
</item>
<item>
<title>Contribution to the Association Rules Visualization for Decision Support: A Combined Use Between Boolean Modeling and the Colored 2D Matrix</title>
<link>https://reunir.unir.net/handle/123456789/12524</link>
<description>Contribution to the Association Rules Visualization for Decision Support: A Combined Use Between Boolean Modeling and the Colored 2D Matrix
Atmani, Baghdad; Benhacine, Fatima Zohra; Abdelouhab, Fawzia Zohra
In the present paper we aim to study visual decision support based on the cellular machine CASI (Cellular Automata for Symbolic Induction). The purpose is to improve the visualization of large sets of association rules, in order to support a clinical decision support system and decrease doctors’ cognitive load. One of the major problems in processing association rules is the exponential growth in the volume of generated rules, which hinders their adoption by doctors. To address this, many approaches to representing sets of association rules in a visual context have been suggested. In this article we propose to jointly use the CASI cellular machine and colored 2D matrices to improve the visualization of association rules. Our approach is divided into four phases: (1) data preparation, (2) extraction of association rules, (3) Boolean modeling of the rule base, and (4) 2D visualization colored by Boolean inferences.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12524</guid>
</item>
<item>
<title>A Novel Approach on Visual Question Answering by Parameter Prediction using Faster Region Based Convolutional Neural Network</title>
<link>https://reunir.unir.net/handle/123456789/12505</link>
<description>A Novel Approach on Visual Question Answering by Parameter Prediction using Faster Region Based Convolutional Neural Network
Jha, Sudan; Dey, Anirban; Kumar, Raghvendra; Kumar-Solanki, Vijender
Visual Question Answering (VQA) is a stimulating task in the fields of Natural Language Processing (NLP) and Computer Vision (CV). In this task a machine finds an answer to a natural language question related to an image. Questions can be open-ended or multiple choice. VQA datasets contain mainly three components: questions, images and answers. Researchers address the VQA problem with deep learning based architectures that jointly combine two networks, i.e. a Convolutional Neural Network (CNN) for visual (image) representation and a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) for textual (question) representation, and train the combined network end to end to generate the answer. Those models are able to answer common and simple questions that are directly related to the image’s content. But different types of questions need different levels of understanding to produce correct answers. To solve this problem, we use a Faster Region-based CNN (R-CNN) for extracting image features, with an extra fully connected layer whose weights are dynamically obtained by an LSTM cell according to the question. We claim in this paper that a single R-CNN architecture can solve the problems related to VQA by modifying the weights in the parameter prediction layer. We trained the network end to end by Stochastic Gradient Descent (SGD) using a pretrained Faster R-CNN and LSTM and tested it on benchmark VQA datasets.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12505</guid>
</item>
<item>
<title>Detecting Image Brush Editing Using the Discarded Coefficients and Intentions</title>
<link>https://reunir.unir.net/handle/123456789/12504</link>
<description>Detecting Image Brush Editing Using the Discarded Coefficients and Intentions
López Hernández, Fernando Carlos; de-la-Fuente-Valentín, Luis; Sarría, Íñigo
This paper describes a quick and simple method to detect brush editing in JPEG images. The novelty of the proposed method lies in detecting the coefficients discarded during the quantization of the image. Another novelty of this paper is the development of a subjective metric named intentions. The method directly analyzes the allegedly tampered image and generates a forgery mask indicating forgery evidence for each image block. The experiments show that our method works especially well in detecting brush strokes, and it works reasonably well with added captions and image splicing. However, the method is less effective at detecting copy-moved and blurred regions. This means that our method can effectively contribute to implementing a complete image-tampering detection tool. The editing operations for which our method is less effective can be complemented with methods better suited to detecting them.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12504</guid>
</item>
<item>
<title>Improved Behavior Monitoring and Classification Using Cues Parameters Extraction from Camera Array Images</title>
<link>https://reunir.unir.net/handle/123456789/12503</link>
<description>Improved Behavior Monitoring and Classification Using Cues Parameters Extraction from Camera Array Images
Jalal, Ahmad; Kamal, Shaharyar
Behavior monitoring and classification is a mechanism used to automatically identify or verify individuals based on detection, tracking and behavior recognition from video sequences captured by a depth camera. In this paper, we designed a system that precisely classifies the nature of 3D body postures obtained by Kinect using an advanced recognizer. We propose novel features that are suitable for depth data. These features are robust to noise, invariant to translation and scaling, and capable of monitoring fast human body-part movements. Lastly, an advanced hidden Markov model is used to recognize different activities. In extensive experiments, our system consistently outperforms existing approaches on three depth-based behavior datasets, i.e., IM-DailyDepthActivity, MSRDailyActivity3D and MSRAction3D, in both posture classification and behavior recognition. Moreover, our system handles body-part rotation, self-occlusion and missing body parts, which significantly improves the tracking of complex activities and the recognition rate. Due to the easy accessibility, low cost and simple deployment of depth cameras, the proposed system can be applied to various consumer applications including patient-monitoring systems, automatic video surveillance, smart homes/offices and 3D games.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12503</guid>
</item>
<item>
<title>A Diversity-Accuracy Measure for Homogenous Ensemble Selection</title>
<link>https://reunir.unir.net/handle/123456789/12502</link>
<description>A Diversity-Accuracy Measure for Homogenous Ensemble Selection
Zouggar, Taleb; Adla, A
Several selection methods in the literature are essentially based on an evaluation function that determines whether a model M contributes positively to boosting the performance of the whole ensemble. In this paper, we propose a method called DIversity and ACcuracy for Ensemble Selection (DIACES) using an evaluation function based on both diversity and accuracy. The method is applied to homogeneous ensembles composed of C4.5 decision trees and is based on a hill climbing strategy. This allows selecting ensembles with the best compromise between maximum diversity and minimum error rate. Comparative studies show that in most cases the proposed method generates reduced-size ensembles with better performance than usual ensemble simplification methods.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12502</guid>
</item>
<item>
<title>Deep Belief Network and Auto-Encoder for Face Classification</title>
<link>https://reunir.unir.net/handle/123456789/12501</link>
<description>Deep Belief Network and Auto-Encoder for Face Classification
Bouchra, Nassih; Mohammed, Ngadi; Nabil, Hmina; Aouatif, Amine
Deep Learning models have drawn ever-increasing research interest owing to their intrinsic capability of overcoming the drawbacks of traditional algorithms. Hence, we have adopted representative Deep Learning methods, namely the Deep Belief Network (DBN) and the Stacked Auto-Encoder (SAE), to initialize deep supervised Neural Networks (NN), in addition to Back Propagation Neural Networks (BPNN), applied to the face classification task. Moreover, our contribution is to extract hierarchical representations of the face image based on the Deep Learning models DBN, SAE and BPNN. The extracted feature vectors of each model are then used as input to an NN classifier. Next, to test our approach and evaluate its performance, a series of simulation experiments was performed on two facial databases: BOSS and MIT. Our proposed approach (DBN, NN) significantly improves the classification error rate compared to (SAE, NN) and BPNN, achieving error rates of 1.14% and 1.96% on BOSS and MIT respectively.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12501</guid>
</item>
<item>
<title>Handwritten Arabic Documents Segmentation into Text Lines using Seam Carving</title>
<link>https://reunir.unir.net/handle/123456789/12500</link>
<description>Handwritten Arabic Documents Segmentation into Text Lines using Seam Carving
Souhar, Abdelghani; Daldali, M
Inspired by human perception and by the characteristics of common text documents based on readability constraints, an Arabic text line segmentation approach using seam carving is proposed. Taking the grayscale image as input, this technique offers better results at extracting handwritten text lines without needing the binary representation of the document image. In addition to its fast processing time, its versatility permits processing a multitude of document types, especially documents presenting low text-to-background contrast such as degraded historical manuscripts or complex writing styles like cursive handwriting. Although our focus in this paper is on Arabic text segmentation, the method is language independent. Tests on a public database of 123 handwritten Arabic documents showed a line detection rate of 97.5% for a matching score of 90%.
</description>
<guid isPermaLink="false">https://reunir.unir.net/handle/123456789/12500</guid>
</item>
</channel>
</rss>
