Enric Sala. (2009). Off-line person-dependent signature verification (Vol. 146). Master's thesis, Bellaterra, Barcelona.
|
Mohammad Momeny, Ali Asghar Neshat, Ahmad Jahanbakhshi, Majid Mahmoudi, Yiannis Ampatzidis, & Petia Radeva. (2023). Grading and fraud detection of saffron via learning-to-augment incorporated Inception-v4 CNN. FC - Food Control, 147, 109554.
Abstract: Saffron is a well-known product in the food industry. It is one of the spices that are sometimes adulterated with the sole motive of gaining more economic profit. Today, machine vision systems are widely used in controlling the quality of food and agricultural products as a new, non-destructive, and inexpensive approach. In this study, a machine vision system based on deep learning was used to detect fraud and assess saffron quality. A dataset of 1869 images was created and categorized into 6 classes: dried saffron stigma using a dryer; dried saffron stigma using pressing method; pure stem of saffron; sunflower; saffron stem mixed with food coloring; and corn silk mixed with food coloring. A Learning-to-Augment incorporated Inception-v4 Convolutional Neural Network (LAII-v4 CNN) was developed for grading and fraud detection of saffron in images captured by smartphones. The best policies of data augmentation were selected with the proposed LAII-v4 CNN using images corrupted by Gaussian, speckle, and impulse noise to address overfitting of the model. The proposed LAII-v4 CNN was compared with regular CNN-based methods and traditional classifiers. Ensemble of Bagged Decision Trees, Ensemble of Boosted Decision Trees, k-Nearest Neighbor, Random Under-sampling Boosted Trees, and Support Vector Machine were used for classification of the features extracted by Histograms of Oriented Gradients and Local Binary Patterns, and selected by Principal Component Analysis. The results showed that the proposed LAII-v4 CNN, with an accuracy of 99.5%, achieved the best performance by employing batch normalization, Dropout, and leaky ReLU.
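The learning-to-augment step described above can be approximated in a few lines: corrupt the training images with each candidate noise type, retrain, and keep the policy that scores best on validation. A minimal NumPy sketch under that reading; `score_fn` is a hypothetical "retrain and validate" hook, not part of the paper:

```python
import numpy as np

def gaussian_noise(img, sigma=0.05):
    """Additive zero-mean Gaussian noise; img is an (H, W, 3) float array in [0, 1]."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def speckle_noise(img, sigma=0.05):
    """Multiplicative (speckle) noise: img * (1 + n)."""
    return np.clip(img * (1.0 + np.random.normal(0.0, sigma, img.shape)), 0.0, 1.0)

def impulse_noise(img, amount=0.02):
    """Salt-and-pepper (impulse) noise on a random fraction of pixels."""
    out = img.copy()
    mask = np.random.rand(*img.shape[:2]) < amount
    out[mask] = np.random.choice([0.0, 1.0], size=(mask.sum(), 1))
    return out

POLICIES = {"gaussian": gaussian_noise, "speckle": speckle_noise,
            "impulse": impulse_noise}

def select_policy(train_imgs, score_fn):
    """Keep the corruption policy whose augmented set scores best on validation."""
    scores = {name: score_fn([aug(im) for im in train_imgs])
              for name, aug in POLICIES.items()}
    return max(scores, key=scores.get)
```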
|
Diego Alejandro Cheda. (2009). Monocular egomotion estimation for ADAS application (Vol. 148). Ph.D. thesis, Bellaterra, Barcelona.
|
Iban Berganzo-Besga, Hector A. Orengo, Felipe Lumbreras, Paloma Aliende, & Monica N. Ramsey. (2022). Automated detection and classification of multi-cell Phytoliths using Deep Learning-Based Algorithms. JArchSci - Journal of Archaeological Science, 148, 105654.
Abstract: This paper presents an algorithm for the automated detection and classification of multi-cell phytoliths, one of the major components of many archaeological and paleoenvironmental deposits. The identification, based on the phytolith wave pattern, is made using a pretrained VGG19 deep learning model. This approach has been tested on three key phytolith genera for the study of agricultural origins in Near East archaeology: Avena, Hordeum and Triticum. The classification has also been validated at species level using Triticum boeoticum and dicoccoides images. Due to the diversity of microscopes, cameras and chemical treatments that can influence images of phytolith slides, three types of data augmentation techniques have been implemented: rotation of the images at 45-degree angles, random colour and brightness jittering, and random blur/sharpen. The implemented workflow has resulted in an overall accuracy of 93.68% for phytolith genera, improving on previous attempts. The algorithm has also demonstrated its potential to automate the classification of phytolith species with an overall accuracy of 100%. The open code and platforms employed to develop the algorithm ensure the method's accessibility, reproducibility and reusability.
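The three augmentation families named in the abstract (45-degree rotations, colour/brightness jitter, random blur/sharpen) map naturally onto a torchvision pipeline. The exact parameter values below are illustrative assumptions, not the authors' settings:

```python
import random
import torchvision.transforms as T
import torchvision.transforms.functional as F

def rotate_45_steps(img):
    # Rotate by a random multiple of 45 degrees, as described in the paper.
    return F.rotate(img, angle=45 * random.randint(0, 7))

augment = T.Compose([
    T.Lambda(rotate_45_steps),
    T.ColorJitter(brightness=0.3, saturation=0.3, hue=0.1),  # colour/brightness jitter
    T.RandomChoice([
        T.GaussianBlur(kernel_size=5),                        # random blur ...
        T.RandomAdjustSharpness(sharpness_factor=2.0, p=1.0), # ... or sharpen
    ]),
])
```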
|
David Vazquez, David Geronimo, & Antonio Lopez. (2009). The effect of the distance in pedestrian detection (Vol. 149). Master's thesis.
Abstract: Pedestrian accidents are one of the leading preventable causes of death. In order to reduce the number of accidents, pedestrian protection systems have been introduced in the last decade: a special type of advanced driver assistance system in which an on-board camera explores the road ahead for possible collisions with pedestrians in order to warn the driver or perform braking actions. As a result of the variability in appearance, pose and size, pedestrian detection is a very challenging task, and many techniques, models and features have been proposed to solve the problem. As the appearance of pedestrians varies significantly as a function of distance, a system based on multiple classifiers specialized on different depths is likely to improve the overall performance with respect to a typical system based on a general detector. Accordingly, the main aim of this work is to explore the effect of the distance in pedestrian detection. We have evaluated three pedestrian detectors (HOG, HAAR and EOH) on two different databases (INRIA and Daimler09) for two different sizes (small and big). Through an extensive set of experiments we answer questions such as which datasets and evaluation methods are the most adequate, which method is best for each pedestrian size and why, and how the methods' optimum parameters vary with respect to the distance.
Keywords: Pedestrian Detection
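As a rough illustration of the size-stratified evaluation described in the entry above, OpenCV's stock HOG pedestrian detector can be run and its detections binned by bounding-box height. The 80-pixel cut-off separating "small" (far) from "big" (near) pedestrians is an assumption for illustration, not the thesis setting:

```python
import cv2

# Stock OpenCV HOG + linear SVM pedestrian detector (Dalal & Triggs style).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_by_size(image_bgr, small_max_height=80):
    """Run HOG detection and bin detections into 'small' (far) vs 'big' (near)."""
    rects, weights = hog.detectMultiScale(image_bgr, winStride=(8, 8), scale=1.05)
    small = [r for r in rects if r[3] <= small_max_height]  # r = (x, y, w, h)
    big = [r for r in rects if r[3] > small_max_height]
    return small, big
```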
|
Maedeh Aghaei, Mariella Dimiccoli, & Petia Radeva. (2016). Multi-face tracking by extended bag-of-tracklets in egocentric photo-streams. CVIU - Computer Vision and Image Understanding, 149, 146–156.
Abstract: Wearable cameras offer a hands-free way to record egocentric images of daily experiences, where social events are of special interest. The first step towards the detection of social events is to track the appearance of the multiple persons involved in them. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric videos acquired through a wearable camera. This kind of photo-stream imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution, abrupt changes in the field of view, in illumination conditions and in the target location are highly frequent. To overcome such difficulties, we propose a multi-face tracking method that generates a set of tracklets by finding correspondences along the whole sequence for each detected face, and takes advantage of tracklet redundancy to deal with unreliable ones. Similar tracklets are grouped into a so-called extended bag-of-tracklets (eBoT), which is aimed to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT, where occlusions are estimated by relying on a new measure of confidence. We validated our approach over an extensive dataset of egocentric photo-streams and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.
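A minimal sketch of the grouping step behind the extended bag-of-tracklets: tracklets that overlap sufficiently on their shared frames are placed in the same bag, and a prototype is later extracted per bag. The similarity measure and threshold below are illustrative assumptions, not the paper's exact formulation:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    iw = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def similarity(t1, t2):
    """Mean IoU over the frames two tracklets share (each: dict frame -> box)."""
    common = set(t1) & set(t2)
    if not common:
        return 0.0
    return sum(iou(t1[f], t2[f]) for f in common) / len(common)

def build_bags(tracklets, thr=0.5):
    """Greedily group mutually similar tracklets into bags (eBoTs)."""
    bags = []
    for t in tracklets:
        for bag in bags:
            if all(similarity(t, other) >= thr for other in bag):
                bag.append(t)
                break
        else:
            bags.append([t])
    return bags
```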
|
Gerard Canal, Sergio Escalera, & Cecilio Angulo. (2016). A Real-time Human-Robot Interaction system based on gestures for assistive scenarios. CVIU - Computer Vision and Image Understanding, 149, 65–77.
Abstract: Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may refer to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill allows the robot to pick up an object on behalf of the user, who might have difficulty doing so unaided. The overall system — composed of a NAO robot, a Wifibot robot, a Kinect™ v2 sensor and two laptops — is first evaluated in a structured lab setup. Then, a broad set of user tests has been completed, which allows correct performance to be assessed in terms of recognition rates, ease of use and response times.
Keywords: Gesture recognition; Human Robot Interaction; Dynamic Time Warping; Pointing location estimation
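The dynamic gesture matching described in the entry above rests on classic Dynamic Time Warping over per-frame feature vectors; a minimal NumPy version follows (the gesture-specific depth features themselves are outside the scope of this sketch):

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic DTW between two sequences of per-frame feature vectors."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A gesture is typically accepted when its DTW distance to a reference
# model falls below a per-gesture threshold, e.g.:
# if dtw_distance(observed, wave_model) < wave_threshold: ...
```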
|
Arka Ujjal Dey, Suman Ghosh, Ernest Valveny, & Gaurav Harit. (2021). Beyond Visual Semantics: Exploring the Role of Scene Text in Image Understanding. PRL - Pattern Recognition Letters, 149, 164–171.
Abstract: Images with visual and scene text content are ubiquitous in everyday life. However, current image interpretation systems are mostly limited to using only the visual features, neglecting to leverage the scene text content. In this paper, we propose to jointly use scene text and visual channels for robust semantic interpretation of images. We not only extract and encode visual and scene text cues, but also model their interplay to generate a contextual joint embedding with richer semantics. The contextual embedding thus generated is applied to retrieval and classification tasks on multimedia images with scene text content, to demonstrate its effectiveness. In the retrieval framework, we augment our learned text-visual semantic representation with scene text cues to mitigate vocabulary misses that may have occurred during the semantic embedding. To deal with irrelevant or erroneous recognition of scene text, we also apply query-based attention to our text channel. We show how the multi-channel approach, involving visual semantics and scene text, improves upon the state of the art.
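One plausible reading of the contextual joint embedding is a pair of projections into a shared space plus attention over recognized scene-text tokens, which downweights erroneous OCR output. This PyTorch sketch is an interpretation, with all layer names and dimensions assumed:

```python
import torch
import torch.nn as nn

class JointEmbedding(nn.Module):
    """Project visual and scene-text features into one shared semantic space."""
    def __init__(self, vis_dim=2048, txt_dim=300, emb_dim=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, emb_dim)
        self.txt_proj = nn.Linear(txt_dim, emb_dim)
        self.attn = nn.Linear(emb_dim, 1)  # scores each scene-text token

    def forward(self, vis_feat, txt_feats):
        # vis_feat: (B, vis_dim); txt_feats: (B, T, txt_dim) OCR token embeddings.
        v = self.vis_proj(vis_feat)                # (B, emb_dim)
        t = self.txt_proj(txt_feats)               # (B, T, emb_dim)
        # Attention over tokens downweights irrelevant or misrecognized text.
        w = torch.softmax(self.attn(t), dim=1)     # (B, T, 1)
        t_pooled = (w * t).sum(dim=1)              # (B, emb_dim)
        return nn.functional.normalize(v + t_pooled, dim=-1)
```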
|
Javier Marin. (2009). Virtual learning for real testing (Vol. 150). Master's thesis, Bellaterra, Barcelona.
|
Monica Piñol, Angel Sappa, & Ricardo Toledo. (2015). Adaptive Feature Descriptor Selection based on a Multi-Table Reinforcement Learning Strategy. NEUCOM - Neurocomputing, 150(A), 106–115.
Abstract: This paper presents and evaluates a framework to improve the performance of visual object classification methods, which are based on the usage of image feature descriptors as inputs. The goal of the proposed framework is to learn the best descriptor for each image in a given database. This goal is reached by means of a reinforcement learning process using the minimum information. The visual classification system used to demonstrate the proposed framework is based on a bag of features scheme, and the reinforcement learning technique is implemented through the Q-learning approach. The behavior of the reinforcement learning with different state definitions is evaluated. Additionally, a method that combines all these states is formulated in order to select the optimal state. Finally, the chosen actions are obtained from the best set of image descriptors in the literature: PHOW, SIFT, C-SIFT, SURF and Spin. Experimental results using two public databases (ETH and COIL) are provided, showing both the validity of the proposed approach and comparisons with the state of the art. In all cases the best results are obtained with the proposed approach.
Keywords: Reinforcement learning; Q-learning; Bag of features; Descriptors
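The core of the framework above is a tabular Q-learning loop in which actions are descriptor choices (PHOW, SIFT, C-SIFT, SURF, Spin); a minimal sketch with assumed state encoding and reward scheme, not the paper's exact formulation:

```python
import numpy as np

DESCRIPTORS = ["PHOW", "SIFT", "C-SIFT", "SURF", "Spin"]  # the action set

class DescriptorSelector:
    """Tabular Q-learning over (image-state, descriptor) pairs."""
    def __init__(self, n_states, alpha=0.1, gamma=0.9, eps=0.1):
        self.Q = np.zeros((n_states, len(DESCRIPTORS)))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if np.random.rand() < self.eps:            # explore
            return np.random.randint(len(DESCRIPTORS))
        return int(np.argmax(self.Q[state]))       # exploit

    def update(self, s, a, reward, s_next):
        # Standard Q-learning backup; the reward might be +1 for a correct
        # classification with the chosen descriptor and -1 otherwise.
        td = reward + self.gamma * self.Q[s_next].max() - self.Q[s, a]
        self.Q[s, a] += self.alpha * td
```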
|
Francisco Alvaro, Francisco Cruz, Joan Andreu Sanchez, Oriol Ramos Terrades, & Jose Miguel Benedi. (2015). Structure Detection and Segmentation of Documents Using 2D Stochastic Context-Free Grammars. NEUCOM - Neurocomputing, 150(A), 147–154.
Abstract: In this paper we define a bidimensional extension of Stochastic Context-Free Grammars for structure detection and segmentation of images of documents. Two sets of text classification features are used to perform an initial classification of each zone of the page. Then, the document segmentation is obtained as the most likely hypothesis according to a stochastic grammar. We used a dataset of historical marriage license books to validate this approach. We also tested several inference algorithms for Probabilistic Graphical Models, and the results showed that the proposed grammatical model outperformed the other methods. Furthermore, grammars also provide the document structure along with its segmentation.
Keywords: document image analysis; stochastic context-free grammars; text classification features
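A drastically simplified sketch of the idea in the entry above: given per-row class log-probabilities from the zone classifiers, pick the segmentation hypothesis (here reduced to a single horizontal split) that maximizes likelihood under a toy production probability. The real model is a full recursive 2D SCFG; everything below is an illustrative reduction:

```python
import numpy as np

def best_split(zone_logprobs, prod_logprob):
    """
    zone_logprobs: (H, C) per-row log-probabilities for C zone classes.
    prod_logprob:  log-probability of the toy production Page -> Top Bottom.
    Returns (score, cut): the horizontal split maximizing the hypothesis
    likelihood, scoring each region by its best single class label.
    """
    H = zone_logprobs.shape[0]
    best = (-np.inf, None)
    for cut in range(1, H):
        top = zone_logprobs[:cut].sum(axis=0).max()     # best class for top region
        bottom = zone_logprobs[cut:].sum(axis=0).max()  # best class for bottom region
        score = prod_logprob + top + bottom
        if score > best[0]:
            best = (score, cut)
    return best
```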
|
Daniel Sanchez, Miguel Angel Bautista, & Sergio Escalera. (2015). HuPBA 8k+: Dataset and ECOC-GraphCut based Segmentation of Human Limbs. NEUCOM - Neurocomputing, 150(A), 173–188.
Abstract: Human multi-limb segmentation in RGB images has attracted a lot of interest in the research community because of the huge number of possible applications in fields such as Human-Computer Interaction, Surveillance, eHealth, or Gaming. Nevertheless, human multi-limb segmentation is a very hard task because of the changes in appearance produced by different points of view, clothing, lighting conditions, occlusions, and the number of articulations of the human body. Furthermore, this huge pose variability makes the availability of large annotated datasets difficult. In this paper, we introduce the HuPBA8k+ dataset. The dataset contains more than 8000 labeled frames at pixel precision, including more than 120000 manually labeled samples of 14 different limbs. For completeness, the dataset is also labeled at frame level with action annotations drawn from an 11-action dictionary which includes both single-person actions and person-person interactive actions. Furthermore, we also propose a two-stage approach for the segmentation of human limbs. In a first stage, cascades of classifiers are trained to split human limbs in a tree-structured way, and are included in an Error-Correcting Output Codes (ECOC) framework to define a body-like probability map. This map is used to obtain a binary mask of the subject by means of GMM color modelling and GraphCuts theory. In a second stage, we embed a similar tree structure in an ECOC framework to build a more accurate set of limb-like probability maps within the segmented user mask, which are fed to a multi-label GraphCut procedure to obtain the final multi-limb segmentation. The methodology is tested on the novel HuPBA8k+ dataset, showing performance improvements in comparison to state-of-the-art approaches. In addition, a baseline of standard action recognition methods for the 11 action categories of the novel dataset is also provided.
Keywords: Human limb segmentation; ECOC; Graph-Cuts
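The ECOC stage of the approach above can be sketched as decoding: each pixel carries a vector of binary classifier outputs, and its per-limb probability comes from its distance to each class codeword. A minimal sketch, with the coding matrix and the soft exponential decoding as assumptions; the resulting maps would then seed the GraphCut stage:

```python
import numpy as np

def ecoc_decode(classifier_outputs, code_matrix):
    """
    classifier_outputs: (H, W, L) margins from L binary classifiers per pixel.
    code_matrix:        (K, L) codewords in {-1, +1} for K limb classes.
    Returns (H, W, K) per-class probability maps via soft distance decoding.
    """
    H, W, L = classifier_outputs.shape
    x = classifier_outputs.reshape(-1, L)
    # Squared Euclidean distance of each pixel's output to each codeword,
    # turned into probabilities with a softmax over classes.
    d = ((x[:, None, :] - code_matrix[None, :, :]) ** 2).sum(axis=2)
    p = np.exp(-d)
    p /= p.sum(axis=1, keepdims=True)
    return p.reshape(H, W, -1)
```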
|
Marta Diez-Ferrer, Debora Gil, Elena Carreño, Susana Padrones, Samantha Aso, Vanesa Vicens, et al. (2016). Positive Airway Pressure-Enhanced CT to Improve Virtual Bronchoscopic Navigation. CHEST - Chest Journal, 150(4), 1003A.
|
Razieh Rastgoo, Kourosh Kiani, & Sergio Escalera. (2020). Hand sign language recognition using multi-view hand skeleton. ESWA - Expert Systems With Applications, 150, 113336.
Abstract: Hand sign language recognition from video is a challenging research area in computer vision, whose performance is affected by hand occlusion, fast hand movement, illumination changes, or background complexity, to mention just a few factors. In recent years, deep learning approaches have achieved state-of-the-art results in the field, though the previous challenges are not completely solved. In this work, we propose a novel deep learning-based pipeline architecture for efficient automatic hand sign language recognition using a Single Shot Detector (SSD), a 2D Convolutional Neural Network (2DCNN), a 3D Convolutional Neural Network (3DCNN), and Long Short-Term Memory (LSTM) from RGB input videos. We use a CNN-based model which estimates the 3D hand keypoints from 2D input frames. After that, we connect these estimated keypoints to build the hand skeleton using the midpoint algorithm. In order to obtain a more discriminative representation of hands, we project the 3D hand skeleton onto three view surface images. We further employ the heatmap image of the detected keypoints as input for refinement in a stacked fashion. We apply 3DCNNs on the stacked hand features, including pixel-level, multi-view hand skeleton, and heatmap features, to extract discriminant local spatio-temporal features from these stacked inputs. The outputs of the 3DCNNs are fused and fed to an LSTM to model the long-term dynamics of hand sign gestures. Analyzing 2DCNNs vs. 3DCNNs with different numbers of stacked inputs to the network, we demonstrate that 3DCNNs better capture the spatio-temporal dynamics of hands. To the best of our knowledge, this is the first time that this multi-modal and multi-view set of hand skeleton features has been applied to hand sign language recognition. Furthermore, we present a new large-scale hand sign language dataset, namely RKS-PERSIANSIGN, including 10,000 RGB videos of 100 Persian sign words. Evaluation results of the proposed model on three datasets, NYU, First-Person, and RKS-PERSIANSIGN, indicate that our model outperforms state-of-the-art models in hand sign language recognition, hand pose estimation, and hand action recognition.
Keywords: Multi-view hand skeleton; Hand sign language recognition; 3DCNN; Hand pose estimation; RGB video; Hand action recognition
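The multi-view step in the entry above projects the estimated 3D hand skeleton onto three orthogonal planes to obtain three view surface images. A minimal sketch that drops one coordinate per view and rasterizes the keypoints; image size and normalization are assumptions for illustration:

```python
import numpy as np

def project_three_views(keypoints_3d, size=64):
    """
    keypoints_3d: (21, 3) estimated hand keypoints.
    Returns three (size, size) binary view images: XY, XZ and YZ projections.
    """
    views = []
    for dims in [(0, 1), (0, 2), (1, 2)]:  # drop Z, Y, X respectively
        pts = keypoints_3d[:, dims]
        # Normalize the projected points onto the image grid.
        pts = pts - pts.min(axis=0)
        span = pts.max(axis=0)
        span[span == 0] = 1.0
        pix = (pts / span * (size - 1)).astype(int)
        img = np.zeros((size, size), dtype=np.float32)
        img[pix[:, 1], pix[:, 0]] = 1.0
        views.append(img)
    return views  # in the paper, stacked with heatmaps and fed to the 3DCNNs
```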
|
Carola Figueroa Flores, David Berga, Joost Van de Weijer, & Bogdan Raducanu. (2021). Saliency for free: Saliency prediction as a side-effect of object recognition. PRL - Pattern Recognition Letters, 150, 1–7.
Abstract: Saliency is the perceptual capacity of our visual system to focus our attention (i.e. gaze) on relevant objects instead of the background. So far, computational methods for saliency estimation have required the explicit generation of a saliency map, a process usually achieved via eye-tracking experiments on still images. This is a tedious process that needs to be repeated for each new dataset. In the current paper, we demonstrate that it is possible to automatically generate saliency maps without ground truth. In our approach, saliency maps are learned as a side effect of object recognition. Extensive experiments carried out on both real and synthetic datasets demonstrate that our approach is able to generate accurate saliency maps, achieving competitive results when compared with supervised methods.
Keywords: Saliency maps; Unsupervised learning; Object recognition
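The "saliency for free" idea above can be illustrated by reading a saliency map directly off the activations of a network trained only for object recognition. This sketch averages absolute feature activations of a pretrained backbone and upsamples them; the backbone and layer choice are assumptions, not the authors' architecture:

```python
import torch
import torch.nn.functional as Fn
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
features = torch.nn.Sequential(*list(backbone.children())[:-2])  # conv trunk only
features.eval()

@torch.no_grad()
def saliency_map(image_batch):
    """image_batch: (B, 3, H, W) normalized images -> (B, 1, H, W) saliency."""
    acts = features(image_batch)                   # (B, C, h, w)
    sal = acts.abs().mean(dim=1, keepdim=True)     # channel-wise mean activation
    sal = Fn.interpolate(sal, size=image_batch.shape[-2:],
                         mode="bilinear", align_corners=False)
    # Min-max normalize each image's map to [0, 1].
    flat = sal.view(sal.shape[0], -1)
    mn = flat.min(dim=1, keepdim=True)[0]
    mx = flat.max(dim=1, keepdim=True)[0]
    flat = (flat - mn) / (mx - mn + 1e-8)
    return flat.view_as(sal)
```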
|