Jose Garcia-Rodriguez, Isabelle Guyon, Sergio Escalera, Alexandra Psarrou, Andrew Lewis, & Miguel Cazorla. (2017). Editorial: Special Issue on Computational Intelligence for Vision and Robotics. NCA - Neural Computing and Applications, 28(5), 853–854.
|
Pejman Rasti, Tonis Uiboupin, Sergio Escalera, & Gholamreza Anbarjafari. (2016). Convolutional Neural Network Super Resolution for Face Recognition in Surveillance Monitoring. In 9th Conference on Articulated Motion and Deformable Objects.
|
Dennis H. Lundtoft, Kamal Nasrollahi, Thomas B. Moeslund, & Sergio Escalera. (2016). Spatiotemporal Facial Super-Pixels for Pain Detection. In 9th Conference on Articulated Motion and Deformable Objects.
Note: Best student paper award.
Abstract: Pain detection using facial images is of critical importance in many health applications. Since pain is a spatiotemporal process, recent works on this topic employ facial spatiotemporal features to detect pain. These systems extract such features from the entire area of the face. In this paper, we show that by employing super-pixels we can divide the face into three regions, such that only one of these regions (about one third of the face) contributes to the pain estimation and the other two regions can be discarded. The experimental results on the UNBC-McMaster database show that the proposed system using this single region outperforms state-of-the-art systems in detecting no-pain scenarios, while it reaches comparable results in detecting weak and severe pain scenarios.
Keywords: Facial images; Super-pixels; Spatiotemporal filters; Pain detection
|
Mark Philip Philipsen, Anders Jorgensen, Thomas B. Moeslund, & Sergio Escalera. (2016). RGB-D Segmentation of Poultry Entrails. In 9th Conference on Articulated Motion and Deformable Objects.
Note: Best commercial paper award.
|
Sergio Escalera, Mercedes Torres-Torres, Brais Martinez, Xavier Baro, Hugo Jair Escalante, Isabelle Guyon, et al. (2016). ChaLearn Looking at People and Faces of the World: Face Analysis Workshop and Challenge 2016. In 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: We present the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop, which ran three competitions on the common theme of face analysis from still images. The first one, Looking at People, addressed age estimation, while the second and third competitions, Faces of the World, addressed accessory classification and smile and gender classification, respectively. We present two crowd-sourcing methodologies used to collect manual annotations. A custom-built application was used to collect and label data about the apparent age of people (as opposed to the real age). For the Faces of the World data, the citizen-science Zooniverse platform was used. This paper summarizes the three challenges and the data used, as well as the results achieved by the participants of the competitions. Details of the ChaLearn LAP FotW competitions can be found at http://gesture.chalearn.org.
|
Antonio Esteban Lansaque, Carles Sanchez, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2016). Stable Airway Center Tracking for Bronchoscopic Navigation. In 28th Conference of the international Society for Medical Innovation and Technology.
Abstract: Bronchoscopists use X-ray fluoroscopy to guide bronchoscopes to the lesion to be biopsied without any kind of incisions. Reducing exposure to X-rays is important for both patients and doctors, but alternatives like electromagnetic navigation require specific equipment and increase the cost of the clinical procedure. We propose a guiding system based on the extraction of airway centers from intra-operative videos. Such anatomical landmarks could be matched to the airway centerline extracted from a pre-planned CT to indicate the best path to the lesion. We present an extraction of lumen centers from intra-operative videos based on tracking of maximal stable regions of energy maps.
|
Sergio Escalera, Jordi Gonzalez, Xavier Baro, & Jamie Shotton. (2016). Guest Editors' Introduction to the Special Issue on Multimodal Human Pose Recovery and Behavior Analysis. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(8), 1489–1491.
Abstract: The sixteen papers in this special section focus on human pose recovery and behavior analysis (HuPBA). This is one of the most challenging topics in computer vision, pattern analysis, and machine learning. It is of critical importance for application areas that include gaming, computer interaction, human-robot interaction, security, commerce, assistive technologies and rehabilitation, sports, sign language recognition, and driver assistance technology, to mention just a few. In essence, HuPBA requires dealing with the articulated nature of the human body, changes in appearance due to clothing, and the inherent problems of cluttered scenes, such as background artifacts, occlusions, and illumination changes. These papers represent the most recent research in this field, including new methods considering still images, image sequences, depth data, stereo vision, 3D vision, audio, and IMUs, among others.
|
Sergio Escalera, Jordi Gonzalez, Xavier Baro, Fernando Alonso, & Martha Mackay. (2016). Care Respite: a remote monitoring eHealth system for improving ambient assisted living. In Human Motion Analysis for Healthcare Applications.
Abstract: Advances in technology that capture human motion have been quite remarkable during the last five years. New sensors have been developed, such as the Microsoft Kinect, Asus Xtion Pro live, PrimeSense Carmine and Leap Motion. Their main advantages are their non-intrusive nature, low cost and widely available support for developers offered by large corporations or Open Communities. Although they were originally developed for computer games, they have inspired numerous healthcare related ideas and projects in areas such as Medical Disorder Diagnosis, Assisted Living, Rehabilitation and Surgery.
In Assisted Living, human motion analysis allows continuous monitoring of elderly and vulnerable people and their activities to potentially detect life-threatening events such as falls. Human motion analysis in rehabilitation provides the opportunity for motivating patients through gamification, evaluating prescribed programmes of exercises and assessing patients’ progress. In operating theatres, surgeons may use a gesture-based interface to access medical information or control a tele-surgery system. Human motion analysis may also be used to diagnose a range of mental and physical diseases and conditions.
This event will discuss recent advances in human motion sensing and provide an application to healthcare for networking and exploring potential synergies and collaborations.
|
Jose Ramirez Moreno, Juan R Revilla, Miguel Reyes, & Sergio Escalera. (2016). Validación del Software ADIBAS asociado al sensor Kinect de Microsoft para la evaluación de la posición corporal [Validation of the ADIBAS software paired with the Microsoft Kinect sensor for the assessment of body posture]. In 4th Congreso WCPT-SAR.
|
Marc Oliu, Ciprian Corneanu, Kamal Nasrollahi, Olegs Nikisins, Sergio Escalera, Yunlian Sun, et al. (2016). Improved RGB-D-T based Face Recognition. BIO - IET Biometrics, 5(4), 297–303.
Abstract: Reliable facial recognition systems are of crucial importance in various applications from entertainment to security. Thanks to the deep-learning concepts introduced in the field, a significant improvement in the performance of unimodal facial recognition systems has been observed in recent years. At the same time, multimodal facial recognition is a promising approach. This study combines the latest successes in both directions by applying deep learning convolutional neural networks (CNN) to the multimodal RGB, depth, and thermal (RGB-D-T) based facial recognition problem, outperforming previously published results. Furthermore, a late fusion of the CNN-based recognition block with various hand-crafted features (local binary patterns, histograms of oriented gradients, Haar-like rectangular features, histograms of Gabor ordinal measures) is introduced, demonstrating even better recognition performance on a benchmark RGB-D-T database. The results obtained in this study show that the classical engineered features and CNN-based features can complement each other for recognition purposes.
|
Fernando Alonso, Xavier Baro, Sergio Escalera, Jordi Gonzalez, Martha Mackay, & Anna Serrahima. (2016). Care Respite: Taking Care of the Caregivers, Theme 5: The Strategic Use of Mobile and Digital Health and Care Solutions. In 16th International Conference for Integrated Care.
|
Antonio Esteban Lansaque, Carles Sanchez, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2016). Stable Anatomical Structure Tracking for video-bronchoscopy Navigation. In 19th International Conference on Medical Image Computing and Computer Assisted Intervention Workshops.
Abstract: Bronchoscopy allows examination of the patient's airways for detection of lesions and sampling of tissues without surgery. A main drawback in lung cancer diagnosis is the difficulty of checking whether the exploration is following the correct path to the nodule that has to be biopsied. The most extended guidance uses fluoroscopy, which implies repeated radiation of clinical staff and patients. Alternatives such as virtual bronchoscopy or electromagnetic navigation are very expensive and not robust enough to blood, mucus, or deformations to be extensively used. We propose a method that extracts and tracks stable lumen regions at different levels of the bronchial tree. The tracked regions are stored in a tree that encodes the anatomical structure of the scene, which can be useful to retrieve the path to the lesion that the clinician should follow to do the biopsy. We present a multi-expert validation of our anatomical landmark extraction in 3 intra-operative ultrathin explorations.
Keywords: Lung cancer diagnosis; video-bronchoscopy; airway lumen detection; region tracking
|
Arash Akbarinia, & Karl R. Gegenfurtner. (2017). Metameric Mismatching in Natural and Artificial Reflectances. JV - Journal of Vision, 17(10), 390.
Abstract: The human visual system and most digital cameras sample the continuous spectral power distribution through three classes of receptors. This implies that two distinct spectral reflectances can result in identical tristimulus values under one illuminant and differ under another – the problem of metamer mismatching. It is still debated how frequent this issue arises in the real world, using naturally occurring reflectance functions and common illuminants.
We gathered more than ten thousand spectral reflectance samples from various sources, covering a wide range of environments (e.g., flowers, plants, Munsell chips), and evaluated their responses under a number of natural and artificial light sources. For each pair of reflectance functions, we estimated the perceived difference using the CIE-defined ΔE2000 distance metric in Lab color space.
The degree of metamer mismatching depended on the lower threshold value l below which two samples would be considered to lead to equal sensor excitations (ΔE < l), and on the higher threshold value h above which they would be considered different. For example, for l=h=1, we found that 43,129 comparisons out of a total of 6×107 pairs would be considered metameric (1 in 104). For l=1 and h=5, this number reduced to 705 metameric pairs (2 in 106). Extreme metamers, for instance l=1 and h=10, were rare (22 pairs or 6 in 108), as were instances where the two members of a metameric pair would be assigned to different color categories. Not unexpectedly, we observed variations among the different reflectance databases and illuminant spectra, with metamers occurring more frequently under artificial illuminants than under natural ones.
Overall, our numbers are not very different from those obtained earlier (Foster et al., JOSA A, 2006). However, our results also show that the degree of metamerism is typically not very strong and that category switches hardly ever occur.
Keywords: Metamer; colour perception; spectral discrimination; photoreceptors
|
Lluis Gomez, & Dimosthenis Karatzas. (2016). A fast hierarchical method for multi‐script and arbitrary oriented scene text extraction. IJDAR - International Journal on Document Analysis and Recognition, 19(4), 335–349.
Abstract: Typography and layout lead to the hierarchical organisation of text into words, text lines, and paragraphs. This inherent structure is a key property of text in any script and language, which has nonetheless been minimally leveraged by existing text detection methods. This paper addresses the problem of text segmentation in natural scenes from a hierarchical perspective. Contrary to existing methods, we make explicit use of text structure, aiming directly at the detection of region groupings corresponding to text within a hierarchy produced by an agglomerative similarity clustering process over individual regions. We propose an optimal way to construct such a hierarchy, introducing a feature space designed to produce text group hypotheses with high recall and a novel stopping rule combining a discriminative classifier and a probabilistic measure of group meaningfulness based on perceptual organisation. Results obtained over four standard datasets, covering text in variable orientations and different languages, demonstrate that our algorithm, while being trained on a single mixed dataset, outperforms state-of-the-art methods in unconstrained scenarios.
Keywords: scene text; segmentation; detection; hierarchical grouping; perceptual organisation
|
Lluis Gomez, & Dimosthenis Karatzas. (2016). A fine-grained approach to scene text script identification. In 12th IAPR Workshop on Document Analysis Systems (pp. 192–197).
Abstract: This paper focuses on the problem of script identification in unconstrained scenarios. Script identification is an important prerequisite to recognition, and an indispensable condition for automatic text understanding systems designed for multi-language environments. Although widely studied for document images and handwritten documents, it remains an almost unexplored territory for scene text images. We detail a novel method for script identification in natural images that combines convolutional features and the Naive-Bayes Nearest Neighbor classifier. The proposed framework efficiently exploits the discriminative power of small stroke-parts in a fine-grained classification framework. In addition, we propose a new public benchmark dataset for the evaluation of joint text detection and script identification in natural scenes. Experiments on this new dataset demonstrate that the proposed method yields state-of-the-art results, while it generalizes well to different datasets and a variable number of scripts. The evidence provided shows that multi-lingual scene text recognition in the wild is a viable proposition. Source code of the proposed method is made available online.
|