Author Vitaliy Konovalov; Albert Clapes; Sergio Escalera
Title Automatic Hand Detection in RGB-Depth Data Sequences Type Conference Article
Year 2013 Publication 16th Catalan Conference on Artificial Intelligence Abbreviated Journal
Volume Issue Pages 91-100
Keywords
Abstract Detecting hands in multi-modal RGB-Depth visual data has become a challenging Computer Vision problem with several applications of interest. This task involves dealing with changes in illumination, viewpoint variations, the articulated nature of the human body, the high flexibility of the wrist articulation, and the deformability of the hand itself. In this work, we propose an accurate and efficient automatic hand detection scheme to be applied in Human-Computer Interaction (HCI) applications in which the user is seated at the desk and, thus, only the upper body is visible. Our main hypothesis is that hand landmarks remain at a nearly constant geodesic distance from an automatically located anatomical reference point.
In a given frame, the human body is segmented first in the depth image. Then, a
graph representation of the body is built in which the geodesic paths are computed from the reference point. The dense optical flow vectors on the corresponding RGB image are used to reduce ambiguities of the geodesic paths’ connectivity, allowing us to eliminate false edges interconnecting different body parts. Finally, we are able to detect the position of both hands based on invariant geodesic distances and optical flow within the body region, without involving costly learning procedures.
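The geodesic-path computation described in this abstract can be sketched as follows. This is an illustrative reading, not the authors' implementation: the 4-connected grid, the unit edge weights, and the function name are all assumptions (in practice the edge weights would come from the depth values).

```python
import heapq

def geodesic_distances(mask, source):
    """Dijkstra over the 4-connected pixels where mask[y][x] is True.

    Returns a dict mapping (y, x) -> geodesic distance from `source`.
    """
    h, w = len(mask), len(mask[0])
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist.get((y, x), float("inf")):
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                nd = d + 1.0  # unit edge weight; depth-based weights in practice
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist
```

Pixels outside the body mask are simply never reached, which is how false connectivity between body parts would be suppressed once the flow-based edge pruning has been applied.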
Address Vic; October 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIA
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ KCE2013 Serial 2323
 

 
Author Mariella Dimiccoli; Petia Radeva
Title Lifelogging in the era of outstanding digitization Type Conference Article
Year 2015 Publication International Conference on Digital Presentation and Preservation of Cultural and Scientific Heritage Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In this paper, we give an overview on the emerging trend of the digitized self, focusing on visual lifelogging through wearable cameras. This is about continuously recording our life from a first-person view by wearing a camera that passively captures images. On one hand, visual lifelogging has opened the door to a large number of applications, including health. On the other, it has also boosted new challenges in the field of data analysis as well as new ethical concerns. While currently increasing efforts are being devoted to exploit lifelogging data for the improvement of personal well-being, we believe there are still many interesting applications to explore, ranging from tourism to the digitization of human behavior.
Address Veliko Tarnovo; Bulgaria; September 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DiPP
Notes MILAB Approved no
Call Number Admin @ si @ DiR2016 Serial 2792
 

 
Author Alejandro Cartas; Mariella Dimiccoli; Petia Radeva
Title Batch-based activity recognition from egocentric photo-streams Type Conference Article
Year 2017 Publication 1st International workshop on Egocentric Perception, Interaction and Computing Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Activity recognition from long unstructured egocentric photo-streams has several applications in assistive technology, such as health monitoring and frailty detection, just to name a few. However, one of its main technical challenges is dealing with the low frame rate of wearable photo-cameras, which causes abrupt appearance changes between consecutive frames. As a consequence, important discriminatory low-level motion features such as optical flow cannot be estimated. In this paper, we present a batch-driven approach for training a deep learning architecture that strongly relies on Long Short-Term Memory (LSTM) units to tackle this problem. We propose two different implementations of the same approach that process a photo-stream sequence using batches of fixed size with the goal of capturing the temporal evolution of high-level features. The main difference between these implementations is that one explicitly models consecutive batches by overlapping them. Experimental results over a public dataset acquired by three users demonstrate the validity of the proposed architectures to exploit the temporal evolution of convolutional features over time without relying on event boundaries.
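The two batching schemes contrasted in this abstract (disjoint versus overlapping fixed-size batches) can be sketched as a simple windowing step. The function name and the overlap amount are illustrative assumptions, not taken from the paper.

```python
def make_batches(frames, batch_size, overlap=0):
    """Split a photo-stream into fixed-size windows.

    `overlap` frames are shared between consecutive batches;
    overlap=0 yields disjoint batches.
    """
    step = batch_size - overlap
    assert 0 < step <= batch_size, "overlap must be smaller than batch_size"
    batches = []
    for start in range(0, len(frames) - batch_size + 1, step):
        batches.append(frames[start:start + batch_size])
    return batches
```

For a 10-frame stream with `batch_size=4`, `overlap=0` produces 2 disjoint batches, while `overlap=2` produces 4 batches whose shared frames let the recurrent units carry context across batch boundaries.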
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV - EPIC
Notes MILAB; not mentioned Approved no
Call Number Admin @ si @ CDR2017 Serial 3023
 

 
Author Aitor Alvarez-Gila; Joost Van de Weijer; Estibaliz Garrote
Title Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB Type Conference Article
Year 2017 Publication 1st International Workshop on Physics Based Vision meets Deep Learning Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer.
Given the heavily underconstrained, non-linear nature of the problem, traditional techniques leverage different statistical properties of the spectral signal in order to build informative priors from real world object reflectances for constructing such RGB to spectral signal mapping. However,
most of them treat each sample independently, and thus do not benefit from the contextual information that the spatial dimensions can provide. We pose hyperspectral natural image reconstruction as an image-to-image mapping learning problem, and apply a conditional generative adversarial framework to help capture spatial semantics. This is the first time Convolutional Neural Networks (and, in particular, Generative Adversarial Networks) are used to solve this task. Quantitative evaluation shows a Root Mean Squared Error (RMSE) drop of 44.7% and a Relative RMSE drop of 47.0% on the ICVL natural hyperspectral image dataset.
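The two error metrics reported in this abstract can be written down under their common definitions; the exact normalization used in the paper may differ, so treat this as a sketch.

```python
import math

def rmse(pred, target):
    """Root Mean Squared Error over paired samples."""
    n = len(pred)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / n)

def relative_rmse(pred, target, eps=1e-8):
    """RMSE with each residual normalized by the ground-truth magnitude."""
    n = len(pred)
    return math.sqrt(sum(((p - t) / (t + eps)) ** 2
                         for p, t in zip(pred, target)) / n)
```

The relative variant weights errors on dim spectral bands as heavily as errors on bright ones, which is why both numbers are usually reported together.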
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV-PBDL
Notes LAMP; 600.109; 600.106; 600.120 Approved no
Call Number Admin @ si @ AWG2017 Serial 2969
 

 
Author Ivet Rafegas; Maria Vanrell
Title Color representation in CNNs: parallelisms with biological vision Type Conference Article
Year 2017 Publication ICCV Workshop on Mutual Benefits of Cognitive and Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We apply a classical methodology from physiology: measuring the selectivity index of individual neurons to specific features. We use ImageNet dataset [20] images and synthetic versions of them to quantify the color tuning properties of artificial neurons and provide a classification of the network population.
We conclude three main levels of color representation showing some parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single color regions with a wider hue sampling beyond the first layer (V2); (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. the wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT).
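The physiology-style selectivity measurement mentioned in this abstract can be illustrated with one common convention, (max - min) / (max + min) over a neuron's responses to a stimulus set. This particular formula is an assumption for illustration; the paper defines its own color selectivity index.

```python
def selectivity_index(responses):
    """(max - min) / (max + min) over non-negative neuron responses.

    Returns 0.0 for a flat (unselective) response profile and 1.0 when the
    neuron responds to some stimuli and is silent for others.
    """
    r_max, r_min = max(responses), min(responses)
    if r_max + r_min == 0:
        return 0.0
    return (r_max - r_min) / (r_max + r_min)
```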
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV-MBCC
Notes CIC; 600.087; 600.051 Approved no
Call Number Admin @ si @ RaV2017 Serial 2984
 

 
Author Leonardo Galteri; Dena Bazazian; Lorenzo Seidenari; Marco Bertini; Andrew Bagdanov; Anguelos Nicolaou; Dimosthenis Karatzas; Alberto del Bimbo
Title Reading Text in the Wild from Compressed Images Type Conference Article
Year 2017 Publication 1st International workshop on Egocentric Perception, Interaction and Computing Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Reading text in the wild is gaining attention in the computer vision community. Images captured in the wild are almost always compressed to varying degrees, depending on application context, and this compression introduces artifacts that distort the captured image content. In this paper we investigate the impact these compression artifacts have on text localization and recognition in the wild. We also propose a deep Convolutional Neural Network (CNN) that can eliminate text-specific compression artifacts and which leads to an improvement in text recognition. Experimental results on the ICDAR-Challenge4 dataset demonstrate that compression artifacts have a significant impact on text localization and recognition and that our approach yields an improvement in both, especially at high compression rates.
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV - EPIC
Notes DAG; 600.084; 600.121 Approved no
Call Number Admin @ si @ GBS2017 Serial 3006
 

 
Author Marc Masana; Joost Van de Weijer; Luis Herranz; Andrew Bagdanov; Jose Manuel Alvarez
Title Domain-adaptive deep network compression Type Conference Article
Year 2017 Publication 17th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Deep Neural Networks trained on large datasets can be easily transferred to new domains with far fewer labeled examples by a process called fine-tuning. This has the advantage that representations learned in the large source domain can be exploited on smaller target domains. However, networks designed to be optimal for the source task are often prohibitively large for the target task. In this work we address the compression of networks after domain transfer.
We focus on compression algorithms based on low-rank matrix decomposition. Existing methods base compression solely on learned network weights and ignore the statistics of network activations. We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing.
We demonstrate that considering activation statistics when compressing weights leads to a rank-constrained regression problem with a closed-form solution. Because our method takes into account the target domain, it can more optimally
remove the redundancy in the weights. Experiments show that our Domain Adaptive Low Rank (DALR) method significantly outperforms existing low-rank compression techniques. With our approach, the fc6 layer of VGG19 can be compressed more than 4x more than with truncated SVD alone, with only a minor or no loss in accuracy. When applied to domain-transferred networks it allows for compression down to only 5-20% of the original number of parameters with only a minor drop in performance.
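The activation-aware compression idea in this abstract can be sketched in a simplified form: rather than truncating the SVD of the weights W alone, approximate the layer responses W @ X on target-domain activations X and recover compressed weights from that low-rank response. The pinv-based closed form and the names below are an illustrative reconstruction, not the paper's exact derivation.

```python
import numpy as np

def compress_layer(W, X, rank):
    """W: (out, in) weights; X: (in, n_samples) target-domain activations.

    Returns rank-constrained weights W_hat whose responses on X best
    approximate the original responses W @ X.
    """
    Y = W @ X                                    # responses to preserve
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Y_r = U[:, :rank] * s[:rank] @ Vt[:rank]     # best rank-r approx of Y
    W_hat = Y_r @ np.linalg.pinv(X)              # weights reproducing Y_r on X
    return W_hat
```

Because the SVD is taken over responses to target-domain data rather than over W itself, the retained rank is spent where the transferred network actually operates, which is the intuition behind the reported gains over plain truncated SVD.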
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes LAMP; 601.305; 600.106; 600.120 Approved no
Call Number Admin @ si @ Serial 3034
 

 
Author Xialei Liu; Joost Van de Weijer; Andrew Bagdanov
Title RankIQA: Learning from Rankings for No-reference Image Quality Assessment Type Conference Article
Year 2017 Publication 17th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We propose a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA). To address the problem of limited IQA dataset size, we train a Siamese Network to rank images in terms of image quality by using synthetically generated distortions for which relative image quality is known. These ranked image sets can be automatically generated without laborious human labeling. We then use fine-tuning to transfer the knowledge represented in the trained Siamese Network to a traditional CNN that estimates absolute image quality from single images. We demonstrate how our approach can be made significantly more efficient than traditional Siamese Networks by forward propagating a batch of images through a single network and backpropagating gradients derived from all pairs of images in the batch. Experiments on the TID2013 benchmark show that we improve the state-of-the-art by over 5%. Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and that we even outperform the state-of-the-art in full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images to infer IQA.
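The pairwise ranking objective implied by this abstract can be sketched as a margin hinge loss over all pairs in a batch of quality scores; the margin value and the function name are illustrative assumptions, not the paper's exact loss.

```python
def ranking_loss(scores, margin=1.0):
    """Mean hinge penalty over all ordered pairs in a batch.

    `scores` is assumed sorted from highest to lowest true quality, so
    scores[i] should exceed scores[j] by at least `margin` for i < j.
    """
    total, pairs = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            total += max(0.0, margin - (scores[i] - scores[j]))
            pairs += 1
    return total / pairs
```

Computing all pairwise terms from one forward pass of the batch, instead of feeding each pair through two Siamese branches, is the efficiency point the abstract makes.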
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes LAMP; 600.106; 600.109; 600.120 Approved no
Call Number Admin @ si @ LWB2017b Serial 3036
 

 
Author Jun Wan; Sergio Escalera; Gholamreza Anbarjafari; Hugo Jair Escalante; Xavier Baro; Isabelle Guyon; Meysam Madadi; Juri Allik; Jelena Gorbova; Chi Lin; Yiliang Xie
Title Results and Analysis of ChaLearn LAP Multi-modal Isolated and Continuous Gesture Recognition, and Real versus Fake Expressed Emotions Challenges Type Conference Article
Year 2017 Publication Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We analyze the results of the 2017 ChaLearn Looking at People Challenge at ICCV. The challenge comprised three tracks: (1) large-scale isolated gesture recognition, (2) continuous gesture recognition, and (3) real versus fake expressed emotions. It is the second round of both gesture recognition challenges, which were first held in the context of the ICPR 2016 workshop on “multimedia challenges beyond visual analysis”. In this second round, more participants joined the competitions, and performances considerably improved compared to the first round. In particular, the best recognition accuracy for isolated gesture recognition improved from 56.90% to 67.71% on the IsoGD test set, and the Mean Jaccard Index (MJI) of continuous gesture recognition improved from 0.2869 to 0.6103 on the ConGD test set. The third track is the first challenge on real versus fake expressed emotion classification, including six emotion categories, for which a novel database was introduced. First place was shared between two teams who achieved a 67.70% averaged recognition rate on the test set. The data of the three tracks, the participants' code and method descriptions are publicly available to allow researchers to keep making progress in the field.
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes HUPBA; not mentioned Approved no
Call Number Admin @ si @ WEA2017 Serial 3066
 

 
Author Yagmur Gucluturk; Umut Guclu; Marc Perez; Hugo Jair Escalante; Xavier Baro; Isabelle Guyon; Carlos Andujar; Julio C. S. Jacques Junior; Meysam Madadi; Sergio Escalera
Title Visualizing Apparent Personality Analysis with Deep Residual Networks Type Conference Article
Year 2017 Publication Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV Abbreviated Journal
Volume Issue Pages 3101-3109
Keywords
Abstract Automatic prediction of personality traits is a subjective task that has recently received much attention. Specifically, automatic apparent personality trait prediction from multimodal data has emerged as a hot topic within the field of computer vision and, more particularly, the so-called “looking at people” sub-field. Considering “apparent” personality traits as opposed to real ones considerably reduces the subjectivity of the task. Real-world applications are encountered in a wide range of domains, including entertainment, health, human-computer interaction, recruitment and security. Predictive models of personality traits are useful for individuals in many scenarios (e.g., preparing for job interviews, preparing for public speaking). However, these predictions in and of themselves might be deemed untrustworthy without human-understandable supportive evidence. Through a series of experiments on a recently released benchmark dataset for automatic apparent personality trait prediction, this paper characterizes the audio and visual information used by a state-of-the-art model while making its predictions, so as to provide such supportive evidence by explaining the predictions made. Additionally, the paper describes a new web application, which gives feedback on the apparent personality traits of its users by combining model predictions with their explanations.
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes HUPBA; 6002.143 Approved no
Call Number Admin @ si @ GGP2017 Serial 3067
 

 
Author Maryam Asadi-Aghbolaghi; Hugo Bertiche; Vicent Roig; Shohreh Kasaei; Sergio Escalera
Title Action Recognition from RGB-D Data: Comparison and Fusion of Spatio-temporal Handcrafted Features and Deep Strategies Type Conference Article
Year 2017 Publication Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes HUPBA; not mentioned Approved no
Call Number Admin @ si @ ABR2017 Serial 3068
 

 
Author Debora Gil; Jaume Garcia; Mariano Vazquez; Ruth Aris; Guillaume Houzeaux
Title Patient-Sensitive Anatomic and Functional 3D Model of the Left Ventricle Function Type Conference Article
Year 2008 Publication 8th World Congress on Computational Mechanichs (WCCM8) Abbreviated Journal
Volume Issue Pages
Keywords Left Ventricle, Electromechanical Models, Image Processing, Magnetic Resonance.
Abstract Early diagnosis and accurate treatment of Left Ventricle (LV) dysfunction significantly increases the patient survival. Impairment of LV contractility due to cardiovascular diseases is reflected in its motion patterns. Recent advances in medical imaging, such as Magnetic Resonance (MR), have encouraged research on 3D simulation and modelling of the LV dynamics. Most of the existing 3D models [1] consider just the gross anatomy of the LV and restore a truncated ellipse which deforms along the cardiac cycle. The contraction mechanics of any muscle strongly depends on the spatial orientation of its muscular fibers since the motion that the muscle undergoes mainly takes place along the fibers. It follows that such simplified models do not allow evaluation of the heart electro-mechanical function and coupling, which has recently risen as the key point for understanding the LV functionality [2]. In order to thoroughly understand the LV mechanics it is necessary to consider the complete anatomy of the LV given by the orientation of the myocardial fibres in 3D space as described by Torrent Guasp [3].
We propose developing a 3D patient-sensitive model of the LV integrating, for the first time, the ventricular band anatomy (fiber orientation), the LV gross anatomy and its functionality. Such a model will represent the LV function as a natural consequence of its own ventricular band anatomy. This might be decisive in restoring a proper LV contraction in patients undergoing pacemaker treatment.
The LV function is defined as soon as the propagation of the contractile electromechanical pulse has been modelled. In our experiments we have used the wave equation for the propagation of the electric pulse. The electromechanical wave moves on the myocardial surface and should have a conductivity tensor oriented along the muscular fibers. Thus, whatever mathematical model for electric pulse propagation [4] we consider, the complete anatomy of the LV should be extracted.
The LV gross anatomy is obtained by processing multi-slice MR images recorded for each patient. Information about the myocardial fiber distribution can only be extracted by Diffusion Tensor Imaging (DTI), which cannot provide in vivo information for each patient. As a first approach, we have computed an average model of fibers from several DTI studies of canine hearts. This rough anatomy is the input for our electro-mechanical propagation model simulating LV dynamics. The average fiber orientation is updated until the simulated LV motion agrees with the experimental evidence provided by the LV motion observed in tagged MR (TMR) sequences. Experimental LV motion is recovered by applying image processing, differential geometry and interpolation techniques to 2D TMR slices [5]. The pipeline in figure 1 outlines the interaction between simulations and experimental data leading to our patient-tailored model.
Figure 1: Scheme for the Left Ventricle Patient-Sensitive Model.
Address Venice; Italy
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 9788496736559 Medium
Area Expedition Conference
Notes IAM; Approved no
Call Number IAM @ iam @ GGV2008b Serial 993
 

 
Author Miquel Ferrer; Dimosthenis Karatzas; Ernest Valveny; Horst Bunke
Title A Recursive Embedding Approach to Median Graph Computation Type Conference Article
Year 2009 Publication 7th IAPR-TC-15 Workshop on Graph-Based Representations in Pattern Recognition Abbreviated Journal
Volume 5534 Issue Pages 113–123
Keywords
Abstract The median graph has been shown to be a good choice to infer a representative of a set of graphs. It has been successfully applied to graph-based classification and clustering. Nevertheless, its computation is extremely complex. Several approaches have been presented up to now based on different strategies. In this paper we present a new approximate recursive algorithm for median graph computation based on graph embedding into vector spaces. Preliminary experiments on three databases show that this new approach is able to obtain better medians than the previous existing approaches.
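The embedding idea in this abstract can be illustrated in its simplest form: map each graph to a vector (the embedding itself is assumed given here) and take the set median, the element minimizing the sum of distances to all others, as a stand-in for the median graph. Names and the Euclidean distance choice are illustrative assumptions.

```python
import math

def set_median(vectors):
    """Return the input vector minimizing the sum of Euclidean distances
    to all other vectors (the set median in the embedded space)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best, best_cost = None, float("inf")
    for v in vectors:
        cost = sum(dist(v, u) for u in vectors)
        if cost < best_cost:
            best, best_cost = v, cost
    return best
```

The recursive algorithm in the paper goes further, synthesizing a new median rather than selecting an existing element, but the selection step above is the baseline it improves upon.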
Address Venice, Italy
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-02123-7 Medium
Area Expedition Conference GBR
Notes DAG Approved no
Call Number DAG @ dag @ FKV2009 Serial 1173
 

 
Author Xose M. Pardo; Petia Radeva; Juan J. Villanueva
Title Self-Training Statistic Snake for Image Segmentation and Tracking Type Miscellaneous
Year 1999 Publication Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Venice
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number BCNPCL @ bcnpcl @ PRV1999 Serial 26
 

 
Author Petia Radeva; Maya Dimitrova; Ch. Roumenin; David Rotger; D. Nikolov; Juan J. Villanueva
Title Integration of Multiple Sensor Modalities in ActiveVessel Cardiology Workstation Type Miscellaneous
Year 2004 Publication Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Varna (Bulgaria)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number BCNPCL @ bcnpcl @ RDR2004 Serial 473