Author: Sergio Escalera; Xavier Baro; Jordi Gonzalez; Miguel Angel Bautista; Meysam Madadi; Miguel Reyes; Victor Ponce; Hugo Jair Escalante; Jaime Shotton; Isabelle Guyon
Title: ChaLearn Looking at People Challenge 2014: Dataset and Results
Type: Conference Article
Year: 2014
Publication: ECCV Workshop on ChaLearn Looking at People
Volume: 8925
Pages: 459-473
Keywords: Human Pose Recovery; Behavior Analysis; Action and interactions; Multi-modal gestures; recognition
Abstract: This paper summarizes the ChaLearn Looking at People 2014 challenge data and the results obtained by the participants. The competition was split into three independent tracks: human pose recovery from RGB data, action and interaction recognition from RGB data sequences, and multi-modal gesture recognition from RGB-Depth sequences. For all the tracks, the goal was to perform user-independent recognition in sequences of continuous images using the overlapping Jaccard index as the evaluation measure. In this edition of the ChaLearn challenge, two large novel data sets were made publicly available and the Microsoft CodaLab platform was used to manage the competition. Outstanding results were achieved in the three challenge tracks, with accuracy results of 0.20, 0.50, and 0.85 for pose recovery, action/interaction recognition, and multi-modal gesture recognition, respectively.
Abbreviated Series Title: LNCS
Conference: ECCVW
Notes: HuPBA; ISE; 600.063; MV
Approved: no
Call Number: Admin @ si @ EBG2014
Serial: 2529
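The evaluation measure named in the abstract above is the overlapping Jaccard index between predicted and ground-truth annotations. As a minimal sketch (not the official challenge evaluation script), a frame-level version of this measure for a single predicted gesture/action instance could look like the following; the interval representation is an assumption for illustration.

```python
# Minimal sketch of an overlapping Jaccard index between a predicted and a
# ground-truth interval of frames. Intervals are (start, end) inclusive frame
# indices; this is an illustrative assumption, not the challenge's own code.

def jaccard_index(pred, gt):
    """Intersection-over-union of two frame intervals."""
    inter_start = max(pred[0], gt[0])
    inter_end = min(pred[1], gt[1])
    intersection = max(0, inter_end - inter_start + 1)
    union = (pred[1] - pred[0] + 1) + (gt[1] - gt[0] + 1) - intersection
    return intersection / union if union > 0 else 0.0

# Example: the prediction overlaps the ground truth on 6 of 15 distinct frames.
print(jaccard_index((10, 20), (15, 24)))  # 6 / 15 = 0.4
```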
 

 
Author: Jun Wan; Yibing Zhao; Shuai Zhou; Isabelle Guyon; Sergio Escalera
Title: ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition
Type: Conference Article
Year: 2016
Publication: 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops
Abstract: In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD), which has a total of more than 50000 gestures for the “one-shot-learning” competition. To increase the potential of the old dataset, we designed new, well-curated datasets composed of 249 gesture labels and including 47933 gestures with manually labeled begin and end frames in the sequences. Using these datasets, we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for “user independent” gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures, while the second one is designed for gesture classification from segmented data. A baseline method based on the bag of visual words model is also presented.
Address: Las Vegas; USA; July 2016
Conference: CVPRW
Notes: HuPBA; MILAB
Approved: no
Call Number: Admin @ si @ WZZ2016
Serial: 2771
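The baseline mentioned at the end of the abstract above is a bag-of-visual-words pipeline. A minimal sketch of that general technique (local descriptors quantized against a k-means codebook, then pooled into a normalized histogram per video) is shown below; the descriptor extraction step and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal bag-of-visual-words sketch: build a codebook from local descriptors
# with k-means, then represent each video as a normalized histogram of visual
# words. Descriptors are assumed to be precomputed per frame; the codebook size
# is an arbitrary illustrative choice.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_descriptors, n_words=256, seed=0):
    """Cluster pooled local descriptors into a visual vocabulary."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_descriptors)

def bovw_histogram(descriptors, codebook):
    """Quantize one video's descriptors and pool them into an L1-normalized histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage with random "descriptors" standing in for real local features.
rng = np.random.default_rng(0)
train_desc = rng.normal(size=(5000, 64))
codebook = build_codebook(train_desc, n_words=64)
video_repr = bovw_histogram(rng.normal(size=(300, 64)), codebook)  # shape (64,)
```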
 

 
Author: Sergio Escalera; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon
Title: ChaLearn Looking at People: A Review of Events and Resources
Type: Conference Article
Year: 2017
Publication: 30th International Joint Conference on Neural Networks
Abstract: This paper reviews the history of ChaLearn Looking at People (LAP) events. We started in 2011 (with the release of the first Kinect device) to run challenges related to human action/activity and gesture recognition. Since then we have regularly organized events in a series of competitions covering all aspects of visual analysis of humans. So far we have organized more than 10 international challenges and events in this field. This paper reviews the associated events and introduces the ChaLearn LAP platform, where public resources (including code, data and preprints of papers) related to the organized events are available. We also provide a discussion on perspectives of ChaLearn LAP activities.
Address: Anchorage; Alaska; USA; May 2017
Conference: IJCNN
Notes: HuPBA; 602.143
Approved: no
Call Number: Admin @ si @ EBE2017
Serial: 3012
 

 
Author: Rain Eric Haamer; Kaustubh Kulkarni; Nasrin Imanpour; Mohammad Ahsanul Haque; Egils Avots; Michelle Breisch; Kamal Nasrollahi; Sergio Escalera; Cagri Ozcinar; Xavier Baro; Ahmad R. Naghsh-Nilchi; Thomas B. Moeslund; Gholamreza Anbarjafari
Title: Changes in Facial Expression as Biometric: A Database and Benchmarks of Identification
Type: Conference Article
Year: 2018
Publication: 8th International Workshop on Human Behavior Understanding
Abstract: Facial dynamics can be considered as unique signatures for discrimination between people. These have started to become an important topic since many devices have the possibility of unlocking using face recognition or verification. In this work, we evaluate the efficacy of the transition frames of video in emotion as compared to the peak emotion frames for identification. For experiments with transition frames we extract features from each frame of the video from a fine-tuned VGG-Face Convolutional Neural Network (CNN) and geometric features from facial landmark points. To model the temporal context of the transition frames we train a Long Short-Term Memory (LSTM) network on the geometric and the CNN features. Furthermore, we employ two fusion strategies: first, an early fusion, in which the geometric and the CNN features are stacked and fed to the LSTM; second, a late fusion, in which the predictions of the LSTMs, trained independently on the two features, are stacked and used with a Support Vector Machine (SVM). Experimental results show that the late fusion strategy gives the best results and that the transition frames give better identification results than the peak emotion frames.
Address: Xian; China; May 2018
Conference: FGW
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ HKI2018
Serial: 3118
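The late-fusion strategy described in the abstract above stacks the per-branch predictions and feeds them to an SVM. A minimal sketch of that final fusion step is below, assuming the class-probability outputs of the two independently trained LSTM branches (CNN-feature branch and geometric-feature branch) have already been computed; the array names and shapes are illustrative assumptions.

```python
# Minimal late-fusion sketch: concatenate the class-probability outputs of two
# independently trained branches and train an SVM on the stacked vector.
# `probs_cnn_branch` and `probs_geom_branch` stand in for per-sequence softmax
# outputs of the two LSTMs (hypothetical names, shape: n_samples x n_classes).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 10
probs_cnn_branch = rng.dirichlet(np.ones(n_classes), size=n_samples)
probs_geom_branch = rng.dirichlet(np.ones(n_classes), size=n_samples)
identities = rng.integers(0, n_classes, size=n_samples)  # toy identity labels

fused = np.hstack([probs_cnn_branch, probs_geom_branch])  # stacked predictions
svm = SVC(kernel="rbf").fit(fused, identities)            # late-fusion classifier
print(svm.predict(fused[:5]))
```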
 

 
Author: Jaume Garcia; Debora Gil; Joel Barajas; Francesc Carreras; Sandra Pujades; Petia Radeva
Title: Characterization of ventricular torsion in healthy subjects using Gabor filters and a variational framework
Type: Conference Article
Year: 2006
Publication: Proc. Computers in Cardiology
Pages: 877-880
Abstract: In this work, we present a fully automated method for tissue deformation estimation in tagged magnetic resonance images (TMRI). Gabor filter banks, tuned independently for each left ventricle level, provide optimally filtered complex images whose phase remains constant along the cardiac cycle. This fact can be thought of as the brightness constancy condition required by classical optical flow (OF) methods. Pairs of these filtered sequences, together with a variational formulation, are used in a second step to obtain dense continuous deformation maps that we call Harmonic Phase Flow. This method has been used to determine reference values of ventricular torsion (VT) in a set of 8 healthy volunteers. The results encourage the use of VT as a useful parameter for ventricular function assessment in clinical routine.
Notes: IAM; MILAB
Approved: no
Call Number: IAM @ iam @ GGB2006a
Serial: 1509
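The first stage described above filters the tagged MR frames with a complex Gabor filter bank and keeps the phase of the response. A minimal sketch of that generic operation (not the authors' per-ventricle-level tuned bank) is shown below; the frequency, orientation and sigma values are illustrative assumptions.

```python
# Minimal sketch: filter an image with a complex Gabor kernel and extract the
# phase of the response. The frequency/orientation/sigma values are arbitrary
# illustrative choices, not the per-level tuning described in the paper.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(frequency, theta, sigma, size=21):
    """Complex Gabor kernel: Gaussian envelope times a complex sinusoidal carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)       # coordinate along the carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(1j * 2 * np.pi * frequency * rot)
    return envelope * carrier

image = np.random.rand(128, 128)                       # stand-in for one TMRI frame
kernel = gabor_kernel(frequency=0.15, theta=0.0, sigma=4.0)
response = convolve2d(image, kernel, mode="same", boundary="symm")
phase = np.angle(response)                             # phase image used for tracking
```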
 

 
Author: D. Jayagopi; Bogdan Raducanu; D. Gatica-Perez
Title: Characterizing conversational group dynamics using nonverbal behaviour
Type: Conference Article
Year: 2009
Publication: 10th IEEE International Conference on Multimedia and Expo
Pages: 370–373
Abstract: This paper addresses the novel problem of characterizing conversational group dynamics. It is well documented in social psychology that, depending on the objectives of a group, the dynamics are different. For example, a competitive meeting has a different objective from that of a collaborative meeting. We propose a method to characterize group dynamics based on the joint description of the group members' aggregated acoustical nonverbal behaviour to classify two meeting datasets (one being cooperative-type and the other being competitive-type). We use 4.5 hours of real behavioural multi-party data and show that our methodology can achieve a classification rate of up to 100%.
Address: New York, USA
ISSN: 1945-7871
ISBN: 978-1-4244-4290-4
Conference: ICME
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ JRG2009
Serial: 1217
 

 
Author: J. Garcia; J.M. Sanchez; X. Orriols; X. Binefa
Title: Chromatic aberration and depth extraction
Type: Conference Article
Year: 2000
Publication: 15th International Conference on Pattern Recognition
Volume: 1
Pages: 762-765
Address: Barcelona
Approved: no
Call Number: Admin @ si @ GSO2000
Serial: 226
 

 
Author: Sandra Jimenez; Xavier Otazu; Valero Laparra; Jesus Malo
Title: Chromatic induction and contrast masking: similar models, different goals?
Type: Conference Article
Year: 2013
Publication: Human Vision and Electronic Imaging XVIII
Volume: 8651
Abstract: Normalization of signals coming from linear sensors is a ubiquitous mechanism of neural adaptation. Local interaction between sensors tuned to a particular feature at a certain spatial position and neighbor sensors explains a wide range of psychophysical facts, including (1) masking of spatial patterns, (2) non-linearities of motion sensors, (3) adaptation of color perception, (4) brightness and chromatic induction, and (5) image quality assessment. Although the above models have formal and qualitative similarities, it does not necessarily mean that the mechanisms involved are pursuing the same statistical goal. For instance, in the case of chromatic mechanisms (disregarding spatial information), different parameters in the normalization give rise to optimal discrimination or adaptation, and different non-linearities may give rise to error minimization or component independence. In the case of spatial sensors (disregarding color information), a number of studies have pointed out the benefits of masking in statistical independence terms. However, such statistical analysis has not been performed for spatio-chromatic induction models, where chromatic perception depends on spatial configuration. In this work we investigate whether successful spatio-chromatic induction models increase component independence similarly as previously reported for masking models. Mutual information analysis suggests that seeking an efficient chromatic representation may explain the prevalence of induction effects in spatially simple images.
Address: San Francisco CA; USA; February 2013
Conference: HVEI
Notes: CIC
Approved: no
Call Number: Admin @ si @ JOL2013
Serial: 2240
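The normalization models compared in the abstract above share the divisive-normalization form, in which each rectified linear sensor response is divided by a pooled, weighted sum of its neighbors' responses. A minimal numerical sketch of that generic form is given below; the exponent, semisaturation constant and interaction kernel are illustrative assumptions, not the parameters of any of the cited models.

```python
# Minimal divisive-normalization sketch: each rectified linear response is
# divided by a semisaturation constant plus a weighted pool of neighboring
# responses. The exponent, beta and the Gaussian interaction kernel are
# arbitrary illustrative choices.
import numpy as np

def divisive_normalization(responses, beta=0.1, gamma=2.0, pool_sigma=2.0):
    """responses: 1-D array of non-negative linear sensor outputs."""
    n = len(responses)
    idx = np.arange(n)
    # Gaussian interaction weights between sensor i and sensor j.
    weights = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * pool_sigma**2))
    energy = responses**gamma
    pooled = weights @ energy
    return energy / (beta + pooled)

x = np.abs(np.random.default_rng(0).normal(size=32))  # stand-in linear responses
print(divisive_normalization(x)[:5])
```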
 

 
Author: Mireia Sole; Joan Blanco; Debora Gil; Oliver Valero; G. Fonseka; M. Lawrie; Francesca Vidal; Zaida Sarrate
Title: Chromosome Territories in Mice Spermatogenesis: A new three-dimensional methodology of study
Type: Conference Article
Year: 2017
Publication: 11th European CytoGenesis Conference
Address: Florence; Italy; July 2017
Conference: ECA
Notes: IAM; 600.096; 600.145
Approved: no
Call Number: Admin @ si @ SBG2017a
Serial: 2936
 

 
Author: Sergio Escalera; Alicia Fornes; Oriol Pujol; Alberto Escudero; Petia Radeva
Title: Circular Blurred Shape Model for Symbol Spotting in Documents
Type: Conference Article
Year: 2009
Publication: 16th IEEE International Conference on Image Processing
Pages: 1985-1988
Abstract: The symbol spotting problem requires feature extraction strategies able to generalize from training samples and to localize the target object while discarding most of the image. In the case of document analysis, symbol spotting techniques have to deal with a high variability of symbols' appearance. In this paper, we propose the Circular Blurred Shape Model descriptor. Feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. Shape information from objects is shared among correlogram regions, making the descriptor tolerant to irregular deformations. Descriptors are learnt using a cascade of classifiers with Adaboost as the base classifier. Finally, symbol spotting is performed by means of a windowing strategy using the learnt cascade over plans and old musical score documents. Spotting and multi-class categorization results show better performance compared with state-of-the-art descriptors.
Address: Cairo, Egypt
ISBN: 978-1-4244-5653-6
Conference: ICIP
Notes: MILAB; HuPBA; DAG
Approved: no
Call Number: BCNPCL @ bcnpcl @ EFP2009b
Serial: 1184
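The descriptor described above accumulates shape information on a circular correlogram while sharing votes among neighboring regions. Below is a loose, heavily simplified sketch of that general idea (a polar-grid histogram with blurred votes to adjacent angular bins); the binning, blurring weights and function name are illustrative assumptions, not the authors' formulation.

```python
# Loose illustrative sketch of a circular "blurred" shape descriptor: contour
# points are accumulated on a polar (radius x angle) grid centred on the shape,
# and each point also votes for adjacent angular bins so that shape information
# is shared among neighbouring correlogram regions. This is a simplified
# stand-in for the Circular Blurred Shape Model, not the authors' code.
import numpy as np

def circular_blurred_descriptor(points, n_radial=4, n_angular=16):
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    d = pts - center
    radius = np.linalg.norm(d, axis=1)
    angle = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    r_bin = np.minimum((radius / (radius.max() + 1e-9) * n_radial).astype(int), n_radial - 1)
    a_bin = (angle / (2 * np.pi) * n_angular).astype(int) % n_angular
    hist = np.zeros((n_radial, n_angular))
    for r, a in zip(r_bin, a_bin):
        hist[r, a] += 1.0                       # main vote
        hist[r, (a - 1) % n_angular] += 0.5     # blurred votes to angular neighbours
        hist[r, (a + 1) % n_angular] += 0.5
    return (hist / hist.sum()).ravel()

# Toy usage on points sampled from a circle.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
desc = circular_blurred_descriptor(np.c_[np.cos(theta), np.sin(theta)])
```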
 

 
Author: Fernando Vilariño
Title: Citizen experience as a powerful communication tool: Open Innovation and the role of Living Labs in EU
Type: Conference Article
Year: 2017
Publication: European Conference of Science Journalists
Abstract: The Open Innovation 2.0 model spearheaded by the European Commission introduces conceptual changes in how innovation processes should be developed. The notion of an innovation ecosystem, and the active participation of the citizens (and all the different actors of the quadruple helix) in innovation processes, opens up new channels for scientific communication, where the citizens (and all actors) can be naturally reached and can facilitate the spread of the scientific message in their communities. Unleashing the power of such mechanisms, while maintaining control over the scientific communication done through such channels, presents an opportunity and a challenge at the same time. This workshop will look into key concepts that the Open Innovation 2.0 EU model introduces, and what new opportunities for communication they bring about. Specifically, we will focus on Living Labs, as a key instrument for implementing this innovation model at the regional level, and their potential in creating scientific dissemination spaces.
Address: Copenhagen; June 2017
Conference: ECSJ
Notes: MV; 600.097; SIAI
Approved: no
Call Number: Admin @ si @ Vil2017a
Serial: 3032
 

 
Author: Javad Zolfaghari Bengar; Joost Van de Weijer; Laura Lopez-Fuentes; Bogdan Raducanu
Title: Class-Balanced Active Learning for Image Classification
Type: Conference Article
Year: 2022
Publication: Winter Conference on Applications of Computer Vision
Abstract: Active learning aims to reduce the labeling effort that is required to train algorithms by learning an acquisition function that selects the most relevant data for which a label should be requested from a large unlabeled data pool. Active learning is generally studied on balanced datasets where an equal amount of images per class is available. However, real-world datasets suffer from severely imbalanced classes, the so-called long-tail distribution. We argue that this further complicates the active learning process, since the imbalanced data pool can result in suboptimal classifiers. To address this problem in the context of active learning, we proposed a general optimization framework that explicitly takes class-balancing into account. Results on three datasets showed that the method is general (it can be combined with most existing active learning algorithms) and can be effectively applied to boost the performance of both informative and representative-based active learning methods. In addition, we showed that also on balanced datasets our method generally results in a performance gain.
Address: Virtual; Waikoloa; Hawai; USA; January 2022
Conference: WACV
Notes: LAMP; 602.200; 600.147; 600.120
Approved: no
Call Number: Admin @ si @ ZWL2022
Serial: 3703
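The abstract above describes adding an explicit class-balancing term to the acquisition step of active learning. As a loose sketch of the general idea (not the paper's optimization framework), one can combine a per-sample informativeness score with a bonus for samples whose predicted class is under-represented in the labeled set; the names and the weighting scheme below are illustrative assumptions.

```python
# Loose sketch of class-balanced acquisition: rank unlabeled samples by an
# uncertainty score plus a bonus for predicted classes that are rare in the
# labeled set. This illustrates the general idea of class-balanced active
# learning, not the specific optimization framework of the paper.
import numpy as np

def class_balanced_acquisition(probs, labeled_counts, budget, balance_weight=1.0):
    """probs: (n_unlabeled, n_classes) softmax outputs; labeled_counts: (n_classes,)."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # informativeness
    pred = probs.argmax(axis=1)
    freq = labeled_counts / max(labeled_counts.sum(), 1)
    rarity_bonus = 1.0 - freq[pred]                           # favor rare classes
    score = entropy + balance_weight * rarity_bonus
    return np.argsort(-score)[:budget]                        # indices to label next

# Toy usage with random predictions and an imbalanced labeled set.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=1000)
picked = class_balanced_acquisition(probs, np.array([200, 50, 10, 5, 1]), budget=16)
```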
 

 
Author: Eduardo Aguilar; Petia Radeva
Title: Class-Conditional Data Augmentation Applied to Image Classification
Type: Conference Article
Year: 2019
Publication: 18th International Conference on Computer Analysis of Images and Patterns
Volume: 11679
Pages: 182-192
Keywords: CNNs; Data augmentation; Deep learning; Epistemic uncertainty; Image classification; Food recognition
Abstract: Image classification is widely researched in the literature, where models based on Convolutional Neural Networks (CNNs) have provided better results. When data is not enough, CNN models tend to overfit. To deal with this, traditional techniques of data augmentation are often applied, such as affine transformations, adjusting the color balance, among others. However, we argue that some techniques of data augmentation may be more appropriate for some of the classes. In order to select the techniques that work best for a particular class, we propose to explore the epistemic uncertainty for the samples within each class. From our experiments, we can observe that when the data augmentation is applied class-conditionally, we improve the results in terms of accuracy and also reduce the overall epistemic uncertainty. To summarize, in this paper we propose a class-conditional data augmentation procedure that allows us to obtain better results and improve the robustness of the classification in the face of model uncertainty.
Address: Salerno; Italy; September 2019
Abbreviated Series Title: LNCS
Conference: CAIP
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ AgR2019
Serial: 3366
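The selection criterion in the abstract above is the epistemic uncertainty of the samples within each class. A minimal sketch of one common way to estimate it, Monte Carlo dropout with the mutual-information (BALD) score aggregated per class, is shown below; it assumes the stochastic softmax outputs have already been collected and is not necessarily the exact estimator used by the authors.

```python
# Minimal sketch of per-class epistemic uncertainty from Monte Carlo dropout:
# epistemic part (mutual information / BALD) = predictive entropy - expected
# entropy, averaged over the samples of each class. `mc_probs` stands in for T
# stochastic forward passes with dropout active (hypothetical, shape T x N x C).
import numpy as np

def per_class_epistemic_uncertainty(mc_probs, labels, n_classes):
    mean_probs = mc_probs.mean(axis=0)                                    # (N, C)
    predictive_entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    expected_entropy = -(mc_probs * np.log(mc_probs + 1e-12)).sum(axis=2).mean(axis=0)
    epistemic = predictive_entropy - expected_entropy                     # per sample
    return np.array([epistemic[labels == c].mean() for c in range(n_classes)])

# Toy usage: 20 dropout passes, 500 samples, 8 classes.
rng = np.random.default_rng(0)
mc_probs = rng.dirichlet(np.ones(8), size=(20, 500))
labels = rng.integers(0, 8, size=500)
print(per_class_epistemic_uncertainty(mc_probs, labels, n_classes=8))
```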
 

 
Author: Bhalaji Nagarajan; Ricardo Marques; Marcos Mejia; Petia Radeva
Title: Class-conditional Importance Weighting for Deep Learning with Noisy Labels
Type: Conference Article
Year: 2022
Publication: 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Volume: 5
Pages: 679-686
Keywords: Noisy Labeling; Loss Correction; Class-conditional Importance Weighting; Learning with Noisy Labels
Abstract: Large-scale accurate labels are very important for training Deep Neural Networks and assuring their high performance. However, it is very expensive to create a clean dataset, since the process usually relies on human interaction. For this reason, the labelling process is made cheap, with the trade-off of having noisy labels. Learning with Noisy Labels is an active and, at the same time, very challenging area of research. The recent advances in self-supervised learning and robust loss functions have helped in advancing noisy-label research. In this paper, we propose a loss correction method that relies on dynamic weights computed based on the model training. We extend the existing Contrast to Divide algorithm coupled with DivideMix using a new class-conditional weighted scheme. We validate the method using the standard noise experiments and achieve encouraging results.
Address: Virtual; February 2022
Conference: VISAPP
Notes: MILAB; no menciona
Approved: no
Call Number: Admin @ si @ NMM2022
Serial: 3798
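The loss correction in the abstract above relies on dynamic, class-conditional sample weights computed during training. As a loose sketch of that general mechanism (not the paper's Contrast-to-Divide/DivideMix pipeline), the snippet below scales the per-sample cross-entropy by a class-conditionally normalized confidence that the given label is clean; the weighting rule is an illustrative assumption.

```python
# Loose sketch of a class-conditionally weighted cross-entropy for noisy labels:
# each sample's loss is scaled by a weight derived from how well the model's
# current prediction agrees with the (possibly noisy) label, normalized within
# the sample's class. The weighting rule is illustrative, not the paper's scheme.
import torch
import torch.nn.functional as F

def class_conditional_weighted_ce(logits, noisy_labels, n_classes):
    probs = F.softmax(logits, dim=1)
    agreement = probs[torch.arange(len(noisy_labels)), noisy_labels]  # p(model = given label)
    weights = torch.zeros_like(agreement)
    for c in range(n_classes):
        mask = noisy_labels == c
        if mask.any():
            # Normalize agreement within each class so every class keeps influence.
            weights[mask] = agreement[mask] / (agreement[mask].mean() + 1e-12)
    per_sample = F.cross_entropy(logits, noisy_labels, reduction="none")
    return (weights.detach() * per_sample).mean()

# Toy usage with random logits and labels.
logits = torch.randn(32, 10)
labels = torch.randint(0, 10, (32,))
loss = class_conditional_weighted_ce(logits, labels, n_classes=10)
```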
 

 
Author: Jaume Amores; N. Sebe; Petia Radeva
Title: Class-Specific Binary Correlograms for Object Recognition
Type: Conference Article
Year: 2007
Publication: British Machine Vision Conference
Address: Warwick (UK)
Conference: BMVC’07
Notes: ADAS; MILAB
Approved: no
Call Number: ADAS @ adas @ ASR2007a
Serial: 923