Author: Maryam Asadi-Aghbolaghi; Hugo Bertiche; Vicent Roig; Shohreh Kasaei; Sergio Escalera
Title: Action Recognition from RGB-D Data: Comparison and Fusion of Spatio-temporal Handcrafted Features and Deep Strategies
Type: Conference Article
Year: 2017
Publication: ChaLearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake Expressed Emotions, at ICCV
Address: Venice; Italy; October 2017
Conference: ICCVW
Notes: HUPBA; not mentioned
Approved: no
Call Number: Admin @ si @ ABR2017
Serial: 3068
 
Author: Yi Xiao; Felipe Codevilla; Christopher Pal; Antonio Lopez
Title: Action-Based Representation Learning for Autonomous Driving
Type: Conference Article
Year: 2020
Publication: Conference on Robot Learning
Abstract: Human drivers produce a vast amount of data which could, in principle, be used to improve autonomous driving systems. Unfortunately, seemingly straightforward approaches for creating end-to-end driving models that map sensor data directly into driving actions are problematic in terms of interpretability, and typically have significant difficulty dealing with spurious correlations. Alternatively, we propose to use this kind of action-based driving data for learning representations. Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery and outperform pure end-to-end driving models, while being more interpretable. Further, we demonstrate how this strategy outperforms previous methods based on learning inverse dynamics models as well as other methods based on heavy human supervision (ImageNet).
Address: Virtual; November 2020
Conference: CORL
Notes: ADAS; 600.118
Approved: no
Call Number: Admin @ si @ XCP2020
Serial: 3487
 
Author: Alex Pardo; Albert Clapes; Sergio Escalera; Oriol Pujol
Title: Actions in Context: System for People with Dementia
Type: Conference Article
Year: 2013
Publication: 2nd International Workshop on Citizen Sensor Networks (Citisen2013) at the European Conference on Complex Systems
Pages: 3-14
Keywords: Multi-modal data fusion; Computer vision; Wearable sensors; Gesture recognition; Dementia
Abstract: In the next forty years, the number of people living with dementia is expected to triple. In the last stages of the disease, patients become dependent, which limits their autonomy and has a huge social impact in time, money, and effort. Given this scenario, we propose a ubiquitous system capable of recognizing specific daily actions. The system fuses and synchronizes data obtained from two complementary modalities: ambient and egocentric. The ambient modality consists of a fixed RGB-Depth camera for user and object recognition and user-object interaction, whereas the egocentric point of view is given by a personal area network (PAN) formed by a few wearable sensors and a smartphone, used for gesture recognition. The system processes multi-modal data in real time, performing parallel task recognition and modality synchronization. It shows high performance in recognizing subjects, objects, and interactions, demonstrating its reliability for real-world scenarios.
Address: Barcelona; September 2013
Publisher: Springer International Publishing
ISSN: 0302-9743
ISBN: 978-3-319-04177-3
Conference: ECCS
Notes: HUPBA; MILAB
Approved: no
Call Number: Admin @ si @ PCE2013
Serial: 2354
 
Author: Q. Xue; Laura Igual; A. Berenguel; M. Guerrieri; L. Garrido
Title: Active Contour Segmentation with Affine Coordinate-Based Parametrization
Type: Conference Article
Year: 2014
Publication: 9th International Conference on Computer Vision Theory and Applications
Volume: 1
Pages: 5-14
Keywords: Active contours; Affine coordinates; Mean value coordinates
Abstract: In this paper, we present a new framework for image segmentation based on parametrized active contours. The contour and the points of the image space are parametrized using a reduced set of control points that form a closed polygon in two-dimensional problems and a closed surface in three-dimensional problems. The active contour evolves by moving the control points. We use mean value coordinates as the parametrization tool for the interface, which allows us to parametrize any point of the space, inside or outside the closed polygon or surface. Region-based energies, such as the one proposed by Chan and Vese, can be easily implemented in both two- and three-dimensional segmentation problems. We show the usefulness of our approach with several experiments.
Address: Lisboa; January 2014
Conference: VISAPP
Notes: OR; MILAB
Approved: no
Call Number: Admin @ si @ XIB2014
Serial: 2452
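As an illustration of the parametrization tool named in the abstract, the following is a minimal sketch of Floater's mean value coordinates for a closed 2D polygon. The function names and the test polygon are illustrative assumptions; the paper's actual implementation (including the 3D surface case and the Chan-Vese energy) is not reproduced here.

```python
import math

def mean_value_coordinates(point, polygon):
    """Floater's mean value coordinates of `point` w.r.t. a closed 2D
    polygon given as a CCW list of (x, y) vertices. Returns weights
    summing to 1 such that point == sum_i w_i * polygon[i]. Sketch only:
    no handling of points lying on the boundary."""
    n = len(polygon)
    px, py = point
    vec, dist = [], []
    for vx, vy in polygon:
        dx, dy = vx - px, vy - py
        vec.append((dx, dy))
        dist.append(math.hypot(dx, dy))
    w = []
    for i in range(n):
        prev, nxt = (i - 1) % n, (i + 1) % n
        # Signed angles at the point, spanned by consecutive vertex directions
        a_prev = _angle(vec[prev], vec[i])
        a_next = _angle(vec[i], vec[nxt])
        w.append((math.tan(a_prev / 2) + math.tan(a_next / 2)) / dist[i])
    total = sum(w)
    return [wi / total for wi in w]

def _angle(u, v):
    # Signed angle between 2D vectors u and v
    cross = u[0] * v[1] - u[1] * v[0]
    dot = u[0] * v[0] + u[1] * v[1]
    return math.atan2(cross, dot)
```

Moving a control point then moves every parametrized point through these fixed weights, which is what lets the contour evolve from a handful of control points.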
 
Author: Marc Bolaños; Maite Garolera; Petia Radeva
Title: Active labeling application applied to food-related object recognition
Type: Conference Article
Year: 2013
Publication: 5th International Workshop on Multimedia for Cooking & Eating Activities
Pages: 45-50
Abstract: Every day, lifelogging devices available for recording different aspects of our daily life increase in number, quality, and functions, as do the applications we give to them. Applying wearable devices to analyze people's nutritional habits is a challenging application based on acquiring and analyzing life records over long periods of time. However, to extract the information of interest related to people's eating patterns, we need automatic methods to process large amounts of lifelogging data (e.g., recognition of food-related objects). Creating a rich set of manually labeled samples to train the algorithms is slow, tedious, and subjective. To address this problem, we propose a novel method in the framework of Active Labeling for constructing a training set of thousands of images. Inspired by the hierarchical sampling method for active learning [6], we propose an Active Forest that organizes the data hierarchically for easy and fast labeling. Moreover, introducing a classifier into the hierarchical structure, as well as transforming the feature space for better data clustering, further improves the algorithm. Our method is successfully tested by labeling 89,700 food-related objects and achieves a significant reduction in expert labeling time.
Address: Barcelona; October 2013
Conference: ACM-CEA
Notes: MILAB
Approved: no
Call Number: Admin @ si @ BGR2013b
Serial: 2637
 
Author: Petia Radeva; Michal Drozdzal; Santiago Segui; Laura Igual; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria
Title: Active labeling: Application to wireless endoscopy analysis
Type: Conference Article
Year: 2012
Publication: International Conference on High Performance Computing and Simulation
Pages: 174-181
Abstract: Today, robust learners trained in a real supervised machine learning application require a rich collection of positive and negative examples. Although in many applications it is not difficult to obtain huge amounts of data, labeling those data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example of such cases is data from medical imaging applications, where annotating anomalies like tumors, polyps, atherosclerotic plaque, or informative frames in wireless endoscopy requires highly trained experts. Building a representative set of training data from medical videos (e.g., Wireless Capsule Endoscopy) means thousands of frames must be labeled by an expert. It is quite common that data in new videos differ from, and thus are not represented by, the training set. In this paper, we review the main approaches to active learning and illustrate how active learning can help reduce expert effort in constructing training sets. We show that by applying active learning criteria, the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of Wireless Capsule Endoscopy videos, each containing more than 30,000 frames, with fewer than 100 expert "clicks".
ISBN: 978-1-4673-2359-8
Conference: HPCS
Notes: MILAB; OR; MV
Approved: no
Call Number: Admin @ si @ RDS2012
Serial: 2152
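A minimal sketch of the kind of active learning criterion the abstract describes: rank unlabeled frames by classifier uncertainty and hand only the most uncertain ones to the expert. The function name, the probability scores, and the 0.5 decision boundary are illustrative assumptions, not the paper's exact criterion.

```python
def select_for_labeling(scores, budget):
    """Given per-frame probabilities of being informative, return the
    indices of the `budget` most uncertain frames (probability closest
    to the 0.5 decision boundary) -- the frames an expert should label
    next, instead of labeling all of them."""
    ranked = sorted(range(len(scores)), key=lambda i: abs(scores[i] - 0.5))
    return ranked[:budget]
```

Frames the classifier is already confident about are labeled automatically, which is how a 30,000-frame video can be annotated with under 100 expert clicks.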
 
Author: Hamed H. Aghdam; Abel Gonzalez-Garcia; Joost Van de Weijer; Antonio Lopez
Title: Active Learning for Deep Detection Neural Networks
Type: Conference Article
Year: 2019
Publication: 18th IEEE International Conference on Computer Vision
Pages: 3672-3680
Abstract: The cost of drawing object bounding boxes (i.e., labeling) for millions of images is prohibitively high. For instance, labeling pedestrians in a regular urban image can take 35 seconds on average. Active learning aims to reduce the cost of labeling by selecting only those images that are informative for improving the detection network's accuracy. In this paper, we propose a method to perform active learning of object detectors based on convolutional neural networks. We propose a new image-level scoring process to rank unlabeled images for their automatic selection, which clearly outperforms classical scores. The proposed method can be applied to videos and to sets of still images. In the former case, temporal selection rules can complement our scoring process. As a relevant use case, we extensively study the performance of our method on the task of pedestrian detection. Overall, the experiments show that the proposed method performs better than random selection.
Address: Seoul; Korea; October 2019
Conference: ICCV
Notes: ADAS; LAMP; 600.124; 600.109; 600.141; 600.120; 600.118
Approved: no
Call Number: Admin @ si @ AGW2019
Serial: 3321
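The abstract does not spell out its image-level scoring process, so the sketch below uses a common proxy instead: aggregate per-pixel prediction entropy into a single informativeness score and rank images by it. The function name and the choice of mean binary entropy are assumptions for illustration, not the paper's formulation.

```python
import math

def image_score(pixel_probs):
    """Aggregate per-pixel foreground probabilities into one image-level
    informativeness score: the mean binary entropy. Higher means the
    detector is less certain about the image, so labeling it should be
    more useful. (Proxy score for illustration only.)"""
    def binary_entropy(p):
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return sum(binary_entropy(p) for p in pixel_probs) / len(pixel_probs)
```

Unlabeled images would then be sorted by this score and the top-ranked ones sent for annotation.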
 
Author: Gemma Roig; Xavier Boix; R. de Nijs; Sebastian Ramos; K. Kühnlenz; Luc Van Gool
Title: Active MAP Inference in CRFs for Efficient Semantic Segmentation
Type: Conference Article
Year: 2013
Publication: 15th IEEE International Conference on Computer Vision
Pages: 2312-2319
Keywords: Semantic segmentation
Abstract: Most MAP inference algorithms for CRFs optimize an energy function knowing all the potentials. In this paper, we focus on CRFs where the computational cost of instantiating the potentials is orders of magnitude higher than MAP inference. This is often the case in semantic image segmentation, where most potentials are instantiated by slow classifiers fed with costly features. We introduce Active MAP inference 1) to select on the fly a subset of potentials to be instantiated in the energy function, leaving the rest of the parameters of the potentials unknown, and 2) to estimate the MAP labeling from such an incomplete energy function. Results for semantic segmentation benchmarks, namely PASCAL VOC 2010 [5] and MSRC-21 [19], show that Active MAP inference achieves similar levels of accuracy but with major efficiency gains.
Address: Sydney; Australia; December 2013
ISSN: 1550-5499
Conference: ICCV
Notes: ADAS; 600.057
Approved: no
Call Number: ADAS @ adas @ RBN2013
Serial: 2377
 
Author: M. Ivasic-Kos; M. Pobar; Jordi Gonzalez
Title: Active Player Detection in Handball Videos Using Optical Flow and STIPs Based Measures
Type: Conference Article
Year: 2019
Publication: 13th International Conference on Signal Processing and Communication Systems
Abstract: In handball videos recorded during training, multiple players are present in the scene at the same time. Although they all might move and interact, not all players contribute to the currently relevant exercise or practice the given handball techniques. The goal of this experiment is to automatically determine which players in training footage perform the given handball techniques and are therefore considered active. It is a very challenging task that requires a precise object detector able to handle cluttered scenes with poor illumination, with many players present at different sizes and distances from the camera, partially occluded, and moving fast. To determine which of the detected players are active, additional information is needed about the level of player activity. Since many handball actions are characterized by considerable changes in speed and position and by variations in the player's appearance, we propose using spatio-temporal interest points (STIPs) and optical flow (OF). We therefore propose an active player detection method combining the YOLO object detector with two activity measures based on STIPs and OF. The performance of the proposed method and activity measures is evaluated on a custom handball video dataset acquired during handball training lessons.
Address: Gold Coast; Australia; December 2019
Conference: ICSPCS2
Notes: ISE; 600.098; 600.119
Approved: no
Call Number: Admin @ si @ IPG2019
Serial: 3415
 
Author: Oscar Amoros; Sergio Escalera; Anna Puig
Title: Adaboost GPU-based Classifier for Direct Volume Rendering
Type: Conference Article
Year: 2011
Publication: International Conference on Computer Graphics Theory and Applications
Pages: 215-219
Abstract: In volume visualization, voxel visibility and materials are specified through interactive editing of a Transfer Function. In this paper, we present a two-level GPU-based labeling method that computes, at rendering time, a set of labeled structures using the AdaBoost machine learning classifier. In a pre-processing step, AdaBoost trains a binary classifier from a pre-labeled dataset, taking into account a set of features for each sample. This binary classifier is a weighted combination of weak classifiers, each of which can be expressed as a simple decision function estimated on a single feature value. At the testing stage, each weak classifier is applied independently on the features of a set of unlabeled samples. We propose an alternative representation of these classifiers that allows a GPU-based parallelized testing stage embedded into the visualization pipeline. The empirical results confirm OpenCL-based classification of biomedical datasets as a tough problem where an opportunity for further research emerges.
Address: Algarve, Portugal
Conference: GRAPP
Notes: MILAB; HuPBA
Approved: no
Call Number: Admin @ si @ AEP2011
Serial: 1774
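The weighted combination of single-feature weak classifiers described in the abstract can be sketched as follows. This is plain Python rather than the paper's OpenCL implementation, and the stump parameters are hypothetical; the point it illustrates is that each weak classifier is evaluated independently, which is what makes the testing stage parallelizable on a GPU.

```python
def stump(feature_idx, threshold, polarity):
    """Weak classifier: a decision function on a single feature value."""
    def h(x):
        return polarity if x[feature_idx] > threshold else -polarity
    return h

def adaboost_predict(weak_classifiers, alphas, x):
    """Strong classifier: sign of the weighted vote of the weak ones.
    Each term of the sum depends only on one weak classifier, so all of
    them can be evaluated in parallel before the final reduction."""
    score = sum(a * h(x) for a, h in zip(alphas, weak_classifiers))
    return 1 if score >= 0 else -1
```

In the paper's setting, `x` would be the feature vector of a voxel sample and the label decides which structure the voxel belongs to at rendering time.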
 
Author: Filip Szatkowski; Mateusz Pyla; Marcin Przewięzlikowski; Sebastian Cygert; Bartłomiej Twardowski; Tomasz Trzcinski
Title: Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-Free Continual Learning
Type: Conference Article
Year: 2023
Publication: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops
Pages: 3512-3517
Abstract: In this work, we investigate exemplar-free class incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy, aiming to prevent forgetting. KD-based methods are successfully used in CIL, but they often struggle to regularize the model without access to exemplars of the training data from previous tasks. Our analysis reveals that this issue originates from substantial representation shifts in the teacher network when dealing with out-of-distribution data. This causes large errors in the KD loss component, leading to performance degradation in CIL. Inspired by recent test-time adaptation methods, we introduce Teacher Adaptation (TA), a method that concurrently updates the teacher and the main model during incremental training. Our method seamlessly integrates with KD-based CIL approaches and allows for consistent enhancement of their performance across multiple exemplar-free CIL benchmarks.
Address: Paris; France; October 2023
Conference: ICCVW
Notes: LAMP
Approved: no
Call Number: Admin @ si @
Serial: 3944
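A minimal sketch of the knowledge-distillation term that Teacher Adaptation regularizes: the KL divergence between temperature-softened teacher and student outputs. The function names and the default temperature are illustrative assumptions; the paper's contribution, updating the teacher concurrently on the current task's data, is only indicated in a comment.

```python
import math

def softened(logits, temperature):
    """Temperature-softened softmax distribution over logits."""
    m = max(l / temperature for l in logits)
    exps = [math.exp(l / temperature - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.
    In Teacher Adaptation, the teacher producing `teacher_logits` is
    itself updated on the current task's data, so its representations do
    not drift out of distribution and this term stays meaningful."""
    p = softened(teacher_logits, temperature)
    q = softened(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when student and teacher agree, and grows with their disagreement, which is what penalizes forgetting of earlier tasks.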
 
Author: Jiaolong Xu; David Vazquez; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
Title: Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers
Type: Conference Article
Year: 2013
Publication: CVPR Workshop on Ground Truth – What is a good dataset?
Pages: 688-693
Keywords: Pedestrian detection; Domain adaptation
Abstract: Training vision-based pedestrian detectors using synthetic datasets (virtual world) is a useful technique for automatically collecting training examples together with their pixel-wise ground truth. However, as is often the case, these detectors must operate on real-world images, where they experience a significant drop in performance. In fact, this effect also occurs among different real-world datasets, i.e., detectors' accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, to avoid this problem, the detector trained with synthetic data must be adapted to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both virtual and real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from the virtual to the real world, avoiding drops in average precision of over 15%.
Address: Portland; Oregon; June 2013
Language: English
Summary Language: English
Conference: CVPRW
Notes: ADAS; 600.054; 600.057; 601.217
Approved: yes
Call Number: XVR2013; ADAS @ adas @ xvr2013a
Serial: 2220
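An LDA exemplar classifier of the kind boosted in this approach can be trained in closed form from background statistics shared by all exemplars: w = Σ⁻¹(x_e − μ₀). The sketch below assumes those statistics are given; the function names and the toy numbers are hypothetical, and the boosting stage itself is not shown.

```python
def lda_exemplar_weights(exemplar, mu0, cov_inv):
    """LDA exemplar classifier: w = Sigma^{-1} (x_e - mu_0), where mu_0
    and Sigma are background mean and covariance shared by all exemplars.
    Training one classifier is a single matrix-vector product, so
    thousands of exemplars (from both virtual and real worlds) can
    cheaply become weak learners for boosting."""
    diff = [e - m for e, m in zip(exemplar, mu0)]
    n = len(diff)
    return [sum(cov_inv[i][j] * diff[j] for j in range(n)) for i in range(n)]

def exemplar_score(w, x):
    """Detection score of feature vector x under weights w."""
    return sum(wi * xi for wi, xi in zip(w, x))
```

A window scores high when its features resemble the exemplar more than the shared background model, and boosting then weights these per-exemplar scores.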
 
Author: Yainuvis Socarras; Sebastian Ramos; David Vazquez; Antonio Lopez; Theo Gevers
Title: Adapting Pedestrian Detection from Synthetic to Far Infrared Images
Type: Conference Article
Year: 2013
Publication: ICCV Workshop on Visual Domain Adaptation and Dataset Bias
Keywords: Domain adaptation; Far infrared; Pedestrian detection
Abstract: We present different techniques to adapt a pedestrian classifier trained with synthetic images, and the corresponding automatically generated annotations, to operate with far infrared (FIR) images. The information contained in these images allows us to develop a robust pedestrian detector invariant to extreme illumination changes.
Address: Sydney; Australia; December 2013
Place of Publication: Sydney, Australia
Language: English
Conference: ICCVW-VisDA
Notes: ADAS; 600.054; 600.055; 600.057; 601.217; ISE
Approved: no
Call Number: ADAS @ adas @ SRV2013
Serial: 2334
 
Author: M. Danelljan; Fahad Shahbaz Khan; Michael Felsberg; Joost Van de Weijer
Title: Adaptive Color Attributes for Real-Time Visual Tracking
Type: Conference Article
Year: 2014
Publication: 27th IEEE Conference on Computer Vision and Pattern Recognition
Pages: 1090-1097
Abstract: Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. In contrast, for object recognition and detection, sophisticated color features combined with luminance have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.
Address: Columbus; Ohio; June 2014
Conference: CVPR
Notes: CIC; LAMP; 600.074; 600.079
Approved: no
Call Number: Admin @ si @ DKF2014
Serial: 2509
 
Author: David Geronimo; Angel Sappa; Antonio Lopez; Daniel Ponsa
Title: Adaptive Image Sampling and Windows Classification for On-board Pedestrian Detection
Type: Conference Article
Year: 2007
Publication: Proceedings of the 5th International Conference on Computer Vision Systems
Abbreviated Journal: ICVS
Keywords: Pedestrian detection
Abstract: On-board pedestrian detection is at the frontier of the state of the art, since it implies processing outdoor scenarios from a mobile platform and searching for aspect-changing objects in cluttered urban environments. The most promising approaches include the development of classifiers based on feature selection and machine learning. However, they use a large number of features, which compromises real-time performance. Thus, methods for running the classifiers on only a few image windows must be provided. In this paper we contribute in both aspects, proposing a camera pose estimation method for adaptive sparse image sampling, as well as a classifier for pedestrian detection based on Haar wavelets and edge orientation histograms as features and AdaBoost as the learning machine. Both proposals are compared with relevant approaches in the literature, showing comparable results while reducing processing time by a factor of four for the sampling task and a factor of ten for classification.
Address: Bielefeld (Germany)
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ gsl2007a
Serial: 786