Records
Author Mohammad Rouhani
Title Shape Representation and Registration using Implicit Functions Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Shape representation and registration are two important problems in computer vision and graphics. Representing a given cloud of points through an implicit function provides higher-level information describing the data. This representation can be more compact and more robust to noise and outliers, and hence can be exploited in different computer vision applications. In the first part of this thesis implicit shape representations, including both implicit B-splines and polynomials, are tackled. First, an approximation of a geometric distance is proposed to measure the closeness between the given cloud of points and the implicit surface. The analysis of the proposed distance shows an accurate estimation with smooth behavior. The distance by itself is used in a RANSAC-based quadratic fitting method. Moreover, since the gradient of the distance with respect to the surface parameters can be computed analytically, it is used in the Levenberg-Marquardt algorithm to refine the surface parameters. In a different approach, an algebraic fitting method is used to represent an object through implicit B-splines. The outcome is a smooth, flexible surface that can be represented at different levels, from coarse to fine. This property is exploited to solve the registration problem in the second part of the thesis. In the proposed registration technique the model set is replaced with the implicit representation provided in the first part; the point-to-point registration is thus converted to a point-to-model one at a higher level. The registration error can benefit from different distance estimations to speed up the registration process, even without the need for correspondence search. Finally, the non-rigid registration problem is tackled through a quadratic distance approximation based on the curvature information of the model set. This approximation is used in a free-form deformation model to update its control lattice. It is then shown how an accurate distance approximation can benefit non-rigid registration problems.
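The kind of first-order distance approximation mentioned in the abstract can be illustrated with a short numerical sketch. The Python snippet below is our own illustration, not code from the thesis: it approximates the orthogonal distance of a point to an implicit surface f(x) = 0 by |f(x)| / ||∇f(x)||, here for a quadric whose coefficient ordering is an assumption of this example.

```python
import numpy as np

def quadric(p, c):
    """Implicit quadric f(x,y,z) = c0*x^2 + c1*y^2 + c2*z^2 + c3*xy + c4*xz
       + c5*yz + c6*x + c7*y + c8*z + c9 (coefficient order chosen for this sketch)."""
    x, y, z = p
    return (c[0]*x*x + c[1]*y*y + c[2]*z*z + c[3]*x*y + c[4]*x*z
            + c[5]*y*z + c[6]*x + c[7]*y + c[8]*z + c[9])

def quadric_gradient(p, c):
    """Analytic spatial gradient of the quadric at point p."""
    x, y, z = p
    return np.array([2*c[0]*x + c[3]*y + c[4]*z + c[6],
                     2*c[1]*y + c[3]*x + c[5]*z + c[7],
                     2*c[2]*z + c[4]*x + c[5]*y + c[8]])

def approx_distance(p, c, eps=1e-12):
    """First-order (Taylor/Sampson-like) approximation of the geometric
    distance from p to the zero level set of the quadric."""
    return abs(quadric(p, c)) / (np.linalg.norm(quadric_gradient(p, c)) + eps)

# Unit sphere: x^2 + y^2 + z^2 - 1 = 0
coeffs = np.array([1.0, 1.0, 1.0, 0, 0, 0, 0, 0, 0, -1.0])
point = np.array([0.0, 0.0, 1.2])
print(approx_distance(point, coeffs))   # prints about 0.183; the true distance is 0.2
```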
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Angel Sappa
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ Rou2012 Serial 2205
Permanent link to this record
 

 
Author Jose Carlos Rubio
Title Many-to-Many High Order Matching. Applications to Tracking and Object Segmentation Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Feature matching is a fundamental problem in Computer Vision, with multiple applications such as tracking, image classification and retrieval, shape recognition and stereo fusion. In numerous domains it is useful to represent the local structure of the matching features, either to increase the matching accuracy or to make the correspondence invariant to certain transformations (affine, homography, etc.). However, encoding this knowledge requires complicating the model by establishing high-order relationships between its elements, and therefore increases the complexity of the optimization problem.

The importance of many-to-many matching is sometimes dismissed in the literature. Most methods are restricted to one-to-one matching and are usually validated on synthetic or non-realistic datasets. In a real, challenging environment, with scale, pose and illumination variations of the object of interest, as well as the presence of occlusions, clutter and noisy observations, many-to-many matching is necessary to achieve satisfactory results. As a consequence, finding the most likely many-to-many correspondence often involves a challenging combinatorial optimization process.

In this work, we design and demonstrate matching algorithms that compute many-to-many correspondences, applied to several challenging problems. Our goal is to use high-order representations to improve the expressive power of the matching while keeping the inference or optimization of such models feasible. We use graphical models as our preferred representation because they provide an elegant probabilistic framework for tackling structured prediction problems.

We introduce a matching-based tracking algorithm that performs matching between frames of a video sequence in order to solve the difficult problem of headlight tracking at night-time. We also generalise this algorithm to solve the data association problem in various tracking scenarios. We demonstrate the effectiveness of this approach on real video sequences and show that our tracking algorithm can be used to improve the accuracy of a headlight classification system.

In the second part of this work, we move from single-point matching to dense region matching and introduce a new hierarchical image representation. We use this model to develop a high-order many-to-many matching between pairs of images. We show that the use of high-order models, in comparison to simpler ones, improves not only the accuracy of the results but also the convergence speed of the inference algorithm.

Finally, we keep exploiting the idea of region matching to design a fully unsupervised image co-segmentation algorithm that performs competitively with state-of-the-art supervised methods. Our method also overcomes typical drawbacks of past work, such as the need for varied appearances in the image backgrounds. The region matching in this case is applied to effectively exploit inter-image information. We also extend this work to the co-segmentation of videos, the first time such a problem has been addressed, as a way to perform video object segmentation.
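As a toy illustration of what many-to-many correspondence means in practice, the sketch below (our own simplified example, not the graphical-model formulation developed in the thesis) lets each feature of one set match every feature of the other set whose distance is within a slack factor of its best match, so a single node can receive several correspondences.

```python
import numpy as np

def many_to_many_matches(desc_a, desc_b, slack=1.2):
    """Toy many-to-many matcher: for each descriptor in A keep every
    descriptor in B whose distance is within `slack` times the best
    (smallest) distance for that descriptor, so one feature can match
    several counterparts."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        for j in np.flatnonzero(row <= slack * row.min()):
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 8))
B = np.vstack([A + 0.05 * rng.normal(size=A.shape),   # near-duplicates of A
               rng.normal(size=(3, 8))])              # plus unrelated clutter
print(many_to_many_matches(A, B))
```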
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Joan Serrat
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ Rub2012 Serial 2206
Permanent link to this record
 

 
Author Bhaskar Chakraborty
Title Model free approach to human action recognition Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Automatic understanding of human activity and action is a very important and challenging research area of Computer Vision, with wide applications in video surveillance, motion analysis, virtual reality interfaces, video indexing, content-based video retrieval, HCI and health care. This thesis presents a series of techniques to solve the problem of human action recognition in video. The first approach towards this goal is based on a probabilistic optimization model of body parts using a Hidden Markov Model. This strong model-based approach is able to distinguish between similar actions by considering only the body parts that contribute most to the actions. In the next approach, we apply a weak model-based human detector and represent actions by a bag-of-key-poses model to capture the human pose changes during the actions. To tackle the problem of human action recognition in complex scenes, a selective spatio-temporal interest point (STIP) detector is proposed, using a mechanism similar to the non-classical receptive field inhibition exhibited by most orientation-selective neurons in the primary visual cortex. An extension of the selective STIP detector is applied to a multi-view action recognition system by introducing novel 4D STIPs (3D space + time). Finally, we use our STIP detector on a large-scale continuous visual event recognition problem and propose a novel generalized max-margin Hough transform framework for activity detection.
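The surround-inhibition idea mentioned in the abstract can be illustrated with a small sketch. This is our own simplified 2-D version, not the detector proposed in the thesis: each interest-point response is suppressed by the average response in an annular surround, so responses embedded in textured clutter are weakened while isolated responses survive.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def surround_suppressed(response, inner=3, outer=9, alpha=1.0):
    """Suppress each response by the mean response in an annular surround.
    `response` is a 2-D map of interest-point strengths (e.g. Harris energy)."""
    big = uniform_filter(response, size=2 * outer + 1)
    small = uniform_filter(response, size=2 * inner + 1)
    n_big, n_small = (2 * outer + 1) ** 2, (2 * inner + 1) ** 2
    # mean over the annulus = (sum over big window - sum over small window) / area
    surround = (big * n_big - small * n_small) / (n_big - n_small)
    return np.maximum(response - alpha * surround, 0.0)

rng = np.random.default_rng(1)
resp = rng.random((64, 64)) * 0.2          # weak textured background
resp[32, 32] = 1.0                         # one strong isolated response
out = surround_suppressed(resp)
print(out[32, 32], out[10, 10])            # isolated peak survives, clutter is damped
```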
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Jordi Gonzalez;Xavier Roca
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ Cha2012 Serial 2207
Permanent link to this record
 

 
Author Josep M. Gonfaus
Title Towards Deep Image Understanding: From pixels to semantics Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Understanding the content of images is one of the greatest challenges of computer vision. Recognizing the objects appearing in images and identifying and interpreting their actions are the main purposes of Image Understanding. This thesis seeks to identify what is present in a picture by categorizing and locating all the objects in the scene.
Images are composed of pixels, and one possibility consists of assigning an object category to each pixel, which is commonly known as semantic segmentation. By incorporating contextual information as a cue, we are able to resolve the ambiguity between categories at the pixel level. We propose three levels of scale in order to resolve such ambiguity.
Another possibility for representing the objects is the object detection task. In this case, the aim is to recognize and localize the whole object by accurately placing a bounding box around it. We present two new approaches. The first is focused on improving the object representation of deformable part models with the concept of factorized appearances. The second addresses the issue of reducing the computational cost of multi-class recognition. The results have been validated on several commonly used datasets, reaching international recognition and state-of-the-art results in the field.
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Jordi Gonzalez;Theo Gevers
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ Gon2012 Serial 2208
Permanent link to this record
 

 
Author Fernando Barrera
Title Multimodal Stereo from Thermal Infrared and Visible Spectrum Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Recent advances in thermal infrared imaging (LWIR) have allowed its use in applications beyond the military domain. Nowadays, this new family of sensors is included in different technical and scientific applications. They offer features that facilitate tasks such as the detection of pedestrians, hot spots and differences in temperature, among others, which can significantly improve the performance of systems where persons are expected to play the principal role, for instance video surveillance, monitoring and pedestrian detection applications.
This dissertation addresses the following question: could a pair of sensors measuring different bands of the electromagnetic spectrum, such as the visible and the thermal infrared, be used to extract depth information? Although it is a complex question, we show that such a system is possible, and we discuss its advantages, drawbacks and potential opportunities.
The matching and fusion of data coming from different sensors, such as the emissions registered in the visible and infrared bands, represent a special challenge, because it has been shown that these signals are weakly correlated. Therefore, many traditional image processing and computer vision techniques are not helpful and require adjustments to perform correctly in each modality.
In this research an experimental study comparing different cost functions and matching approaches is performed in order to build a multimodal stereo vision system. Furthermore, the common problems in infrared/visible stereo, especially in outdoor scenes, are identified. Our framework summarizes the architecture of a generic stereo algorithm at different levels: computational, functional and structural, which can be extended toward high-level (semantic) fusion and high-order (prior) terms. The proposed framework is intended to explore novel multimodal stereo matching approaches, going from sparse to dense representations (both disparity and depth maps). Moreover, context information is added in the form of priors and assumptions. Finally, this dissertation shows a promising way toward the integration of multiple sensors for recovering three-dimensional information.
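Because visible and LWIR intensities are only weakly correlated, matching costs based on raw intensity differences tend to fail, and statistical measures such as mutual information are a common alternative. The sketch below is our own illustration of such a cost, not one of the specific cost functions evaluated in the thesis: it computes a mutual-information score between two patches from their joint histogram.

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=16):
    """Mutual information between two equally sized patches, estimated
    from a joint histogram of quantized intensities."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
visible = rng.random((21, 21))
lwir_related = 1.0 - visible + 0.05 * rng.normal(size=visible.shape)  # inverted but dependent
lwir_random = rng.random((21, 21))
print(mutual_information(visible, lwir_related))   # higher: statistically dependent patches
print(mutual_information(visible, lwir_random))    # lower: independent patches
```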
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Felipe Lumbreras;Angel Sappa
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ Bar2012 Serial 2209
Permanent link to this record
 

 
Author Diego Alejandro Cheda
Title Monocular Depth Cues in Computer Vision Applications Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Depth perception is a key aspect of human vision. It is a routine and essential visual task that humans perform effortlessly in many daily activities. This ability has often been associated with stereo vision, but humans also have an amazing capacity to perceive depth relations from a single image by using several monocular cues.

In the computer vision field, if image depth information were available, many tasks could be posed from a different perspective for the sake of higher performance and robustness. Nevertheless, given a single image, this possibility is usually discarded, since obtaining depth information has frequently relied on three-dimensional reconstruction techniques, which require two or more images of the same scene taken from different viewpoints. Recently, some proposals have shown the feasibility of computing depth information from single images. In essence, the idea is to take advantage of a priori knowledge of the acquisition conditions and the observed scene to estimate depth from monocular pictorial cues. These approaches try to estimate precise scene depth maps by employing computationally demanding techniques. However, to assist many computer vision algorithms, it is not really necessary to compute a costly and detailed depth map of the image. Indeed, just a rough depth description can be very valuable in many problems.

In this thesis, we have demonstrated how coarse depth information can be integrated in different tasks, following alternative strategies, to obtain more precise and robust results. In that sense, we have proposed a simple but sufficiently reliable technique whereby image scene regions are categorized into discrete depth ranges to build a coarse depth map. Based on this representation, we have explored the potential usefulness of our method in three application domains from novel viewpoints: camera rotation estimation, background estimation and pedestrian candidate generation. In the first case, we have computed the rotation of a camera mounted on a moving vehicle by applying two novel methods based on distant elements in the image, where the translational component of the image flow vectors is negligible. In background estimation, we have proposed a novel method to reconstruct the background by penalizing close regions in a cost function that integrates color, motion and depth terms. Finally, we have benefited from the geometric and depth information available in single images to generate pedestrian candidates, significantly reducing the number of windows to be further processed by a pedestrian classifier. In all cases, the results show that our approaches contribute to better performance.
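For the camera-rotation use case, one standard derivation (a sketch under our own assumptions and sign conventions, not the thesis's exact formulation) is that for distant points the translational part of the motion field vanishes, so their optical flow is linear in the rotational velocity and the rotation can be recovered by least squares.

```python
import numpy as np

def rotation_from_distant_flow(pts, flow):
    """Estimate small camera rotation (wx, wy, wz) from the optical flow of
    distant points, for which the translational flow component is negligible.
    pts  : Nx2 normalized image coordinates (x, y)
    flow : Nx2 measured flow (u, v)
    Uses one common convention of the instantaneous motion field:
        u = x*y*wx - (1 + x**2)*wy + y*wz
        v = (1 + y**2)*wx - x*y*wy - x*wz
    """
    x, y = pts[:, 0], pts[:, 1]
    A = np.zeros((2 * len(pts), 3))
    A[0::2] = np.stack([x * y, -(1 + x ** 2), y], axis=1)
    A[1::2] = np.stack([1 + y ** 2, -x * y, -x], axis=1)
    omega, *_ = np.linalg.lstsq(A, flow.reshape(-1), rcond=None)
    return omega

# synthetic check: generate flow from a known rotation and recover it
rng = np.random.default_rng(3)
pts = rng.uniform(-0.5, 0.5, size=(50, 2))
w_true = np.array([0.01, -0.02, 0.005])
x, y = pts[:, 0], pts[:, 1]
u = x * y * w_true[0] - (1 + x ** 2) * w_true[1] + y * w_true[2]
v = (1 + y ** 2) * w_true[0] - x * y * w_true[1] - x * w_true[2]
flow = np.stack([u, v], axis=1) + 1e-4 * rng.normal(size=(50, 2))
print(rotation_from_distant_flow(pts, flow))   # close to w_true
```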
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Daniel Ponsa;Antonio Lopez
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ Che2012 Serial 2210
Permanent link to this record
 

 
Author Jorge Bernal
Title Polyp Localization and Segmentation in Colonoscopy Images by Means of a Model of Appearance for Polyps Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Colorectal cancer is the fourth most common cause of cancer death worldwide, and its survival rate depends on the stage at which it is detected; hence the need for early colon screening. There are several screening techniques, but colonoscopy is still nowadays the gold standard, although it has some drawbacks, such as the miss rate. Our contribution, in the field of intelligent systems for colonoscopy, aims at providing a polyp localization and a polyp segmentation system based on a model of appearance for polyps. To develop both methods we define a model of appearance for polyps which describes a polyp as enclosed by intensity valleys. The novelty of our contribution resides in the fact that we include in our model aspects of the image formation process and also consider the presence of other elements of the endoluminal scene, such as specular highlights and blood vessels, which have an impact on the performance of our methods. To develop our polyp localization method we accumulate valley information in order to generate energy maps, which are also used to guide the polyp segmentation. Our methods achieve promising results in polyp localization and segmentation. As we want to explore the usability of our methods, we present a comparative analysis between physicians' fixations, obtained via an eye-tracking device, and our polyp localization method. The results show that our method is indistinguishable from novice physicians, although it is still far from expert physicians.
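The intensity valleys on which the appearance model relies can be illustrated with a simple ridge/valley operator. This is our own simplified sketch, not the valley-accumulation energy maps built in the thesis: the largest eigenvalue of the Gaussian-smoothed image Hessian responds strongly inside dark, elongated valleys.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def valley_strength(image, sigma=2.0):
    """Per-pixel valley strength: largest eigenvalue of the Gaussian-smoothed
    Hessian (positive inside dark valleys of a brighter background)."""
    d_rr = gaussian_filter(image, sigma, order=(2, 0))   # second derivative along rows
    d_cc = gaussian_filter(image, sigma, order=(0, 2))   # second derivative along columns
    d_rc = gaussian_filter(image, sigma, order=(1, 1))   # mixed derivative
    # eigenvalues of the symmetric 2x2 Hessian [[d_rr, d_rc], [d_rc, d_cc]]
    mean = 0.5 * (d_rr + d_cc)
    diff = np.sqrt(0.25 * (d_rr - d_cc) ** 2 + d_rc ** 2)
    return np.maximum(mean + diff, 0.0)

# toy image: bright background with a dark horizontal groove
img = np.ones((64, 64))
img[30:34, :] = 0.2
v = valley_strength(img)
print(v[32, 32] > v[10, 10])   # True: the groove responds more strongly than flat regions
```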
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor F. Javier Sanchez;Fernando Vilariño
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area 800 Expedition Conference
Notes MV Approved no
Call Number Admin @ si @ Ber2012 Serial 2211
Permanent link to this record
 

 
Author Naila Murray
Title Predicting Saliency and Aesthetics in Images: A Bottom-up Perspective Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In Part 1 of the thesis, we hypothesize that salient and non-salient image regions can be estimated to be the regions which are enhanced or assimilated in standard low-level color image representations. We prove this hypothesis by adapting a low-level model of color perception into a saliency estimation model. This model shares the three main steps found in many successful models for predicting attention in a scene: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. For such models, integrating spatial information and justifying the choice of various parameter values remain open problems. Our saliency model inherits a principled selection of parameters as well as an innate spatial pooling mechanism from the perception model on which it is based. This pooling mechanism has been fitted using psychophysical data acquired in color-luminance setting experiments. The proposed model outperforms the state-of-the-art at the task of predicting eye-fixations from two datasets. After demonstrating the effectiveness of our basic saliency model, we introduce an improved image representation, based on geometrical grouplets, that enhances complex low-level visual features such as corners and terminations, and suppresses relatively simpler features such as edges. With this improved image representation, the performance of our saliency model in predicting eye-fixations increases for both datasets.

In Part 2 of the thesis, we investigate the problem of aesthetic visual analysis. While a great deal of research has been conducted on hand-crafting image descriptors for aesthetics, little attention so far has been dedicated to the collection, annotation and distribution of ground truth data. Because image aesthetics is complex and subjective, existing datasets, which have few images and few annotations, have significant limitations. To address these limitations, we have introduced a new large-scale database for conducting Aesthetic Visual Analysis, which we call AVA. AVA contains more than 250,000 images, along with a rich variety of annotations. We investigate how the wealth of data in AVA can be used to tackle the challenge of understanding and assessing visual aesthetics by looking into several problems relevant for aesthetic analysis. We demonstrate that by leveraging the data in AVA, and using generic low-level features such as SIFT and color histograms, we can exceed state-of-the-art performance in aesthetic quality prediction tasks.

Finally, we entertain the hypothesis that low-level visual information in our saliency model can also be used to predict visual aesthetics by capturing local image characteristics such as feature contrast, grouping and isolation, characteristics thought to be related to universal aesthetic laws. We use the weighted center-surround responses that form the basis of our saliency model to create a feature vector that describes aesthetics. We also introduce a novel color space for fine-grained color representation. We then demonstrate that the resultant features achieve state-of-the-art performance on aesthetic quality classification.

As such, a promising contribution of this thesis is to show that several vision experiences – low-level color perception, visual saliency and visual aesthetics estimation – may be successfully modeled using a unified framework. This suggests a similar architecture in area V1 for both color perception and saliency and adds evidence to the hypothesis that visual aesthetics appreciation is driven in part by low-level cues.
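A center-surround mechanism of the kind underlying the saliency model above can be sketched as follows. This is a minimal difference-of-Gaussians illustration, not the psychophysically fitted model described in the thesis: each location's response is the contrast between a narrow center and a wider surround, pooled into a saliency map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_saliency(image, center_sigma=1.0, surround_sigma=5.0):
    """Toy bottom-up saliency: rectified center-surround (difference of
    Gaussians) contrast, normalized to [0, 1]."""
    center = gaussian_filter(image, center_sigma)
    surround = gaussian_filter(image, surround_sigma)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-12)

# toy stimulus: uniform field with one high-contrast patch
img = 0.5 * np.ones((128, 128))
img[60:68, 60:68] = 1.0
sal = dog_saliency(img)
print(np.unravel_index(sal.argmax(), sal.shape))   # peak lies on or near the patch
```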
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Xavier Otazu;Maria Vanrell
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ Mur2012 Serial 2212
Permanent link to this record
 

 
Author Jiaolong Xu; David Vazquez; Antonio Lopez; Javier Marin; Daniel Ponsa
Title Learning a Multiview Part-based Model in Virtual World for Pedestrian Detection Type Conference Article
Year 2013 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 467-472
Keywords Pedestrian Detection; Virtual World; Part based
Abstract State-of-the-art deformable part-based models based on latent SVM have shown excellent results on human detection. In this paper, we propose to train a multiview deformable part-based model with automatically generated part examples from virtual-world data. The method is efficient because: (i) the part detectors are trained with precisely extracted virtual examples, so no latent learning is needed; (ii) the multiview pedestrian detector enhances the performance of the pedestrian root model; (iii) a top-down approach is used for part detection, which reduces the search space. We evaluate our model on the Daimler and Karlsruhe Pedestrian Benchmarks with the publicly available Caltech pedestrian detection evaluation framework, and the results outperform the state-of-the-art latent SVM V4.0 on both average miss rate and speed (our detector is ten times faster).
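The root-plus-parts scoring used by deformable part-based models can be sketched as follows. This is a brute-force simplification under our own assumptions, not the trained detector from the paper: the score at each root location is the root response plus, for each part, the best part response near its anchor minus a quadratic deformation cost.

```python
import numpy as np

def dpm_score(root_map, part_maps, anchors, defo=0.1, radius=4):
    """Score of placing the root at every location of `root_map`.
    part_maps : list of response maps (same shape as root_map), one per part
    anchors   : list of (dy, dx) ideal part offsets relative to the root
    For each part, take the best response within `radius` of its anchor,
    minus a quadratic deformation penalty (brute force; DPM uses a
    generalized distance transform for the same maximization)."""
    H, W = root_map.shape
    score = root_map.copy()
    for pmap, (ay, ax) in zip(part_maps, anchors):
        best = np.full((H, W), -np.inf)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = np.roll(np.roll(pmap, -(ay + dy), axis=0), -(ax + dx), axis=1)
                best = np.maximum(best, shifted - defo * (dy * dy + dx * dx))
        score += best
    return score

rng = np.random.default_rng(4)
root = rng.random((40, 40))
parts = [rng.random((40, 40)) for _ in range(2)]
print(dpm_score(root, parts, anchors=[(5, 0), (-5, 0)]).shape)   # (40, 40)
```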
Address Gold Coast; Australia; June 2013
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1931-0587 ISBN 978-1-4673-2754-1 Medium
Area Expedition Conference IV
Notes ADAS; 600.054; 600.057 Approved no
Call Number XVL2013; ADAS @ adas @ xvl2013a Serial 2214
Permanent link to this record
 

 
Author Marina Alberti
Title Detection and Alignment of Vascular Structures in Intravascular Ultrasound using Pattern Recognition Techniques Type Book Whole
Year 2013 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In this thesis, several methods for the automatic analysis of Intravascular Ultrasound (IVUS) sequences are presented, aimed at assisting physicians in the diagnosis, the assessment of the intervention and the monitoring of patients with coronary disease. The developed frameworks are based on machine learning, pattern recognition and image processing techniques.

First, a novel approach for the automatic detection of vascular bifurcations in IVUS is presented. The task is addressed as a binary classification problem (identifying bifurcation and non-bifurcation angular sectors in the sequence images). The multiscale stacked sequential learning algorithm is applied to take into account the spatial and temporal context in IVUS sequences, and the results are refined using a priori information about branching dimensions and geometry. The achieved performance is comparable to intra- and inter-observer variability.

Then, we propose a novel method for the automatic non-rigid alignment of IVUS sequences of the same patient, acquired at different moments (before and after percutaneous coronary intervention, or at baseline and follow-up examinations). The method is based on the description of the morphological content of the vessel, obtained by extracting temporal morphological profiles from the IVUS acquisitions by means of methods for segmentation, characterization and detection in IVUS. A technique for non-rigid sequence alignment, the Dynamic Time Warping algorithm, is applied to the profiles and adapted to the specific clinical problem. Two different robust strategies are proposed to address the partial overlap between frames of corresponding sequences, and a regularization term is introduced to compensate for possible errors in the profile extraction. The benefits of the proposed strategy are demonstrated by extensive validation on synthetic and in-vivo data. The results show the interest of the proposed non-linear alignment and the clinical value of the method.

Finally, a novel automatic approach for the extraction of the luminal border in IVUS images is presented. The method applies the multiscale stacked sequential learning algorithm and extends it to 2-D+T in a first classification phase (the identification of lumen and non-lumen regions of the images), while an active contour model is used in a second phase to identify the lumen contour. The method is extended to the longitudinal dimension of the sequences and is validated on a challenging dataset.
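Dynamic Time Warping, the alignment technique mentioned above, can be sketched in a few lines. This is a generic textbook version, not the adapted and regularized variant developed in the thesis: it finds the minimum-cost monotonic alignment between two 1-D profiles via dynamic programming.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic Dynamic Time Warping cost between two 1-D sequences,
    with unit steps (match, insertion, deletion) and an absolute-difference
    local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# a profile and a locally stretched version of it align with low cost
profile = np.sin(np.linspace(0, 3 * np.pi, 60))
stretched = np.sin(np.linspace(0, 3 * np.pi, 80))
unrelated = np.cos(np.linspace(0, 7 * np.pi, 80))
print(dtw_distance(profile, stretched), dtw_distance(profile, unrelated))
```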
Address Barcelona
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Simone Balocco;Petia Radeva
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ Alb2013 Serial 2215
Permanent link to this record
 

 
Author Sergio Escalera
Title Coding and Decoding Design of ECOCs for Multi-class Pattern and Object Recognition Type Book Whole
Year 2008 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Many real problems require multi-class decisions. In the Pattern Recognition field, many techniques have been proposed to deal with the binary problem. However, the extension of many 2-class classifiers to the multi-class case is a hard task. In this sense, Error-Correcting Output Codes (ECOC) have demonstrated to be a powerful tool for combining any number of binary classifiers to model multi-class problems. But there are still many open issues about the capabilities of the ECOC framework. In this thesis, the two main stages of an ECOC design are analyzed: the coding and the decoding steps. We present different problem-dependent designs. These designs take advantage of the knowledge of the problem domain to minimize the number of classifiers, obtaining a high classification performance. On the other hand, we analyze the ECOC codification in order to define new decoding rules that take full benefit of the information provided at the coding step. Moreover, as a successful classification requires a rich feature set, new feature detection/extraction techniques are presented and evaluated on the new ECOC designs. The evaluation of the new methodology is performed on different real and synthetic data sets: the UCI Machine Learning Repository, handwriting symbols, traffic signs from a Mobile Mapping System, Intravascular Ultrasound images, the Caltech Repository data set and a Chagas disease data set. The results of this thesis show that significant performance improvements are obtained on both traditional coding and decoding ECOC designs when the new coding and decoding rules are taken into account.
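The coding/decoding pipeline analyzed in the thesis can be illustrated with a minimal ECOC example. This is a generic sketch using a standard exhaustive code matrix and Hamming decoding, not one of the problem-dependent designs proposed in the thesis; the dataset and the choice of logistic regression as base dichotomizer are our own assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# --- coding step -----------------------------------------------------------
# Exhaustive binary code for 4 classes: each row is a class codeword and each
# column defines one binary problem solved by a dichotomizer.
code = np.array([[1, 1, 1, 1, 1, 1, 1],
                 [0, 0, 0, 0, 1, 1, 1],
                 [0, 0, 1, 1, 0, 0, 1],
                 [0, 1, 0, 1, 0, 1, 0]])
n_classes, code_len = code.shape

X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=n_classes, random_state=0)

dichotomizers = []
for j in range(code_len):
    binary_labels = code[y, j]                 # relabel the data for column j
    dichotomizers.append(LogisticRegression(max_iter=1000).fit(X, binary_labels))

# --- decoding step ---------------------------------------------------------
# Predict one bit per dichotomizer, then assign the class whose codeword is
# closest in Hamming distance to the predicted word.
pred_bits = np.column_stack([clf.predict(X) for clf in dichotomizers])
hamming = (pred_bits[:, None, :] != code[None, :, :]).sum(axis=2)
print("training accuracy:", (hamming.argmin(axis=1) == y).mean())
```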
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Petia Radeva;Oriol Pujol
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; HuPBA Approved no
Call Number Admin @ si @ Esc2008b Serial 2217
Permanent link to this record
 

 
Author David Vazquez; Jiaolong Xu; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
Title Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes Type Conference Article
Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal
Volume Issue Pages 706-711
Keywords Pedestrian Detection; Domain Adaptation
Abstract Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists in collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing on real-world images may suffer from the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier, we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle rejects the false detections efficiently (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with respect to the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs.
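The annotation pipeline described in the abstract (train on source samples, detect on unlabeled target data, let an oracle discard false positives, retrain) can be sketched with synthetic data as follows; the data generator, the oracle and the choice of classifier are all illustrative assumptions of this sketch, not the paper's actual detector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def make_domain(n, shift):
    """Synthetic two-class data; `shift` mimics the virtual-to-real domain gap."""
    X = np.vstack([rng.normal(loc=1.0 + shift, size=(n, 5)),
                   rng.normal(loc=-1.0 + shift, size=(n, 5))])
    y = np.hstack([np.ones(n), np.zeros(n)])
    return X, y

# 1) train an initial classifier on "virtual-world" samples (labels for free)
Xv, yv = make_domain(500, shift=0.0)
clf = LogisticRegression(max_iter=1000).fit(Xv, yv)

# 2) run it on unlabeled "real-world" data to collect candidate detections
Xr, yr_hidden = make_domain(500, shift=0.8)       # true labels visible only to the oracle
detections = clf.predict(Xr) == 1

# 3) a human oracle cheaply rejects false detections (weak annotation)
accepted = detections & (yr_hidden == 1)

# 4) retrain with the accepted real-world positives plus the original negatives
X_new = np.vstack([Xr[accepted], Xv[yv == 0]])
y_new = np.hstack([np.ones(accepted.sum()), np.zeros((yv == 0).sum())])
clf_adapted = LogisticRegression(max_iter=1000).fit(X_new, y_new)

# the adapted classifier typically scores higher on the shifted target domain
print("before:", clf.score(Xr, yr_hidden), "after:", clf_adapted.score(Xr, yr_hidden))
```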
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes ADAS; 600.054; 600.057; 601.217 Approved no
Call Number ADAS @ adas @ VXR2013a Serial 2219
Permanent link to this record
 

 
Author Jiaolong Xu; David Vazquez; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
Title Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers Type Conference Article
Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal
Volume Issue Pages 688-693
Keywords Pedestrian Detection; Domain Adaptation
Abstract Training vision-based pedestrian detectors using synthetic datasets (virtual world) is a useful technique for collecting the training examples together with their pixel-wise ground truth automatically. However, as is often the case, these detectors must operate on real-world images, where they experience a significant drop in performance. In fact, this effect also occurs among different real-world datasets, i.e. detector accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, in order to avoid this problem, the detector trained with synthetic data must be adapted to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both the virtual and the real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from virtual to real world, avoiding drops of over 15% in average precision.
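An LDA exemplar classifier, in the sense used above, has a well-known closed form: a linear weight vector obtained by whitening the difference between a single positive exemplar and the mean of shared background statistics. The sketch below is a generic illustration of that closed form with synthetic features, not the boosting-based adaptation proposed in the paper.

```python
import numpy as np

def lda_exemplar_weights(exemplar, bg_mean, bg_cov, reg=1e-3):
    """Closed-form LDA 'exemplar' classifier: w = Sigma^{-1} (x_e - mu_bg).
    bg_mean/bg_cov are background feature statistics shared by all exemplars."""
    sigma = bg_cov + reg * np.eye(len(bg_mean))   # regularize for invertibility
    return np.linalg.solve(sigma, exemplar - bg_mean)

rng = np.random.default_rng(6)
# shared background statistics (estimated once from many negative windows)
bg = rng.normal(size=(5000, 16))
mu, cov = bg.mean(axis=0), np.cov(bg, rowvar=False)

exemplar = rng.normal(loc=1.0, size=16)           # a single positive example
w = lda_exemplar_weights(exemplar, mu, cov)

# score some windows: the exemplar itself should score higher than background
print(exemplar @ w, bg[:5] @ w)
```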
Address Portland; oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes ADAS; 600.054; 600.057; 601.217 Approved yes
Call Number XVR2013; ADAS @ adas @ xvr2013a Serial 2220
Permanent link to this record
 

 
Author Wenjuan Gong
Title Action priors for human pose tracking by particle filter Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Computer Vision Center Thesis Master's thesis
Publisher Place of Publication Bellaterra, Barcelona Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ Gon2009 Serial 2401
Permanent link to this record
 

 
Author A. Ruiz; Joost Van de Weijer; Xavier Binefa
Title Regularized Multi-Concept MIL for weakly-supervised facial behavior categorization Type Conference Article
Year 2014 Publication 25th British Machine Vision Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We address the problem of estimating high-level semantic labels for videos of recorded people by analysing their facial expressions. This problem, which we refer to as facial behavior categorization, is a weakly-supervised learning problem where we do not have access to frame-by-frame facial gesture annotations; only weak labels at the video level are available. Therefore, the goal is to learn a set of discriminative expressions and how they determine the video weak-labels. Facial behavior categorization can be posed as a Multi-Instance-Learning (MIL) problem, and we propose a novel MIL method called Regularized Multi-Concept MIL (RMC-MIL) to solve it. In contrast to previous approaches applied in facial behavior analysis, RMC-MIL follows a Multi-Concept assumption which allows different facial expressions (concepts) to contribute differently to the video label. Moreover, to handle the high-dimensional nature of facial descriptors, RMC-MIL uses a discriminative approach to model the concepts and structured sparsity regularization to discard non-informative features. RMC-MIL is posed as a convex-constrained optimization problem where all the parameters are jointly learned using the Projected Quasi-Newton method. In our experiments, we use two public datasets to show the advantages of the Regularized Multi-Concept approach and its improvement over existing MIL methods. RMC-MIL outperforms state-of-the-art results on the UNBC dataset for pain detection.
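As background for the MIL formulation above, the following sketch is a deliberately simple single-concept max-pooling MIL baseline, far from RMC-MIL itself: a linear instance scorer is trained so that a bag is positive when its best-scoring instance is positive, with gradient flowing only through the arg-max instance of each bag. The synthetic bag generator is an assumption of this example.

```python
import numpy as np

def train_mil_max(bags, labels, dim, lr=0.1, epochs=200):
    """Minimal MIL with max pooling: bag score = max over instance scores,
    trained with logistic loss on the bag labels."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(scale=0.01, size=dim), 0.0
    for _ in range(epochs):
        for X, y in zip(bags, labels):
            scores = X @ w + b
            k = scores.argmax()                      # only the arg-max instance learns
            p = 1.0 / (1.0 + np.exp(-scores[k]))     # sigmoid of the bag score
            grad = p - y                             # d(logistic loss)/d(score)
            w -= lr * grad * X[k]
            b -= lr * grad
    return w, b

# synthetic bags: positive bags contain at least one 'witness' instance
rng = np.random.default_rng(7)
def make_bag(positive):
    X = rng.normal(size=(10, 6))
    if positive:
        X[rng.integers(10)] += 3.0
    return X

bags = [make_bag(i % 2 == 0) for i in range(60)]
labels = [1 if i % 2 == 0 else 0 for i in range(60)]
w, b = train_mil_max(bags, labels, dim=6)
preds = [int((X @ w + b).max() > 0) for X in bags]
print("bag accuracy:", np.mean([p == y for p, y in zip(preds, labels)]))
```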
Address Nottingham; UK; September 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes LAMP; CIC; 600.074; 600.079 Approved no
Call Number Admin @ si @ RWB2014 Serial 2508
Permanent link to this record