Naila Murray. (2012). Predicting Saliency and Aesthetics in Images: A Bottom-up Perspective (Xavier Otazu, & Maria Vanrell, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: In Part 1 of the thesis, we hypothesize that salient and non-salient image regions can be estimated to be the regions which are enhanced or assimilated in standard low-level color image representations. We prove this hypothesis by adapting a low-level model of color perception into a saliency estimation model. This model shares the three main steps found in many successful models for predicting attention in a scene: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. For such models, integrating spatial information and justifying the choice of various parameter values remain open problems. Our saliency model inherits a principled selection of parameters as well as an innate spatial pooling mechanism from the perception model on which it is based. This pooling mechanism has been fitted using psychophysical data acquired in color-luminance setting experiments. The proposed model outperforms the state-of-the-art at the task of predicting eye-fixations from two datasets. After demonstrating the effectiveness of our basic saliency model, we introduce an improved image representation, based on geometrical grouplets, that enhances complex low-level visual features such as corners and terminations, and suppresses relatively simpler features such as edges. With this improved image representation, the performance of our saliency model in predicting eye-fixations increases for both datasets.
In Part 2 of the thesis, we investigate the problem of aesthetic visual analysis. While a great deal of research has been conducted on hand-crafting image descriptors for aesthetics, little attention so far has been dedicated to the collection, annotation and distribution of ground truth data. Because image aesthetics is complex and subjective, existing datasets, which have few images and few annotations, have significant limitations. To address these limitations, we have introduced a new large-scale database for conducting Aesthetic Visual Analysis, which we call AVA. AVA contains more than 250,000 images, along with a rich variety of annotations. We investigate how the wealth of data in AVA can be used to tackle the challenge of understanding and assessing visual aesthetics by looking into several problems relevant for aesthetic analysis. We demonstrate that by leveraging the data in AVA, and using generic low-level features such as SIFT and color histograms, we can exceed state-of-the-art performance in aesthetic quality prediction tasks.
Finally, we entertain the hypothesis that low-level visual information in our saliency model can also be used to predict visual aesthetics by capturing local image characteristics such as feature contrast, grouping and isolation, characteristics thought to be related to universal aesthetic laws. We use the weighted center-surround responses that form the basis of our saliency model to create a feature vector that describes aesthetics. We also introduce a novel color space for fine-grained color representation. We then demonstrate that the resultant features achieve state-of-the-art performance on aesthetic quality classification.
As such, a promising contribution of this thesis is to show that several vision experiences – low-level color perception, visual saliency and visual aesthetics estimation – may be successfully modeled using a unified framework. This suggests a similar architecture in area V1 for both color perception and saliency and adds evidence to the hypothesis that visual aesthetics appreciation is driven in part by low-level cues.
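The three-step saliency pipeline described in Part 1 (convolution with a filter bank, a center-surround mechanism, spatial pooling) can be sketched in a few lines. The sketch below is a generic toy version for intuition only: the Gaussian filter bank, the 2x surround scale and the plain normalized sum used for pooling are assumptions of this illustration, not the psychophysically fitted components of the thesis model.

```python
import numpy as np
from scipy import ndimage

def saliency_map(image_gray, scales=(2, 4, 8)):
    """Toy bottom-up saliency: filter bank, center-surround, pooling."""
    image = image_gray.astype(float)
    saliency = np.zeros_like(image)
    for sigma in scales:
        # Step 1: convolution with a (here Gaussian) filter bank.
        center = ndimage.gaussian_filter(image, sigma)
        surround = ndimage.gaussian_filter(image, 2 * sigma)
        # Step 2: center-surround contrast, DoG-style.
        cs = np.abs(center - surround)
        # Step 3: spatial pooling across scales -- a plain normalized sum
        # here, whereas the thesis fits this stage to psychophysical data.
        saliency += cs / (cs.max() + 1e-8)
    return saliency / (saliency.max() + 1e-8)
```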
Jiaolong Xu, David Vazquez, Antonio Lopez, Javier Marin, & Daniel Ponsa. (2013). Learning a Multiview Part-based Model in Virtual World for Pedestrian Detection. In IEEE Intelligent Vehicles Symposium (pp. 467–472). IEEE.
Abstract: State-of-the-art deformable part-based models based on latent SVM have shown excellent results on human detection. In this paper, we propose to train a multiview deformable part-based model with automatically generated part examples from virtual-world data. The method is efficient as: (i) the part detectors are trained with precisely extracted virtual examples, so no latent learning is needed; (ii) the multiview pedestrian detector enhances the performance of the pedestrian root model; (iii) a top-down approach is used for part detection, which reduces the search space. We evaluate our model on the Daimler and Karlsruhe Pedestrian Benchmarks with the publicly available Caltech pedestrian detection evaluation framework, and the result outperforms the state-of-the-art latent SVM V4.0 on both average miss rate and speed (our detector is ten times faster).
Keywords: Pedestrian Detection; Virtual World; Part based
Marina Alberti. (2013). Detection and Alignment of Vascular Structures in Intravascular Ultrasound using Pattern Recognition Techniques (Simone Balocco, & Petia Radeva, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: In this thesis, several methods for the automatic analysis of Intravascular Ultrasound (IVUS) sequences are presented, aimed at assisting physicians in the diagnosis, the assessment of the intervention and the monitoring of patients with coronary disease. The developed frameworks are based on machine learning, pattern recognition and image processing techniques.
First, a novel approach for the automatic detection of vascular bifurcations in IVUS is presented. The task is addressed as a binary classification problem (identifying bifurcation and non-bifurcation angular sectors in the sequence images). The multiscale stacked sequential learning algorithm is applied to take into account the spatial and temporal context in IVUS sequences, and the results are refined using a-priori information about branching dimensions and geometry. The achieved performance is comparable to intra- and inter-observer variability.
Then, we propose a novel method for the automatic non-rigid alignment of IVUS sequences of the same patient, acquired at different moments (before and after percutaneous coronary intervention, or at baseline and follow-up examinations). The method is based on the description of the morphological content of the vessel, obtained by extracting temporal morphological profiles from the IVUS acquisitions by means of methods for segmentation, characterization and detection in IVUS. A technique for non-rigid sequence alignment, the Dynamic Time Warping (DTW) algorithm, is applied to the profiles and adapted to the specific clinical problem. Two different robust strategies are proposed to address the partial overlap between frames of corresponding sequences, and a regularization term is introduced to compensate for possible errors in the profile extraction. The benefits of the proposed strategy are demonstrated by extensive validation on synthetic and in-vivo data. The results show the interest of the proposed non-linear alignment and the clinical value of the method.
Finally, a novel automatic approach for the extraction of the luminal border in IVUS images is presented. The method applies the multiscale stacked sequential learning algorithm and extends it to 2-D+T in a first classification phase (the identification of lumen and non-lumen regions of the images), while an active contour model is used in a second phase to identify the lumen contour. The method is extended to the longitudinal dimension of the sequences and is validated on a challenging dataset.
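Dynamic Time Warping, the alignment technique applied to the morphological profiles above, follows a standard recurrence. The sketch below is the textbook version for two 1-D profiles; the partial-overlap strategies and the regularization term developed in the thesis are not reproduced here.

```python
import numpy as np

def dtw(profile_a, profile_b):
    """Accumulated-cost DTW matrix for two 1-D morphological profiles.

    The bottom-right cell of the returned matrix is the total alignment cost.
    """
    n, m = len(profile_a), len(profile_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(profile_a[i - 1] - profile_b[j - 1])
            # Each cell extends the cheapest of the three allowed moves:
            # match (diagonal), insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[1:, 1:]
```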
Sergio Escalera. (2008). Coding and Decoding Design of ECOCs for Multi-class Pattern and Object Recognition (Petia Radeva, & Oriol Pujol, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Many real problems require multi-class decisions. In the Pattern Recognition field, many techniques have been proposed to deal with the binary problem. However, the extension of many 2-class classifiers to the multi-class case is a hard task. In this sense, Error-Correcting Output Codes (ECOC) have been demonstrated to be a powerful tool for combining any number of binary classifiers to model multi-class problems. But there are still many open issues about the capabilities of the ECOC framework. In this thesis, the two main stages of an ECOC design are analyzed: the coding and the decoding steps. We present different problem-dependent designs. These designs take advantage of knowledge of the problem domain to minimize the number of classifiers while obtaining high classification performance. On the other hand, we analyze the ECOC codification in order to define new decoding rules that take full benefit of the information provided at the coding step. Moreover, as a successful classification requires a rich feature set, new feature detection/extraction techniques are presented and evaluated on the new ECOC designs. The evaluation of the new methodology is performed on different real and synthetic data sets: the UCI Machine Learning Repository, handwritten symbols, traffic signs from a Mobile Mapping System, Intravascular Ultrasound images, the Caltech Repository data set and the Chagas disease data set. The results of this thesis show that significant performance improvements are obtained on both traditional coding and decoding ECOC designs when the new coding and decoding rules are taken into account.
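The two ECOC stages analyzed in the thesis can be made concrete with a minimal sketch: a one-vs-all coding matrix and Hamming decoding, with logistic regression as the base learner. All three choices are illustrative assumptions; the thesis proposes problem-dependent codings and richer decoding rules in their place.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ecoc_fit(X, y, code):
    """code: (n_classes, n_dichotomies) matrix with entries in {-1, +1}."""
    learners = []
    for col in code.T:                      # one binary problem per column
        learners.append(LogisticRegression().fit(X, col[np.asarray(y)]))
    return learners

def ecoc_predict(X, learners, code):
    # Coding step output: each sample gets a codeword of binary predictions.
    preds = np.stack([clf.predict(X) for clf in learners], axis=1)
    # Decoding step: assign the class whose row is nearest in Hamming distance.
    dists = (preds[:, None, :] != code[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

# Example coding design: 4 classes, one-vs-all (+1 on the diagonal).
code = -np.ones((4, 4), dtype=int) + 2 * np.eye(4, dtype=int)
```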
David Vazquez, Jiaolong Xu, Sebastian Ramos, Antonio Lopez, & Daniel Ponsa. (2013). Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes. In CVPR Workshop on Ground Truth – What is a good dataset? (pp. 706–711). IEEE.
Abstract: Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists in collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing on real-world images may suffer from the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier, we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle rejects the false detections efficiently (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs.
Keywords: Pedestrian Detection; Domain Adaptation
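The weak-annotation loop the abstract describes can be laid out schematically. The train/detect/oracle interfaces below are placeholders assumed for illustration, not the paper's actual code; the point is that the human only accepts or rejects detections and never draws a bounding box.

```python
def weakly_supervised_retraining(virtual_samples, real_images,
                                 train, detect, oracle_accepts):
    # 1) Train an initial classifier from virtual-world samples
    #    (ground truth comes for free, so no manual BBs are needed).
    classifier = train(virtual_samples)
    # 2) Collect candidate pedestrian windows from real-world images.
    candidates = [bb for img in real_images for bb in detect(classifier, img)]
    # 3) Weak annotation: a human oracle only accepts or rejects detections.
    accepted = [bb for bb in candidates if oracle_accepts(bb)]
    # 4) Retrain with the accepted real-world samples to reduce dataset shift.
    return train(virtual_samples + accepted)
```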
Jiaolong Xu, David Vazquez, Sebastian Ramos, Antonio Lopez, & Daniel Ponsa. (2013). Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers. In CVPR Workshop on Ground Truth – What is a good dataset? (pp. 688–693).
Abstract: Training vision-based pedestrian detectors using synthetic datasets (virtual worlds) is a useful technique for automatically collecting the training examples together with their pixel-wise ground truth. However, as is often the case, these detectors must operate in real-world images, where they experience a significant drop in performance. In fact, this effect also occurs among different real-world datasets, i.e., detectors' accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, in order to avoid this problem, the detector trained with synthetic data must be adapted to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both the virtual and the real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from the virtual to the real world, avoiding drops in average precision of over 15%.
Keywords: Pedestrian Detection; Domain Adaptation
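An LDA exemplar classifier, the building block boosted in this paper, has a cheap closed form: with a shared background mean and covariance, each positive exemplar yields a linear detector without iterative training. The sketch below shows that closed form under assumed inputs; the boosting-based selection of exemplars across domains is not reproduced.

```python
import numpy as np

def lda_exemplar(x_exemplar, mu_bg, cov_bg, reg=1e-3):
    """Closed-form linear weights for one exemplar (features as 1-D vectors).

    w = (cov_bg + reg*I)^{-1} (x_exemplar - mu_bg); the regularizer keeps the
    shared background covariance invertible.
    """
    d = len(mu_bg)
    w = np.linalg.solve(cov_bg + reg * np.eye(d), x_exemplar - mu_bg)
    b = -0.5 * w @ (x_exemplar + mu_bg)   # midpoint bias, one common choice
    return w, b
```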
Wenjuan Gong. (2009). Action Priors for Human Pose Tracking by Particle Filter. Master's thesis, Bellaterra, Barcelona.
Adria Ruiz, Joost Van de Weijer, & Xavier Binefa. (2014). Regularized Multi-Concept MIL for weakly-supervised facial behavior categorization. In 25th British Machine Vision Conference.
Abstract: We address the problem of estimating high-level semantic labels for videos of recorded people by means of analysing their facial expressions. This problem, to which we refer as facial behavior categorization, is a weakly-supervised learning problem where we do not have access to frame-by-frame facial gesture annotations; only weak labels at the video level are available. Therefore, the goal is to learn a set of discriminative expressions and how they determine the video weak-labels. Facial behavior categorization can be posed as a Multi-Instance-Learning (MIL) problem and we propose a novel MIL method called Regularized Multi-Concept MIL (RMC-MIL) to solve it. In contrast to previous approaches applied in facial behavior analysis, RMC-MIL follows a Multi-Concept assumption which allows different facial expressions (concepts) to contribute differently to the video label. Moreover, to handle the high-dimensional nature of facial descriptors, RMC-MIL uses a discriminative approach to model the concepts and structured sparsity regularization to discard non-informative features. RMC-MIL is posed as a convex-constrained optimization problem where all the parameters are jointly learned using the Projected-Quasi-Newton method. In our experiments, we use two public datasets to show the advantages of the Regularized Multi-Concept approach and its improvement over existing MIL methods. RMC-MIL outperforms state-of-the-art results on the UNBC dataset for pain detection.
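The MIL setting this work builds on can be illustrated in a few lines: a video (bag) is scored by its best-matching frame (instance), so a positive weak label only asserts that at least one frame exhibits the concept. This max-pooled single-concept score is the standard MIL assumption, shown below as a toy sketch; RMC-MIL's multi-concept model, sparsity regularization and Projected-Quasi-Newton solver are not reproduced here.

```python
import numpy as np

def bag_score(frames, concept_w, concept_b):
    """frames: (n_frames, n_features) facial descriptors for one video."""
    instance_scores = frames @ concept_w + concept_b
    # The bag (video) score is driven by its best-matching instance (frame):
    # a positive video needs only one frame to express the concept.
    return instance_scores.max()
```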
Ferran Poveda. (2013). Computer Graphics and Vision Techniques for the Study of the Muscular Fiber Architecture of the Myocardium (Debora Gil, & Enric Marti, Eds.). Ph.D. thesis.
Miquel Ferrer, I. Bardaji, Ernest Valveny, Dimosthenis Karatzas, & Horst Bunke. (2013). Median Graph Computation by Means of Graph Embedding into Vector Spaces. In Yun Fu, & Yungian Ma (Eds.), Graph Embedding for Pattern Analysis (pp. 45–72). Springer New York.
Abstract: In pattern recognition [8, 14], a key issue to be addressed when designing a system is how to represent input patterns. Feature vectors are a common option: a set of numerical features describing relevant properties of the pattern is computed and arranged in vector form. The main advantages of this kind of representation are computational simplicity and a sound mathematical foundation; a large number of operations are available to work with vectors, and a large repository of algorithms for pattern analysis and classification exists. However, the simple structure of feature vectors might not be the best option for complex patterns, where non-numerical features or relations between different parts of the pattern become relevant.
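The embedding route to the median graph developed in this chapter can be sketched compactly: map each graph to a vector of dissimilarities to prototype graphs, take the median in vector space, and return to graph space. The sketch below assumes a graph edit distance function `ged` is provided, and recovers the set median (the input graph closest to the vector-space median) rather than the generalized median.

```python
import numpy as np

def median_graph(graphs, prototypes, ged):
    # Dissimilarity embedding: one coordinate per prototype graph.
    X = np.array([[ged(g, p) for p in prototypes] for g in graphs])
    # Median computed where it is easy: in the vector space.
    median_vec = np.median(X, axis=0)
    # Back in graph space, pick the set member whose embedding is nearest
    # to the vector-space median.
    idx = np.argmin(np.linalg.norm(X - median_vec, axis=1))
    return graphs[idx]
```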
Mirko Arnold, Anarta Ghosh, Gerard Lacey, Stephen Patchett, & Hugh Mulcahy. (2009). Indistinct frame detection in colonoscopy videos. In Machine Vision and Image Processing Conference (pp. 47–52).
A. Kesidis, & Dimosthenis Karatzas. (2014). Logo and Trademark Recognition. In D. Doermann, & K. Tombre (Eds.), Handbook of Document Image Processing and Recognition (Vol. D, pp. 591–646). Springer London.
Abstract: The importance of logos and trademarks in today's society is indisputable, variably seen in a positive light as a valuable service for consumers or in a negative one as a catalyst of ever-increasing consumerism. This chapter discusses the technical approaches for enabling machines to work with logos, looking into the latest methodologies for logo detection, localization, representation, recognition, retrieval, and spotting in a variety of media. This analysis is presented in the context of three different applications covering the complete depth and breadth of state-of-the-art techniques. These are trademark retrieval systems, logo recognition in document images, and logo detection and removal in images and videos. This chapter, due to the very nature of logos and trademarks, brings together various facets of document image analysis spanning graphical and textual content, while it links document image analysis to other computer vision domains, especially when it comes to the analysis of real-scene videos and images.
Keywords: Logo recognition; Logo removal; Logo spotting; Trademark registration; Trademark retrieval systems
Mirko Arnold, Anarta Ghosh, Glen Doherty, Hugh Mulcahy, Stephen Patchett, & Gerard Lacey. (2013). Towards Automatic Direct Observation of Procedure and Skill (DOPS) in Colonoscopy. In Proceedings of the International Conference on Computer Vision Theory and Applications (pp. 48–53).
Stefan Ameling, Stephan Wirth, Dietrich Paulus, Gerard Lacey, & Fernando Vilariño. (2009). Texture-based Polyp Detection in Colonoscopy. In Proc. Bildverarbeitung für die Medizin.
Christophe Rigaud, Dimosthenis Karatzas, Jean-Christophe Burie, & Jean-Marc Ogier. (2013). Speech balloon contour classification in comics. In 10th IAPR International Workshop on Graphics Recognition.
Abstract: Comic book digitization combined with subsequent comic book understanding creates a variety of new applications, including mobile reading and data mining. Document understanding in this domain is challenging, as comics are semi-structured documents combining semantically important graphical and textual parts. In this work we detail a novel approach for classifying speech balloons in scanned comic book pages based on their contour time series.
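The "contour time series" idea can be illustrated briefly: a closed balloon contour is turned into a 1-D signal (boundary-to-centroid distance as a function of position along the contour), which can then be fed to a standard classifier. The centroid-distance descriptor and the fixed-length resampling below are common choices assumed for this sketch, not necessarily the exact signature used in the paper.

```python
import numpy as np

def contour_time_series(contour, n_samples=128):
    """contour: (N, 2) array of (x, y) boundary points in traversal order."""
    centroid = contour.mean(axis=0)
    radii = np.linalg.norm(contour - centroid, axis=1)
    # Resample to a fixed length so contours of different sizes are
    # comparable, then normalize out scale.
    idx = np.linspace(0, len(radii) - 1, n_samples).astype(int)
    series = radii[idx]
    return series / (series.max() + 1e-8)
```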