|
Records |
Links |
|
Author |
Cesar Isaza; Joaquin Salas; Bogdan Raducanu |
|
|
Title |
Toward the Detection of Urban Infrastructure's Edge Shadows |
Type |
Conference Article |
|
Year |
2010 |
Publication |
12th International Conference on Advanced Concepts for Intelligent Vision Systems |
Abbreviated Journal |
|
|
|
Volume |
6474 |
Issue |
I |
Pages |
30–37 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we propose a novel technique to detect the shadows cast by urban infrastructure, such as buildings, billboards, and traffic signs, using a sequence of images taken from a fixed camera. In our approach, we compute two different background models in parallel: one for the edges and one for the reflected light intensity. An algorithm is proposed to train the system to distinguish between moving edges in general and edges that belong to static objects, creating an edge background model. Then, during operation, a background intensity model allows us to separate moving from static objects. The edges included in the moving objects and those that belong to the edge background model are subtracted from the current image edges. The remaining edges are the ones cast by urban infrastructure. Our method is tested on a typical crossroad scene, and the results show that the approach is sound and promising. |
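The dual background-model pipeline described in this abstract can be sketched in a few lines. This is an illustrative toy version, not the authors' implementation; the function names, the learning rate `alpha`, and the thresholds `tau_int` and `tau_edge` are invented for the example.

```python
import numpy as np

def update_models(frame, edges, bg_int, edge_count, alpha=0.05):
    """One training step for the two parallel background models: a
    running-average intensity background and an edge-persistence counter
    that accumulates evidence for edges belonging to static objects."""
    bg_int = (1 - alpha) * bg_int + alpha * frame  # intensity model
    edge_count = edge_count + edges                # edge background evidence
    return bg_int, edge_count

def infrastructure_edges(edges, bg_int, frame, edge_count, n_frames,
                         tau_int=25, tau_edge=0.8):
    """Keep only edges that are neither on moving objects (large intensity
    deviation from the background) nor in the edge background model
    (edges persistent across the training frames)."""
    moving = np.abs(frame - bg_int) > tau_int
    static_edges = (edge_count / n_frames) > tau_edge
    return edges & ~moving & ~static_edges
```
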
|
|
Address |
Sydney, Australia |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
Blanc-Talon et al. (eds.) |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-17687-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACIVS |
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ ISR2010 |
Serial |
1458 |
|
Permanent link to this record |
|
|
|
|
Author |
Muhammad Muzzamil Luqman; Thierry Brouard; Jean-Yves Ramel; Josep Llados |
|
|
Title |
Vers une approche floue d'encapsulation de graphes: application à la reconnaissance de symboles |
Type |
Conference Article |
|
Year |
2010 |
Publication |
Colloque International Francophone sur l'Écrit et le Document |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
169-184 |
|
|
Keywords |
Fuzzy interval; Graph embedding; Bayesian network; Symbol recognition |
|
|
Abstract |
We present a new methodology for symbol recognition that employs a structural approach for representing visual associations in symbols and a statistical classifier for recognition. A graphic symbol is vectorized, its topological and geometrical details are encoded by an attributed relational graph, and a signature is computed for it. Data-adapted fuzzy intervals are introduced to address the sensitivity of structural representations to noise. The joint probability distribution of signatures is encoded by a Bayesian network, which serves as a mechanism for pruning irrelevant features and choosing a subset of interesting features from the structural signatures of the underlying symbol set, and is deployed in a supervised learning scenario for recognizing query symbols. Experimental results on pre-segmented 2D linear architectural and electronic symbols from the GREC databases are presented. |
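As a side note on the "data-adapted fuzzy intervals" mentioned above, a trapezoidal membership function is one common way to realize a fuzzy interval; the sketch below is generic and does not reproduce the paper's parametrization.

```python
def fuzzy_membership(x, a, b, c, d):
    """Trapezoidal fuzzy interval: full membership on [b, c], linear
    fall-off to zero outside (a, d). Such intervals can absorb small
    perturbations (noise) in a structural signature's numeric features."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)
```
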
|
|
Address |
Sousse, Tunisia |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CIFED |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
DAG @ dag @ LBR2010a |
Serial |
1293 |
|
Permanent link to this record |
|
|
|
|
Author |
Herve Locteau; Sebastien Mace; Ernest Valveny; Salvatore Tabbone |
|
|
Title |
Extraction des pièces d'un plan d'habitation |
Type |
Conference Article |
|
Year |
2010 |
Publication |
Colloque International Francophone sur l'Écrit et le Document |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1–12 |
|
|
Keywords |
|
|
|
Abstract |
In this article, a method to extract the rooms of an architectural floor plan image is described. We first present a line detection algorithm to extract long lines in the image. These lines are analyzed to identify the existing walls. From this point, room extraction can be seen as a classical segmentation task in which each region corresponds to a room. The chosen resolution strategy consists in recursively decomposing the image until nearly convex regions are obtained. Since the notion of convexity is difficult to quantify, and the selection of separation lines can also be rough, we take advantage of knowledge associated with architectural floor plans in order to obtain mainly rectangular rooms. Preliminary tests on a set of real documents show promising results. |
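One way to make "nearly convex" quantitative, shown purely for illustration (the paper's actual criterion may differ), is the ratio of a region's area to the area of its convex hull:

```python
def convexity(polygon):
    """Convexity score for a simple polygon given as (x, y) vertex tuples
    in order: region area divided by convex-hull area (1.0 = convex)."""
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    def hull(pts):  # Andrew's monotone chain convex hull
        pts = sorted(set(pts))
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    def area(poly):  # shoelace formula
        n = len(poly)
        return abs(sum(poly[i][0] * poly[(i+1) % n][1]
                       - poly[(i+1) % n][0] * poly[i][1]
                       for i in range(n))) / 2

    return area(polygon) / area(hull(polygon))
```

An L-shaped room scores below 1, so it would be decomposed further; a rectangular room scores exactly 1 and the recursion stops.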
|
|
Address |
Sousse, Tunisia |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CIFED |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
DAG @ dag @ LMV2010 |
Serial |
1440 |
|
Permanent link to this record |
|
|
|
|
Author |
Partha Pratim Roy; Umapada Pal; Josep Llados |
|
|
Title |
Seal Object Detection in Document Images using GHT of Local Component Shapes |
Type |
Conference Article |
|
Year |
2010 |
Publication |
25th ACM Symposium on Applied Computing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
23–27 |
|
|
Keywords |
|
|
|
Abstract |
Due to noise, overlapped text/signatures, and their multi-oriented nature, seal (stamp) object detection poses a difficult challenge. This paper deals with the automatic detection of seals in documents with cluttered backgrounds. Here, a seal object is characterized by scale- and rotation-invariant spatial feature descriptors (distance and angular position) computed from the recognition results of individual connected components (characters). Recognition of the multi-scale and multi-oriented components is done using a Support Vector Machine classifier. A Generalized Hough Transform (GHT) is used to detect the seal: votes are cast for possible locations of the seal object in a document based on the spatial feature descriptors of component pairs. The peak of votes in the GHT accumulator validates the hypothesis locating the seal object in the document. Experimental results show that the method is efficient at locating seal instances of arbitrary shape and orientation in documents. |
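The pairwise voting idea can be sketched with a minimal GHT-style accumulator. This is an illustrative reduction, not the paper's exact formulation: `model` is assumed to map a pair of recognized character labels to the seal centre's offset from the pair midpoint, learned offline.

```python
import numpy as np
from itertools import combinations

def detect_seal(components, model, accum_shape, cell=20):
    """components: list of (label, (x, y)) recognized characters.
    Every pair whose sorted label pair is in `model` casts a vote for
    the seal centre; the accumulator peak gives the seal hypothesis."""
    acc = np.zeros(accum_shape, dtype=int)
    for (la, pa), (lb, pb) in combinations(components, 2):
        key = tuple(sorted((la, lb)))
        if key not in model:
            continue
        dx, dy = model[key]                       # offset learned offline
        mx, my = (pa[0] + pb[0]) / 2, (pa[1] + pb[1]) / 2
        i, j = int((my + dy) // cell), int((mx + dx) // cell)
        if 0 <= i < accum_shape[0] and 0 <= j < accum_shape[1]:
            acc[i, j] += 1                        # cast a vote
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return peak, acc[peak]
```
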
|
|
Address |
Sierre, Switzerland |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
SAC |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
DAG @ dag @ RPL2010a |
Serial |
1291 |
|
Permanent link to this record |
|
|
|
|
Author |
Jaume Gibert; Ernest Valveny; Horst Bunke |
|
|
Title |
Graph of Words Embedding for Molecular Structure-Activity Relationship Analysis |
Type |
Conference Article |
|
Year |
2010 |
Publication |
15th Iberoamerican Congress on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
6419 |
Issue |
|
Pages |
30–37 |
|
|
Keywords |
|
|
|
Abstract |
Structure-activity relationship analysis aims at discovering the chemical activity of molecular compounds based on their structure. In this article we make use of a particular graph representation of molecules and propose a new graph embedding procedure to solve the problem of structure-activity relationship analysis. The embedding essentially arranges a molecule in the form of a vector by considering the frequencies of the atoms that appear and the frequencies of the covalent bonds between them. Results on two benchmark databases show the effectiveness of the proposed technique in terms of recognition accuracy while avoiding high operational costs in the transformation. |
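The frequency-based embedding the abstract outlines can be sketched as follows; the vocabularies and the bond encoding are hypothetical simplifications for illustration.

```python
from collections import Counter

def molecule_embedding(atoms, bonds, atom_vocab, bond_vocab):
    """atoms: element label per node, e.g. ['C', 'C', 'O'].
    bonds: (i, j, order) covalent bonds between atom indices.
    The embedding concatenates atom-label frequencies with frequencies
    of each (unordered label pair, bond order) combination."""
    node_freq = Counter(atoms)
    edge_freq = Counter((tuple(sorted((atoms[i], atoms[j]))), order)
                        for i, j, order in bonds)
    return ([node_freq[label] for label in atom_vocab] +
            [edge_freq[key] for key in bond_vocab])
```
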
|
|
Address |
Sao Paulo, Brazil |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-16686-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CIARP |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
DAG @ dag @ GVB2010 |
Serial |
1462 |
|
Permanent link to this record |
|
|
|
|
Author |
Jose Manuel Alvarez; Theo Gevers; Antonio Lopez |
|
|
Title |
3D Scene Priors for Road Detection |
Type |
Conference Article |
|
Year |
2010 |
Publication |
23rd IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
57–64 |
|
|
Keywords |
road detection |
|
|
Abstract |
Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning, and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features and assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. The contextual cues used include horizon lines, vanishing points, 3D scene layout, and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual, and temporal cues are combined in a Bayesian framework to classify road sequences. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban, and highway). Further, the combined cues outperform each individual cue. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods. |
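The Bayesian fusion of weak cues can be illustrated with a naive-Bayes sketch that assumes the cues are conditionally independent given the class; this is a generic illustration, not the paper's exact model.

```python
import numpy as np

def combine_cues(likelihoods, prior=0.5):
    """likelihoods: list of (P(cue | road), P(cue | background)) values,
    one pair per weak cue (each may be a per-pixel array or a scalar).
    Returns the posterior probability of 'road' after fusing all cues."""
    p_road = prior * np.prod([p for p, _ in likelihoods], axis=0)
    p_bg = (1 - prior) * np.prod([q for _, q in likelihoods], axis=0)
    return p_road / (p_road + p_bg)
```

A single uninformative cue leaves the prior unchanged, while several weak cues that agree push the posterior well past any individual cue.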
|
|
Address |
San Francisco; CA; USA; June 2010 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1063-6919 |
ISBN |
978-1-4244-6984-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
ADAS;ISE |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ AGL2010a |
Serial |
1302 |
|
Permanent link to this record |
|
|
|
|
Author |
Mohammad Rouhani; Angel Sappa |
|
|
Title |
Relaxing the 3L Algorithm for an Accurate Implicit Polynomial Fitting |
Type |
Conference Article |
|
Year |
2010 |
Publication |
23rd IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3066-3072 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a novel method to increase the accuracy of linear fitting of implicit polynomials. The proposed method is based on the 3L algorithm philosophy. The novelty lies in the relaxation of the additional constraints imposed by the 3L algorithm. Hence, the accuracy of the final solution is increased due to the proper adjustment of the expected values in the aforementioned additional constraints. Although iterative, the proposed approach solves the fitting problem within a linear framework, which is independent of threshold tuning. Experimental results, both in 2D and 3D, showing improvements in the accuracy of the fitting are presented. Comparisons are provided with both state-of-the-art algorithms and a geometry-based (non-linear) fitting approach, which is used as ground truth. |
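A 3L-style linear fit can be sketched as a least-squares problem in which the polynomial is asked to be zero on the data and ±c on two level sets offset along the point normals (the constraints whose expected values the paper adjusts). The sketch below is a generic 3L-flavoured fit, not the relaxed algorithm itself; `eps` and `c` are illustrative parameters.

```python
import numpy as np

def fit_implicit_poly(pts, normals, degree=2, eps=0.05, c=1.0):
    """Fit f(x, y) = 0 to 2D points by solving one linear least-squares
    system with three constraint layers: f = 0 on the data, f = -c on an
    inner offset layer, and f = +c on an outer offset layer."""
    def design(p):  # monomial basis x^i * y^j with i + j <= degree
        x, y = p[:, 0], p[:, 1]
        return np.column_stack([x**i * y**j
                                for i in range(degree + 1)
                                for j in range(degree + 1 - i)])
    inner, outer = pts - eps * normals, pts + eps * normals
    M = np.vstack([design(pts), design(inner), design(outer)])
    b = np.concatenate([np.zeros(len(pts)),
                        -c * np.ones(len(pts)),
                        c * np.ones(len(pts))])
    coef, *_ = np.linalg.lstsq(M, b, rcond=None)
    return coef, design
```

Fitting points on a unit circle with outward normals, the recovered polynomial comes out negative inside the curve and positive outside it.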
|
|
Address |
San Francisco; CA; USA; June 2010 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1063-6919 |
ISBN |
978-1-4244-6984-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ RoS2010a |
Serial |
1303 |
|
Permanent link to this record |
|
|
|
|
Author |
Javier Marin; David Vazquez; David Geronimo; Antonio Lopez |
|
|
Title |
Learning Appearance in Virtual Scenarios for Pedestrian Detection |
Type |
Conference Article |
|
Year |
2010 |
Publication |
23rd IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
137–144 |
|
|
Keywords |
Pedestrian Detection; Domain Adaptation |
|
|
Abstract |
Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers using HOG and linear SVM. We test these classifiers on a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real-world images acquired from a moving car. The obtained result is compared with that of a classifier learnt using samples coming from real images. The comparison reveals that, although the virtual samples were not specially selected, both virtual- and real-based training give rise to classifiers of similar performance. |
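For context, the HOG-plus-linear-SVM pipeline the paper builds on starts from a descriptor like the minimal sketch below (unsigned gradients, per-cell histograms, global normalization; real HOG adds block-level normalization), which would then feed a linear SVM trained on the virtual-world crops.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG-style descriptor: magnitude-weighted histograms of
    unsigned gradient orientation, one histogram per cell, L2-normalized
    over the whole window."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180      # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            hist, _ = np.histogram(ang[i:i+cell, j:j+cell],
                                   bins=bins, range=(0, 180),
                                   weights=mag[i:i+cell, j:j+cell])
            feats.append(hist)
    f = np.concatenate(feats)
    n = np.linalg.norm(f)
    return f / n if n > 0 else f
```
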
|
|
Address |
San Francisco; CA; USA; June 2010 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
Learning Appearance in Virtual Scenarios for Pedestrian Detection |
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1063-6919 |
ISBN |
978-1-4244-6984-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ MVG2010 |
Serial |
1304 |
|
Permanent link to this record |
|
|
|
|
Author |
Fadi Dornaika; Bogdan Raducanu |
|
|
Title |
Single Snapshot 3D Head Pose Initialization for Tracking in Human Robot Interaction Scenario |
Type |
Conference Article |
|
Year |
2010 |
Publication |
1st International Workshop on Computer Vision for Human-Robot Interaction |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
32–39 |
|
|
Keywords |
1st International Workshop on Computer Vision for Human-Robot Interaction, in conjunction with IEEE CVPR 2010 |
|
|
Abstract |
This paper presents an automatic 3D head pose initialization scheme for a real-time face tracker, with application to human-robot interaction. It has two main contributions. First, we propose an automatic 3D head pose and person-specific face shape estimation based on a 3D deformable model. The proposed approach serves to initialize our real-time 3D face tracker. What makes this contribution very attractive is that the initialization step can cope with faces under arbitrary pose, so it is not limited to near-frontal views. Second, the previous framework is used to develop an application in which the orientation of an AIBO's camera can be controlled through the imitation of the user's head pose. In our scenario, this application is used to build panoramic images from overlapping snapshots. Experiments on real videos confirm the robustness and usefulness of the proposed methods. |
|
|
Address |
San Francisco; CA; USA; June 2010 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2160-7508 |
ISBN |
978-1-4244-7029-7 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPRW |
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ DoR2010a |
Serial |
1309 |
|
Permanent link to this record |
|
|
|
|
Author |
David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo |
|
|
Title |
Fast and Robust Object Segmentation with the Integral Linear Classifier |
Type |
Conference Article |
|
Year |
2010 |
Publication |
23rd IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1046–1053 |
|
|
Keywords |
|
|
|
Abstract |
We propose an efficient method, built on the popular Bag of Features approach, that obtains robust multiclass pixel-level object segmentation of an image in less than 500 ms, with results comparable to or better than most state-of-the-art methods. We introduce the Integral Linear Classifier (ILC), which can readily obtain the classification score for any image sub-window with only 6 additions and 1 product, by fusing the accumulation and classification steps in a single operation. In order to design a method as efficient as possible, our building blocks are carefully selected from the quickest in the state of the art. More precisely, we evaluate the performance of three popular local descriptors, which can be computed very efficiently using integral images, and two fast quantization methods: Hierarchical K-Means and the Extremely Randomized Forest. Finally, we explore the utility of adding spatial bins to the Bag of Features histograms and that of cascade classifiers to improve the obtained segmentation. Our method is compared to the state of the art on the difficult Graz-02 and PASCAL 2007 Segmentation Challenge datasets. |
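The core ILC idea (fold the linear classifier's weights into a per-pixel score map, then use its integral image so any sub-window's score is a constant-time corner combination) can be sketched as follows; the area normalization and the variable names are illustrative assumptions.

```python
import numpy as np

def integral_linear_classifier(words, weights, bias):
    """words: per-pixel visual-word indices. weights/bias: a trained
    linear classifier over Bag-of-Features histograms. The classifier
    weight of each pixel's word becomes a score map whose integral
    image answers any sub-window query in constant time."""
    score = weights[words]                         # per-pixel contribution
    ii = np.pad(score.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

    def window_score(y0, x0, y1, x1):              # half-open [y0,y1)x[x0,x1)
        s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
        return s / ((y1 - y0) * (x1 - x0)) + bias  # area-normalized + bias
    return window_score
```
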
|
|
Address |
San Francisco; CA; USA; June 2010 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1063-6919 |
ISBN |
978-1-4244-6984-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ ARL2010a |
Serial |
1311 |
|
Permanent link to this record |
|
|
|
|
Author |
Michal Drozdzal; Laura Igual; Petia Radeva; Jordi Vitria; Carolina Malagelada; Fernando Azpiroz |
|
|
Title |
Aligning Endoluminal Scene Sequences in Wireless Capsule Endoscopy |
Type |
Conference Article |
|
Year |
2010 |
Publication |
IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
117–124 |
|
|
Keywords |
|
|
|
Abstract |
Intestinal motility analysis is an important examination for the detection of various intestinal malfunctions. One of the big challenges of automatic motility analysis is how to compare sequences of images and extract dynamic patterns, taking into account the high deformability of the intestine wall as well as the capsule motion. From a clinical point of view, the ability to align endoluminal scene sequences will help to find regions of similar intestinal activity and in this way provide valuable information on intestinal motility problems. This work, for the first time, addresses the problem of aligning endoluminal sequences taking into account the motion and structure of the intestine. To describe motility in a sequence, we propose different descriptors based on the SIFT Flow algorithm, namely: (1) Histograms of SIFT Flow Directions to describe the flow course, (2) SIFT Descriptors to represent the intestine structure in the image, and (3) SIFT Flow Magnitude to quantify intestine deformation. We show that merging all three descriptors provides robust information for sequence description in terms of motility. Moreover, we develop a novel methodology to rank the intestinal sequences based on expert feedback about the relevance of the results. The experimental results show that the selected descriptors are useful for alignment and similarity description, and the proposed method allows the analysis of WCE sequences. |
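Descriptor (1), the histogram of flow directions, can be sketched as below; the flow field is assumed to come from the SIFT Flow algorithm and is represented here simply as an (H, W, 2) array of displacements, and the magnitude weighting is an illustrative choice.

```python
import numpy as np

def flow_direction_histogram(flow, bins=8):
    """Magnitude-weighted histogram of flow directions for one frame
    pair; flow is an (H, W, 2) array of (dx, dy) displacements."""
    ang = np.arctan2(flow[..., 1], flow[..., 0]) % (2 * np.pi)
    mag = np.hypot(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist
```
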
|
|
Address |
San Francisco; CA; USA; June 2010 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2160-7508 |
ISBN |
978-1-4244-7029-7 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MMBIA |
|
|
Notes |
OR;MILAB;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ DIR2010 |
Serial |
1316 |
|
Permanent link to this record |
|
|
|
|
Author |
Antonio Hernandez; Miguel Reyes; Sergio Escalera; Petia Radeva |
|
|
Title |
Spatio-Temporal GrabCut human segmentation for face and pose recovery |
Type |
Conference Article |
|
Year |
2010 |
Publication |
IEEE International Workshop on Analysis and Modeling of Faces and Gestures |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
33–40 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we present a fully automatic Spatio-Temporal GrabCut human segmentation methodology. GrabCut initialization is performed by HOG-based subject detection, face detection, and a skin color model for seed initialization. Spatial information is included by means of Mean Shift clustering, whereas temporal coherence is considered through the history of the Gaussian Mixture Models. Moreover, human segmentation is combined with Shape and Active Appearance Models to perform full face and pose recovery. Results on public datasets, as well as on our own human action dataset, show robust segmentation and recovery of both face and pose using the presented methodology. |
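The skin-color seeding step can be sketched as a Mahalanobis test against a Gaussian skin model; `mu` and `cov` would be fitted offline on labelled skin pixels, and the radius `tau` is an illustrative choice.

```python
import numpy as np

def skin_seed_mask(img, mu, cov, tau=3.0):
    """Mark as foreground seeds the pixels whose colour lies within a
    Mahalanobis radius tau of a Gaussian skin-colour model (mu, cov).
    img is an (H, W, 3) array; the result is an (H, W) boolean mask."""
    d = img.reshape(-1, 3) - mu
    m2 = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return (m2 < tau ** 2).reshape(img.shape[:2])
```
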
|
|
Address |
San Francisco; CA; USA; June 2010 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2160-7508 |
ISBN |
978-1-4244-7029-7 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
AMFG |
|
|
Notes |
MILAB;HUPBA |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ HRE2010 |
Serial |
1362 |
|
Permanent link to this record |
|
|
|
|
Author |
Mario Rojas; David Masip; A. Todorov; Jordi Vitria |
|
|
Title |
Automatic Point-based Facial Trait Judgments Evaluation |
Type |
Conference Article |
|
Year |
2010 |
Publication |
23rd IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2715–2720 |
|
|
Keywords |
|
|
|
Abstract |
Humans constantly evaluate the personalities of other people using their faces. Facial trait judgments have been studied in the psychological field and have been determined to influence important social outcomes of our lives, such as election outcomes and social relationships. Recent work on textual descriptions of faces has shown that trait judgments are highly correlated. Further, behavioral studies suggest that two orthogonal dimensions, valence and dominance, can describe the basis of human judgments from faces. In this paper, we use a corpus of behavioral data of judgments on different trait dimensions to automatically learn a trait predictor from facial pixel images. We study whether trait evaluations performed by humans can be learned using machine learning classifiers and then used in automatic evaluations of new facial images. The experiments performed using local point-based descriptors show promising results in the evaluation of the main traits. |
|
|
Address |
San Francisco, CA, USA |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1063-6919 |
ISBN |
978-1-4244-6984-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RMT2010 |
Serial |
1282 |
|
Permanent link to this record |
|
|
|
|
Author |
Josep M. Gonfaus; Xavier Boix; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez |
|
|
Title |
Harmony Potentials for Joint Classification and Segmentation |
Type |
Conference Article |
|
Year |
2010 |
Publication |
23rd IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3280–3287 |
|
|
Keywords |
|
|
|
Abstract |
Hierarchical conditional random fields have been successfully applied to object segmentation. One reason is their ability to incorporate contextual information at different scales. However, these models do not allow multiple labels to be assigned to a single node. At higher scales in the image, this yields an oversimplified model, since multiple classes can reasonably be expected to appear within one region. This simplified model especially limits the impact that observations at larger scales may have on the CRF model. Neglecting the information at larger scales is undesirable, since class-label estimates based on these scales are more reliable than those at smaller, noisier scales. To address this problem, we propose a new potential, called the harmony potential, which can encode any possible combination of class labels. We propose an effective sampling strategy that renders the underlying optimization problem tractable. Results show that our approach obtains state-of-the-art results on two challenging datasets: PASCAL VOC 2009 and MSRC-21. |
|
|
Address |
San Francisco, CA, USA |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1063-6919 |
ISBN |
978-1-4244-6984-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
ADAS;CIC;ISE |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ GBW2010 |
Serial |
1296 |
|
Permanent link to this record |
|
|
|
|
Author |
Jose Manuel Alvarez; Felipe Lumbreras; Theo Gevers; Antonio Lopez |
|
|
Title |
Geographic Information for vision-based Road Detection |
Type |
Conference Article |
|
Year |
2010 |
Publication |
IEEE Intelligent Vehicles Symposium |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
621–626 |
|
|
Keywords |
road detection |
|
|
Abstract |
Road detection is a vital task for the development of autonomous vehicles. The knowledge of the free road surface ahead of the target vehicle can be used for autonomous driving and road departure warning, as well as to support advanced driver assistance systems like vehicle or pedestrian detection. Using vision to detect the road has several advantages over other sensors: richness of features, easy integration, low cost, and low power consumption. Common vision-based road detection approaches use low-level features (such as color or texture) as visual cues to group pixels exhibiting similar properties. However, it is difficult to devise a perfect clustering algorithm, since roads lie in outdoor scenarios imaged from a mobile platform. In this paper, we propose a novel high-level approach to vision-based road detection based on geographical information. The key idea of the algorithm is to exploit geographical information to provide a rough detection of the road. Then, this segmentation is refined at low level using color information to provide the final result. The results presented show the validity of our approach. |
|
|
Address |
San Diego; CA; USA |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IV |
|
|
Notes |
ADAS;ISE |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ ALG2010 |
Serial |
1428 |
|
Permanent link to this record |