Records | |||||
---|---|---|---|---|---|
Author | Salvatore Tabbone; Oriol Ramos Terrades; S. Barrat | ||||
Title | Histogram of Radon transform. A useful descriptor for shape retrieval | Type | Conference Article | ||
Year | 2008 | Publication | 19th International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1-4 | ||
Keywords | |||||
Abstract | |||||
Address | Tampa, Florida | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ TRB2008 | Serial | 1876 | ||
Permanent link to this record | |||||
Author | Jose Carlos Rubio; Joan Serrat; Antonio Lopez; Daniel Ponsa | ||||
Title | Multiple target tracking for intelligent headlights control | Type | Journal Article | ||
Year | 2012 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 13 | Issue | 2 | Pages | 594-605 |
Keywords | Intelligent Headlights | ||||
Abstract | Intelligent vehicle lighting systems aim at automatically regulating the headlights' beam to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software that is able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In a previous work, we have devised a set of specialized supervised classifiers to make such decisions based on blob features related to their intensity and shape. Despite the overall good performance, there remain challenging cases that have yet to be solved: notably, faint and tiny blobs corresponding to quite distant vehicles. In fact, for such distant blobs, classification decisions can only be taken after observing them during a few frames. Hence, incorporating tracking could improve the overall lighting system performance by enforcing the temporal consistency of the classifier decision. Accordingly, this paper focuses on the problem of constructing blob tracks, which is actually one of multiple-target tracking (MTT), but under two special conditions: we have to deal with frequent occlusions, as well as blob splits and merges. We approach it in a novel way by formulating the problem as maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, we also show that the classification performance of the problematic blobs improves due to the proposed MTT algorithm. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1524-9050 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ RLP2012; ADAS @ adas @ rsl2012g | Serial | 1877 | ||
Permanent link to this record | |||||
Author | Yainuvis Socarras; Sebastian Ramos; David Vazquez; Antonio Lopez; Theo Gevers | ||||
Title | Adapting Pedestrian Detection from Synthetic to Far Infrared Images | Type | Conference Article | ||
Year | 2013 | Publication | ICCV Workshop on Visual Domain Adaptation and Dataset Bias | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Domain Adaptation; Far Infrared; Pedestrian Detection | ||||
Abstract | We present different techniques to adapt a pedestrian classifier trained with synthetic images and the corresponding automatically generated annotations to operate with far infrared (FIR) images. The information contained in this kind of image allows us to develop a robust pedestrian detector invariant to extreme illumination changes. | ||||
Address | Sydney; Australia; December 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Sydney, Australia | Editor | |
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCVW-VisDA | ||
Notes | ADAS; 600.054; 600.055; 600.057; 601.217;ISE | Approved | no | ||
Call Number | ADAS @ adas @ SRV2013 | Serial | 2334 | ||
Permanent link to this record | |||||
Author | V.C. Kieu; Alicia Fornes; M. Visani; N. Journet; Anjan Dutta | ||||
Title | The ICDAR/GREC 2013 Music Scores Competition on Staff Removal | Type | Conference Article | ||
Year | 2013 | Publication | 10th IAPR International Workshop on Graphics Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Competition; Music scores; Staff Removal | ||||
Abstract | The first competition on music scores that was organized at ICDAR and GREC in 2011 awoke the interest of researchers, who participated in both the staff removal and writer identification tasks. In this second edition, we propose a staff removal competition where we simulate old music scores. Thus, we have created a new set of images, which contain noise and 3D distortions. This paper describes the distortion methods, the metrics, the participants' methods and the obtained results. | ||||
Address | Bethlehem; PA; USA; August 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | GREC | ||
Notes | DAG; 600.045; 600.061 | Approved | no | ||
Call Number | Admin @ si @ KFV2013 | Serial | 2337 | ||
Permanent link to this record | |||||
Author | M. Visani; V.C. Kieu; Alicia Fornes; N. Journet | ||||
Title | The ICDAR 2013 Music Scores Competition: Staff Removal | Type | Conference Article | ||
Year | 2013 | Publication | 12th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1439-1443 | ||
Keywords | |||||
Abstract | The first competition on music scores that was organized at ICDAR in 2011 awoke the interest of researchers, who participated in both the staff removal and writer identification tasks. In this second edition, we focus on the staff removal task and simulate a real case scenario: old music scores. For this purpose, we have generated a new set of images using two kinds of degradations: local noise and 3D distortions. This paper describes the dataset, the distortion methods, the evaluation metrics, the participants' methods and the obtained results. | ||||
Address | Washington; USA; August 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1520-5363 | ISBN | Medium | ||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.045; 600.061 | Approved | no | ||
Call Number | Admin @ si @ VKF2013 | Serial | 2338 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Ana Puig; Oscar Amoros; Maria Salamo | ||||
Title | Intelligent GPGPU Classification in Volume Visualization: a framework based on Error-Correcting Output Codes | Type | Journal Article | ||
Year | 2011 | Publication | Computer Graphics Forum | Abbreviated Journal | CGF |
Volume | 30 | Issue | 7 | Pages | 2107-2115 |
Keywords | |||||
Abstract | IF JCR 1.455 2010 25/99
In volume visualization, the definition of the regions of interest is inherently an iterative trial-and-error process of finding the best parameters to classify and render the final image. Generally, the user requires a lot of expertise to analyze and edit these parameters through multi-dimensional transfer functions. In this paper, we present a framework of intelligent methods to label on-demand multiple regions of interest. These methods can be split into a two-level GPU-based labelling algorithm that computes, at rendering time, a set of labelled structures using the Machine Learning Error-Correcting Output Codes (ECOC) framework. In a pre-processing step, ECOC trains a set of Adaboost binary classifiers from a reduced pre-labelled data set. Then, at the testing stage, each classifier is independently applied on the features of a set of unlabelled samples and combined to perform multi-class labelling. We also propose an alternative representation of these classifiers that allows us to highly parallelize the testing stage. To exploit that parallelism we implemented the testing stage in GPU-OpenCL. The empirical results on different data sets for several volume structures show high computational performance and classification accuracy. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; HuPBA | Approved | no | ||
Call Number | Admin @ si @ EPA2011 | Serial | 1881 | ||
Permanent link to this record | |||||
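The ECOC decoding step described in the abstract above can be illustrated with a minimal sketch: given the outputs of a set of binary classifiers, the class whose codeword is closest in Hamming distance wins. This is only an illustration with hypothetical inputs, not the authors' Adaboost-based, GPU-OpenCL implementation.

```python
import numpy as np

def ecoc_decode(predictions, coding_matrix):
    """Assign a class label by comparing binary classifier outputs
    against each class codeword (Hamming decoding).
    predictions: length-L array of {-1, +1} dichotomizer outputs.
    coding_matrix: C x L array of {-1, +1}, one codeword per class."""
    distances = np.sum(coding_matrix != np.asarray(predictions), axis=1)
    return int(np.argmin(distances))

# One-vs-all coding for 3 classes: codeword c has +1 only at position c.
coding = 2 * np.eye(3, dtype=int) - 1
label = ecoc_decode([1, -1, -1], coding)  # closest to the codeword of class 0
```

Because each dichotomizer is applied independently, this decoding step is what the paper's alternative representation parallelizes across samples on the GPU.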
Author | Laura Igual; Joan Carles Soliva; Antonio Hernandez; Sergio Escalera; Xavier Jimenez ; Oscar Vilarroya; Petia Radeva | ||||
Title | A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder | Type | Journal Article | ||
Year | 2011 | Publication | BioMedical Engineering Online | Abbreviated Journal | BEO |
Volume | 10 | Issue | 105 | Pages | 1-23 |
Keywords | Brain caudate nucleus; segmentation; MRI; atlas-based strategy; Graph Cut framework | ||||
Abstract | Background
Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize the caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations.
Method
We present CaudateCut: a new fully-automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy function data and boundary potentials. In particular, we exploit information concerning intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure.
Results
We apply the novel CaudateCut method to the segmentation of the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as to a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion
CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1475-925X | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | Admin @ si @ ISH2011 | Serial | 1882 | ||
Permanent link to this record | |||||
Author | Fadi Dornaika; Alireza Bosaghzadeh; Bogdan Raducanu | ||||
Title | LSDA Solution Schemes for Modelless 3D Head Pose Estimation | Type | Conference Article | ||
Year | 2012 | Publication | IEEE Workshop on the Applications of Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 393-398 | ||
Keywords | |||||
Abstract | |||||
Address | Breckenridge; USA | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WACV | ||
Notes | OR;MV | Approved | no | ||
Call Number | Admin @ si @ DBR2012 | Serial | 1889 | ||
Permanent link to this record | |||||
Author | Antonio Hernandez; Carlos Primo; Sergio Escalera | ||||
Title | Automatic user interaction correction via Multi-label Graph cuts | Type | Conference Article | ||
Year | 2011 | Publication | 1st IEEE International Workshop on Human Interaction in Computer Vision (HICV), in conjunction with ICCV 2011 | Abbreviated Journal |
Volume | Issue | Pages | 1276-1281 | ||
Keywords | |||||
Abstract | Most applications in image segmentation require user interaction in order to achieve accurate results. However, users want to achieve the desired segmentation accuracy while reducing the effort of manual labelling. In this work, we extend the standard multi-label α-expansion Graph Cut algorithm so that it analyzes the interaction of the user in order to modify the object model and improve the final segmentation of objects. The approach is inspired by the fact that fast user interactions may introduce some pixel errors confusing object and background. Our results with different degrees of user interaction and input errors show the high performance of the proposed approach on a multi-label human limb segmentation problem compared with the classical α-expansion algorithm. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4673-0062-9 | Medium | ||
Area | Expedition | Conference | HICV | ||
Notes | MILAB; HuPBA | Approved | no | ||
Call Number | Admin @ si @ HPE2011 | Serial | 1892 | ||
Permanent link to this record | |||||
Author | Miguel Reyes; Gabriel Dominguez; Sergio Escalera | ||||
Title | Feature Weighting in Dynamic Time Warping for Gesture Recognition in Depth Data | Type | Conference Article | ||
Year | 2011 | Publication | 1st IEEE Workshop on Consumer Depth Cameras for Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 1182-1188 | ||
Keywords | |||||
Abstract | We present a gesture recognition approach for depth video data based on a novel Feature Weighting approach within the Dynamic Time Warping framework. Depth features from human joints are compared through video sequences using Dynamic Time Warping, and weights are assigned to features based on inter-intra class gesture variability. Feature Weighting in Dynamic Time Warping is then applied for recognizing begin-end of gestures in data sequences. The obtained results recognizing several gestures in depth data show high performance compared with the classical Dynamic Time Warping approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4673-0062-9 | Medium | ||
Area | Expedition | Conference | CDC4CV | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ RDE2011 | Serial | 1893 | ||
Permanent link to this record | |||||
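The feature-weighting idea in the abstract above can be sketched as a per-feature scaling of the DTW frame-to-frame cost. This is only a rough illustration: the weights here are hand-picked, whereas the paper derives them from inter-/intra-class gesture variability.

```python
import numpy as np

def weighted_dtw(seq_a, seq_b, weights):
    """Dynamic Time Warping distance between two sequences of feature
    vectors (frames x features), with per-feature weights scaling the
    frame-to-frame Euclidean cost."""
    w = np.asarray(weights, dtype=float)
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sqrt(np.sum(w * (seq_a[i - 1] - seq_b[j - 1]) ** 2))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Setting a feature's weight to zero makes the warping distance ignore that feature entirely, which is the mechanism that lets discriminative joints dominate the gesture comparison.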
Author | Jorge Bernal; David Vazquez (eds) | ||||
Title | Computer vision Trends and Challenges | Type | Book Whole | ||
Year | 2013 | Publication | Computer vision Trends and Challenges | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | CVCRD; Computer Vision | ||||
Abstract | This book contains the papers presented at the Eighth CVC Workshop on Computer Vision Trends and Challenges (CVCR&D'2013). The workshop was held at the Computer Vision Center (Universitat Autònoma de Barcelona) on October 25th, 2013. The CVC workshops provide an excellent opportunity for young researchers and project engineers to share new ideas and knowledge about the progress of their work, and also to discuss challenges and future perspectives. In addition, the workshop is the welcome event for new people who have recently joined the institute.
The program of CVCR&D is organized as a single-track, single-day workshop. It comprises several sessions dedicated to specific topics. For each session, a doctor working on the topic introduces the general research lines. The PhD students expose their specific research. A poster session will be held for open questions. Session topics cover the current research lines and development projects of the CVC: Medical Imaging, Color & Texture Analysis, Object Recognition, Image Sequence Evaluation, Advanced Driver Assistance Systems, Machine Vision, Document Analysis, Pattern Recognition and Applications. We want to thank all paper authors and Program Committee members. Their contribution shows that the CVC has a dynamic, active, and promising scientific community. We hope you all enjoy this Eighth workshop and we are looking forward to meeting you and new people next year in the Ninth CVCR&D. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | Jorge Bernal; David Vazquez | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940902-2-6 | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | ADAS @ adas @ BeV2013 | Serial | 2339 | ||
Permanent link to this record | |||||
Author | David Vazquez; Javier Marin; Antonio Lopez; Daniel Ponsa; David Geronimo | ||||
Title | Virtual and Real World Adaptation for Pedestrian Detection | Type | Journal Article | ||
Year | 2014 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 36 | Issue | 4 | Pages | 797-809 |
Keywords | Domain Adaptation; Pedestrian Detection | ||||
Abstract | Pedestrian detection is of paramount interest for many applications. Most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world based training can provide excellent testing accuracy in the real world, but it can also suffer from the dataset shift problem, as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as when training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.057; 600.054; 600.076 | Approved | no | ||
Call Number | ADAS @ adas @ VML2014 | Serial | 2275 | ||
Permanent link to this record | |||||
Author | Michal Drozdzal; Santiago Segui; Petia Radeva; Jordi Vitria; Laura Igual | ||||
Title | System and Method for Displaying Motility Events in an in Vivo Image Stream | Type | Patent | ||
Year | 2011 | Publication | US 61/592,786 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Given Imaging | ||||
Corporate Author | US Patent Office | Thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; OR;MV | Approved | no | ||
Call Number | Admin @ si @ DSR2011 | Serial | 1897 | ||
Permanent link to this record | |||||
Author | Patricia Marquez; Debora Gil; Aura Hernandez-Sabate | ||||
Title | Evaluation of the Capabilities of Confidence Measures for Assessing Optical Flow Quality | Type | Conference Article | ||
Year | 2013 | Publication | ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars | Abbreviated Journal | |
Volume | Issue | Pages | 624-631 | ||
Keywords | |||||
Abstract | Assessing Optical Flow (OF) quality is essential for its further use in reliable decision support systems. The absence of ground truth in such situations leads to the computation of OF Confidence Measures (CM) obtained from either input or output data. A fair comparison across the capabilities of the different CM for bounding OF error is required in order to choose the best OF-CM pair for discarding points where OF computation is not reliable. This paper presents a statistical probabilistic framework for assessing the quality of a given CM. Our quality measure is given in terms of the percentage of pixels whose OF error bound cannot be determined by CM values. We also provide statistical tools for the computation of CM values that ensure a given accuracy of the flow field. | ||||
Address | Sydney; Australia; December 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVTT:E2M | ||
Notes | IAM; ADAS; 600.044; 600.057; 601.145 | Approved | no | ||
Call Number | Admin @ si @ MGH2013b | Serial | 2351 | ||
Permanent link to this record | |||||
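The quality notion in the abstract above (the percentage of pixels whose error bound cannot be established from confidence values) can be illustrated with a simplified, non-statistical sketch. The thresholding rule and variable names below are illustrative assumptions, not the paper's probabilistic framework.

```python
import numpy as np

def undetermined_fraction(errors, confidences, error_bound):
    """Fraction of pixels that cannot be guaranteed to satisfy
    `error_bound` by thresholding the confidence measure: sort pixels
    by decreasing confidence and keep the largest prefix whose errors
    all stay within the bound; the remainder is 'undetermined'."""
    errors = np.asarray(errors, dtype=float)
    order = np.argsort(-np.asarray(confidences, dtype=float))
    within = errors[order] <= error_bound
    bad = np.flatnonzero(~within)
    kept = bad[0] if bad.size else within.size
    return 1.0 - kept / within.size
```

A good OF-CM pair drives this fraction toward zero: high confidence should reliably imply low flow error, so few pixels need to be discarded.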
Author | Laura Igual; Xavier Perez Sala; Sergio Escalera; Cecilio Angulo; Fernando De la Torre | ||||
Title | Continuous Generalized Procrustes Analysis | Type | Journal Article | ||
Year | 2014 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 47 | Issue | 2 | Pages | 659–671 |
Keywords | Procrustes analysis; 2D shape model; Continuous approach | ||||
Abstract | PR4883, PII: S0031-3203(13)00327-0
Two-dimensional shape models have been successfully applied to solve many problems in computer vision, such as object tracking, recognition, and segmentation. Typically, 2D shape models are learned from a discrete set of image landmarks (corresponding to projections of 3D points of an object), after applying Generalized Procrustes Analysis (GPA) to remove 2D rigid transformations. However, the standard GPA process suffers from three main limitations. Firstly, the 2D training samples do not necessarily cover a uniform sampling of all the 3D transformations of an object. This can bias the estimate of the shape model. Secondly, it can be computationally expensive to learn the shape model by sampling 3D transformations. Thirdly, standard GPA methods use only one reference shape, which might be insufficient to capture the large structural variability of some objects. To address these drawbacks, this paper proposes continuous generalized Procrustes analysis (CGPA). CGPA uses a continuous formulation that avoids the need to generate 2D projections from all the rigid 3D transformations. It builds an efficient (in space and time) non-biased 2D shape model from a set of 3D models of objects. A major challenge in CGPA is the need to integrate over the space of 3D rotations, especially when the rotations are parameterized with Euler angles. To address this problem, we introduce the use of the Haar measure. Finally, we extend CGPA to incorporate several reference shapes. Experimental results on synthetic and real data show the benefits of CGPA over GPA. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | OR; HuPBA; 605.203; 600.046;MILAB | Approved | no | ||
Call Number | Admin @ si @ IPE2014 | Serial | 2352 | ||
Permanent link to this record |
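For context on the GPA baseline discussed in the abstract above, ordinary Procrustes alignment of one 2D shape to a reference can be sketched as follows. This is a minimal numpy illustration that ignores reflection handling; it is not the continuous formulation the paper proposes.

```python
import numpy as np

def procrustes_align(reference, shape):
    """Align `shape` (n_points x 2) to `reference` by removing
    translation, scale, and rotation. Returns the aligned shape and
    the normalized reference it was fitted to."""
    X = reference - reference.mean(axis=0)  # remove translation
    Y = shape - shape.mean(axis=0)
    X = X / np.linalg.norm(X)               # remove scale
    Y = Y / np.linalg.norm(Y)
    U, _, Vt = np.linalg.svd(X.T @ Y)       # optimal rotation via SVD
    R = U @ Vt
    return Y @ R.T, X
```

Generalized Procrustes Analysis iterates this pairwise alignment against an evolving mean shape over a whole training set; CGPA replaces the discrete sampling of transformations with integration over the rotation group.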