Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: Learning photometric invariance for object detection
Type: Journal Article
Year: 2010
Publication: International Journal of Computer Vision
Abbreviated Journal: IJCV
Volume: 90
Issue: 1
Pages: 45-61
Keywords: road detection
Abstract: Impact factor: 3.508 (the latest available, from JCR 2009 SCI). Position 4/103 (first quartile) in the category Computer Science, Artificial Intelligence.
Color is a powerful visual cue in many computer vision applications such as image segmentation and object recognition. However, most existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restricted to model real-world scenes, in which different reflectance mechanisms can hold simultaneously.
Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set is computed, composed of both color variants and invariants. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time.
Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and it outperforms state-of-the-art detection techniques in the fields of object, skin and road recognition. Considering sequential data, the proposed method (extended to deal with future observations) outperforms the other methods.
 
Publisher: Springer US
ISSN: 0920-5691
Notes: ADAS; ISE
Approved: no
Call Number: ADAS @ adas @ AGL2010c
Serial: 1451
 

 
Author: Dani Rowe; Jordi Gonzalez; Marco Pedersoli; Juan J. Villanueva
Title: On Tracking Inside Groups
Type: Journal Article
Year: 2010
Publication: Machine Vision and Applications
Abbreviated Journal: MVA
Volume: 21
Issue: 2
Pages: 113-127
Abstract: This work develops a new architecture for multiple-target tracking in unconstrained dynamic scenes, consisting of a detection level that feeds a two-stage tracking system. A remarkable characteristic of the system is its ability to track several targets while they group and split, without using 3D information. Thus, special attention is given to the feature-selection and appearance-computation modules, and to the modules involved in tracking through groups. The system aims to work as a stand-alone application in complex and dynamic scenarios. No a priori knowledge about either the scene or the targets, based on a previous training period, is used; hence, the scenario is completely unknown beforehand. Successful tracking has been demonstrated on well-known databases of both indoor and outdoor scenarios, with accurate and robust localisations achieved during long-term target merging and occlusions.
Publisher: Springer-Verlag
ISSN: 0932-8092
Notes: ISE
Approved: no
Call Number: ISE @ ise @ RGP2010
Serial: 1158
 

 
Author: Hamdi Dibeklioglu; M.O. Hortas; I. Kosunen; P. Zuzánek; Albert Ali Salah; Theo Gevers
Title: Design and implementation of an affect-responsive interactive photo frame
Type: Journal Article
Year: 2011
Publication: Journal on Multimodal User Interfaces
Abbreviated Journal: JMUI
Volume: 4
Issue: 2
Pages: 81-95
Abstract: This paper describes an affect-responsive interactive photo-frame application that offers its user a different experience with every use. It relies on visual analysis of the activity levels and facial expressions of its users to select responses from a database of short video segments. This ever-growing database is automatically prepared by an offline analysis of user-uploaded videos. The resulting system matches its user's affect along the dimensions of valence and arousal, and gradually adapts its response to each specific user. In an extended mode, two such systems are coupled and feed each other with visual content. The strengths and weaknesses of the system are assessed through a usability study, where a Wizard-of-Oz response logic is contrasted with the fully automatic system that uses affective and activity-based features, either alone or in tandem.
Publisher: Springer-Verlag
ISSN: 1783-7677
Notes: ALTRES; ISE
Approved: no
Call Number: Admin @ si @ DHK2011
Serial: 1842