Author Diego Cheda; Daniel Ponsa; Antonio Lopez
  Title Pedestrian Candidates Generation using Monocular Cues Type Conference Article
  Year 2012 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal  
  Volume Issue Pages 7-12  
  Keywords pedestrian detection  
  Abstract Common techniques for pedestrian candidate generation (e.g., sliding window approaches) are based on an exhaustive search over the image. This implies that the number of windows produced is huge, which translates into significant time consumption in the classification stage. In this paper, we propose a method that significantly reduces the number of windows to be considered by a classifier. Our method is a monocular one that exploits geometric and depth information available in single images. Both representations of the world are fused together to generate pedestrian candidates based on an underlying model which focuses only on objects standing vertically on the ground plane and having a certain height, according to their depth in the scene. We evaluate our algorithm on a challenging dataset and demonstrate its application to pedestrian detection, where a considerable reduction in the number of candidate windows is achieved.  
  Address  
  Corporate Author Thesis  
  Publisher IEEE Xplore Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1931-0587 ISBN 978-1-4673-2119-8 Medium  
  Area Expedition Conference IV  
  Notes ADAS Approved no  
  Call Number Admin @ si @ CPL2012c; ADAS @ adas @ cpl2012d Serial 2013  
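A minimal sketch of the idea in the record above, assuming a pinhole camera with known focal length (in pixels), a dense per-pixel depth map, a known horizon row, and a fixed pedestrian height and aspect ratio; these inputs and parameters are assumptions of this illustration, not the authors' implementation.

    import numpy as np

    def generate_candidates(depth_map, focal_length_px, horizon_row,
                            person_height_m=1.7, stride=8, depth_tol=0.3):
        """Window proposals (x, y_top, w, h) sized for a person standing at the local depth."""
        rows, cols = depth_map.shape
        candidates = []
        for y_feet in range(horizon_row + 1, rows, stride):   # feet must lie on the ground, below the horizon
            for x in range(0, cols, stride):
                z = depth_map[y_feet, x]
                if not np.isfinite(z) or z <= 0:
                    continue
                h = int(round(focal_length_px * person_height_m / z))  # projected body height at depth z
                w = max(int(round(0.4 * h)), 1)                        # crude pedestrian aspect ratio
                y_top = y_feet - h
                if y_top < 0 or h < 16:
                    continue
                # keep only windows that look like a vertical structure:
                # similar depth at head and feet, within a relative tolerance
                z_head = depth_map[y_top, x]
                if np.isfinite(z_head) and abs(z_head - z) <= depth_tol * z:
                    candidates.append((x - w // 2, y_top, w, h))
        return candidates

Because the window scale is fixed by the local depth, far fewer windows are produced than by an exhaustive multi-scale sliding window, which is where the reduction described in the abstract comes from.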
 

 
Author Fernando Barrera; Felipe Lumbreras; Angel Sappa
  Title Evaluation of Similarity Functions in Multimodal Stereo Type Conference Article
  Year 2012 Publication 9th International Conference on Image Analysis and Recognition Abbreviated Journal  
  Volume 7324 Issue I Pages 320-329  
  Keywords  
  Abstract This paper presents an evaluation framework for multimodal stereo matching that allows the performance of four similarity functions to be compared. Additionally, it presents details of a multimodal stereo head that supplies thermal infrared and color images, as well as aspects of its calibration and rectification. The pipeline includes a novel method for disparity selection, which is suitable for evaluating the similarity functions. Finally, a benchmark for comparing different initializations of the proposed framework is presented. Similarity functions are based on mutual information, gradient orientation and scale space representations. Their evaluation is performed using two metrics: i) disparity error, and ii) number of correct matches on planar regions. In addition to the proposed evaluation, the current paper also shows that 3D sparse representations can be recovered from such a multimodal stereo head.  
  Address Aveiro, Portugal  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-31294-6 Medium  
  Area Expedition Conference ICIAR  
  Notes ADAS Approved no  
  Call Number BLS2012a Serial 2014  
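Of the similarity functions evaluated in the record above, mutual information is the simplest to sketch: it only needs the joint histogram of two co-located patches from the thermal and color images. The bin count and the winner-take-all usage suggested afterwards are illustrative assumptions, not the paper's settings.

    import numpy as np

    def mutual_information(patch_a, patch_b, bins=32):
        """Mutual information between two equally sized image patches (e.g. thermal vs. grayscale)."""
        joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
        pxy = joint / joint.sum()                      # joint distribution of intensities
        px = pxy.sum(axis=1, keepdims=True)            # marginal of patch_a
        py = pxy.sum(axis=0, keepdims=True)            # marginal of patch_b
        nz = pxy > 0                                   # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

In a simple winner-take-all scheme, the disparity hypothesis maximizing this score along the epipolar line would be kept; the paper's disparity-selection method is more elaborate.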
 

 
Author Miguel Oliveira; Angel Sappa; V. Santos
  Title Color Correction using 3D Gaussian Mixture Models Type Conference Article
  Year 2012 Publication 9th International Conference on Image Analysis and Recognition Abbreviated Journal  
  Volume 7324 Issue I Pages 97-106  
  Keywords  
  Abstract The current paper proposes a novel color correction approach based on a probabilistic segmentation framework by using 3D Gaussian Mixture Models. Regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. The proposed approach is evaluated using both a recently published metric and two large data sets composed of seventy images. The evaluation is performed by comparing our algorithm with eight well known color correction algorithms. Results show that the proposed approach is the highest scoring color correction method. Also, the proposed single step 3D color space probabilistic segmentation reduces processing time over similar approaches.  
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-31294-6 DOI 10.1007/978-3-642-31295-3_12 Medium  
  Area Expedition Conference ICIAR  
  Notes ADAS Approved no  
  Call Number Admin @ si @ OSS2012a Serial 2015  
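An illustrative sketch of the core idea in the record above: a probabilistic segmentation of RGB space with a 3D Gaussian Mixture Model, followed by per-component corrections blended by the posterior responsibilities. The component count and the simple per-component mean shift are assumptions made here; the paper's local correction functions are more elaborate.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def correct_colors(source, target, n_components=5):
        """source, target: HxWx3 float images of roughly the same scene; returns a corrected source."""
        src = source.reshape(-1, 3)
        tgt = target.reshape(-1, 3)
        gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                              random_state=0).fit(src)
        resp_src = gmm.predict_proba(src)              # soft 3D colour-space segmentation of the source
        resp_tgt = gmm.predict_proba(tgt)              # same model applied to the target view
        # per-component colour shift between the two views, weighted by responsibility
        mean_src = (resp_src.T @ src) / resp_src.sum(axis=0, keepdims=True).T
        mean_tgt = (resp_tgt.T @ tgt) / resp_tgt.sum(axis=0, keepdims=True).T
        shift = mean_tgt - mean_src                    # (n_components, 3)
        corrected = src + resp_src @ shift             # blend the local corrections per pixel
        return np.clip(corrected, 0.0, 1.0).reshape(source.shape)

Using soft responsibilities instead of hard region labels avoids visible seams between regions, which is the point of the probabilistic segmentation.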
 

 
Author Sergio Escalera; Josep Moya; Laura Igual; Veronica Violant; Maria Teresa Anguera
  Title Análisis Comportamental Automatizado de TDAH: la Influencia de la Variable Motivación Type Conference Article
  Year 2012 Publication IPSI – Cosmocaixa, Jornadas "Empremtes del present, efectes en la psicoanàlisi, la cultura i la societat" Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Poster  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IPSI  
  Notes MILAB; HuPBA; OR Approved no  
  Call Number Admin @ si @ EMI2012b Serial 2065  
 

 
Author S.Grau; Anna Puig; Sergio Escalera; Maria Salamo; Oscar Amoros
  Title Efficient complementary viewpoint selection in volume rendering Type Conference Article
  Year 2013 Publication 21st WSCG Conference on Computer Graphics Abbreviated Journal  
  Volume Issue Pages  
  Keywords Dual camera; Visualization; Interactive Interfaces; Dynamic Time Warping.  
  Abstract A major goal of visualization is to appropriately express knowledge of scientific data. Generally, gathering the visual information contained in volume data often requires a lot of expertise from the final user to set up the parameters of the visualization. One way of alleviating this problem is to provide the position of inner structures from different viewpoint locations to enhance the perception and construction of the mental image. To this end, traditional illustrations use two or three different views of the regions of interest. Similarly, with the aim of assisting users to easily place a good viewpoint location, this paper proposes an automatic and interactive method that locates different complementary viewpoints from a reference camera in volume datasets. Specifically, the proposed method combines the quantity of information each camera provides for each structure and the shape similarity of the projections of the remaining viewpoints based on Dynamic Time Warping. The selected complementary viewpoints allow a better understanding of the focused structure in several applications. Thus, the user interactively receives feedback based on several viewpoints that helps them understand the visual information. A live-user evaluation on different data sets shows good convergence to useful complementary viewpoints.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-808694374-9 Medium  
  Area Expedition Conference WSCG  
  Notes HuPBA; 600.046;MILAB Approved no  
  Call Number Admin @ si @ GPE2013a Serial 2255  
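One ingredient of the method in the record above can be sketched compactly: comparing two candidate projections through Dynamic Time Warping of their 1-D shape signatures. The distance-to-centroid signature below is an assumption for illustration; the paper combines this shape-similarity term with a per-structure information measure.

    import numpy as np

    def shape_signature(boundary_xy, samples=128):
        """boundary_xy: (N, 2) ordered silhouette boundary points -> normalized 1-D signature."""
        centroid = boundary_xy.mean(axis=0)
        d = np.linalg.norm(boundary_xy - centroid, axis=1)
        idx = np.linspace(0, len(d) - 1, samples).astype(int)   # resample to a fixed length
        sig = d[idx]
        return sig / (sig.max() + 1e-9)

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) DTW between two 1-D signatures."""
        D = np.full((len(a) + 1, len(b) + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(D[len(a), len(b)])

A complementary viewpoint would then be one whose projection is still informative for the focused structure but dissimilar (high DTW cost) from the reference projection.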
 

 
Author Miguel Angel Bautista; Antonio Hernandez; Victor Ponce; Xavier Perez Sala; Xavier Baro; Oriol Pujol; Cecilio Angulo; Sergio Escalera
  Title Probability-based Dynamic Time Warping for Gesture Recognition on RGB-D data Type Conference Article  
  Year 2012 Publication 21st International Conference on Pattern Recognition International Workshop on Depth Image Analysis Abbreviated Journal  
  Volume 7854 Issue Pages 126-135  
  Keywords  
  Abstract Dynamic Time Warping (DTW) is commonly used in gesture recognition tasks in order to tackle the temporal length variability of gestures. In the DTW framework, a set of gesture patterns is compared one by one to a possibly infinite test sequence, and a query gesture category is recognized if a warping cost below a certain threshold is found within the test sequence. Nevertheless, taking either one single sample per gesture category or a set of isolated samples may not encode the variability of that gesture category. In this paper, a probability-based DTW for gesture recognition is proposed. Different samples of the same gesture pattern obtained from RGB-Depth data are used to build a Gaussian-based probabilistic model of the gesture. Finally, the DTW cost is adapted to the new model. The proposed approach is tested in a challenging scenario, showing better performance of the probability-based DTW in comparison to state-of-the-art approaches for gesture recognition on RGB-D data.  
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-40302-6 Medium  
  Area Expedition Conference WDIA  
  Notes MILAB; OR;HuPBA;MV Approved no  
  Call Number Admin @ si @ BHP2012 Serial 2120  
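A compact sketch of the idea described in the record above: standard DTW, but with the local cost replaced by the negative log-likelihood of each test frame under a frame-wise Gaussian model learned from several aligned samples of the gesture. The open-ended boundary conditions and this particular cost are assumptions for illustration, not the authors' exact formulation.

    import numpy as np
    from scipy.stats import multivariate_normal

    def probability_dtw_cost(test_seq, means, covs):
        """test_seq: (T, d) feature frames; means, covs: M per-frame Gaussians of the gesture model.
        Returns the best open-ended DTW cost of the gesture model within the test sequence."""
        T, M = len(test_seq), len(means)
        local = np.empty((M, T))
        for i in range(M):
            local[i] = -multivariate_normal(means[i], covs[i]).logpdf(test_seq)
        D = np.full((M + 1, T + 1), np.inf)
        D[0, :] = 0.0                                  # the gesture may start at any test frame
        for i in range(1, M + 1):
            for j in range(1, T + 1):
                D[i, j] = local[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(D[M].min())                       # and end at any test frame

A gesture would then be detected wherever this cost drops below a threshold estimated on validation data.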
 

 
Author Miguel Reyes; Albert Clapes; Luis Felipe Mejia; Jose Ramirez; Juan R Revilla; Sergio Escalera
  Title Posture Analysis and Range of Movement Estimation using Depth Maps Type Conference Article
  Year 2012 Publication 21st International Conference on Pattern Recognition International Workshop on Depth Image Analysis Abbreviated Journal  
  Volume 7854 Issue Pages 97-105  
  Keywords  
  Abstract The World Health Organization estimates that 80% of the world population is affected by back pain at some point in their lives. Current practices to analyze back problems are expensive, subjective, and invasive. In this work, we propose a novel tool for posture and range of movement estimation based on the analysis of 3D information from depth maps. Given a set of keypoints defined by the user, RGB and depth data are aligned, the depth surface is reconstructed, keypoints are matched using a novel point-to-point fitting procedure, and accurate measurements of posture, spinal curvature, and range of movement are computed. The system provides precise and reliable measurements, being useful for posture reeducation purposes to prevent musculoskeletal disorders, such as back pain, as well as for tracking the posture evolution of patients in rehabilitation treatments.  
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-40302-6 Medium  
  Area Expedition Conference WDIA  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ RCM2012 Serial 2121  
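A toy sketch of the kind of measurement the record above reports: a joint angle computed from three 3-D keypoints and the range of movement over a recorded sequence. The keypoint triplet and the angle definition are assumptions made for illustration only.

    import numpy as np

    def joint_angle(a, b, c):
        """Angle (degrees) at keypoint b formed by 3-D points a-b-c, e.g. hip-shoulder-head."""
        a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
        u, v = a - b, c - b
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
        return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

    def range_of_movement(triplets_over_time):
        """triplets_over_time: iterable of (a, b, c) keypoint triplets -> (min, max, range) in degrees."""
        angles = [joint_angle(a, b, c) for a, b, c in triplets_over_time]
        return min(angles), max(angles), max(angles) - min(angles)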
 

 
Author Antonio Hernandez; Miguel Angel Bautista; Xavier Perez Sala; Victor Ponce; Xavier Baro; Oriol Pujol; Cecilio Angulo; Sergio Escalera
  Title BoVDW: Bag-of-Visual-and-Depth-Words for Gesture Recognition Type Conference Article
  Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract We present a Bag-of-Visual-and-Depth-Words (BoVDW) model for gesture recognition, an extension of the Bag-of-Visual-Words (BoVW) model, which benefits from the multimodal fusion of visual and depth features. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late fusion fashion. The method is integrated in a continuous gesture recognition pipeline, where the Dynamic Time Warping (DTW) algorithm is used to perform prior segmentation of gestures. Results of the method on public data sets, within our gesture recognition pipeline, show better performance in comparison to a standard BoVW model.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium  
  Area Expedition Conference ICPR  
  Notes HuPBA;MV Approved no  
  Call Number Admin @ si @ HBP2012 Serial 2122  
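A minimal bag-of-words sketch with late fusion of RGB and depth descriptors, in the spirit of the BoVDW model above. The vocabulary size, the SVM classifier, and the score-averaging rule are assumptions, not the paper's configuration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def bow_histogram(descriptors, vocabulary):
        """Quantize local descriptors against a k-means vocabulary and return a normalized histogram."""
        words = vocabulary.predict(descriptors)
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    def train_bovdw(train_rgb, train_depth, labels, k=200):
        """train_rgb, train_depth: lists of per-sample local descriptor arrays (hypothetical inputs)."""
        voc_rgb = KMeans(n_clusters=k, random_state=0).fit(np.vstack(train_rgb))
        voc_dep = KMeans(n_clusters=k, random_state=0).fit(np.vstack(train_depth))
        X_rgb = np.array([bow_histogram(d, voc_rgb) for d in train_rgb])
        X_dep = np.array([bow_histogram(d, voc_dep) for d in train_depth])
        clf_rgb = SVC(probability=True).fit(X_rgb, labels)        # one classifier per modality
        clf_dep = SVC(probability=True).fit(X_dep, labels)
        return voc_rgb, voc_dep, clf_rgb, clf_dep

    def predict_bovdw(model, rgb_desc, depth_desc):
        """Late fusion: average the per-modality class probabilities and take the argmax."""
        voc_rgb, voc_dep, clf_rgb, clf_dep = model
        p = 0.5 * clf_rgb.predict_proba([bow_histogram(rgb_desc, voc_rgb)]) \
          + 0.5 * clf_dep.predict_proba([bow_histogram(depth_desc, voc_dep)])
        return clf_rgb.classes_[int(np.argmax(p))]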
 

 
Author Javier Vazquez; Robert Benavente; Maria Vanrell
  Title Naming constraints constancy Type Conference Article
  Year 2012 Publication 2nd Joint AVA / BMVA Meeting on Biological and Machine Vision Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Different studies have shown that languages from industrialized cultures share a set of 11 basic colour terms: red, green, blue, yellow, pink, purple, brown, orange, black, white, and grey (Berlin & Kay, 1969, Basic Color Terms, University of California Press) (Kay & Regier, 2003, PNAS, 100, 9085-9089). Some of these studies have also reported the best representatives or focal values of each colour (Boynton and Olson, 1990, Vision Res., 30, 1311–1317), (Sturges and Whitfield, 1995, CRA, 20:6, 364–376). Further studies have provided fuzzy datasets for colour naming by asking human observers to rate colours in terms of membership values (Benavente et al., 2006, CRA, 31:1, 48–56). Recently, a computational model based on these human ratings has been developed (Benavente et al., 2008, JOSA A, 25:10, 2582-2593). This computational model follows a fuzzy approach to assign a colour name to a particular RGB value. For example, a pixel with a value of (255,0,0) will be named 'red' with membership 1, while a cyan pixel with an RGB value of (0, 200, 200) will be considered to be 0.5 green and 0.5 blue. In this work, we show how this colour naming paradigm can be applied to different computer vision tasks. In particular, we report results in colour constancy (Vazquez-Corral et al., 2012, IEEE TIP, in press) showing that the classical constraints on either illumination or surface reflectance can be substituted by the statistical properties encoded in the colour names. [Supported by projects TIN2010-21771-C02-1, CSD2007-00018].  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference AVA  
  Notes CIC Approved no  
  Call Number Admin @ si @ VBV2012 Serial 2131  
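A toy illustration of the fuzzy colour-naming idea used in the record above: an RGB value is mapped to membership values over the 11 basic terms. The real model of Benavente et al. uses fitted parametric membership functions; the focal colours and the distance-based weighting below are simplifications assumed for this sketch.

    import numpy as np

    FOCALS = {                      # rough focal RGB values, assumed for illustration only
        'red': (255, 0, 0), 'green': (0, 200, 0), 'blue': (0, 0, 255),
        'yellow': (255, 255, 0), 'pink': (255, 150, 170), 'purple': (140, 0, 180),
        'brown': (120, 70, 20), 'orange': (255, 140, 0), 'black': (0, 0, 0),
        'white': (255, 255, 255), 'grey': (128, 128, 128),
    }

    def colour_memberships(rgb, softness=60.0):
        """Return membership values over the 11 basic colour terms, normalized to sum to 1."""
        names = list(FOCALS)
        d = np.array([np.linalg.norm(np.asarray(rgb, float) - np.asarray(FOCALS[n], float))
                      for n in names])
        w = np.exp(-d / softness)                      # closer focal colours get larger memberships
        return dict(zip(names, (w / w.sum()).round(3)))

    print(colour_memberships((255, 0, 0)))             # membership concentrated on 'red'
    print(colour_memberships((0, 200, 200)))           # a cyan-like value spreads over nearby terms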
 

 
Author Xavier Otazu; Olivier Penacchio; Laura Dempere-Marco
  Title An investigation into plausible neural mechanisms related to the CIWaM computational model for brightness induction Type Conference Article  
  Year 2012 Publication 2nd Joint AVA / BMVA Meeting on Biological and Machine Vision Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. From a purely computational perspective, we built a low-level computational model (CIWaM) of early sensory processing based on multi-resolution wavelets with the aim of replicating brightness and colour induction effects (Otazu et al., 2010, Journal of Vision, 10(12):5). Furthermore, we successfully used the CIWaM architecture to define a computational saliency model (Murray et al., 2011, CVPR, 433-440; Vanrell et al., submitted to AVA/BMVA'12). From a biological perspective, neurophysiological evidence suggests that perceived brightness information may be explicitly represented in V1. In this work we investigate possible neural mechanisms that offer a plausible explanation for such effects. To this end, we consider the model by Z. Li (Li, 1999, Network: Comput. Neural Syst., 10, 187-212), which is based on biological data and focuses on the part of V1 responsible for contextual influences, namely, layer 2-3 pyramidal cells, interneurons, and horizontal intracortical connections. This model has been proven to account for phenomena such as visual saliency, which shares with brightness induction the relevant effect of contextual influences (the ones modelled by CIWaM). In the proposed model, the input to the network is derived from a complete multiscale and multiorientation wavelet decomposition taken from the computational model (CIWaM). This model successfully accounts for well-known psychophysical effects (among them: the White's and modified White's effects, the Todorovic, Chevreul, achromatic ring patterns, and grating induction effects) for static contexts, and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. From a methodological point of view, we conclude that the results obtained by the computational model (CIWaM) are compatible with the ones obtained by the neurodynamical model proposed here.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference AVA  
  Notes CIC Approved no  
  Call Number Admin @ si @ OPD2012a Serial 2132  
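The model in the record above takes as input "a complete multiscale and multiorientation wavelet decomposition" of the stimulus. The sketch below builds such a set of oriented coefficient planes with PyWavelets; the 'db4' basis and the number of levels are stand-ins, since CIWaM defines its own wavelet transform.

    import pywt

    def multiscale_orientation_planes(image, wavelet='db4', levels=4):
        """Return {(level, orientation): coefficient plane} for a 2-D grayscale array."""
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        planes = {(0, 'lowpass'): coeffs[0]}                      # residual low-pass band
        for lvl, (ch, cv, cd) in enumerate(coeffs[1:], start=1):  # coarsest detail level first
            planes[(lvl, 'horizontal')] = ch
            planes[(lvl, 'vertical')] = cv
            planes[(lvl, 'diagonal')] = cd
        return planes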
 

 
Author Pedro Martins; Carlo Gatta; Paulo Carvalho
  Title Feature-driven Maximally Stable Extremal Regions Type Conference Article
  Year 2012 Publication 7th International Conference on Computer Vision Theory and Applications Abbreviated Journal  
  Volume Issue Pages 490-497  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference VISAPP  
  Notes MILAB Approved no  
  Call Number Admin @ si @ MGC2012 Serial 2139  
 

 
Author Pedro Martins; Paulo Carvalho; Carlo Gatta
  Title Context Aware Keypoint Extraction for Robust Image Representation Type Conference Article
  Year 2012 Publication 23rd British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages 100.1 - 100.12  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes MILAB Approved no  
  Call Number Admin @ si @ MCG2012a Serial 2140  
 

 
Author Josep M. Gonfaus; Theo Gevers; Arjan Gijsenij; Xavier Roca; Jordi Gonzalez
  Title Edge Classification using Photo-Geometric features Type Conference Article  
  Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 1497 - 1500  
  Keywords  
  Abstract Edges are caused by several imaging cues such as shadow, material and illumination transitions. Classification methods have been proposed that determine the physical nature of edges in images based solely on photometric information, ignoring geometry. In this paper, the aim is to present a novel strategy to handle both photometric and geometric information for edge classification. Photometric information is obtained through the use of quasi-invariants, while geometric information is derived from the orientation and contrast of edges. Different combination frameworks are compared with a new principled approach that captures both kinds of information in the same descriptor. Large-scale experiments on different datasets show that, in addition to photometric information, the geometry of edges is an important visual cue to distinguish between different edge types. It is concluded that by combining both cues the performance improves by more than 7% for shadows and highlights.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium  
  Area Expedition Conference ICPR  
  Notes ISE Approved no  
  Call Number Admin @ si @ GGG2012b Serial 2142  
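To make the combination of cues in the record above concrete, here is a rough per-pixel descriptor mixing a photometric cue with the geometric orientation and contrast of edges. The crude opponent channel is only a stand-in for the paper's quasi-invariants, and the random forest is an arbitrary classifier choice.

    import numpy as np
    from scipy import ndimage
    from sklearn.ensemble import RandomForestClassifier

    def edge_descriptors(rgb):
        """Per-pixel features: intensity contrast, edge orientation, and a chromatic contrast cue."""
        intensity = rgb.mean(axis=2)
        opp_rg = rgb[..., 0] - rgb[..., 1]                        # crude red-green opponent channel
        gx, gy = ndimage.sobel(intensity, 1), ndimage.sobel(intensity, 0)
        contrast = np.hypot(gx, gy)                               # geometric cue: edge strength
        orientation = np.arctan2(gy, gx)                          # geometric cue: edge orientation
        chroma = np.hypot(ndimage.sobel(opp_rg, 1), ndimage.sobel(opp_rg, 0))  # photometric cue
        return np.stack([contrast, orientation, chroma], axis=-1).reshape(-1, 3)

    def train_edge_classifier(rgb_images, pixel_labels):
        """pixel_labels: per-pixel edge-type maps (e.g. shadow / material / highlight), hypothetical."""
        X = np.vstack([edge_descriptors(im) for im in rgb_images])
        y = np.concatenate([lab.ravel() for lab in pixel_labels])
        # in practice only pixels lying on detected edges would be kept; full maps are used for brevity
        return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)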
 

 
Author Joan Arnedo-Moreno; Agata Lapedriza
  Title Visualizing key authenticity: turning your face into your public key Type Conference Article
  Year 2010 Publication 6th China International Conference on Information Security and Cryptology Abbreviated Journal  
  Volume Issue Pages 605-618  
  Keywords  
  Abstract Biometric information has become a technology complementary to cryptography, allowing cryptographic data to be managed conveniently. Two important needs are fulfilled: first of all, making such data always readily available, and additionally, making its legitimate owner easily identifiable. In this work we propose a signature system which integrates face recognition biometrics with an identity-based signature scheme, so the user's face effectively becomes his public key and system ID. Thus, other users may verify messages using photos of the claimed sender, providing a reasonable trade-off between system security and usability, as well as a much more straightforward public key authenticity and distribution process.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference Inscrypt  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ ArL2010c Serial 2149  
 

 
Author Petia Radeva; Michal Drozdzal; Santiago Segui; Laura Igual; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria
  Title Active labeling: Application to wireless endoscopy analysis Type Conference Article
  Year 2012 Publication International Conference on High Performance Computing and Simulation Abbreviated Journal  
  Volume Issue Pages 174-181  
  Keywords  
  Abstract Today, robust learners trained in a real supervised machine learning application should rely on a rich collection of positive and negative examples. Although in many applications it is not difficult to obtain huge amounts of data, labeling those data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example of such cases is data from medical imaging applications, where annotating anomalies like tumors, polyps, atherosclerotic plaque or informative frames in wireless endoscopy requires highly trained experts. Building a representative set of training data from medical videos (e.g. Wireless Capsule Endoscopy) means that thousands of frames have to be labeled by an expert. It is quite normal that data in new videos differ from, and thus are not represented by, the training set. In this paper, we review the main approaches to active learning and illustrate how active learning can help to reduce expert effort in constructing training sets. We show that by applying active learning criteria, the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of Wireless Capsule Endoscopy videos, each containing more than 30,000 frames, with fewer than 100 expert "clicks".  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4673-2359-8 Medium  
  Area Expedition Conference HPCS  
  Notes MILAB; OR;MV Approved no  
  Call Number Admin @ si @ RDS2012 Serial 2152  
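As a concrete illustration of how the expert "clicks" described in the record above can be spent, here is a generic pool-based uncertainty-sampling loop. The logistic-regression classifier, batch size, and uncertainty criterion are assumptions; the paper's active-learning criteria for capsule-endoscopy frames are more specific.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def active_label(features, oracle, seed_idx, budget=100, batch=5):
        """features: (N, d) frame descriptors; oracle(i) -> 0/1 label (one expert click).
        seed_idx must contain frames of both classes. Returns the classifier and the labeled indices."""
        labeled = list(seed_idx)
        labels = {i: oracle(i) for i in labeled}
        clf = LogisticRegression(max_iter=1000)
        while len(labeled) < len(seed_idx) + budget:
            clf.fit(features[labeled], [labels[i] for i in labeled])
            proba = clf.predict_proba(features)[:, 1]
            uncertainty = -np.abs(proba - 0.5)                    # frames closest to the decision boundary
            uncertainty[labeled] = -np.inf                        # never query an already labeled frame
            for i in np.argsort(uncertainty)[-batch:]:            # spend a small batch of expert clicks
                labels[int(i)] = oracle(int(i))
                labeled.append(int(i))
        clf.fit(features[labeled], [labels[i] for i in labeled])
        return clf, labeled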