Records
Author Santiago Segui; Michal Drozdzal; Ekaterina Zaytseva; Fernando Azpiroz; Petia Radeva; Jordi Vitria
Title Detection of wrinkle frames in endoluminal videos using betweenness centrality measures for images Type Journal Article
Year 2014 Publication IEEE Transactions on Information Technology in Biomedicine Abbreviated Journal TITB
Volume 18 Issue 6 Pages 1831-1838
Keywords Wireless Capsule Endoscopy; Small Bowel Motility Dysfunction; Contraction Detection; Structured Prediction; Betweenness Centrality
Abstract Intestinal contractions are among the most important events for diagnosing motility pathologies of the small intestine. When visualized by wireless capsule endoscopy (WCE), the sequence of frames that represents a contraction is characterized by a clear wrinkle structure in the central frames, corresponding to the folding of the intestinal wall. In this paper we present a new method to robustly detect wrinkle frames in full WCE videos using a new mid-level image descriptor based on a centrality measure proposed for graphs. We present an extended validation, carried out on a very large database, showing that the proposed method achieves state-of-the-art performance for this task.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR; MILAB; 600.046;MV Approved no
Call Number Admin @ si @ SDZ2014 Serial 2385
Permanent link to this record
 

 
Author Akhil Gurram; Onay Urfalioglu; Ibrahim Halfaoui; Fahd Bouzaraa; Antonio Lopez
Title Monocular Depth Estimation by Learning from Heterogeneous Datasets Type Conference Article
Year 2018 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 2176 - 2181
Keywords
Abstract Depth estimation provides essential information for autonomous driving and driver assistance. In particular, Monocular Depth Estimation is interesting from a practical point of view, since using a single camera is cheaper than many other options and avoids the need for the continuous calibration strategies required by stereo-vision approaches. State-of-the-art methods for Monocular Depth Estimation are based on Convolutional Neural Networks (CNNs). A promising line of work consists of introducing additional semantic information about the traffic scene when training CNNs for depth estimation. In practice, this means that the depth data used for CNN training is complemented with images having pixel-wise semantic labels, which usually are difficult to annotate (e.g., crowded urban images). Moreover, so far it is common practice to assume that the same raw training data is associated with both types of ground truth, i.e., depth and semantic labels. The main contribution of this paper is to show that this hard constraint can be circumvented, i.e., that we can train CNNs for depth estimation by leveraging depth and semantic information coming from heterogeneous datasets. In order to illustrate the benefits of our approach, we combine the KITTI depth and Cityscapes semantic segmentation datasets, outperforming state-of-the-art results on Monocular Depth Estimation.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IV
Notes ADAS; 600.124; 600.116; 600.118 Approved no
Call Number Admin @ si @ GUH2018 Serial 3183
 

 
Author Jiaolong Xu; David Vazquez; Antonio Lopez; Javier Marin; Daniel Ponsa
Title Learning a Multiview Part-based Model in Virtual World for Pedestrian Detection Type Conference Article
Year 2013 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 467 - 472
Keywords Pedestrian Detection; Virtual World; Part based
Abstract State-of-the-art deformable part-based models based on latent SVM have shown excellent results on human detection. In this paper, we propose to train a multiview deformable part-based model with automatically generated part examples from virtual-world data. The method is efficient because: (i) the part detectors are trained with precisely extracted virtual examples, so no latent learning is needed; (ii) the multiview pedestrian detector enhances the performance of the pedestrian root model; (iii) a top-down approach is used for part detection, which reduces the search space. We evaluate our model on the Daimler and Karlsruhe Pedestrian Benchmarks with the publicly available Caltech pedestrian detection evaluation framework, and the results outperform the state-of-the-art latent SVM V4.0 in both average miss rate and speed (our detector is ten times faster).
Address Gold Coast; Australia; June 2013
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1931-0587 ISBN 978-1-4673-2754-1 Medium
Area Expedition Conference IV
Notes ADAS; 600.054; 600.057 Approved no
Call Number XVL2013; ADAS @ adas @ xvl2013a Serial 2214
 

 
Author Naveen Onkarappa; Angel Sappa
Title An Empirical Study on Optical Flow Accuracy Depending on Vehicle Speed Type Conference Article
Year 2012 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 1138-1143
Keywords
Abstract Driver assistance and safety systems are receiving increasing attention as steps toward automatic navigation and safety. Optical flow, as a motion estimation technique, plays a major role in making these systems a reality. Toward this goal, the current paper demonstrates the suitability of a polar representation for optical flow estimation in such systems. Furthermore, the influence of individual regularization terms on the accuracy of optical flow on image sequences of different speeds is empirically evaluated. Additionally, a new synthetic dataset of image sequences at different speeds is generated, along with ground-truth optical flow.
Address Alcalá de Henares
Corporate Author Thesis
Publisher IEEE Xplore Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1931-0587 ISBN 978-1-4673-2119-8 Medium
Area Expedition Conference IV
Notes ADAS Approved no
Call Number Admin @ si @ NaS2012 Serial 2020
 

 
Author Miguel Oliveira; Angel Sappa; V. Santos
Title Color Correction for Onboard Multi-camera Systems using 3D Gaussian Mixture Models Type Conference Article
Year 2012 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 299-303
Keywords
Abstract The current paper proposes a novel color correction approach for onboard multi-camera systems. It works by segmenting the given images into several regions. A probabilistic segmentation framework, using 3D Gaussian Mixture Models, is proposed. Regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. An image dataset of road scenarios is used to establish a performance comparison of the proposed method with seven other well-known color correction algorithms. Results show that the proposed approach is the highest-scoring color correction method. Moreover, the proposed single-step 3D color space probabilistic segmentation reduces processing time compared with similar approaches.
Address Alcalá de Henares
Corporate Author Thesis
Publisher IEEE Xplore Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1931-0587 ISBN 978-1-4673-2119-8 Medium
Area Expedition Conference IV
Notes ADAS Approved no
Call Number Admin @ si @ OSS2012b Serial 2021
 

 
Author Diego Cheda; Daniel Ponsa; Antonio Lopez
Title Pedestrian Candidates Generation using Monocular Cues Type Conference Article
Year 2012 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 7-12
Keywords pedestrian detection
Abstract Common techniques for pedestrian candidate generation (e.g., sliding window approaches) are based on an exhaustive search over the image. This implies that the number of windows produced is huge, which translates into significant time consumption in the classification stage. In this paper, we propose a method that significantly reduces the number of windows to be considered by a classifier. Our method is a monocular one that exploits geometric and depth information available in single images. Both representations of the world are fused together to generate pedestrian candidates based on an underlying model focused only on objects standing vertically on the ground plane and having a certain height, according to their depth in the scene. We evaluate our algorithm on a challenging dataset and demonstrate its application to pedestrian detection, where a considerable reduction in the number of candidate windows is achieved.
Address
Corporate Author Thesis
Publisher IEEE Xplore Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1931-0587 ISBN 978-1-4673-2119-8 Medium
Area Expedition Conference IV
Notes ADAS Approved no
Call Number Admin @ si @ CPL2012c; ADAS @ adas @ cpl2012d Serial 2013
 

 
Author Diego Alejandro Cheda; Daniel Ponsa; Antonio Lopez
Title Camera Egomotion Estimation in the ADAS Context Type Conference Article
Year 2010 Publication 13th International IEEE Annual Conference on Intelligent Transportation Systems Abbreviated Journal
Volume Issue Pages 1415–1420
Keywords
Abstract Camera-based Advanced Driver Assistance Systems (ADAS) have attracted many research efforts in recent decades. Proposals based on monocular cameras require knowledge of the camera pose with respect to the environment in order to achieve efficient and robust performance. A common assumption in such systems is to consider the road as planar, and the camera pose with respect to it as approximately known. However, in real situations, the camera pose varies over time due to the vehicle movement, the road slope, and irregularities on the road surface. Thus, the changes in camera position and orientation (i.e., the egomotion) are critical information that must be estimated at every frame to avoid poor performance. This work focuses on egomotion estimation from a monocular camera in the ADAS context. We review and compare egomotion methods with simulated and real ADAS-like sequences. Based on the results of our experiments, we show which of the considered nonlinear and linear algorithms have the best performance in this domain.
Address Madeira Island (Portugal)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium
Area Expedition Conference ITSC
Notes ADAS Approved no
Call Number ADAS @ adas @ CPL2010 Serial 1425
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
Title Vehicle geolocalization based on video synchronization Type Conference Article
Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal
Volume Issue Pages 1511–1516
Keywords video alignment
Abstract This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an on-line video synchronization that finds, in the georeferenced video sequence, the frame corresponding to the one recorded by the camera at each instant of a second drive along the same track. Once the corresponding frame in the georeferenced video sequence is found, its geospatial information is transferred. The key advantages of this method are: 1) an increase in update rate and geospatial accuracy with respect to a standard low-cost GPS, and 2) the ability to localize a vehicle even when a GPS is not available or not reliable enough, as in certain urban areas. Experimental results for urban environments are presented, showing an average relative accuracy of 1.5 meters.
Address Madeira Island (Portugal)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium
Area Expedition Conference ITSC
Notes ADAS Approved no
Call Number ADAS @ adas @ DPS2010 Serial 1423
 

 
Author Ferran Diego; Jose Manuel Alvarez; Joan Serrat; Antonio Lopez
Title Vision-based road detection via on-line video registration Type Conference Article
Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal
Volume Issue Pages 1135–1140
Keywords video alignment; road detection
Abstract Road segmentation is an essential functionality for supporting advanced driver assistance systems (ADAS) such as road following and vehicle and pedestrian detection. Significant efforts have been made to solve this task using vision-based techniques. The major challenge is dealing with lighting variations and the presence of objects on the road surface. In this paper, we propose a new road detection method to infer the areas of the image depicting road surfaces without performing any image segmentation. The idea is to first segment, manually or semi-automatically, the road region in a traffic-free reference video recorded during a first drive, and then to transfer these regions, in an on-line manner, to the frames of a second video sequence acquired later during a second drive along the same road. This is possible because we are able to automatically align the two videos in time and space, that is, to synchronize them and warp each frame of the first video to its corresponding frame in the second one. The geometric transform can thus transfer the road region to the present frame on-line. To cope with the varying lighting conditions of outdoor scenarios, our approach represents images in a shadowless, illuminant-invariant feature space. Furthermore, we propose a dynamic background subtraction algorithm that removes the regions containing vehicles within the transferred road region of the observed frames.
Address Madeira Island (Portugal)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium
Area Expedition Conference ITSC
Notes ADAS Approved no
Call Number ADAS @ adas @ DAS2010 Serial 1424
 

 
Author Sergio Vera; Miguel Angel Gonzalez Ballester; Debora Gil
Title A medial map capturing the essential geometry of organs Type Conference Article
Year 2012 Publication ISBI Workshop on Open Source Medical Image Analysis software Abbreviated Journal
Volume Issue Pages 1691 - 1694
Keywords Medial Surface Representation; Volume Reconstruction; Geometry; Image Reconstruction; Liver; Manifolds; Shape; Surface Morphology; Surface Reconstruction
Abstract Medial representations are powerful tools for describing and parameterizing the volumetric shape of anatomical structures. Accurate computation of one-pixel-wide medial surfaces is mandatory, and those surfaces must faithfully represent the geometry of the volume. Although morphological methods produce excellent results in 2D, their complexity and quality drop across dimensions, due to a more complex description of pixel neighborhoods. This paper introduces a continuous operator for accurate and efficient computation of medial structures of arbitrary dimension. Our experiments show its higher performance for medical imaging applications in terms of simplicity of medial structures and capability for reconstructing the anatomical volume.
Address Barcelona, Spain
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1945-7928 ISBN 978-1-4577-1857-1 Medium
Area Expedition Conference ISBI
Notes IAM Approved no
Call Number IAM @ iam @ VGG2012a Serial 1989
 

 
Author Sergio Escalera; Oriol Pujol; Eric Laciar; Jordi Vitria; Esther Pueyo; Petia Radeva
Title Coronary Damage Classification of Patients with the Chagas Disease with Error-Correcting Output Codes Type Conference Article
Year 2008 Publication Intelligent Systems, 4th International IEEE Conference, 6–8 September 2008 Abbreviated Journal
Volume 2 Issue Pages 12–17
Keywords
Abstract Chagas disease is endemic throughout Latin America, affecting millions of people on the continent. In order to diagnose and treat Chagas disease, it is important to detect and measure the coronary damage of the patient. In this paper, we analyze and categorize patients into different groups based on the coronary damage produced by the disease. Based on features of the heart cycle extracted using high-resolution ECG, a multi-class scheme of error-correcting output codes (ECOC) is formulated and successfully applied. The results show that the proposed scheme obtains significant performance improvements compared to previous works and state-of-the-art ECOC designs.
Address Varna (Bulgaria)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IS’08
Notes MILAB; OR;HuPBA;MV Approved no
Call Number BCNPCL @ bcnpcl @ EPL2008 Serial 1042
 

 
Author Xavier Soria; Angel Sappa; Arash Akbarinia
Title Multispectral Single-Sensor RGB-NIR Imaging: New Challenges and Opportunities Type Conference Article
Year 2017 Publication 7th International Conference on Image Processing Theory, Tools & Applications Abbreviated Journal
Volume Issue Pages
Keywords Color restoration; Neural networks; Single-sensor cameras; Multispectral images; RGB-NIR dataset
Abstract Multispectral images captured with a single-sensor camera have become an attractive alternative for numerous computer vision applications. However, in order to fully exploit their potential, the color restoration problem (RGB representation) should be addressed. This problem is more evident in outdoor scenarios containing vegetation, living beings, or specular materials. The color distortion emerges from the sensitivity of the sensors, due to the overlap of the visible and near infrared spectral bands. This paper empirically evaluates the variability of the near infrared (NIR) information with respect to the changes of light throughout the day. A tiny neural network is proposed to restore the RGB color representation from the given RGBN (Red, Green, Blue, NIR) images. In order to evaluate the proposed algorithm, different experiments on an RGBN outdoor dataset are conducted, including various challenging cases. The obtained results show the challenge and the importance of addressing color restoration in single-sensor multispectral images.
Address Montreal; Canada; November 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IPTA
Notes NEUROBIT; MSIAU; 600.122 Approved no
Call Number Admin @ si @ SSA2017 Serial 3074
 

 
Author Kamal Nasrollahi; Sergio Escalera; P. Rasti; Gholamreza Anbarjafari; Xavier Baro; Hugo Jair Escalante; Thomas B. Moeslund
Title Deep Learning based Super-Resolution for Improved Action Recognition Type Conference Article
Year 2015 Publication 5th International Conference on Image Processing Theory, Tools and Applications IPTA2015 Abbreviated Journal
Volume Issue Pages 67 - 72
Keywords
Abstract Action recognition systems mostly work with videos of proper quality and resolution. Even the most challenging benchmark databases for action recognition hardly include low-resolution videos from, e.g., surveillance cameras. In videos recorded by such cameras, due to the distance between people and cameras, people appear very small and hence challenge action recognition algorithms. Simple upsampling methods, like bicubic interpolation, cannot retrieve all the detailed information that can help the recognition. To deal with this problem, in this paper we combine the results of bicubic interpolation with those of a state-of-the-art deep learning-based super-resolution algorithm, through an alpha-blending approach. The experimental results obtained on a down-sampled version of a large subset of the Hollywood2 benchmark database show the importance of the proposed system in increasing the recognition rate of a state-of-the-art action recognition system when handling low-resolution videos.
Address Orleans; France; November 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IPTA
Notes HuPBA;MV Approved no
Call Number Admin @ si @ NER2015 Serial 2648
 

 
Author Joan Arnedo-Moreno; D. Bañeres; Xavier Baro; S. Caballe; S. Guerrero; L. Porta; J. Prieto
Title Va-ID: A trust-based virtual assessment system Type Conference Article
Year 2014 Publication 6th International Conference on Intelligent Networking and Collaborative Systems Abbreviated Journal
Volume Issue Pages 328 - 335
Keywords
Abstract Even though online education is a very important pillar of lifelong education, institutions are still reluctant to commit to a fully online educational model. In the end, they keep relying on on-site assessment systems, mainly because fully virtual alternatives do not have the deserved social recognition or credibility. Thus, the design of virtual assessment systems able to provide effective proof of student authenticity, authorship, and the integrity of the activities in a scalable and cost-efficient manner would be very helpful. This paper presents ValID, a virtual assessment approach based on a continuous trust-level evaluation between students and the institution. The current trust level serves as the main mechanism for dynamically deciding which kinds of controls a given student should be subjected to, across different courses in a degree. The main goal is to provide a fair trade-off between security, scalability, and cost, while maintaining the perceived quality of the educational model.
Address Salerno; Italy; September 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4799-6386-7 Medium
Area Expedition Conference INCOS
Notes OR; HuPBA;MV Approved no
Call Number Admin @ si @ ABB2014 Serial 2620
 

 
Author Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera; Julio C. S. Jacques Junior; Xavier Baro; Evelyne Viegas; Yagmur Gucluturk; Umut Guclu; Marcel A. J. van Gerven; Rob van Lier; Meysam Madadi; Stephane Ayache
Title Design of an Explainable Machine Learning Challenge for Video Interviews Type Conference Article
Year 2017 Publication International Joint Conference on Neural Networks Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper reviews and discusses research advances on “explainable machine learning” in computer vision. We focus on a particular area of the “Looking at People” (LAP) thematic domain: first impressions and personality analysis. Our aim is to make the computational intelligence and computer vision communities aware of the importance of developing explanatory mechanisms for computer-assisted decision-making applications, such as automating recruitment. Judgments based on personality traits are made routinely by human resource departments to evaluate candidates' capacity for social insertion and their potential for career growth. However, inferring personality traits and, in general, the process by which we humans form a first impression of people, is highly subjective and may be biased. Previous studies have demonstrated that learning machines can learn to mimic human decisions. In this paper, we go one step further and formulate the problem of explaining the decisions of the models as a means of identifying what visual aspects are important, understanding how they relate to the decisions suggested, and possibly gaining insight into undesirable negative biases. We design a new challenge on the explainability of learning machines for first impressions analysis. We describe the setting, scenario, evaluation metrics, and preliminary outcomes of the competition. To the best of our knowledge, this is the first effort in terms of challenges for explainability in computer vision. In addition, our challenge design comprises several other quantitative and qualitative elements of novelty, including a “coopetition” setting, which combines competition and collaboration.
Address Anchorage; Alaska; USA; May 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IJCNN
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ EGE2017 Serial 2922