Diego Alejandro Cheda, Daniel Ponsa, & Antonio Lopez. (2010). Camera Egomotion Estimation in the ADAS Context. In 13th International IEEE Annual Conference on Intelligent Transportation Systems (pp. 1415–1420).
Abstract: Camera-based Advanced Driver Assistance Systems (ADAS) have attracted considerable research effort in recent decades. Proposals based on monocular cameras require knowledge of the camera pose with respect to the environment in order to achieve efficient and robust performance. A common assumption in such systems is that the road is planar and that the camera pose with respect to it is approximately known. However, in real situations, the camera pose varies over time due to the vehicle movement, the road slope, and irregularities on the road surface. Thus, the changes in the camera position and orientation (i.e., the egomotion) are critical information that must be estimated at every frame to avoid poor performance. This work focuses on egomotion estimation from a monocular camera in the ADAS context. We review and compare egomotion methods on simulated and real ADAS-like sequences. Based on the results of our experiments, we show which of the considered linear and nonlinear algorithms perform best in this domain.
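The linear estimators compared in this work typically recover camera motion from point correspondences via the essential matrix. As a minimal, hedged sketch of that family (the classic eight-point algorithm on ideal normalized coordinates, not the authors' exact pipeline):

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Estimate the essential matrix E from >= 8 correspondences in
    normalized image coordinates (two Nx2 arrays), using the linear
    eight-point algorithm: each pair gives one equation x2^T E x1 = 0."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        A[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix constraint: two equal singular
    # values and one zero singular value.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```

The rotation and translation (the egomotion, up to scale) can then be extracted from the SVD of E; nonlinear methods typically refine such a linear initialization.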
|
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2008). Loss-Weighted Decoding for Error-Correcting Output Coding. In 3rd International Conference on Computer Vision Theory and Applications (Vol. 2, pp. 117–122).
|
David Masip, Agata Lapedriza, & Jordi Vitria. (2008). Multitask Learning: An Application to Incremental Face Recognition. In 3rd International Conference on Computer Vision Theory and Applications (Vol. 1, pp. 585–590).
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2008). Subject Recognition Using a New Approach for Feature Extraction. In 3rd International Conference on Computer Vision Theory and Applications (Vol. 2, pp. 61–66).
|
Oriol Pujol, Misael Rosales, Petia Radeva, & E Fernandez-Nofrerias. (2003). Intravascular Ultrasound Images Vessel Characterization using AdaBoost.
|
Jaume Garcia, Joel Barajas, Francesc Carreras, Sandra Pujades, & Petia Radeva. (2005). An intuitive validation technique to compare local versus global tagged MRI analysis. In Computers In Cardiology (Vol. 32, pp. 29–32).
Abstract: The myocardium appears as a uniform tissue in conventional Magnetic Resonance Images (MRI), which show only the contractile part of its movement. MR tagging is a unique imaging technique that prints a grid over the tissue; the grid moves with the underlying myocardium, revealing the true deformation of the cardiac muscle. Optical flow techniques based on spectral information estimate tissue displacement by analyzing the information encoded in the phase maps, which can be obtained using local (Gabor) or global (HARP) methods. In this paper we compare both on synthetic and real tagged MR sequences. We conclude that the local method is slightly more accurate than the global one. On the other hand, the global method is more efficient, as it is much faster and requires fewer parameters.
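The phase-based displacement idea behind both the Gabor and HARP methods can be illustrated in 1-D: a shift of the tag pattern appears as a phase shift in the response of a complex Gabor filter tuned to the tag frequency. A toy sketch (parameter choices are illustrative, not the paper's implementation):

```python
import numpy as np

def gabor_phase_shift(sig_a, sig_b, freq, sigma=8.0):
    """Estimate the displacement (in samples) between two 1-D signals
    from the phase difference of their responses to a complex Gabor
    filter tuned to `freq` (cycles per sample)."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * x)
    ra = np.convolve(sig_a, gabor, mode='same')
    rb = np.convolve(sig_b, gabor, mode='same')
    c = len(sig_a) // 2                       # read phase at the signal centre
    dphi = np.angle(rb[c] * np.conj(ra[c]))   # wrapped phase difference
    # A shift of d samples shifts the local phase by -2*pi*freq*d.
    return -dphi / (2 * np.pi * freq)
```

HARP applies the same phase-shift principle globally via a single bandpass filter in the Fourier domain, which is why it is faster but less locally adaptive.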
|
Joel Barajas, Karla Lizbeth Caballero, & Petia Radeva. (2007). Cardiac Phase Extraction in IVUS Sequences Using 1-D Gabor Filters. In 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 343–36).
|
Karla Lizbeth Caballero, Joel Barajas, & Petia Radeva. (2007). Using Reconstructed IVUS Images for Coronary Plaque Classification. In 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 2167–2170).
|
Onur Ferhat, & Fernando Vilariño. (2013). A Cheap Portable Eye-Tracker Solution for Common Setups. In 17th European Conference on Eye Movements.
Abstract: We analyze the feasibility of a cheap eye-tracker whose hardware consists of a single webcam and a Raspberry Pi device. Our aim is to discover the limits of such a system and to see whether it provides acceptable performance. We base our work on the open-source Opengazer (Zielinski, 2013) and propose several improvements to create a robust, real-time system. After assessing the accuracy of our eye-tracker in detailed experiments involving 18 subjects under 4 different system setups, we developed a simple game to see how it performs in practice, and we also installed it on a Raspberry Pi to create a portable stand-alone eye-tracker that achieves 1.62° horizontal accuracy at a 3 fps refresh rate for a building cost of 70 Euros.
Keywords: Low cost; eye-tracker; software; webcam; Raspberry Pi
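The abstract does not detail the gaze calibration step; a common choice in webcam eye-trackers, and a plausible sketch here, is a least-squares polynomial regression from a tracked pupil feature to screen coordinates (the function names and the second-order basis are illustrative assumptions, not the system's actual code):

```python
import numpy as np

def _basis(pupil_xy):
    # Second-order polynomial basis in the pupil coordinates.
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_gaze_map(pupil_xy, screen_xy):
    """Fit a quadratic mapping from pupil positions (Nx2) to screen
    coordinates (Nx2) by least squares, from calibration samples."""
    coeffs, *_ = np.linalg.lstsq(_basis(pupil_xy), screen_xy, rcond=None)
    return coeffs

def apply_gaze_map(coeffs, pupil_xy):
    """Map new pupil positions to estimated screen coordinates."""
    return _basis(pupil_xy) @ coeffs
```

In practice the user fixates a handful of known on-screen targets to collect the calibration pairs, after which the fitted map runs per frame.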
|
Eduard Vazquez, Ramon Baldrich, Joost Van de Weijer, & Maria Vanrell. (2011). Describing Reflectances for Colour Segmentation Robust to Shadows, Highlights and Textures. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(5), 917–930.
Abstract: The segmentation of a single material reflectance is a challenging problem due to the considerable variation in image measurements caused by the geometry of the object, shadows, and specularities. The combination of these effects has been modeled by the dichromatic reflection model. However, the application of the model to real-world images is limited due to unknown acquisition parameters and compression artifacts. In this paper, we present a robust model for the shape of a single material reflectance in histogram space. The method is based on a multilocal creaseness analysis of the histogram, which results in a set of ridges representing the material reflectances. The segmentation method derived from these ridges is robust to shadows, shading, specularities, and texture in real-world images. We further complete the method by incorporating prior knowledge from image statistics, and incorporate spatial coherence by using multiscale color contrast information. The results show that our method clearly outperforms state-of-the-art segmentation methods on a widely used segmentation benchmark, its main strength being excellent performance in the presence of shadows and highlights at low computational cost.
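The dichromatic reflection model mentioned above writes each pixel of a single material as a shading-scaled body colour plus a specular term; all such pixels therefore lie in a 2-D subspace of RGB space, which is what yields the ridge structures exploited in histogram space. A minimal synthetic sketch (colour values are illustrative):

```python
import numpy as np

def dichromatic_pixels(c_body, c_spec, n=1000, seed=0):
    """Synthesize RGB pixels of one material under the dichromatic model
    I = m_b * c_body + m_s * c_spec, where m_b is a geometry-dependent
    shading factor and m_s a specular factor."""
    rng = np.random.default_rng(seed)
    m_b = rng.uniform(0.1, 1.0, n)   # diffuse shading term
    m_s = rng.uniform(0.0, 0.3, n)   # specular (highlight) term
    return np.outer(m_b, c_body) + np.outer(m_s, c_spec)
```

Because every pixel is a combination of the same two colour vectors, the pixel matrix has rank 2; shadows and highlights move pixels within that plane rather than off it, which is why a planar/ridge model can absorb them.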
|
Arjan Gijsenij, Theo Gevers, & Joost Van de Weijer. (2012). Improving Color Constancy by Photometric Edge Weighting. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(5), 918–929.
Abstract: Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow, and highlight edges, and these different edge types may have a distinctive influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the effect of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g., material, shadow-geometry, and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. This evaluation shows that specular and shadow edges are more valuable than material edges for the estimation of the illuminant. Based on this finding, the (iterative) weighted Grey-Edge algorithm is proposed, in which these edge types are emphasized for the estimation of the illuminant. On images recorded under controlled circumstances, the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error of up to 11% are obtained with respect to regular edge-based color constancy.
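The underlying Grey-Edge estimator pools a Minkowski p-norm of the image derivatives per colour channel; the paper's weighted variant additionally up-weights specular and shadow edges before pooling. A sketch of the unweighted estimator (not the iterative weighted version):

```python
import numpy as np

def grey_edge(img, p=1):
    """Grey-Edge illuminant estimate: each channel of the illuminant is
    taken proportional to the Minkowski p-norm of that channel's spatial
    derivatives. `img` is an HxWx3 float array; returns a unit-norm RGB
    illuminant estimate."""
    e = np.zeros(3)
    for c in range(3):
        gy, gx = np.gradient(img[:, :, c])
        mag = np.sqrt(gx**2 + gy**2)          # per-pixel gradient magnitude
        e[c] = (mag**p).mean() ** (1.0 / p)   # Minkowski pooling
    return e / np.linalg.norm(e)
```

The weighted variant would multiply `mag` by a per-pixel weight derived from the photometric edge classification before pooling, then iterate: estimate, correct the image, re-classify, and re-estimate.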
|
Albert Andaluz, Francesc Carreras, Debora Gil, & Jaume Garcia. (2010). Una aplicació amigable pel càlcul de indicadors clínics del ventricle esquerre [A user-friendly application for computing clinical indicators of the left ventricle]. Barcelona: Biocat.
|
Anjan Dutta, & Zeynep Akata. (2019). Semantically Tied Paired Cycle Consistency for Zero-Shot Sketch-based Image Retrieval. In 32nd IEEE Conference on Computer Vision and Pattern Recognition (pp. 5089–5098).
Abstract: Zero-shot sketch-based image retrieval (SBIR) is an emerging task in computer vision, allowing the retrieval of natural images relevant to sketch queries that might not have been seen in the training phase. Existing works either require aligned sketch-image pairs or rely on an inefficient memory fusion layer for mapping the visual information to a semantic space. In this work, we propose a semantically tied paired cycle-consistent generative (SEM-PCYC) model for zero-shot SBIR, where each branch maps the visual information to a common semantic space via adversarial training. Each of these branches maintains a cycle consistency that requires supervision only at the category level, avoiding the need for expensive aligned sketch-image pairs. A classification criterion on the generators' outputs ensures that the visual-to-semantic mapping is discriminative. Furthermore, we propose to combine textual and hierarchical side information via a feature-selection auto-encoder that selects discriminative side information within the same end-to-end model. Our results demonstrate a significant boost in zero-shot SBIR performance over the state of the art on the challenging Sketchy and TU-Berlin datasets.
|
Armin Mehri, & Angel Sappa. (2019). Colorizing Near Infrared Images through a Cyclic Adversarial Approach of Unpaired Samples. In IEEE International Conference on Computer Vision and Pattern Recognition-Workshops.
Abstract: This paper presents a novel approach for colorizing near-infrared (NIR) images. The approach is based on image-to-image translation with a cycle-consistent adversarial network that learns the color channels from unpaired datasets. The generators are tailored networks that require less computation time, converge faster, and generate high-quality samples. The results have been evaluated quantitatively, using standard evaluation metrics, and qualitatively, showing considerable improvements with respect to the state of the art.
|
Patricia Suarez, Angel Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2019). Image Vegetation Index through a Cycle Generative Adversarial Network. In IEEE International Conference on Computer Vision and Pattern Recognition-Workshops.
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just an RGB image. The NDVI values are obtained from the visible spectral band together with a synthetic near-infrared (NIR) image produced by a cycle GAN. The cycle GAN obtains a NIR image from a given grayscale image; it is trained on an unpaired set of grayscale and NIR images, using a U-net architecture and a multiple-term loss function (the grayscale images are obtained from the provided RGB images). The NIR image estimated by the proposed cycle generative adversarial network is then used to compute the NDVI index. Experimental results show the validity of the proposed approach, and comparisons with previous approaches are also provided.
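For reference, the NDVI itself is a simple per-pixel ratio of the NIR and red bands; a minimal sketch (the epsilon guard and the assumed [0, 1] band range are choices made here, not taken from the paper):

```python
import numpy as np

def ndvi(red, nir, eps=1e-8):
    """Normalized Difference Vegetation Index from a red band and a
    (possibly synthesized) near-infrared band, both float arrays in [0, 1].
    Values near +1 indicate dense vegetation; `eps` avoids division by
    zero over dark pixels."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)
```

In the pipeline described above, `nir` would be the cycle GAN's synthetic NIR band and `red` the red channel of the input RGB image.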
|