Diego Cheda. 2012. Monocular Depth Cues in Computer Vision Applications. (Ph.D. thesis, Ediciones Graficas Rey.)
Abstract: Depth perception is a key aspect of human vision. It is a routine and essential visual task that humans perform effortlessly in many daily activities. Depth perception has often been associated with stereo vision, but humans also have a remarkable ability to perceive depth relations even from a single image by using several monocular cues.
In the computer vision field, if image depth information were available, many tasks could be posed from a different perspective for the sake of higher performance and robustness. Nevertheless, given a single image, this possibility is usually discarded, since obtaining depth information has traditionally required three-dimensional reconstruction techniques that use two or more images of the same scene taken from different viewpoints. Recently, some proposals have shown the feasibility of computing depth information from a single image. In essence, the idea is to take advantage of a priori knowledge of the acquisition conditions and the observed scene to estimate depth from monocular pictorial cues. These approaches try to estimate precise scene depth maps by employing computationally demanding techniques. However, to assist many computer vision algorithms, it is not really necessary to compute a costly and detailed depth map of the image. Indeed, just a rough depth description can be very valuable in many problems.
In this thesis, we have demonstrated how coarse depth information can be integrated into different tasks, following alternative strategies to obtain more precise and robust results. In that sense, we have proposed a simple but sufficiently reliable technique whereby image scene regions are categorized into discrete depth ranges to build a coarse depth map. Based on this representation, we have explored the potential usefulness of our method in three application domains from novel viewpoints: camera rotation parameter estimation, background estimation, and pedestrian candidate generation. In the first case, we have computed the rotation of a camera mounted in a moving vehicle, applying two novel methods based on distant elements in the image, where the translation component of the image flow vectors is negligible. In background estimation, we have proposed a novel method to reconstruct the background by penalizing close regions in a cost function that integrates color, motion, and depth terms. Finally, we have benefited from the geometric and depth information available in single images for pedestrian candidate generation, significantly reducing the number of generated windows to be further processed by a pedestrian classifier. In all cases, results have shown that our approaches contribute to better performance.
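For illustration only, the coarse depth representation described above can be sketched as a quantization of per-pixel depth estimates into discrete ranges. The depth values, range boundaries, and function name below are hypothetical stand-ins, not details taken from the thesis:

```python
import numpy as np

def coarse_depth_map(depth, bin_edges):
    """Quantize per-pixel depth estimates into discrete depth ranges,
    yielding a coarse depth map (one integer label per range)."""
    return np.digitize(depth, bin_edges)

# Toy example: a 2x3 "depth image" in metres, with three ranges
# (label 0: near, < 5 m; label 1: mid, 5-20 m; label 2: far, >= 20 m).
depth = np.array([[1.0, 6.0, 30.0],
                  [4.0, 15.0, 50.0]])
labels = coarse_depth_map(depth, bin_edges=[5.0, 20.0])
```

Each range label then serves as the rough depth cue an application would consume, in place of a detailed metric depth map.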
Fadi Dornaika and Angel Sappa. 2006. 3D Face Tracking using Appearance Registration and Robust Iterative Closest Point Algorithm. 21st International Symposium on Computer and Information Sciences (ISCIS'06), LNCS 4263: 532–541.
Fadi Dornaika and Angel Sappa. 2006. Rigid and Non-Rigid Face Motion Tracking by Aligning Texture Maps and Stereo-Based 3D Models. 8th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS'06), LNCS 4179: 675–684.
Fadi Dornaika and Angel Sappa. 2006. 3D Motion from Image Derivatives using the Least Trimmed Square Regression. International Workshop on Intelligent Computing in Pattern Analysis/Synthesis (IWICPAS'06), LNCS 4153: 76–84.
Fadi Dornaika and Angel Sappa. 2007. SFM for Planar Scenes: A Direct and Robust Approach. Book chapter in Informatics in Control, Automation and Robotics II, J. Filipe, J. Ferrier, J. Cetto and M. Carvalho, eds., pp. 129–136. (Best papers of ICINCO 2005.)
Fadi Dornaika and Angel Sappa. 2008. Real Time Image Registration for Planar Structure and 3D Sensor Pose Estimation. In Asim Bhatti, ed., Stereo Vision, pp. 299–316.
Felipe Codevilla. 2019. On Building End-to-End Driving Models Through Imitation Learning. (Ph.D. thesis, Ediciones Graficas Rey.)
Abstract: Autonomous vehicles are now considered an assured asset in the future. Virtually all the relevant car-makers are now in a race to produce fully autonomous vehicles. These car-makers usually make use of modular pipelines for designing autonomous vehicles. This strategy decomposes the problem into a variety of tasks such as object detection and recognition, semantic and instance segmentation, depth estimation, SLAM and place recognition, as well as planning and control. Each module requires a separate set of expert algorithms, which are costly, especially in the amount of human labor and the necessity of data labelling. An alternative that has recently drawn considerable interest is end-to-end driving. In the end-to-end driving paradigm, perception and control are learned simultaneously using a deep network. These sensorimotor models are typically obtained by imitation learning from human demonstrations. The main advantage is that this approach can directly learn from large fleets of human-driven vehicles without requiring a fixed ontology and extensive amounts of labeling. However, scaling end-to-end driving methods to behaviors more complex than simple lane keeping or lead vehicle following remains an open problem.
In this thesis, in order to achieve more complex behaviours, we address some issues that arise when creating end-to-end driving systems through imitation learning. The first of them is the necessity of an environment for algorithm evaluation and for the collection of driving demonstrations. On this matter, we participated in the creation of the CARLA simulator, an open-source platform built from the ground up for autonomous driving validation and prototyping. Since the end-to-end approach is purely reactive, there is also the necessity to provide an interface with a global planning system. To this end, we propose conditional imitation learning, which conditions the produced actions on a high-level command. Evaluation is also a concern and is commonly performed by comparing the end-to-end network output to some pre-collected driving dataset. We show that this is surprisingly weakly correlated with actual driving, and we propose strategies for better data acquisition and a better comparison methodology. Finally, we confirm well-known generalization issues (due to dataset bias and overfitting) and new ones (due to dynamic objects and the lack of a causal model), as well as training instability; these problems require further research before end-to-end driving through imitation can scale to real-world driving.
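The command-conditioned control described above can be sketched as routing shared perception features through one output branch per high-level command. The command names, dimensions, and linear branches below are simplified, hypothetical stand-ins for the sub-networks used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

COMMANDS = ["follow_lane", "turn_left", "turn_right", "go_straight"]

# One small linear "branch" (head) per high-level command; in a real
# system each branch would be a sub-network on top of a shared encoder.
FEATURE_DIM, ACTION_DIM = 8, 2   # e.g. two actions: steering, throttle
branches = {c: rng.normal(size=(ACTION_DIM, FEATURE_DIM)) for c in COMMANDS}

def act(features, command):
    """Select the branch named by the planner's high-level command and
    map the shared perception features to an action."""
    return branches[command] @ features

features = rng.normal(size=FEATURE_DIM)      # stand-in for CNN features
steer_left = act(features, "turn_left")
steer_straight = act(features, "go_straight")
# Different commands yield different actions from the same observation,
# which is how a global planner steers an otherwise reactive policy.
```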
Felipe Lumbreras. 2001. Segmentation, Classification and Modelization of Textures by Means of Multiresolution Decomposition Techniques.
Felipe Lumbreras, Ramon Baldrich, Maria Vanrell, Joan Serrat and Juan J. Villanueva. 1999. Multiresolution Texture Classification of Ceramic Tiles. Recent Research Developments in Optical Engineering, Research Signpost, 2: 213–228.
Fernando Barrera. 2012. Multimodal Stereo from Thermal Infrared and Visible Spectrum. (Ph.D. thesis, Ediciones Graficas Rey.)
Abstract: Recent advances in thermal infrared imaging (LWIR) have allowed its use in applications beyond the military domain. Nowadays, this new family of sensors is included in different technical and scientific applications. They offer features that facilitate tasks such as the detection of pedestrians, hot spots, and temperature differences, among others, which can significantly improve the performance of systems where persons are expected to play the principal role, for instance in video surveillance, monitoring, and pedestrian detection applications.
In this dissertation the following question is stated: could a pair of sensors measuring different bands of the electromagnetic spectrum, such as the visible and thermal infrared, be used to extract depth information? Although it is a complex question, we show that a system with these characteristics is possible, and we discuss its advantages, drawbacks, and potential opportunities.
The matching and fusion of data coming from different sensors, such as the emissions registered in the visible and infrared bands, represent a special challenge, because it has been shown that these signals are weakly correlated. Therefore, many traditional techniques of image processing and computer vision are not directly helpful and require adjustments to perform correctly in each modality.
In this research, an experimental study that compares different cost functions and matching approaches is performed in order to build a multimodal stereovision system. Furthermore, the common problems in infrared/visible stereo, especially in outdoor scenes, are identified. Our framework summarizes the architecture of a generic stereo algorithm at different levels: computational, functional, and structural, and can be extended toward high-level (semantic) and high-order (prior) fusion. The proposed framework is intended to explore novel multimodal stereo matching approaches, going from sparse to dense representations (both disparity and depth maps). Moreover, context information is added in the form of priors and assumptions. Finally, this dissertation shows a promising way toward the integration of multiple sensors for recovering three-dimensional information.
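As an illustration of a matching cost suited to weakly correlated modalities, mutual information is a standard choice for comparing visible and LWIR patches. This generic sketch is not the thesis's actual cost function, and the patch contents and bin count are arbitrary:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Mutual information between two image patches: a matching cost
    that tolerates the weak correlation of visible/LWIR intensities,
    since it rewards statistical dependence rather than similar values."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# A patch is maximally informative about itself...
patch = np.arange(16.0).reshape(4, 4)
self_mi = mutual_information(patch, patch)
# ...and carries less information about an unrelated noise patch.
noise = np.random.default_rng(1).normal(size=(4, 4))
noise_mi = mutual_information(patch, noise)
```

In a stereo matcher, such a cost would be evaluated over candidate disparities, with the best-scoring disparity retained per pixel or region.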