|
Records |
Links |
|
Author |
Carme Julia; Felipe Lumbreras; Angel Sappa |
|
|
Title |
A Factorization-based Approach to Photometric Stereo |
Type |
Journal Article |
|
Year |
2011 |
Publication |
International Journal of Imaging Systems and Technology |
Abbreviated Journal |
IJIST |
|
|
Volume |
21 |
Issue |
1 |
Pages |
115-119 |
|
|
Keywords |
|
|
|
Abstract |
This article presents an adaptation of a factorization technique to tackle the photometric stereo problem: recovering the surface normals and reflectance of an object from a set of images obtained under different lighting conditions. The main contribution of the proposed approach is to treat pixels in shadow and saturated regions as missing data, in order to reduce their influence on the result. Concretely, an adapted Alternation technique is used to deal with the missing data. Experimental results on both synthetic and real images show the viability of the proposed factorization-based strategy. |
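The alternation idea described in the abstract can be illustrated on synthetic data. In the Lambertian model the intensity matrix of f images by p pixels is rank 3 (lights times albedo-scaled normals), and shadowed or saturated entries are simply skipped in alternating masked least squares. Everything below (dimensions, mask rate, random setup) is an assumed illustration, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Lambertian setup: f images under different lights, p pixels
f, p = 8, 50
L = rng.normal(size=(f, 3))            # light directions (one per image)
S = rng.normal(size=(3, p))            # albedo-scaled surface normals
I = L @ S                              # ideal rank-3 intensity matrix

# Shadowed / saturated pixels become missing entries (True = observed)
W = rng.random((f, p)) > 0.2

# Alternation: fix one factor, solve the other by masked least squares
A = rng.normal(size=(f, 3))
B = rng.normal(size=(3, p))
for _ in range(200):
    for j in range(p):                 # re-estimate each pixel's normal
        m = W[:, j]
        B[:, j] = np.linalg.lstsq(A[m], I[m, j], rcond=None)[0]
    for i in range(f):                 # re-estimate each light row
        m = W[i]
        A[i] = np.linalg.lstsq(B[:, m].T, I[i, m], rcond=None)[0]

# Relative error on the observed entries only
residual = np.linalg.norm(W * (I - A @ B)) / np.linalg.norm(W * I)
```

On noise-free rank-3 data the alternation converges to a factorization that reproduces all observed entries; the masked entries never influence the estimate, which is the point of treating them as missing data.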
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ JLS2011; ADAS @ adas @ |
Serial |
1711 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias; A. Moreira |
|
|
Title |
Incremental texture mapping for autonomous driving |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Robotics and Autonomous Systems |
Abbreviated Journal |
RAS |
|
|
Volume |
84 |
Issue |
|
Pages |
113-128 |
|
|
Keywords |
Scene reconstruction; Autonomous driving; Texture mapping |
|
|
Abstract |
Autonomous vehicles carry a large number of on-board sensors, not only to provide coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unified representation that fuses the data given by all these sensors. We propose an algorithm capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh, which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad-quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine-quality textures. |
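The paper's constrained-Delaunay pipeline is not reproduced here, but the core texture-mapping step — projecting mesh triangles into a calibrated camera to get per-vertex texture coordinates, while rejecting grazing views that would yield bad-quality texture — can be sketched. The camera, the tiny mesh, and the `min_cos` quality threshold are all assumptions for illustration:

```python
import numpy as np

# Assumed calibrated camera at the origin: P = K [I | 0]
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

# A tiny mesh: four 3D vertices, two triangles
vertices = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0],
                     [0.0, 1.0, 5.0], [1.0, 1.0, 5.2]])
triangles = np.array([[0, 1, 2], [1, 3, 2]])

def texture_coords(tri, min_cos=0.2):
    """Project a triangle into the image to obtain per-vertex (u, v) texture
    coordinates; reject near-grazing views, whose texture would be badly
    stretched (min_cos is an assumed quality criterion)."""
    v = vertices[tri]
    n = np.cross(v[1] - v[0], v[2] - v[0])
    n /= np.linalg.norm(n)                     # triangle normal
    view = v.mean(axis=0)
    view /= np.linalg.norm(view)               # viewing ray from camera centre
    if abs(n @ view) < min_cos:
        return None                            # grazing view: skip this texture
    h = P @ np.vstack([v.T, np.ones(3)])       # homogeneous projection
    return (h[:2] / h[2]).T                    # one (u, v) pair per vertex

uv0 = texture_coords(triangles[0])
```

A full system would additionally blend textures from several cameras and update the mesh incrementally, as the abstract describes.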
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.086 |
Approved |
no |
|
|
Call Number |
Admin @ si @ OSS2016b |
Serial |
2912 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Oliveira; Victor Santos; Angel Sappa |
|
|
Title |
Multimodal Inverse Perspective Mapping |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Information Fusion |
Abbreviated Journal |
IF |
|
|
Volume |
24 |
Issue |
|
Pages |
108-121 |
|
|
Keywords |
Inverse perspective mapping; Multimodal sensor fusion; Intelligent vehicles |
|
|
Abstract |
Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective-associated effects facilitates road and obstacle detection and also assists in free-space estimation. There is, however, a significant limitation in inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The present paper proposes a robust solution based on multimodal sensor fusion. Data from a laser range finder are fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with the classical inverse perspective mapping. Furthermore, the proposed approach is able to cope with several cameras with different lenses or image resolutions, as well as with dynamic viewpoints. |
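The classical single-camera inverse perspective mapping that the paper improves upon amounts to back-projecting each pixel onto the ground plane. A minimal sketch, with an entirely assumed camera geometry (intrinsics, 10° pitch, 1.5 m height):

```python
import numpy as np

# Assumed geometry: pinhole camera 1.5 m above a flat road, pitched 10 deg down
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # intrinsics
pitch = np.deg2rad(10.0)
h = 1.5                                                        # camera height (m)

# Camera-to-world rotation (x right, y down, z forward; tilt about x-axis)
c, s = np.cos(pitch), np.sin(pitch)
R = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def pixel_to_ground(u, v):
    """Back-project pixel (u, v) onto the ground plane y = h."""
    ray = R @ np.linalg.solve(K, np.array([u, v, 1.0]))  # viewing ray in world
    t = h / ray[1]                                       # intersect the road
    point = t * ray
    return point[0], point[2]                            # lateral, forward (m)

x0, z0 = pixel_to_ground(320, 240)   # principal point
x1, z1 = pixel_to_ground(320, 400)   # a pixel lower in the image
```

The failure mode the paper addresses is visible in this sketch: the mapping assumes every pixel lies on the road plane, so pixels belonging to obstacles are projected to wrong ground positions — hence the fused laser data used to mask those regions out.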
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.055; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ OSS2015c |
Serial |
2532 |
|
Permanent link to this record |
|
|
|
|
Author |
Monica Piñol; Angel Sappa; Ricardo Toledo |
|
|
Title |
Adaptive Feature Descriptor Selection based on a Multi-Table Reinforcement Learning Strategy |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
150 |
Issue |
A |
Pages |
106-115 |
|
|
Keywords |
Reinforcement learning; Q-learning; Bag of features; Descriptors |
|
|
Abstract |
This paper presents and evaluates a framework to improve the performance of visual object classification methods that use image feature descriptors as inputs. The goal of the proposed framework is to learn the best descriptor for each image in a given database. This goal is reached by means of a reinforcement learning process using minimal information. The visual classification system used to demonstrate the proposed framework is based on a bag-of-features scheme, and the reinforcement learning technique is implemented through the Q-learning approach. The behavior of the reinforcement learning with different state definitions is evaluated. Additionally, a method that combines all these states is formulated in order to select the optimal state. Finally, the chosen actions are obtained from the best set of image descriptors in the literature: PHOW, SIFT, C-SIFT, SURF and Spin. Experimental results using two public databases (ETH and COIL) show both the validity of the proposed approach and comparisons with the state of the art. In all cases the best results are obtained with the proposed approach. |
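The Q-learning loop named in the abstract, with descriptors as actions, can be sketched as follows. The state definition, the reward signal (a stand-in for classifier feedback), and all numeric choices below are assumptions; the paper's multi-table strategy is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

descriptors = ["PHOW", "SIFT", "C-SIFT", "SURF", "Spin"]   # the actions
n_states, n_actions = 4, len(descriptors)                  # states: assumed coarse image categories

# Stand-in for the classifier: per-state accuracy of each descriptor (assumed)
true_acc = rng.random((n_states, n_actions))

Q = np.zeros((n_states, n_actions))
alpha, eps = 0.1, 0.2
for _ in range(5000):
    state = int(rng.integers(n_states))
    if rng.random() < eps:                                 # epsilon-greedy exploration
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    reward = float(rng.random() < true_acc[state, action]) # 1 if classified correctly
    # One-step update (no successor state: each image is an independent episode)
    Q[state, action] += alpha * (reward - Q[state, action])

# Learned policy: best descriptor per state
best = {s: descriptors[int(np.argmax(Q[s]))] for s in range(n_states)}
```

With this update each Q-value tracks an exponential moving average of the reward, so it converges toward the per-state classification accuracy of the chosen descriptor.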
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.055; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PST2015 |
Serial |
2473 |
|
Permanent link to this record |
|
|
|
|
Author |
Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo |
|
|
Title |
Detailed 3D face reconstruction from a single RGB image |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Journal of WSCG |
Abbreviated Journal |
JWSCG |
|
|
Volume |
27 |
Issue |
2 |
Pages |
103-112 |
|
|
Keywords |
3D Wrinkle Reconstruction; Face Analysis; Optimization |
|
|
Abstract |
This paper introduces a method to obtain a detailed 3D reconstruction of facial skin from a single RGB image. To this end, we propose the exclusive use of an input image, without requiring any information about the observed material or any training data to model the wrinkle properties. Wrinkles are detected and characterized directly from the image via a simple and effective parametric model, determining several features such as location, orientation, width, and height. With these ingredients, we minimize a photometric error to retrieve the final detailed 3D map, which is initialized by current techniques based on deep learning. In contrast with other approaches, we only require estimating a depth parameter, making our approach fast and intuitive. Extensive experimental evaluation is presented on a wide variety of synthetic and real images, including different skin properties and facial expressions. In all cases, our method outperforms current approaches in terms of 3D reconstruction accuracy, providing striking results for both large and fine wrinkles. |
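The single-depth-parameter idea can be sketched in 1-D: an assumed parametric wrinkle (here a Gaussian furrow, which is only an illustrative stand-in for the paper's model) is shaded under a Lambertian model, and its depth is recovered by minimizing a photometric error against the observed shading:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200)        # 1-D slice across a wrinkle

def wrinkle_depth(d, width=0.2):
    """Assumed parametric wrinkle: a Gaussian furrow of depth d."""
    return -d * np.exp(-(x / width) ** 2)

def shading(depth, light=(0.3, 0.95)):
    """Lambertian shading of a 1-D height profile under a directional light."""
    slope = np.gradient(depth, x)
    normals = np.stack([-slope, np.ones_like(slope)])
    normals /= np.linalg.norm(normals, axis=0)
    return np.clip(np.asarray(light) @ normals, 0.0, None)

observed = shading(wrinkle_depth(0.15))     # synthetic "image" row

# Recover the single depth parameter by scanning the photometric error
candidates = np.linspace(0.0, 0.5, 201)
errors = [np.sum((shading(wrinkle_depth(d)) - observed) ** 2) for d in candidates]
d_hat = candidates[int(np.argmin(errors))]
```

Because only one scalar is estimated per wrinkle, the search stays fast and well-posed, which mirrors the "fast and intuitive" claim in the abstract; the real method also estimates location, orientation, and width from the image first.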
|
|
Address |
2019/11 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.086; 600.130; 600.122 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3708 |
|
Permanent link to this record |