Author |
Jiaolong Xu; Liang Xiao; Antonio Lopez |

|
|
Title |
Self-supervised Domain Adaptation for Computer Vision Tasks |
Type |
Journal Article |
|
Year |
2019 |
Publication |
IEEE ACCESS |
Abbreviated Journal |
ACCESS |
|
|
Volume |
7 |
Issue |
|
Pages  |
156694 - 156706 |
|
|
Keywords |
|
|
|
Abstract |
Recent progress in self-supervised visual representation learning has achieved remarkable success on many challenging computer vision benchmarks. However, whether these techniques can be used for domain adaptation has not been explored. In this work, we propose a generic method for self-supervised domain adaptation, using object recognition and semantic segmentation of urban scenes as use cases. Focusing on simple pretext/auxiliary tasks (e.g., image rotation prediction), we assess different learning strategies to improve domain adaptation effectiveness through self-supervision. Additionally, we propose two complementary strategies to further boost the domain adaptation accuracy on semantic segmentation within our method, consisting of prediction layer alignment and batch normalization calibration. The experimental results show adaptation levels comparable to the most studied domain adaptation methods, thus establishing self-supervision as a new alternative for achieving domain adaptation. The code is available at https://github.com/Jiaolong/self-supervised-da. |
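A minimal sketch of the image-rotation pretext task mentioned in this abstract, assuming a PyTorch setup; names such as SharedBackbone and rotation_pretext_loss are illustrative, and the paper's actual training strategies, prediction layer alignment and batch normalization calibration are only available in the linked repository.

```python
# Hypothetical sketch of an image-rotation pretext task for self-supervision.
# Library: PyTorch. Names (SharedBackbone, rotation_pretext_loss) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBackbone(nn.Module):
    """Tiny CNN shared by the main task head and the rotation head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.task_head = nn.Linear(32, num_classes)   # supervised on source domain
        self.rotation_head = nn.Linear(32, 4)         # self-supervised on both domains

    def forward(self, x):
        z = self.features(x)
        return self.task_head(z), self.rotation_head(z)

def rotation_pretext_loss(model, images):
    """Rotate each image by k*90 degrees and ask the rotation head to predict k."""
    ks = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, ks)])
    _, rot_logits = model(rotated)
    return F.cross_entropy(rot_logits, ks)

model = SharedBackbone()
target_batch = torch.randn(8, 3, 32, 32)   # stand-in for unlabeled target images
loss = rotation_pretext_loss(model, target_batch)
loss.backward()
```

In a domain adaptation setting, this rotation loss would typically be computed on unlabeled target-domain images and added to the supervised loss on the source domain.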
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ XXL2019 |
Serial |
3302 |
|
Permanent link to this record |
|
|
|
|
Author |
Francisco Blanco; Felipe Lumbreras; Joan Serrat; Roswitha Siener; Silvia Serranti; Giuseppe Bonifazi; Montserrat Lopez Mesas; Manuel Valiente |

|
|
Title |
Taking advantage of Hyperspectral Imaging classification of urinary stones against conventional IR Spectroscopy |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Journal of Biomedical Optics |
Abbreviated Journal |
JBiO |
|
|
Volume |
19 |
Issue |
12 |
Pages  |
126004-1 - 126004-9 |
|
|
Keywords |
|
|
|
Abstract |
The analysis of urinary stones is mandatory for the best management of the disease after the stone passage, in order to prevent further stone episodes. Thus, the use of an appropriate methodology for an individualized stone analysis becomes a key factor in giving the patient the most suitable treatment. A recently developed hyperspectral imaging methodology, based on pixel-to-pixel analysis of near-infrared spectral images, is compared to the reference technique in stone analysis, infrared (IR) spectroscopy. The developed classification model yields a >90% correct classification rate when compared to IR and is able to precisely locate stone components within the structure of the stone with a 15 µm resolution. Owing to the minimal sample pretreatment, short analysis time, good performance of the model, and the automation of the measurements, which makes them analyst independent, this methodology can be considered for routine analysis in clinical laboratories. |
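To make the pixel-to-pixel idea concrete, here is a toy, hypothetical sketch of classifying every pixel of a hyperspectral cube from its spectrum; the cube, bands, training spectra and random-forest model are synthetic placeholders, not the paper's NIR setup or classification model.

```python
# Illustrative pixel-wise classification of a hyperspectral cube (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
H, W, B = 64, 64, 50                      # image height, width, spectral bands
cube = rng.random((H, W, B))              # placeholder hyperspectral image

# Placeholder labelled spectra for two hypothetical stone components.
train_spectra = rng.random((200, B))
train_labels = rng.integers(0, 2, 200)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(train_spectra, train_labels)

# Classify every pixel independently from its spectrum, then reshape to a map.
component_map = clf.predict(cube.reshape(-1, B)).reshape(H, W)
print(component_map.shape)  # (64, 64): per-pixel component assignment
```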
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ BLS2014 |
Serial |
2563 |
|
Permanent link to this record |
|
|
|
|
Author |
Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Debora Gil |

|
|
Title |
Continuous head pose estimation using manifold subspace embedding and multivariate regression |
Type |
Journal Article |
|
Year |
2018 |
Publication |
IEEE ACCESS |
Abbreviated Journal |
ACCESS |
|
|
Volume |
6 |
Issue |
|
Pages  |
18325 - 18334 |
|
|
Keywords |
Head Pose estimation; HOG features; Generalized Discriminative Common Vectors; B-splines; Multiple linear regression |
|
|
Abstract |
In this paper, a continuous head pose estimation system is proposed to estimate yaw and pitch head angles from raw facial images. Our approach is based on manifold-learning-based methods, due to their promising generalization properties shown for face modelling from images. The method combines histograms of oriented gradients, generalized discriminative common vectors and continuous local regression to achieve successful performance. Our proposal was tested on multiple standard face datasets, as well as in a realistic scenario. Results show a considerable performance improvement and a higher consistency of our model in comparison with other state-of-the-art methods, with angular errors varying between 9 and 17 degrees. |
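A drastically simplified, hypothetical sketch of the HOG-features-plus-multivariate-regression idea for predicting (yaw, pitch); the paper's generalized discriminative common vectors embedding and B-spline continuous local regression are omitted, and the face crops and angles below are synthetic.

```python
# Simplified sketch: HOG features + multivariate linear regression for (yaw, pitch).
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
faces = rng.random((100, 64, 64))                 # placeholder grayscale face crops
angles = rng.uniform(-60, 60, size=(100, 2))      # placeholder (yaw, pitch) in degrees

X = np.array([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
              for img in faces])
reg = LinearRegression().fit(X, angles)           # multivariate output: yaw and pitch

pred_yaw, pred_pitch = reg.predict(X[:1])[0]
print(round(pred_yaw, 1), round(pred_pitch, 1))
```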
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2169-3536 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DMH2018b |
Serial |
3091 |
|
Permanent link to this record |
|
|
|
|
Author |
Marçal Rusiñol; J. Chazalon; Katerine Diaz |


|
|
Title |
Augmented Songbook: an Augmented Reality Educational Application for Raising Music Awareness |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Multimedia Tools and Applications |
Abbreviated Journal |
MTAP |
|
|
Volume |
77 |
Issue |
11 |
Pages  |
13773-13798 |
|
|
Keywords |
Augmented reality; Document image matching; Educational applications |
|
|
Abstract |
This paper presents the development of an Augmented Reality mobile application which aims at raising young children's awareness of abstract concepts of music, such as musical notation or the idea of rhythm. Recent studies in Augmented Reality for education suggest that such technologies have multiple benefits for students, including younger ones. As document image acquisition and processing gains maturity on mobile platforms, we explore how it is possible to build a markerless and real-time application to augment the physical documents with didactic animations and interactive virtual content. Given a standard image processing pipeline, we compare the performance of different local descriptors at two key stages of the process. Results suggest alternatives to SIFT local descriptors, in terms of result quality and computational efficiency, both for document model identification and perspective transform estimation. All experiments are performed on an original and public dataset we introduce here. |
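A hedged OpenCV sketch of the generic two-stage pipeline referred to above (matching local descriptors against a document model, then estimating a perspective transform); ORB is used here purely as one possible SIFT alternative, and the synthetic images below merely stand in for a songbook page and a camera frame.

```python
# Generic sketch of descriptor matching + homography estimation with OpenCV.
import cv2
import numpy as np

# Synthetic "document model": dark rectangles standing in for printed content.
rng = np.random.default_rng(0)
model_img = np.full((480, 640), 255, np.uint8)
for _ in range(60):
    x, y = int(rng.integers(0, 600)), int(rng.integers(0, 440))
    cv2.rectangle(model_img, (x, y), (x + 30, y + 20), int(rng.integers(0, 128)), -1)

# Synthetic "camera frame": the model under a known perspective warp.
true_H = np.array([[0.9, 0.05, 20], [-0.03, 0.95, 10], [1e-5, 2e-5, 1.0]])
frame = cv2.warpPerspective(model_img, true_H, (640, 480))

orb = cv2.ORB_create(nfeatures=1000)               # one alternative to SIFT
kp1, des1 = orb.detectAndCompute(model_img, None)
kp2, des2 = orb.detectAndCompute(frame, None)

# Match binary descriptors and keep the best correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

# Estimate the perspective transform used to place virtual content on the page.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(np.round(H, 3))   # should approximate true_H
```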
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; ADAS; 600.084; 600.121; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RCD2018 |
Serial |
2996 |
|
Permanent link to this record |
|
|
|
|
Author |
Cristhian Aguilera; Fernando Barrera; Felipe Lumbreras; Angel Sappa; Ricardo Toledo |


|
|
Title |
Multispectral Image Feature Points |
Type |
Journal Article |
|
Year |
2012 |
Publication |
Sensors |
Abbreviated Journal |
SENS |
|
|
Volume |
12 |
Issue |
9 |
Pages  |
12661-12672 |
|
|
Keywords |
multispectral image descriptor; color and infrared images; feature point descriptor |
|
|
Abstract |
This paper presents a novel feature point descriptor for the multispectral image case: Far-Infrared and Visible Spectrum images. It allows matching interest points on images of the same scene but acquired in different spectral bands. Initially, points of interest are detected on both images through a SIFT-like scale-space representation. Then, these points are characterized using an Edge Oriented Histogram (EOH) descriptor. Finally, points of interest from multispectral images are matched by finding nearest couples using the information from the descriptor. The provided experimental results and comparisons with similar methods show both the validity of the proposed approach and the improvements it offers with respect to the current state of the art. |
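A rough, hypothetical sketch of the pipeline described above: keypoints detected with a SIFT-like detector, described with a simple gradient-orientation histogram (a stand-in for, not a reproduction of, the paper's EOH descriptor), and matched by nearest neighbour; the visible/infrared pair below is synthetic.

```python
# Sketch: SIFT-like detection + edge-orientation-histogram description + NN matching.
import cv2
import numpy as np

def edge_orientation_histogram(img, kp, patch=32, bins=8):
    """Histogram of gradient orientations in a square patch around a keypoint."""
    x, y = int(kp.pt[0]), int(kp.pt[1])
    h = patch // 2
    roi = img[max(0, y - h):y + h, max(0, x - h):x + h].astype(np.float32)
    gx = cv2.Sobel(roi, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(roi, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)                  # angle in radians
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-6)

def describe(img):
    kps = cv2.SIFT_create().detect(img, None)           # SIFT-like scale-space detector
    return kps, np.array([edge_orientation_histogram(img, k) for k in kps])

# Synthetic stand-ins for a visible / far-infrared pair of the same scene.
rng = np.random.default_rng(0)
visible = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (7, 7), 0)
infrared = 255 - visible                                 # crude contrast reversal

kp_v, d_v = describe(visible)
kp_i, d_i = describe(infrared)

# Nearest-neighbour matching on the orientation histograms.
dists = np.linalg.norm(d_v[:, None, :] - d_i[None, :, :], axis=2)
matches = dists.argmin(axis=1)
print(len(kp_v), "keypoints matched")
```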
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ ABL2012 |
Serial |
2154 |
|
Permanent link to this record |
|
|
|
|
Author |
Joan Marc Llargues Asensio; Juan Peralta; Raul Arrabales; Manuel Gonzalez Bedia; Paulo Cortez; Antonio Lopez |

|
|
Title |
Artificial Intelligence Approaches for the Generation and Assessment of Believable Human-Like Behaviour in Virtual Characters |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Expert Systems With Applications |
Abbreviated Journal |
EXSY |
|
|
Volume |
41 |
Issue |
16 |
Pages  |
7281–7290 |
|
|
Keywords |
Turing test; Human-like behaviour; Believability; Non-player characters; Cognitive architectures; Genetic algorithm; Artificial neural networks |
|
|
Abstract |
Having artificial agents autonomously produce human-like behaviour is one of the most ambitious original goals of Artificial Intelligence (AI) and remains an open problem nowadays. The imitation game originally proposed by Turing constitutes a very effective method to prove the indistinguishability of an artificial agent. The behaviour of an agent is said to be indistinguishable from that of a human when observers (the so-called judges in the Turing test) cannot tell apart humans and non-human agents. Different environments, testing protocols, scopes and problem domains can be established to develop limited versions or variants of the original Turing test. In this paper we use a specific version of the Turing test, based on the international BotPrize competition, built in a First-Person Shooter video game, where both human players and non-player characters interact in complex virtual environments. Based on our past experience in both the BotPrize competition and other robotics and computer game AI applications, we have developed three new, more advanced controllers for believable agents: two based on a combination of the CERA–CRANIUM and SOAR cognitive architectures and another based on ADANN, a system for the automatic evolution and adaptation of artificial neural networks. These new agents have been put to the test jointly with CCBot3, the winner of the BotPrize 2010 competition (Arrabales et al., 2012), and have shown a significant improvement in the humanness ratio. Additionally, we have confronted all these bots with both first-person believability assessment (the original BotPrize judging protocol) and third-person believability assessment, demonstrating that the active involvement of the judge has a great impact on the recognition of human-like behaviour. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.055; 600.057; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ LPA2014 |
Serial |
2500 |
|
Permanent link to this record |
|
|
|
|
Author |
J.S. Cope; P.Remagnino; S.Mannan; Katerine Diaz; Francesc J. Ferri; P.Wilkin |


|
|
Title |
Reverse Engineering Expert Visual Observations: From Fixations To The Learning Of Spatial Filters With A Neural-Gas Algorithm |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Expert Systems with Applications |
Abbreviated Journal |
EXWA |
|
|
Volume |
40 |
Issue |
17 |
Pages  |
6707-6712 |
|
|
Keywords |
Neural gas; Expert vision; Eye-tracking; Fixations |
|
|
Abstract |
Human beings can become experts in performing specific vision tasks, for example, doctors analysing medical images, or botanists studying leaves. With sufficient knowledge and experience, people can become very efficient at such tasks. When attempting to perform these tasks with a machine vision system, it would be highly beneficial to be able to replicate the process which the expert undergoes. Advances in eye-tracking technology can provide data to allow us to discover the manner in which an expert studies an image. This paper presents a first step towards utilizing these data for computer vision purposes. A growing-neural-gas algorithm is used to learn a set of Gabor filters which give high responses to image regions which a human expert fixated on. These filters can then be used to identify regions in other images which are likely to be useful for a given vision task. The algorithm is evaluated by learning filters for locating specific areas of plant leaves. |
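A minimal neural-gas sketch in the spirit of the abstract above, assuming patches sampled at fixation points; the data are synthetic, and the paper's growing variant and Gabor-filter parametrisation are not reproduced here.

```python
# Minimal neural-gas sketch: prototypes adapt to patches sampled at fixations.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "fixated" image patches, flattened to vectors.
fixation_patches = rng.random((500, 11 * 11))

n_units, n_epochs = 16, 20
prototypes = rng.random((n_units, fixation_patches.shape[1]))

for epoch in range(n_epochs):
    # Annealed learning rate and neighbourhood range (standard neural-gas schedule).
    eps = 0.5 * (0.01 / 0.5) ** (epoch / n_epochs)
    lam = 8.0 * (0.5 / 8.0) ** (epoch / n_epochs)
    for x in rng.permutation(fixation_patches):
        # Rank prototypes by distance to the sample, then pull each one towards it
        # with a weight that decays with its rank.
        ranks = np.argsort(np.argsort(np.linalg.norm(prototypes - x, axis=1)))
        prototypes += (eps * np.exp(-ranks / lam))[:, None] * (x - prototypes)

# Each prototype now approximates a filter-like pattern preferred at fixations.
print(prototypes.shape)  # (16, 121)
```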
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0957-4174 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ CRM2013 |
Serial |
2438 |
|
Permanent link to this record |
|
|
|
|
Author |
Mohammad Rouhani; Angel Sappa |


|
|
Title |
The Richer Representation the Better Registration |
Type |
Journal Article |
|
Year |
2013 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
|
|
Volume |
22 |
Issue |
12 |
Pages  |
5036-5049 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, the registration problem is formulated as a point-to-model distance minimization. Unlike most existing works, which are based on minimizing a point-wise correspondence term, this formulation avoids the time-consuming correspondence search. In the first stage, the target set is described through an implicit function by employing a linear least squares fitting. This function can be either an implicit polynomial or an implicit B-spline, from a coarse to a fine representation. In the second stage, we show how the obtained implicit representation is used as an interface to convert point-to-point registration into a point-to-implicit problem. Furthermore, we show that this registration distance is smooth and can be minimized through the Levenberg-Marquardt algorithm. All the formulations presented for both stages are compact and easy to implement. In addition, we show that our registration method can be handled using any implicit representation, though some are coarse and others provide finer representations; hence, a tradeoff between speed and accuracy can be set by employing the right implicit function. Experimental results and comparisons in 2D and 3D show the robustness and the speed of convergence of the proposed approach. |
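A compact, hypothetical 2D sketch of the two-stage idea: fit an implicit quadratic to the target set by linear least squares, then minimise the point-to-implicit residuals over a rigid motion with Levenberg-Marquardt (via scipy). The ellipse data are synthetic and the paper's implicit B-spline variant is not covered.

```python
# Point-to-implicit registration sketch (2D, implicit quadratic, LM optimiser).
import numpy as np
from scipy.optimize import least_squares

def monomials(p):                            # quadratic implicit-polynomial basis
    x, y = p[:, 0], p[:, 1]
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
target = np.stack([2.0 * np.cos(t), np.sin(t)], axis=1)   # points on an ellipse

# Stage 1: implicit representation c (with ||c|| = 1), via SVD least squares.
_, _, Vt = np.linalg.svd(monomials(target))
c = Vt[-1]

# Source = target under an unknown rigid motion we will try to undo.
theta, trans = 0.4, np.array([0.3, -0.2])
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
source = target @ R.T + trans

# Stage 2: minimise point-to-implicit residuals over (angle, tx, ty) with LM.
def residuals(params):
    th, tx, ty = params
    Rm = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = source @ Rm.T + np.array([tx, ty])
    return monomials(moved) @ c

sol = least_squares(residuals, x0=np.zeros(3), method="lm")
print(np.round(sol.x, 3))   # approximately the inverse of the applied motion
```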
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1057-7149 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RoS2013 |
Serial |
2665 |
|
Permanent link to this record |
|
|
|
|
Author |
Fahad Shahbaz Khan; Jiaolong Xu; Muhammad Anwer Rao; Joost van de Weijer; Andrew Bagdanov; Antonio Lopez |

|
|
Title |
Recognizing Actions through Action-specific Person Detection |
Type |
Journal Article |
|
Year |
2015 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
|
|
Volume |
24 |
Issue |
11 |
Pages  |
4422-4432 |
|
|
Keywords |
|
|
|
Abstract |
Action recognition in still images is a challenging problem in computer vision. To facilitate comparative evaluation independently of person detection, the standard evaluation protocol for action recognition uses an oracle person detector to obtain perfect bounding box information at both training and test time. The assumption is that, in practice, a general person detector will provide candidate bounding boxes for action recognition. In this paper, we argue that this paradigm is suboptimal and that action class labels should already be considered during the detection stage. Motivated by the observation that body pose is strongly conditioned on action class, we show that: 1) the existing state-of-the-art generic person detectors are not adequate for proposing candidate bounding boxes for action classification; 2) due to limited training examples, the direct training of action-specific person detectors is also inadequate; and 3) using only a small number of labeled action examples, transfer learning is able to adapt an existing detector to propose higher quality bounding boxes for subsequent action classification. To the best of our knowledge, we are the first to investigate transfer learning for the task of action-specific person detection in still images. We perform extensive experiments on two benchmark data sets: 1) Stanford-40 and 2) PASCAL VOC 2012. For the action detection task (i.e., both person localization and classification of the action performed), our approach outperforms methods based on general person detection by 5.7% mean average precision (MAP) on Stanford-40 and 2.1% MAP on PASCAL VOC 2012. Our approach also significantly outperforms the state of the art with a MAP of 45.4% on Stanford-40 and 31.4% on PASCAL VOC 2012. We also evaluate our action detection approach for the task of action classification (i.e., recognizing actions without localizing them). For this task, our approach, without using any ground-truth person localization at test time, outperforms on both data sets the state-of-the-art methods that do use person locations. |
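This is not the paper's detector or transfer scheme, but a generic, hedged illustration of the idea of adapting a pretrained person detector to a small set of action-specific classes by replacing and fine-tuning its prediction head; it assumes torchvision >= 0.13 for the weights argument, and the classes and boxes are placeholders.

```python
# Generic illustration: fine-tune a pretrained detector for action-specific classes.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 3  # background + two hypothetical action-specific "person" classes
# COCO-pretrained backbone and heads; downloads weights on first use.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# One dummy training step with a single labelled example.
model.train()
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[50.0, 60.0, 200.0, 300.0]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)
sum(losses.values()).backward()
print({k: float(v) for k, v in losses.items()})
```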
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1057-7149 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; LAMP; 600.076; 600.079 |
Approved |
no |
|
|
Call Number |
Admin @ si @ KXR2015 |
Serial |
2668 |
|
Permanent link to this record |
|
|
|
|
Author |
Alejandro Gonzalez Alzate; David Vazquez; Antonio Lopez; Jaume Amores |


|
|
Title |
On-Board Object Detection: Multicue, Multimodal, and Multiview Random Forest of Local Experts |
Type |
Journal Article |
|
Year |
2017 |
Publication |
IEEE Transactions on cybernetics |
Abbreviated Journal |
Cyber |
|
|
Volume |
47 |
Issue |
11 |
Pages  |
3980 - 3990 |
|
|
Keywords |
Multicue; multimodal; multiview; object detection |
|
|
Abstract |
Despite recent significant advances, object detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities, and a strong multiview (MV) classifier that accounts for different object views and poses. In this paper, we provide an extensive evaluation that gives insight into how each of these aspects (multicue, multimodality, and a strong MV classifier) affects accuracy, both individually and when integrated together. In the multimodality component, we explore the fusion of RGB and depth maps obtained by high-definition light detection and ranging, a type of modality that is starting to receive increasing attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the accuracy, the fusion of visible spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient. |
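A toy, hypothetical stand-in for the multimodal fusion idea: concatenate per-window RGB and depth descriptors and train a random forest on them; the features and labels are synthetic, and this is not the paper's random forest of local experts.

```python
# Toy early fusion of RGB and depth descriptors feeding a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_windows = 400
rgb_feats = rng.random((n_windows, 128))     # placeholder appearance descriptors
depth_feats = rng.random((n_windows, 64))    # placeholder depth-map descriptors
labels = rng.integers(0, 2, n_windows)       # 1 = object window, 0 = background

fused = np.hstack([rgb_feats, depth_feats])  # early fusion of the two modalities
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fused, labels)
print(clf.score(fused, labels))
```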
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2168-2267 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.085; 600.082; 600.076; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
2810 |
|
Permanent link to this record |