|
Records |
Links |
|
Author |
Josep Llados; Ernest Valveny; Gemma Sanchez; Enric Marti |
|
|
Title |
A Case Study of Pattern Recognition: Symbol Recognition in Graphic Documents |
Type |
Conference Article |
|
Year |
2003 |
Publication |
Proceedings of Pattern Recognition in Information Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1-13 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Angers, France |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
ICEIS Press |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
972-98816-3-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
PRIS'03 |
|
|
Notes |
DAG;IAM; |
Approved |
no |
|
|
Call Number |
IAM @ iam @ LVS2003 |
Serial |
1576 |
|
Permanent link to this record |
|
|
|
|
Author |
Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo |
|
|
Title |
Comparing Combinations of Feature Regions for Panoramic VSLAM |
Type |
Conference Article |
|
Year |
2007 |
Publication |
4th International Conference on Informatics in Control, Automation and Robotics |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
292-297 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Angers (France) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICINCO |
|
|
Notes |
RV;ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RLA2007 |
Serial |
900 |
|
Permanent link to this record |
|
|
|
|
Author |
Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera; Julio C. S. Jacques Junior; Xavier Baro; Evelyne Viegas; Yagmur Gucluturk; Umut Guclu; Marcel A. J. van Gerven; Rob van Lier; Meysam Madadi; Stephane Ayache |
|
|
Title |
Design of an Explainable Machine Learning Challenge for Video Interviews |
Type |
Conference Article |
|
Year |
2017 |
Publication |
International Joint Conference on Neural Networks |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This paper reviews and discusses research advances on “explainable machine learning” in computer vision. We focus on a particular area of the “Looking at People” (LAP) thematic domain: first impressions and personality analysis. Our aim is to make the computational intelligence and computer vision communities aware of the importance of developing explanatory mechanisms for computer-assisted decision-making applications, such as automating recruitment. Judgments based on personality traits are being made routinely by human resource departments to evaluate the candidates' capacity of social insertion and their potential of career growth. However, inferring personality traits and, in general, the process by which we humans form a first impression of people, is highly subjective and may be biased. Previous studies have demonstrated that learning machines can learn to mimic human decisions. In this paper, we go one step further and formulate the problem of explaining the decisions of the models as a means of identifying what visual aspects are important, understanding how they relate to the decisions suggested, and possibly gaining insight into undesirable negative biases. We design a new challenge on explainability of learning machines for first impressions analysis. We describe the setting, scenario, evaluation metrics and preliminary outcomes of the competition. To the best of our knowledge, this is the first effort in terms of challenges for explainability in computer vision. In addition, our challenge design comprises several other quantitative and qualitative elements of novelty, including a “coopetition” setting, which combines competition and collaboration. |
|
|
Address |
Anchorage; Alaska; USA; May 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IJCNN |
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ EGE2017 |
Serial |
2922 |
|
Permanent link to this record |
|
|
|
|
Author |
Sergio Escalera; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon |
|
|
Title |
ChaLearn Looking at People: A Review of Events and Resources |
Type |
Conference Article |
|
Year |
2017 |
Publication |
30th International Joint Conference on Neural Networks |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This paper reviews the history of ChaLearn Looking at People (LAP) events. We started in 2011 (with the release of the first Kinect device) to run challenges related to human action/activity and gesture recognition. Since then we have regularly organized events in a series of competitions covering all aspects of visual analysis of humans. So far we have organized more than 10 international challenges and events in this field. This paper reviews the associated events, and introduces the ChaLearn LAP platform where public resources (including code, data and preprints of papers) related to the organized events are available. We also provide a discussion on perspectives of ChaLearn LAP activities. |
|
|
Address |
Anchorage; Alaska; USA; May 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IJCNN |
|
|
Notes |
HuPBA; 602.143 |
Approved |
no |
|
|
Call Number |
Admin @ si @ EBE2017 |
Serial |
3012 |
|
Permanent link to this record |
|
|
|
|
Author |
Bogdan Raducanu; Fadi Dornaika |
|
|
Title |
Dynamic Facial Expression Recognition Using Laplacian Eigenmaps-Based Manifold Learning |
Type |
Conference Article |
|
Year |
2010 |
Publication |
IEEE International Conference on Robotics and Automation |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
156-161 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we propose an integrated framework for tracking, modelling and recognition of facial expressions. The main contributions are: (i) a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker; (ii) the complexity of the non-linear facial expression space is modelled through a manifold, whose structure is learned using Laplacian Eigenmaps. The projected facial expressions are afterwards recognized using a Nearest Neighbor classifier; (iii) with the proposed approach, we developed an application for an AIBO robot, in which it mirrors the perceived facial expression. |
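The manifold-learning step described in this abstract can be illustrated with a small sketch (not the authors' code): scikit-learn's `SpectralEmbedding` implements Laplacian Eigenmaps, and the synthetic two-class data standing in for tracked facial action parameters is entirely made up.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for facial action parameter vectors: two expression classes.
X = np.vstack([rng.normal(0.0, 0.5, (50, 6)), rng.normal(2.0, 0.5, (50, 6))])
y = np.array([0] * 50 + [1] * 50)

# Learn the low-dimensional manifold structure of the non-linear expression space
# with Laplacian Eigenmaps (SpectralEmbedding).
Z = SpectralEmbedding(n_components=2, n_neighbors=10, random_state=0).fit_transform(X)

# Recognize expressions in the embedded space with a Nearest Neighbor classifier.
clf = KNeighborsClassifier(n_neighbors=1).fit(Z, y)
print(clf.score(Z, y))
```

Note that spectral embedding is transductive (there is no `transform` for unseen points), so a deployed recognizer would re-embed or use an out-of-sample extension.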
|
|
Address |
Anchorage; AK; USA; |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1050-4729 |
ISBN |
978-1-4244-5038-1 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICRA |
|
|
Notes |
OR; MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RaD2010 |
Serial |
1310 |
|
Permanent link to this record |
|
|
|
|
Author |
Patricia Suarez; Angel Sappa; Boris X. Vintimilla; Riad I. Hammoud |
|
|
Title |
Cycle Generative Adversarial Network: Towards A Low-Cost Vegetation Index Estimation |
Type |
Conference Article |
|
Year |
2021 |
Publication |
28th IEEE International Conference on Image Processing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
19-22 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a novel unsupervised approach to estimate the Normalized Difference Vegetation Index (NDVI). The NDVI is obtained as the ratio between information from the visible and near-infrared spectral bands; in the current work, the NDVI is estimated just from an image of the visible spectrum through a Cyclic Generative Adversarial Network (CyclicGAN). This unsupervised architecture learns to estimate the NDVI index by means of an image translation between the red channel of a given RGB image and the unpaired NDVI index image. The translation is obtained by means of a ResNET architecture and a multiple loss function. Experimental results obtained with this unsupervised scheme show the validity of the implemented model. Additionally, comparisons with state-of-the-art approaches are provided, showing improvements with the proposed approach. |
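The NDVI this work estimates is conventionally defined as the normalized difference between the near-infrared and visible red bands. A minimal sketch of that ground-truth computation (the GAN-based estimation itself is not reproduced here, and the toy band values are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against division by zero

# Toy 2x2 reflectance bands: high NIR relative to red indicates denser vegetation.
nir = np.array([[0.8, 0.6], [0.2, 0.5]])
red = np.array([[0.1, 0.3], [0.2, 0.4]])
print(ndvi(nir, red))
```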
|
|
Address |
Anchorage; Alaska; USA; September 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIP |
|
|
Notes |
MSIAU; 600.130; 600.122; 601.349 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SSV2021b |
Serial |
3579 |
|
Permanent link to this record |
|
|
|
|
Author |
Pau Riba; Josep Llados; Alicia Fornes |
|
|
Title |
Error-tolerant coarse-to-fine matching model for hierarchical graphs |
Type |
Conference Article |
|
Year |
2017 |
Publication |
11th IAPR-TC-15 International Workshop on Graph-Based Representations in Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
10310 |
Issue |
|
Pages |
107-117 |
|
|
Keywords |
Graph matching; Hierarchical graph; Graph-based representation; Coarse-to-fine matching |
|
|
Abstract |
Graph-based representations are effective tools to capture structural information from visual elements. However, retrieving a query graph from a large database of graphs implies a high computational complexity. Moreover, these representations are very sensitive to noise or small changes. In this work, a novel hierarchical graph representation is designed. Using graph clustering techniques adapted from graph-based social media analysis, we propose to generate a hierarchy able to deal with different levels of abstraction while keeping information about the topology. For the proposed representations, a coarse-to-fine matching method is defined. These approaches are validated using real scenarios such as classification of colour images and handwritten word spotting. |
|
|
Address |
Anacapri; Italy; May 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
Pasquale Foggia; Cheng-Lin Liu; Mario Vento |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
GbRPR |
|
|
Notes |
DAG; 600.097; 601.302; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RLF2017a |
Serial |
2951 |
|
Permanent link to this record |
|
|
|
|
Author |
Y. Patel; Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas |
|
|
Title |
Dynamic Lexicon Generation for Natural Scene Images |
Type |
Conference Article |
|
Year |
2016 |
Publication |
14th European Conference on Computer Vision Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
395-410 |
|
|
Keywords |
scene text; photo OCR; scene understanding; lexicon generation; topic modeling; CNN |
|
|
Abstract |
Many scene text understanding methods approach the end-to-end recognition problem from a word-spotting perspective and take huge benefit from using small per-image lexicons. Such customized lexicons are normally assumed as given and their source is rarely discussed. In this paper we propose a method that generates contextualized lexicons for scene images using only visual information. For this, we exploit the correlation between visual and textual information in a dataset consisting of images and textual content associated with them. Using the topic modeling framework to discover a set of latent topics in such a dataset allows us to re-rank a fixed dictionary in a way that prioritizes the words that are more likely to appear in a given image. Moreover, we train a CNN that is able to reproduce those word rankings but using only the image raw pixels as input. We demonstrate that the quality of the automatically obtained custom lexicons is superior to a generic frequency-based baseline. |
|
|
Address |
Amsterdam; The Netherlands; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
DAG; 600.084 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PGR2016 |
Serial |
2825 |
|
Permanent link to this record |
|
|
|
|
Author |
Victor Ponce; Baiyu Chen; Marc Oliu; Ciprian Corneanu; Albert Clapes; Isabelle Guyon; Xavier Baro; Hugo Jair Escalante; Sergio Escalera |
|
|
Title |
ChaLearn LAP 2016: First Round Challenge on First Impressions – Dataset and Results |
Type |
Conference Article |
|
Year |
2016 |
Publication |
14th European Conference on Computer Vision Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Behavior Analysis; Personality Traits; First Impressions |
|
|
Abstract |
This paper summarizes the ChaLearn Looking at People 2016 First Impressions challenge data and results obtained by the teams in the first round of the competition. The goal of the competition was to automatically evaluate five “apparent” personality traits (the so-called “Big Five”) from videos of subjects speaking in front of a camera, by using human judgment. In this edition of the ChaLearn challenge, a novel data set consisting of 10,000 short clips from YouTube videos has been made publicly available. The ground truth for personality traits was obtained from workers of Amazon Mechanical Turk (AMT). To alleviate calibration problems between workers, we used pairwise comparisons between videos, and variable levels were reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. The CodaLab open source platform was used for submission of predictions and scoring. The competition attracted, over a period of 2 months, 84 participants who were grouped in several teams. Nine teams entered the final phase. Despite the difficulty of the task, the teams made great advances in this round of the challenge. |
|
|
Address |
Amsterdam; The Netherlands; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
HuPBA;MV; 600.063 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PCP2016 |
Serial |
2828 |
|
Permanent link to this record |
|
|
|
|
Author |
Cesar de Souza; Adrien Gaidon; Eleonora Vig; Antonio Lopez |
|
|
Title |
Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition |
Type |
Conference Article |
|
Year |
2016 |
Publication |
14th European Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
697-716 |
|
|
Keywords |
|
|
|
Abstract |
Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty to acquire and learn on large quantities of video data. Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods using hand-crafted features, even when training on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular benchmarks for action recognition, our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos. |
|
|
Address |
Amsterdam; The Netherlands; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCV |
|
|
Notes |
ADAS; 600.076; 600.085 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SGV2016 |
Serial |
2824 |
|
Permanent link to this record |
|
|
|
|
Author |
Baiyu Chen; Sergio Escalera; Isabelle Guyon; Victor Ponce; N. Shah; Marc Oliu |
|
|
Title |
Overcoming Calibration Problems in Pattern Labeling with Pairwise Ratings: Application to Personality Traits |
Type |
Conference Article |
|
Year |
2016 |
Publication |
14th European Conference on Computer Vision Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Calibration of labels; Label bias; Ordinal labeling; Variance Models; Bradley-Terry-Luce model; Continuous labels; Regression; Personality traits; Crowd-sourced labels |
|
|
Abstract |
We address the problem of calibration of workers whose task is to label patterns with continuous variables, which arises for instance in labeling images or videos of humans with continuous traits. Worker bias is particularly difficult to evaluate and correct when many workers contribute just a few labels, a situation arising typically when labeling is crowd-sourced. In the scenario of labeling short videos of people facing a camera with personality traits, we evaluate the feasibility of the pairwise ranking method to alleviate bias problems. Workers are exposed to pairs of videos at a time and must order them by preference. The variable levels are reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. This method may, at first sight, seem prohibitively expensive because for N videos, p = N(N-1)/2 pairs must potentially be processed by workers rather than N videos. However, by performing extensive simulations, we determine an empirical law for the scaling of the number of pairs needed as a function of the number of videos in order to achieve a given accuracy of score reconstruction, and show that the pairwise method is affordable. We apply the method to the labeling of a large-scale dataset of 10,000 videos used in the ChaLearn Apparent Personality Trait challenge. |
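The pair count p = N(N-1)/2 and the Bradley-Terry-Luce maximum-likelihood reconstruction mentioned in this abstract can be sketched on synthetic data (not the authors' code; the latent trait levels, number of simulated comparisons, and step size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                   # number of videos
truth = np.linspace(-1.0, 1.0, N)       # latent trait levels to recover
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
# p = N(N-1)/2 pairwise comparisons instead of N absolute ratings.

# Simulate workers: i is preferred over j with BTL probability sigmoid(s_i - s_j).
sig = lambda x: 1.0 / (1.0 + np.exp(-x))
wins = [(i, j) if rng.random() < sig(truth[i] - truth[j]) else (j, i)
        for i, j in pairs for _ in range(20)]

# Maximum-likelihood fit of the BTL scores by gradient ascent on the log-likelihood.
s = np.zeros(N)
for _ in range(500):
    g = np.zeros(N)
    for w, l in wins:
        p = sig(s[w] - s[l])
        g[w] += 1.0 - p                 # winner's score pushed up
        g[l] -= 1.0 - p                 # loser's score pushed down
    s += 2.0 * g / len(wins)

print(np.round(s, 2))                   # recovered levels (up to an additive constant)
```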
|
|
Address |
Amsterdam; The Netherlands; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
HuPBA;MILAB; |
Approved |
no |
|
|
Call Number |
Admin @ si @ CEG2016 |
Serial |
2829 |
|
Permanent link to this record |
|
|
|
|
Author |
Iiris Lusi; Sergio Escalera; Gholamreza Anbarjafari |
|
|
Title |
SASE: RGB-Depth Database for Human Head Pose Estimation |
Type |
Conference Article |
|
Year |
2016 |
Publication |
14th European Conference on Computer Vision Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
Address |
Amsterdam; The Netherlands; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
HuPBA;MILAB; |
Approved |
no |
|
|
Call Number |
Admin @ si @ LEA2016a |
Serial |
2840 |
|
Permanent link to this record |
|
|
|
|
Author |
Saad Minhas; Aura Hernandez-Sabate; Shoaib Ehsan; Katerine Diaz; Ales Leonardis; Antonio Lopez; Klaus McDonald Maier |
|
|
Title |
LEE: A photorealistic Virtual Environment for Assessing Driver-Vehicle Interactions in Self-Driving Mode |
Type |
Conference Article |
|
Year |
2016 |
Publication |
14th European Conference on Computer Vision Workshops |
Abbreviated Journal |
|
|
|
Volume |
9915 |
Issue |
|
Pages |
894-900 |
|
|
Keywords |
Simulation environment; Automated Driving; Driver-Vehicle interaction |
|
|
Abstract |
Photorealistic virtual environments are crucial for developing and testing automated driving systems in a safe way during trials. As commercially available simulators are expensive and bulky, this paper presents a low-cost, extendable, and easy-to-use (LEE) virtual environment with the aim to highlight its utility for level 3 driving automation. In particular, an experiment is performed using the presented simulator to explore the influence of different variables regarding control transfer of the car after the system was driving autonomously in a highway scenario. The results show that the speed of the car at the time when the system needs to transfer the control to the human driver is critical. |
|
|
Address |
Amsterdam; The Netherlands; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
ADAS;IAM; 600.085; 600.076 |
Approved |
no |
|
|
Call Number |
MHE2016 |
Serial |
2865 |
|
Permanent link to this record |
|
|
|
|
Author |
Joel Barajas; Jaume Garcia; Francesc Carreras; Sandra Pujades; Petia Radeva |
|
|
Title |
Angle Images Using Gabor Filters in Cardiac Tagged MRI |
Type |
Conference Article |
|
Year |
2005 |
Publication |
Proceeding of the 2005 conference on Artificial Intelligence Research and Development |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
107-114 |
|
|
Keywords |
Angle Images; Gabor Filters; HARP; Tagged MRI |
|
|
Abstract |
Tagged Magnetic Resonance Imaging (MRI) is a non-invasive technique used to examine cardiac deformation in vivo. An Angle Image is a representation of a Tagged MRI which recovers the relative position of the tissue with respect to the distorted tags, so that cardiac deformation can be estimated. This paper describes a new approach to generating Angle Images using a bank of Gabor filters in short-axis cardiac Tagged MRI. Our method improves on the Angle Images obtained by global techniques such as HARP by adding a local frequency analysis. We propose to use the combined phase response of a bank of Gabor filters to find a more precise deformation of the left ventricle. We demonstrate the accuracy of our method over HARP through several experimental results. |
|
|
Address |
Amsterdam; The Netherlands |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IOS Press |
Place of Publication |
Amsterdam, The Netherlands |
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
1-58603-560-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIRD |
|
|
Notes |
IAM;MILAB |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ BGC2005; IAM @ iam |
Serial |
595 |
|
Permanent link to this record |
|
|
|
|
Author |
Bhaskar Chakraborty; Ognjen Rudovic; Jordi Gonzalez |
|
|
Title |
View-Invariant Human-Body Detection with Extension to Human Action Recognition using Component-Wise HMM of Body Parts |
Type |
Conference Article |
|
Year |
2008 |
Publication |
8th IEEE International Conference on Automatic Face and Gesture Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Amsterdam; The Netherlands |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
FG |
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ CRG2008 |
Serial |
1113 |
|
Permanent link to this record |