|
Records |
|
Author |
Xavier Perez Sala; Fernando De la Torre; Laura Igual; Sergio Escalera; Cecilio Angulo |
|
|
Title |
Subspace Procrustes Analysis |
Type |
Conference Article |
|
Year |
2014 |
Publication |
ECCV Workshop on ChaLearn Looking at People |
Abbreviated Journal |
|
|
|
Volume |
8925 |
Issue |
|
Pages |
654-668 |
|
|
Keywords |
|
|
|
Abstract |
Procrustes Analysis (PA) has been a popular technique to align and build 2-D statistical models of shapes. Given a set of 2-D shapes, PA is applied to remove rigid transformations. Then, a non-rigid 2-D model is computed by modeling (e.g., with PCA) the residual. Although PA has been widely used, it has several limitations for modeling 2-D shapes: occluded landmarks and missing data can result in local-minima solutions, and there is no guarantee that the 2-D shapes provide a uniform sampling of the 3-D space of rotations for the object. To address these issues, this paper proposes Subspace PA (SPA). Given several instances of a 3-D object, SPA computes the mean and a 2-D subspace that can simultaneously model all rigid and non-rigid deformations of the 3-D object. We propose a discrete (DSPA) and a continuous (CSPA) formulation for SPA, assuming that 3-D samples of an object are provided. DSPA extends traditional PA and produces unbiased 2-D models by uniformly sampling different views of the 3-D object. CSPA provides a continuous approach to uniformly sample the space of 3-D rotations, being more efficient in space and time. Experiments using SPA to learn 2-D models of bodies from motion capture data illustrate the benefits of our approach. |
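The rigid-alignment step that standard PA performs (and that SPA builds on) can be sketched for a pair of shapes; this is a minimal classical-Procrustes illustration on synthetic data, not the paper's SPA formulation, and it omits reflection handling:

```python
import numpy as np

def procrustes_align(X, Y):
    """Align 2-D shape Y (n x 2) to X by removing translation, scale
    and rotation -- the rigid step that PA applies before PCA."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)          # remove scale
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)   # optimal rotation (Kabsch)
    R = U @ Vt
    return Yc @ R.T, Xc

# A shape and a rotated, scaled, translated copy of it
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = 3.0 * X @ R.T + np.array([1.0, -2.0])

Y_aligned, X_ref = procrustes_align(X, Y)
print(np.allclose(Y_aligned, X_ref))  # the rigid residual vanishes
```

After aligning a whole shape set this way, the remaining non-rigid residual is what PCA would model.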
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
OR; HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ PTI2014 |
Serial |
2539 |
|
|
|
|
|
Author |
Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades |
|
|
Title |
Spotting Symbol Using Sparsity over Learned Dictionary of Local Descriptors |
Type |
Conference Article |
|
Year |
2014 |
Publication |
11th IAPR International Workshop on Document Analysis Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
156-160 |
|
|
Keywords |
|
|
|
Abstract |
This paper proposes a new approach to spotting symbols in graphical documents using sparse representations. More specifically, a dictionary is learned from a training database of local descriptors defined over the documents. Based on their sparse representations, interest points sharing similar properties are used to define interest regions. Using an original adaptation of information retrieval techniques, a vector model is built for interest regions and for a query symbol, based on their sparsity in a visual vocabulary whose visual words are the columns of the learned dictionary. The matching process is performed by comparing the similarity between vector models. Evaluation on the SESYD datasets demonstrates that our method is promising. |
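The sparse-representation step can be sketched with a greedy Orthogonal Matching Pursuit over dictionary columns (the "visual words"); the identity dictionary and the descriptor below are toy assumptions, not a dictionary learned from document descriptors:

```python
import numpy as np

def omp(D, x, k):
    """Greedy Orthogonal Matching Pursuit: sparse code of x over
    dictionary D (columns = atoms) with at most k nonzeros."""
    residual = x.copy()
    support = []
    code = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit coefficients on the selected atoms
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        code[:] = 0
        code[support] = coeffs
        residual = x - D @ code
    return code

D = np.eye(4)                       # toy orthonormal dictionary
x = np.array([0.0, 3.0, 0.0, 1.0])  # toy local descriptor
code = omp(D, x, k=2)
print(code)  # -> [0. 3. 0. 1.]
```

Region and query vector models would then be built from such codes and compared, e.g. with cosine similarity.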
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-1-4799-3243-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
DAS |
|
|
Notes |
DAG; 600.077 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DTR2014 |
Serial |
2543 |
|
|
|
|
|
Author |
Eloi Puertas; Miguel Angel Bautista; Daniel Sanchez; Sergio Escalera; Oriol Pujol |
|
|
Title |
Learning to Segment Humans by Stacking their Body Parts |
Type |
Conference Article |
|
Year |
2014 |
Publication |
ECCV Workshop on ChaLearn Looking at People |
Abbreviated Journal |
|
|
|
Volume |
8925 |
Issue |
|
Pages |
685-697 |
|
|
Keywords |
Human body segmentation; Stacked Sequential Learning |
|
|
Abstract |
Human segmentation in still images is a complex task due to the wide range of body poses and drastic changes in environmental conditions. Usually, human body segmentation is treated in a two-stage fashion: first, a human body part detection step is performed, and then human part detections are used as prior knowledge to be optimized by segmentation strategies. In this paper, we present a two-stage scheme based on Multi-Scale Stacked Sequential Learning (MSSL). We define an extended feature set by stacking a multi-scale decomposition of body part likelihood maps. These likelihood maps are obtained in a first stage by means of an ECOC ensemble of soft body part detectors. In a second stage, contextual relations between part predictions are learnt by a binary classifier, obtaining an accurate body confidence map. The obtained confidence map is fed to a graph cut optimization procedure to obtain the final segmentation. Results show improved segmentation when MSSL is included in the human segmentation pipeline. |
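The multi-scale stacking of part likelihood maps can be sketched as block-averaging a map at several scales and stacking the results as extra feature channels; the pooling scheme and the scale set below are illustrative assumptions, not the paper's exact decomposition:

```python
import numpy as np

def multiscale_stack(likelihood, scales=(1, 2, 4)):
    """Stack a likelihood map with coarser (block-averaged) versions
    of itself, mimicking a multi-scale decomposition step."""
    h, w = likelihood.shape
    channels = []
    for s in scales:
        # Block-mean pooling followed by nearest-neighbour upsampling
        cropped = likelihood[:h // s * s, :w // s * s]
        pooled = cropped.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        channels.append(np.repeat(np.repeat(pooled, s, axis=0), s, axis=1))
    return np.stack(channels, axis=-1)

lm = np.arange(16, dtype=float).reshape(4, 4) / 15.0  # toy likelihood map
features = multiscale_stack(lm)
print(features.shape)  # (4, 4, 3): one channel per scale
```

The stacked channels would then feed the second-stage binary classifier that learns contextual relations.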
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ PBS2014 |
Serial |
2553 |
|
|
|
|
|
Author |
Marc Bolaños; Maite Garolera; Petia Radeva |
|
|
Title |
Video Segmentation of Life-Logging Videos |
Type |
Conference Article |
|
Year |
2014 |
Publication |
8th Conference on Articulated Motion and Deformable Objects |
Abbreviated Journal |
|
|
|
Volume |
8563 |
Issue |
|
Pages |
1-9 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
AMDO |
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ BGR2014 |
Serial |
2558 |
|
|
|
|
|
Author |
Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez |
|
|
Title |
DA-DPM Pedestrian Detection |
Type |
Conference Article |
|
Year |
2013 |
Publication |
ICCV Workshop on Reconstruction meets Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Domain Adaptation; Pedestrian Detection |
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCVW-RR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ XRV2013 |
Serial |
2569 |
|
|
|
|
|
Author |
Maedeh Aghaei; Petia Radeva |
|
|
Title |
Bag-of-Tracklets for Person Tracking in Life-Logging Data |
Type |
Conference Article |
|
Year |
2014 |
Publication |
17th International Conference of the Catalan Association for Artificial Intelligence |
Abbreviated Journal |
|
|
|
Volume |
269 |
Issue |
|
Pages |
35-44 |
|
|
Keywords |
|
|
|
Abstract |
With the increasing popularity of wearable cameras, life-logging data analysis is becoming more important and useful for deriving significant events from these substantial collections of images. In this study, we introduce a new tracking method applied to visual life-logging, called bag-of-tracklets, which is based on detecting, localizing and tracking people. Given the low spatial and temporal resolution of the image data, our model generates and groups tracklets in an unsupervised framework and extracts image sequences of person appearances according to a similarity score over the bag-of-tracklets. The model output is a meaningful sequence of events expressing human appearance and tracking in life-logging data. The achieved results demonstrate the robustness of our model in terms of efficiency and accuracy despite the low spatial and temporal resolution of the data. |
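The tracklet-grouping idea can be sketched with a toy similarity score over per-frame detections; the centre-distance similarity and the greedy grouping below are illustrative stand-ins for the paper's bag-of-tracklets score, not its actual formulation:

```python
import numpy as np

def tracklet_similarity(t1, t2):
    """Similarity of two tracklets (sequences of per-frame detection
    centres over the same frames): inverse of mean centre distance."""
    d = np.linalg.norm(np.asarray(t1) - np.asarray(t2), axis=1).mean()
    return 1.0 / (1.0 + d)

def bag_of_tracklets(tracklets, thr=0.5):
    """Greedily group tracklets into bags of mutually similar ones."""
    bags = []
    for t in tracklets:
        for bag in bags:
            if all(tracklet_similarity(t, u) >= thr for u in bag):
                bag.append(t)
                break
        else:  # no sufficiently similar bag found
            bags.append([t])
    return bags

a = [(0, 0), (1, 1), (2, 2)]
b = [(0, 1), (1, 2), (2, 3)]       # close to a -> same bag
c = [(50, 50), (51, 51), (52, 52)]  # far away -> its own bag
print(len(bag_of_tracklets([a, b, c])))  # 2
```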
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-1-61499-451-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CCIA |
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ AgR2015 |
Serial |
2607 |
|
|
|
|
|
Author |
Agata Lapedriza; David Masip; David Sanchez |
|
|
Title |
Emotions Classification using Facial Action Units Recognition |
Type |
Conference Article |
|
Year |
2014 |
Publication |
17th International Conference of the Catalan Association for Artificial Intelligence |
Abbreviated Journal |
|
|
|
Volume |
269 |
Issue |
|
Pages |
55-64 |
|
|
Keywords |
|
|
|
Abstract |
In this work we build a system for automatic emotion classification from image sequences. We analyze subtle changes in facial expressions by detecting a subset of 12 representative facial action units (AUs). Then, we classify emotions based on the output of these AU classifiers, i.e., the presence or absence of AUs. We base the AU classification on a set of spatio-temporal geometric and appearance features for facial representation, fusing them within the emotion classifier. A decision tree is trained for emotion classification, making the resulting model easy to interpret by capturing the combinations of AU activations that lead to a particular emotion. On the Cohn-Kanade database, the proposed system classifies 7 emotions with a mean accuracy of nearly 90%, attaining recognition accuracy similar to non-interpretable models that are not based on AU detection. |
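A decision tree over AU presence/absence can be sketched by hand; the AU-to-emotion rules below are simplified FACS-style illustrations (e.g., AU6 cheek raiser plus AU12 lip corner puller for happiness), not the tree learned in the paper:

```python
def classify_emotion(aus):
    """Toy decision tree over detected Action Units (a set of AU ids).
    Rules are illustrative simplifications of FACS-based combinations."""
    if 12 in aus:                       # lip corner puller
        return "happiness" if 6 in aus else "other"
    if 4 in aus:                        # brow lowerer
        return "anger" if 7 in aus else "sadness"
    return "neutral"

print(classify_emotion({6, 12}))  # happiness
print(classify_emotion({1, 4}))   # sadness
```

The interpretability claim in the abstract is exactly this property: each leaf corresponds to a readable combination of AU activations.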
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-1-61499-451-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CCIA |
|
|
Notes |
OR; MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ LMS2014 |
Serial |
2622 |
|
|
|
|
|
Author |
Pau Riba; Josep Llados; Alicia Fornes |
|
|
Title |
Handwritten Word Spotting by Inexact Matching of Grapheme Graphs |
Type |
Conference Article |
|
Year |
2015 |
Publication |
13th International Conference on Document Analysis and Recognition, ICDAR 2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
781-785 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a graph-based word spotting method for handwritten documents. Contrary to most word spotting techniques, which use statistical representations, we propose a structural representation designed to be robust to the inherent deformations of handwriting. Attributed graphs are constructed using a part-based approach. Graphemes extracted from shape convexities are used as stable units of handwriting and are associated with graph nodes. Spatial relations between them then determine graph edges. Spotting is defined in terms of error-tolerant graph matching using a bipartite graph matching algorithm. To make the method usable on large datasets, a graph indexing approach based on binary embeddings is used as preprocessing. Historical documents are used as the experimental framework. The approach is comparable to statistical ones in terms of time and memory requirements, especially when dealing with large document collections. |
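The error-tolerant matching step can be sketched as a minimum-cost bipartite assignment between the node sets of two grapheme graphs; the cost matrix below is invented for illustration, and the brute-force solver stands in for the Hungarian algorithm used in practice:

```python
import numpy as np
from itertools import permutations

def bipartite_match_cost(C):
    """Minimum-cost bipartite assignment by brute force -- fine for the
    tiny node sets of a single grapheme graph, but real systems use the
    O(n^3) Hungarian algorithm instead."""
    n = C.shape[0]
    return min(sum(C[i, p[i]] for i in range(n))
               for p in permutations(range(n)))

# Hypothetical node-substitution costs between two grapheme graphs
C = np.array([[0.1, 0.9, 0.8],
              [0.7, 0.2, 0.9],
              [0.8, 0.6, 0.3]])
print(round(bipartite_match_cost(C), 6))  # 0.6 (diagonal assignment)
```

In the spotting setting, a low assignment cost between a query graph and a document region graph signals a likely word match.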
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICDAR |
|
|
Notes |
DAG; 600.077; 600.061; 602.006 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RLF2015b |
Serial |
2642 |
|
|
|
|
|
Author |
Dimosthenis Karatzas; Lluis Gomez; Anguelos Nicolaou; Suman Ghosh; Andrew Bagdanov; Masakazu Iwamura; J. Matas; L. Neumann; V. Ramaseshan; S. Lu; Faisal Shafait; Seiichi Uchida; Ernest Valveny |
|
|
Title |
ICDAR 2015 Competition on Robust Reading |
Type |
Conference Article |
|
Year |
2015 |
Publication |
13th International Conference on Document Analysis and Recognition, ICDAR 2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1156-1160 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICDAR |
|
|
Notes |
DAG; 600.077; 600.084 |
Approved |
no |
|
|
Call Number |
Admin @ si @ KGN2015 |
Serial |
2690 |
|
|
|
|
|
Author |
Lluis Gomez; Dimosthenis Karatzas |
|
|
Title |
Object Proposals for Text Extraction in the Wild |
Type |
Conference Article |
|
Year |
2015 |
Publication |
13th International Conference on Document Analysis and Recognition, ICDAR 2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
206-210 |
|
|
Keywords |
|
|
|
Abstract |
Object Proposals is a recent computer vision technique receiving increasing interest from the research community. Its main objective is to generate a relatively small set of bounding box proposals that are most likely to contain objects of interest. The use of Object Proposals techniques in the scene text understanding field is innovative. Motivated by the success of powerful yet expensive techniques that recognize words in a holistic way, Object Proposals techniques emerge as an alternative to traditional text detectors. In this paper we study to what extent existing generic Object Proposals methods may be useful for scene text understanding. We also propose a new Object Proposals algorithm specifically designed for text and compare it with other generic state-of-the-art methods. Experiments show that our proposal is superior in its ability to produce good-quality word proposals efficiently. The source code of our method is made publicly available. |
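Proposal quality is typically judged by whether each ground-truth word is covered by some proposal at a given IoU threshold; the boxes below are made-up values that show the computation, not data from the paper:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def recall_at(proposals, gt_words, thr=0.5):
    """Fraction of ground-truth words covered by at least one proposal."""
    hits = sum(any(iou(p, g) >= thr for p in proposals) for g in gt_words)
    return hits / len(gt_words)

gt = [(10, 10, 50, 30), (60, 10, 120, 30)]      # two word boxes
props = [(12, 11, 49, 29), (200, 200, 220, 210)]  # one good, one stray
print(recall_at(props, gt))  # 0.5: one of the two words is recovered
```

A good proposals method maximizes this recall with as few boxes as possible.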
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICDAR |
|
|
Notes |
DAG; 600.077; 600.084; 601.197 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GoK2015 |
Serial |
2691 |
|
|
|
|
|
Author |
Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva |
|
|
Title |
Towards social interaction detection in egocentric photo-streams |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Proceedings of SPIE, 8th International Conference on Machine Vision, ICMV 2015 |
Abbreviated Journal |
|
|
|
Volume |
9875 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Detecting social interaction in videos relying solely on visual cues is a valuable task that has received increasing attention in recent years. In this work, we address this problem in the challenging domain of egocentric photo-streams captured by a low temporal resolution wearable camera (2 fpm). The major difficulties to be handled in this context are the sparsity of observations as well as the unpredictability of camera motion and attention orientation, since the camera is worn as part of clothing. Our method consists of four steps: multi-face localization and tracking, 3D localization, pose estimation, and analysis of f-formations. By estimating pair-to-pair interaction probabilities over the sequence, our method states the presence or absence of interaction with the camera wearer and specifies which people are more involved in the interaction. We tested our method on a dataset of 18,000 images and show its reliability for the intended purpose. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICMV |
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ADR2015a |
Serial |
2702 |
|
|
|
|
|
Author |
Gloria Fernandez Esparrach; Jorge Bernal; Cristina Rodriguez de Miguel; Debora Gil; Fernando Vilariño; Henry Cordova; Cristina Sanchez Montes; I. Araujo; Maria Lopez Ceron; J. Llach; F. Javier Sanchez |
|
|
Title |
Colonic polyps are correctly identified by a computer vision method using WM-DOVA energy maps |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Proceedings of the 23rd United European Gastroenterology (UEG) Week 2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
UEG |
|
|
Notes |
MV; IAM; 600.075; SIAI |
Approved |
no |
|
|
Call Number |
Admin @ si @ FBR2015 |
Serial |
2732 |
|
|
|
|
|
Author |
Aleksandr Setkov; Fabio Martinez Carillo; Michele Gouiffes; Christian Jacquemin; Maria Vanrell; Ramon Baldrich |
|
|
Title |
DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Advances in Visual Computing: Proceedings of the 11th International Symposium, ISVC 2015, Part II |
Abbreviated Journal |
|
|
|
Volume |
9475 |
Issue |
|
Pages |
463-473 |
|
|
Keywords |
Projector-camera systems; Feature descriptors; Object recognition |
|
|
Abstract |
Projector-camera systems are designed to improve projection quality by comparing original images with their captured projections, which is usually complicated due to high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. First, we introduce a new database of acquired image projections (DAcImPro) that, covering photometric and geometric conditions and providing data for ground-truth computation, can serve to evaluate different algorithms for projector-camera systems. Second, a new object recognition scenario from acquired projections is presented, which could be of great interest in domains such as home video projection and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection area selection. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-319-27862-9 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISVC |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ SMG2015 |
Serial |
2736 |
|
|
|
|
|
Author |
Jose Manuel Alvarez; Theo Gevers; Antonio Lopez |
|
|
Title |
Evaluating Color Representation for Online Road Detection |
Type |
Conference Article |
|
Year |
2013 |
Publication |
ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
594-595 |
|
|
Keywords |
|
|
|
Abstract |
Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real-driving situations. |
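A couple of the color planes commonly derived from RGB can be sketched directly; normalized chromaticities and opponent channels are standard choices in this literature, though the paper's exact set of evaluated planes may differ:

```python
import numpy as np

def color_planes(rgb):
    """Derive common illumination-robust planes from an RGB image
    (H x W x 3, floats in [0, 1]): normalized r, g and opponent o1, o2."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = R + G + B + 1e-12
    r, g = R / s, G / s                # chromaticity: intensity-invariant
    o1 = (R - G) / np.sqrt(2)          # opponent channels
    o2 = (R + G - 2 * B) / np.sqrt(6)
    return r, g, o1, o2

# Scaling global intensity leaves the normalized planes unchanged,
# which is why they are attractive under lighting variations
rgb = np.random.default_rng(1).random((4, 4, 3))
r1, g1, *_ = color_planes(rgb)
r2, g2, *_ = color_planes(0.5 * rgb)
print(np.allclose(r1, r2) and np.allclose(g1, g2))  # True
```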
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVVT:E2M |
|
|
Notes |
ADAS; ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ AGL2013 |
Serial |
2794 |
|
|
|
|
|
Author |
Ozan Caglayan; Walid Aransa; Adrien Bardet; Mercedes Garcia-Martinez; Fethi Bougares; Loic Barrault; Marc Masana; Luis Herranz; Joost Van de Weijer |
|
|
Title |
LIUM-CVC Submissions for WMT17 Multimodal Translation Task |
Type |
Conference Article |
|
Year |
2017 |
Publication |
2nd Conference on Machine Translation |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal Translation. We mainly explored two multimodal architectures where either global visual features or convolutional feature maps are integrated in order to benefit from visual context. Our final systems ranked first for both En-De and En-Fr language pairs according to the automatic evaluation metrics METEOR and BLEU. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WMT |
|
|
Notes |
LAMP; 600.106; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CAB2017 |
Serial |
3035 |
|