Records | |||||
---|---|---|---|---|---|
Author | Alejandro Gonzalez Alzate; David Vazquez; Antonio Lopez; Jaume Amores | ||||
Title | On-Board Object Detection: Multicue, Multimodal, and Multiview Random Forest of Local Experts | Type | Journal Article | ||
Year | 2017 | Publication | IEEE Transactions on Cybernetics | Abbreviated Journal | Cyber |
Volume | 47 | Issue | 11 | Pages | 3980-3990 |
Keywords | Multicue; multimodal; multiview; object detection | ||||
Abstract | Despite recent significant advances, object detection continues to be an extremely challenging problem in real scenarios. To develop a detector that operates successfully under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities, and a strong multiview (MV) classifier that accounts for different object views and poses. In this paper, we provide an extensive evaluation that gives insight into how each of these aspects (multicue, multimodality, and a strong MV classifier) affects accuracy, both individually and when integrated together. For the multimodality component, we explore the fusion of RGB and depth maps obtained by high-definition light detection and ranging, a modality that is starting to receive increasing attention. As our analysis reveals, although all the aforementioned aspects significantly help to improve accuracy, the fusion of visible-spectrum and depth information boosts accuracy by a much larger margin. The resulting detector not only ranks among the top performers on the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2168-2267 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.085; 600.082; 600.076; 600.118 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 2810 | ||
Author | Alejandro Cartas; Mariella Dimiccoli; Petia Radeva | ||||
Title | Batch-based activity recognition from egocentric photo-streams | Type | Conference Article | ||
Year | 2017 | Publication | 1st International workshop on Egocentric Perception, Interaction and Computing | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Activity recognition from long, unstructured egocentric photo-streams has several applications in assistive technology, such as health monitoring and frailty detection, to name just a few. However, one of its main technical challenges is dealing with the low frame rate of wearable photo-cameras, which causes abrupt appearance changes between consecutive frames. As a consequence, important discriminative low-level motion features such as optical flow cannot be estimated. In this paper, we present a batch-driven approach for training a deep learning architecture that strongly relies on long short-term memory (LSTM) units to tackle this problem. We propose two different implementations of the same approach that process a photo-stream sequence using batches of fixed size, with the goal of capturing the temporal evolution of high-level features. The main difference between these implementations is that one explicitly models consecutive batches by overlapping them. Experimental results on a public dataset acquired by three users demonstrate the ability of the proposed architectures to exploit the temporal evolution of convolutional features over time without relying on event boundaries. | ||||
Address | Venice; Italy; October 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV - EPIC | ||
Notes | MILAB; not mentioned | Approved | no | ||
Call Number | Admin @ si @ CDR2017 | Serial | 3023 | ||
Author | Albert Clapes; Tinne Tuytelaars; Sergio Escalera | ||||
Title | Darwintrees for action recognition | Type | Conference Article | ||
Year | 2017 | Publication | Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCVW | ||
Notes | HUPBA; not mentioned | Approved | no | ||
Call Number | Admin @ si @ CTE2017 | Serial | 3069 | ||
Author | Albert Berenguel; Oriol Ramos Terrades; Josep Llados; Cristina Cañero | ||||
Title | e-Counterfeit: a mobile-server platform for document counterfeit detection | Type | Conference Article | ||
Year | 2017 | Publication | 14th IAPR International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents a novel application for detecting counterfeit identity documents forged by a scan-and-print operation. Texture analysis approaches are proposed to extract validation features from the security background that is usually printed in documents such as IDs or banknotes. The main contribution of this work is the end-to-end mobile-server architecture, which provides a service for non-expert users and can therefore be used in several scenarios. The system also provides a crowdsourcing mode, so labeled images can be gathered, generating databases for incremental training of the algorithms. | ||||
Address | Kyoto; Japan; November 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.061; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ BRL2018 | Serial | 3084 | ||
Author | Albert Berenguel; Oriol Ramos Terrades; Josep Llados; Cristina Cañero | ||||
Title | Evaluation of Texture Descriptors for Validation of Counterfeit Documents | Type | Conference Article | ||
Year | 2017 | Publication | 14th IAPR International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1237-1242 | ||
Keywords | |||||
Abstract | This paper describes an exhaustive comparative analysis and evaluation of existing texture descriptor algorithms for differentiating between genuine and counterfeit documents. We include different categories of algorithms in our experiments and compare them in different scenarios with several counterfeit datasets comprising banknotes and identity documents. The computational time of extracting each descriptor is important because the final objective is to use it in a real industrial scenario. HOG- and CNN-based descriptors stand out statistically over the rest in terms of the F1-score/time ratio. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2379-2140 | ISBN | Medium | ||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.061; 601.269; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ BRL2017 | Serial | 3092 | ||
Author | Aitor Alvarez-Gila; Joost Van de Weijer; Estibaliz Garrote | ||||
Title | Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB | Type | Conference Article | ||
Year | 2017 | Publication | 1st International Workshop on Physics Based Vision meets Deep Learning | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer. Given the heavily underconstrained, non-linear nature of the problem, traditional techniques leverage different statistical properties of the spectral signal to build informative priors from real-world object reflectances for constructing such an RGB-to-spectral-signal mapping. However, most of them treat each sample independently and thus do not benefit from the contextual information that the spatial dimensions can provide. We pose hyperspectral natural image reconstruction as an image-to-image mapping learning problem and apply a conditional generative adversarial framework to help capture spatial semantics. This is the first time Convolutional Neural Networks (and, particularly, Generative Adversarial Networks) are used to solve this task. Quantitative evaluation shows a Root Mean Squared Error (RMSE) drop of 44.7% and a Relative RMSE drop of 47.0% on the ICVL natural hyperspectral image dataset. | ||||
Address | Venice; Italy; October 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV-PBDL | ||
Notes | LAMP; 600.109; 600.106; 600.120 | Approved | no | ||
Call Number | Admin @ si @ AWG2017 | Serial | 2969 | ||
Author | Adria Rico; Alicia Fornes | ||||
Title | Camera-based Optical Music Recognition using a Convolutional Neural Network | Type | Conference Article | ||
Year | 2017 | Publication | 12th IAPR International Workshop on Graphics Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 27-28 | ||
Keywords | optical music recognition; document analysis; convolutional neural network; deep learning | ||||
Abstract | Optical Music Recognition (OMR) consists in recognizing images of music scores. Contrary to expectations, current OMR systems usually fail when recognizing images of scores captured by digital cameras and smartphones. In this work, we propose a camera-based OMR system based on Convolutional Neural Networks, showing promising preliminary results. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | GREC | ||
Notes | DAG; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RiF2017 | Serial | 3059 | ||