Records
Author Zhijie Fang; Antonio Lopez
Title Is the Pedestrian going to Cross? Answering by 2D Pose Estimation Type Conference Article
Year 2018 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 1271 - 1276
Keywords
Abstract Our recent work suggests that, thanks to today's powerful CNNs, image-based 2D pose estimation is a promising cue for determining pedestrian intentions such as crossing the road in the path of the ego-vehicle, stopping before entering the road, and starting to walk or bending towards the road. This statement is based on results obtained on non-naturalistic sequences (Daimler dataset), i.e., sequences choreographed specifically for performing the study. Fortunately, a new publicly available dataset (JAAD) has recently appeared, allowing the development of methods for detecting pedestrian intentions in naturalistic driving conditions; more specifically, for addressing the relevant question: is the pedestrian going to cross? Accordingly, in this paper we use JAAD to assess the usefulness of 2D pose estimation for answering this question. We combine CNN-based pedestrian detection, tracking and pose estimation to predict the crossing action from monocular images. Overall, the proposed pipeline provides new state-of-the-art results.
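The pipeline described above (detection, tracking, 2D pose estimation, then a crossing decision) lends itself to a simple illustration. The following Python sketch assumes the 2D keypoints have already been extracted per frame for a tracked pedestrian and trains a toy crossing / not-crossing classifier on a fixed-length temporal window; the keypoint count, window length and random-forest classifier are illustrative assumptions, not the authors' exact setup.

# Minimal, illustrative sketch (not the paper's pipeline): build a temporal
# descriptor from per-frame 2D body keypoints and train a toy classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_KEYPOINTS = 18        # e.g., an OpenPose-style skeleton (assumption)
WINDOW = 14             # consecutive frames per sample (assumption)

def pose_window_to_feature(keypoints_seq):
    """keypoints_seq: (WINDOW, N_KEYPOINTS, 2) array of image coordinates.
    Each frame is normalized by the skeleton bounding box so the feature is
    roughly invariant to pedestrian scale and position."""
    feats = []
    for kp in keypoints_seq:
        mins, maxs = kp.min(axis=0), kp.max(axis=0)
        span = np.maximum(maxs - mins, 1e-6)
        feats.append(((kp - mins) / span).ravel())
    return np.concatenate(feats)

# Toy example with random data, just to show the shapes involved.
rng = np.random.default_rng(0)
X = np.stack([pose_window_to_feature(rng.random((WINDOW, N_KEYPOINTS, 2)))
              for _ in range(200)])
y = rng.integers(0, 2, size=200)   # 1 = will cross, 0 = will not cross

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("toy training accuracy:", clf.score(X, y))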
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IV
Notes ADAS; 600.124; 600.116; 600.118 Approved no
Call Number Admin @ si @ FaL2018 Serial 3181
 

 
Author Jiaolong Xu; Peng Wang; Heng Yang; Antonio Lopez
Title Training a Binary Weight Object Detector by Knowledge Transfer for Autonomous Driving Type Conference Article
Year 2019 Publication IEEE International Conference on Robotics and Automation Abbreviated Journal
Volume Issue Pages 2379-2384
Keywords
Abstract Autonomous driving imposes harsh requirements on model size and energy efficiency in order to enable the embedded system to achieve real-time on-board object detection. Recent deep convolutional neural network based object detectors have achieved state-of-the-art accuracy. However, such models are trained with numerous parameters, and their high computational cost and large storage footprint prohibit deployment on systems with limited memory and computation resources. Low-precision neural networks are popular techniques for reducing the computation requirements and memory footprint. Among them, the binary weight neural network (BWN) is the extreme case, which quantizes the floating-point weights into just one bit. BWNs are difficult to train and suffer from accuracy degradation due to the extremely low-bit representation. To address this problem, we propose a knowledge transfer (KT) method to aid the training of BWNs using a full-precision teacher network. We build DarkNet- and MobileNet-based binary weight YOLO-v2 detectors and conduct experiments on the KITTI benchmark for car, pedestrian and cyclist detection. The experimental results show that the proposed method maintains high detection accuracy while reducing the model size of DarkNet-YOLO from 257 MB to 8.8 MB and MobileNet-YOLO from 193 MB to 7.9 MB.
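The two ingredients named in the abstract, per-channel weight binarization and a knowledge-transfer term driven by a full-precision teacher, can be sketched as follows. This is a minimal, hedged PyTorch illustration with toy layers; the actual detectors, layer shapes and loss weighting of the paper are not reproduced here.

# Hedged sketch of binary-weight training aided by a teacher (shapes/weights are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize_weights(w: torch.Tensor) -> torch.Tensor:
    """Binary-weight approximation: W ~ alpha * sign(W), with one scaling
    factor alpha per output channel (XNOR-style BWN)."""
    alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)
    return alpha * w.sign()

# Tiny stand-ins for a full-precision teacher and a binary-weight student.
teacher = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

x = torch.randn(2, 3, 64, 64)

# Forward the student with binarized weights (the binarized copy replaces the
# full-precision weights only for the forward pass).
w_fp = student[0].weight.data.clone()
student[0].weight.data = binarize_weights(w_fp)
feat_s = student(x)
feat_t = teacher(x).detach()

# Knowledge-transfer term: make the binary student mimic the teacher's features;
# a task loss (e.g., the YOLO detection loss) would be added on top.
kt_loss = F.mse_loss(feat_s, feat_t)
print("feature-mimicking loss:", float(kt_loss))

# Restore the full-precision weights, which are the ones actually updated.
student[0].weight.data = w_fp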
Address Montreal; Canada; May 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICRA
Notes ADAS; 600.124; 600.116; 600.118 Approved no
Call Number Admin @ si @ XWY2018 Serial 3182
 

 
Author Akhil Gurram; Onay Urfalioglu; Ibrahim Halfaoui; Fahd Bouzaraa; Antonio Lopez
Title Monocular Depth Estimation by Learning from Heterogeneous Datasets Type Conference Article
Year 2018 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 2176 - 2181
Keywords
Abstract Depth estimation provides essential information to perform autonomous driving and driver assistance. In particular, Monocular Depth Estimation is interesting from a practical point of view, since using a single camera is cheaper than many other options and avoids the need for continuous calibration strategies as required by stereo-vision approaches. State-of-the-art methods for Monocular Depth Estimation are based on Convolutional Neural Networks (CNNs). A promising line of work consists of introducing additional semantic information about the traffic scene when training CNNs for depth estimation. In practice, this means that the depth data used for CNN training is complemented with images having pixel-wise semantic labels, which usually are difficult to annotate (e.g., crowded urban images). Moreover, so far it is common practice to assume that the same raw training data is associated with both types of ground truth, i.e., depth and semantic labels. The main contribution of this paper is to show that this hard constraint can be circumvented, i.e., that we can train CNNs for depth estimation by leveraging the depth and semantic information coming from heterogeneous datasets. In order to illustrate the benefits of our approach, we combine the KITTI depth and Cityscapes semantic segmentation datasets, outperforming state-of-the-art results on Monocular Depth Estimation.
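One common way to train from such heterogeneous supervision is a shared encoder with a depth head and a semantic head, updated with alternating batches from the two datasets. The PyTorch sketch below illustrates that general idea only; the encoder, heads, losses and schedule are assumptions for illustration, not the architecture used in the paper.

# Hedged sketch: shared encoder, two heads, alternating depth/semantic batches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNet(nn.Module):
    def __init__(self, n_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.depth_head = nn.Conv2d(32, 1, 1)          # per-pixel depth
        self.sem_head = nn.Conv2d(32, n_classes, 1)    # per-pixel class scores

    def forward(self, x):
        f = self.encoder(x)
        return self.depth_head(f), self.sem_head(f)

net = TwoHeadNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

# Dummy stand-ins for a KITTI-like depth batch and a Cityscapes-like semantic batch.
img_d, gt_depth = torch.randn(2, 3, 64, 64), torch.rand(2, 1, 64, 64)
img_s, gt_sem = torch.randn(2, 3, 64, 64), torch.randint(0, 19, (2, 64, 64))

for step in range(2):
    if step % 2 == 0:                       # depth batch
        pred_d, _ = net(img_d)
        loss = F.l1_loss(pred_d, gt_depth)
    else:                                   # semantic batch
        _, pred_s = net(img_s)
        loss = F.cross_entropy(pred_s, gt_sem)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: loss {float(loss):.4f}")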
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IV
Notes ADAS; 600.124; 600.116; 600.118 Approved no
Call Number Admin @ si @ GUH2018 Serial 3183
 

 
Author Alejandro Cartas; Estefania Talavera; Petia Radeva; Mariella Dimiccoli
Title On the Role of Event Boundaries in Egocentric Activity Recognition from Photostreams Type Miscellaneous
Year 2018 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Event boundaries play a crucial role as a pre-processing step for detection, localization, and recognition tasks of human activities in videos. Typically, despite their intrinsic subjectiveness, temporal boundaries are provided manually as input for training action recognition algorithms. However, their role in activity recognition in the domain of egocentric photostreams has so far been neglected. In this paper, we provide insights into how automatically computed boundaries can impact activity recognition results in the emerging domain of egocentric photostreams. Furthermore, we collected a new annotated dataset acquired by 15 people with a wearable photo-camera and used it to show the generalization capabilities of several deep learning based architectures to unseen users.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ CTR2018 Serial 3184
 

 
Author Md.Mostafa Kamal Sarker; Hatem A. Rashwan; Estefania Talavera; Syeda Furruka Banu; Petia Radeva; Domenec Puig
Title MACNet: Multi-scale Atrous Convolution Networks for Food Places Classification in Egocentric Photo-streams Type Conference Article
Year 2018 Publication European Conference on Computer Vision workshops Abbreviated Journal
Volume Issue Pages 423-433
Keywords
Abstract A first-person (wearable) camera continually captures unscripted interactions of the camera user with objects, people, and scenes, reflecting his or her personal and relational tendencies. One of these tendencies is people's interaction with food events. The regulation of food intake and its duration is of great importance for protecting against disease. Consequently, this work aims to develop a smart model that is able to determine the recurrence of a person at food places during a day. This model is based on a deep end-to-end model for automatic food-place recognition by analyzing egocentric photo-streams. In this paper, we apply multi-scale atrous convolution networks to extract the key features related to food places from the input images. The proposed model is evaluated on an in-house private dataset called “EgoFoodPlaces”. Experimental results show promising performance for food-place classification in egocentric photo-streams.
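The central architectural element named in the abstract, a multi-scale atrous (dilated) convolution block, can be sketched as below. The dilation rates and channel widths are assumptions for illustration; MACNet's actual configuration is not reproduced here.

# Minimal multi-scale atrous convolution block (rates/channels are assumptions).
import torch
import torch.nn as nn

class MultiScaleAtrous(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, concatenated and
    fused, so features are captured at several receptive field sizes."""
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

block = MultiScaleAtrous()
print(block(torch.randn(1, 64, 32, 32)).shape)   # -> torch.Size([1, 64, 32, 32])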
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECCVW
Notes MILAB; no menciona Approved no
Call Number Admin @ si @ SRR2018b Serial 3185
 

 
Author Alejandro Cartas; Juan Marin; Petia Radeva; Mariella Dimiccoli
Title Batch-based activity recognition from egocentric photo-streams revisited Type Journal Article
Year 2018 Publication Pattern Analysis and Applications Abbreviated Journal PAA
Volume 21 Issue 4 Pages 953–965
Keywords Egocentric vision; Lifelogging; Activity recognition; Deep learning; Recurrent neural networks
Abstract Wearable cameras can gather large amounts of image data that provide rich visual information about the daily activities of the wearer. Motivated by the large number of health applications that could be enabled by the automatic recognition of daily activities, such as lifestyle characterization for habit improvement, context-aware personal assistance and tele-rehabilitation services, we propose a system to classify 21 daily activities from photo-streams acquired by a wearable photo-camera. Our approach combines the advantages of a late fusion ensemble strategy relying on convolutional neural networks at image level with the ability of recurrent neural networks to account for the temporal evolution of high-level features in photo-streams without relying on event boundaries. The proposed batch-based approach achieved an overall accuracy of 89.85%, outperforming state-of-the-art end-to-end methodologies. These results were achieved on a dataset consisting of 44,902 egocentric pictures from three persons, captured over 26 days on average.
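The batch-based idea, per-image CNN features aggregated by a recurrent network over a fixed-length group of consecutive photos with no event boundaries required, can be illustrated as follows. Feature dimensionality, batch length and hidden size are assumptions; the paper's CNN backbone and ensemble strategy are not shown.

# Hedged sketch: LSTM over per-image CNN features (features are random stand-ins).
import torch
import torch.nn as nn

N_ACTIVITIES = 21
FEAT_DIM = 512          # per-image CNN feature dimensionality (assumption)
BATCH_LEN = 10          # photos per temporal batch (assumption)

class StreamClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(FEAT_DIM, 256, batch_first=True)
        self.head = nn.Linear(256, N_ACTIVITIES)

    def forward(self, feats):               # feats: (B, BATCH_LEN, FEAT_DIM)
        out, _ = self.lstm(feats)
        return self.head(out)               # one activity prediction per photo

model = StreamClassifier()
feats = torch.randn(4, BATCH_LEN, FEAT_DIM)     # stand-in for CNN features
print(model(feats).shape)                       # -> torch.Size([4, 10, 21])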
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ CMR2018 Serial 3186
 

 
Author Mariella Dimiccoli; Cathal Gurrin; David J. Crandall; Xavier Giro; Petia Radeva
Title Introduction to the special issue: Egocentric Vision and Lifelogging Type Journal Article
Year 2018 Publication Journal of Visual Communication and Image Representation Abbreviated Journal JVCIR
Volume 55 Issue Pages 352-353
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ DGC2018 Serial 3187
 

 
Author Sumit K. Banchhor; Narendra D. Londhe; Tadashi Araki; Luca Saba; Petia Radeva; Narendra N. Khanna; Jasjit S. Suri
Title Calcium detection, its quantification, and grayscale morphology-based risk stratification using machine learning in multimodality big data coronary and carotid scans: A review. Type Journal Article
Year 2018 Publication Computers in Biology and Medicine Abbreviated Journal CBM
Volume 101 Issue Pages 184-198
Keywords Heart disease; Stroke; Atherosclerosis; Intravascular; Coronary; Carotid; Calcium; Morphology; Risk stratification
Abstract Purpose of review

Atherosclerosis is the leading cause of cardiovascular disease (CVD) and stroke. Typically, atherosclerotic calcium is found during the mature stage of the atherosclerosis disease. It is therefore often a challenge to identify and quantify the calcium. This is due to the presence of multiple components of plaque buildup in the arterial walls. The American College of Cardiology/American Heart Association guidelines point to the importance of calcium in the coronary and carotid arteries and further recommend its quantification for the prevention of heart disease. It is therefore essential to stratify the CVD risk of the patient into low- and high-risk bins.
Recent findings

Calcium formation in the artery walls is multifocal in nature with sizes at the micrometer level. Thus, its detection requires high-resolution imaging. Clinical experience has shown that even though optical coherence tomography offers better resolution, intravascular ultrasound still remains an important imaging modality for coronary wall imaging. For a computer-based analysis system to be complete, it must be scientifically and clinically validated. This study presents a state-of-the-art review (condensation of 152 publications after examining 200 articles) covering the methods for calcium detection and its quantification for coronary and carotid arteries, the pros and cons of these methods, and the risk stratification strategies. The review also presents different kinds of statistical models and gold standard solutions for the evaluation of software systems useful for calcium detection and quantification. Finally, the review concludes with a possible vision for designing the next-generation system for better clinical outcomes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ BLA2018 Serial 3188
 

 
Author Marçal Rusiñol; Dimosthenis Karatzas; Josep Llados
Title Automatic Verification of Properly Signed Multi-page Document Images Type Conference Article
Year 2015 Publication Proceedings of the Eleventh International Symposium on Visual Computing Abbreviated Journal
Volume 9475 Issue Pages 327-336
Keywords Document Image; Manual Inspection; Signature Verification; Rejection Criterion; Document Flow
Abstract In this paper we present an industrial application for the automatic screening of incoming multi-page documents in a banking workflow, aimed at determining whether these documents are properly signed or not. The proposed method is divided into three main steps. First, individual pages are classified in order to identify the pages that should contain a signature. In a second step, we segment within those key pages the locations where the signatures should appear. The last step checks whether the signatures are present or not. Our method is tested in a real large-scale environment and we report the results when checking two different types of real multi-page contracts, comprising in total more than 14,500 pages.
Address Las Vegas, Nevada, USA; December 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume 9475 Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ISVC
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ Serial 3189
 

 
Author L. Rothacker; Marçal Rusiñol; Josep Llados; G.A. Fink
Title A Two-stage Approach to Segmentation-Free Query-by-example Word Spotting Type Journal
Year 2014 Publication Manuscript Cultures Abbreviated Journal
Volume 7 Issue Pages 47-58
Keywords
Abstract With the ongoing progress in digitization, huge document collections and archives have become available to a broad audience. Scanned document images can be transmitted electronically and studied simultaneously throughout the world. While this is very beneficial, it is often impossible to perform automated searches on these document collections. Optical character recognition usually fails when it comes to handwritten or historic documents. In order to address the need for exploring document collections rapidly, researchers are working on word spotting. In query-by-example word spotting scenarios, the user selects an exemplary occurrence of the query word in a document image. The word spotting system then retrieves all regions in the collection that are visually similar to the given example of the query word. The best matching regions are presented to the user and no actual transcription is required.
An important property of a word spotting system is the computational speed with which queries can be executed. In our previous work, we presented a relatively slow but high-precision method. In the present work, we will extend this baseline system to an integrated two-stage approach. In a coarse-grained first stage, we will filter document images efficiently in order to identify regions that are likely to contain the query word. In the fine-grained second stage, these regions will be analyzed with our previously presented high-precision method. Finally, we will report recognition results and query times for the well-known George Washington benchmark in our evaluation. We achieve state-of-the-art recognition results while the query times can be reduced to 50% in comparison with our baseline.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.061; 600.077 Approved no
Call Number Admin @ si @ Serial 3190
 

 
Author Cristhian A. Aguilera-Carrasco; C. Aguilera; Angel Sappa
Title Melamine Faced Panels Defect Classification beyond the Visible Spectrum Type Journal Article
Year 2018 Publication Sensors Abbreviated Journal SENS
Volume 18 Issue 11 Pages 1-10
Keywords industrial application; infrared; machine learning
Abstract In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels, which can appear during the production process. We experimentally evaluate the use of images from the visible (VS), near-infrared (NIR), and long wavelength infrared (LWIR) bands to classify the defects, using a feature descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated, Extended Local Binary Patterns (E-LBP) and SURF using a Bag of Words (BoW) representation. The evaluation was carried out on an image set obtained during this work, which contains five different defect categories that currently occur in the industry. Results show that using images from beyond the visible spectrum helps to improve classification performance compared with a visible-spectrum-only solution.
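A heavily simplified sketch of the descriptor-plus-SVM pipeline is shown below; plain uniform LBP histograms stand in for the E-LBP and SURF/BoW descriptors actually evaluated, and the data is random. It only illustrates the general texture-descriptor + SVM scheme, not the paper's setup.

# Simplified texture descriptor + SVM sketch (LBP histogram stands in for E-LBP / SURF-BoW).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1                      # LBP neighbourhood parameters (assumption)

def lbp_histogram(gray_image):
    """Uniform LBP histogram used as a per-image texture descriptor."""
    lbp = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Toy data: random grayscale patches for two defect categories.
rng = np.random.default_rng(0)
X = np.stack([lbp_histogram(rng.random((64, 64))) for _ in range(100)])
y = rng.integers(0, 2, size=100)

clf = SVC(kernel="rbf").fit(X, y)
print("toy training accuracy:", clf.score(X, y))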
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MSIAU; 600.122 Approved no
Call Number Admin @ si @ AAS2018 Serial 3191
 

 
Author Xavier Soria; Angel Sappa
Title Improving Edge Detection in RGB Images by Adding NIR Channel Type Conference Article
Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet Based System Abbreviated Journal
Volume Issue Pages
Keywords Edge detection; Contour detection; VGG; CNN; RGB-NIR; Near infrared images
Abstract Edge detection is still a critical problem in many computer vision and image processing tasks. This manuscript presents a Holistically-Nested Edge Detection based approach to study the inclusion of near-infrared information together with visible spectrum images. To do so, a single-sensor dataset has been acquired in the 400 nm to 1100 nm wavelength spectral band. Prominent results have been obtained even when the ground truth (annotated edge map) is based on the visible wavelength spectrum.
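The input-side change such an approach implies, feeding RGB plus NIR (four channels) into a VGG/HED-style edge network, can be sketched as follows. The tiny network below is purely illustrative; the actual HED/VGG architecture and training details are not reproduced.

# Hedged sketch: 4-channel (RGB+NIR) input to a toy edge-detection network.
import torch
import torch.nn as nn

class TinyEdgeNet(nn.Module):
    def __init__(self, in_channels=4):                 # 3 RGB + 1 NIR
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.edge_head = nn.Conv2d(32, 1, 1)           # per-pixel edge probability

    def forward(self, x):
        return torch.sigmoid(self.edge_head(self.features(x)))

rgb_nir = torch.randn(1, 4, 128, 128)
print(TinyEdgeNet()(rgb_nir).shape)                    # -> torch.Size([1, 1, 128, 128])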
Address Las Palmas de Gran Canaria; November 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference SITIS
Notes MSIAU; 600.122 Approved no
Call Number Admin @ si @ SoS2018 Serial 3192
 

 
Author Patricia Suarez; Angel Sappa; Boris X. Vintimilla
Title Cross-spectral image dehaze through a dense stacked conditional GAN based approach Type Conference Article
Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet Based System Abbreviated Journal
Volume Issue Pages
Keywords Infrared imaging; Dense; Stacked CGAN; Crossspectral; Convolutional networks
Abstract This paper proposes a novel approach to remove haze from RGB images using near-infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The implemented deep network receives, besides the hazy image, its corresponding image in the near-infrared spectrum, which serves to accelerate the learning of the detailed characteristics of the images. The model uses a triplet layer that allows each channel of the visible spectrum image to be learned independently, so that haze is removed from each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results show that the proposed method effectively removes haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
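The per-channel (triplet) idea can be illustrated with a very reduced sketch: one small generator per visible channel, each conditioned on the hazy channel plus the NIR image, trained here with only an L1 colour term. The adversarial and structural terms used in the paper are omitted, and all layer sizes are assumptions.

# Hedged sketch: one toy generator per colour channel, conditioned on NIR.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_channel_generator():
    # input: 1 hazy colour channel + 1 NIR channel -> 1 dehazed colour channel
    return nn.Sequential(
        nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
    )

generators = nn.ModuleList(make_channel_generator() for _ in range(3))  # R, G, B

hazy = torch.rand(1, 3, 64, 64)
nir = torch.rand(1, 1, 64, 64)
clear = torch.rand(1, 3, 64, 64)            # ground-truth haze-free image

outputs, loss = [], 0.0
for c, gen in enumerate(generators):
    pred_c = gen(torch.cat([hazy[:, c:c+1], nir], dim=1))
    outputs.append(pred_c)
    loss = loss + F.l1_loss(pred_c, clear[:, c:c+1])   # colour (L1) term only

dehazed = torch.cat(outputs, dim=1)
print(dehazed.shape, float(loss))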
Address Las Palmas de Gran Canaria; November 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-5386-9385-8 Medium
Area Expedition Conference SITIS
Notes MSIAU; 600.086; 600.130; 600.122 Approved no
Call Number Admin @ si @ SSV2018a Serial 3193
 

 
Author Jorge Charco; Boris X. Vintimilla; Angel Sappa
Title Deep learning based camera pose estimation in multi-view environment Type Conference Article
Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet Based System Abbreviated Journal
Volume Issue Pages
Keywords Deep learning; Camera pose estimation; Multiview environment; Siamese architecture
Abstract This paper proposes a deep learning network architecture for relative camera pose estimation in a multi-view environment. The proposed network is a variant of the AlexNet architecture used as a regressor that outputs the relative translation and rotation. The approach is trained from scratch on a large dataset, taking as input a pair of images from the same scene. This new architecture is compared with a previous approach using standard metrics, obtaining better results on relative camera pose.
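A hedged sketch of the underlying scheme, two images through a shared backbone whose features are concatenated and regressed to a 3-D translation plus a unit quaternion, is given below. The toy backbone stands in for the AlexNet variant; the sizes and output parameterization are assumptions.

# Hedged sketch: Siamese-style relative camera pose regressor with a toy backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regressor = nn.Linear(32 * 2, 7)          # [tx, ty, tz, qw, qx, qy, qz]

    def forward(self, img_a, img_b):
        f = torch.cat([self.backbone(img_a), self.backbone(img_b)], dim=1)
        out = self.regressor(f)
        t, q = out[:, :3], F.normalize(out[:, 3:], dim=1)   # unit quaternion
        return t, q

net = RelativePoseNet()
a, b = torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128)
t, q = net(a, b)
print(t.shape, q.shape)     # -> torch.Size([2, 3]) torch.Size([2, 4])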
Address Las Palmas de Gran Canaria; November 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference SITIS
Notes MSIAU; 600.086; 600.130; 600.122 Approved no
Call Number Admin @ si @ CVS2018 Serial 3194
 

 
Author Patricia Suarez; Angel Sappa; Boris X. Vintimilla; Riad I. Hammoud
Title Near InfraRed Imagery Colorization Type Conference Article
Year 2018 Publication 25th International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 2237 - 2241
Keywords Convolutional Neural Networks (CNN), Generative Adversarial Network (GAN), Infrared Imagery colorization
Abstract This paper proposes a stacked conditional Generative Adversarial Network based method for Near InfraRed (NIR) imagery colorization. We propose a variant Generative Adversarial Network (GAN) architecture that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss-function combination yields better generalization and representation of the generated colored IR images. The proposed approach is evaluated on a large test dataset and compared to recent state-of-the-art methods using standard metrics.
Address Athens; Greece; October 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes MSIAU; 600.086; 600.130; 600.122 Approved no
Call Number Admin @ si @ SSV2018b Serial 3195