Author David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin
Title Virtual Worlds and Active Learning for Human Detection Type Conference Article
Year 2011 Publication 13th International Conference on Multimodal Interaction Abbreviated Journal
Volume Issue Pages 393-400
Keywords Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning
Abstract Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a subject of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is a labour-intensive manual step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in real-world images. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs as well, on real-world images, as one trained on an equal amount of manually labelled real-world samples. To do so, we cast the problem as one of domain adaptation, assuming that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Ultimately, our human model is therefore learnt from a combination of virtual- and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.
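The approach described above combines automatically labelled virtual-world samples with a small set of actively selected, manually labelled real-world samples. A minimal sketch of that general idea, assuming precomputed feature vectors (e.g. HOG descriptors), a linear SVM and simple uncertainty sampling, is given below; the function name, selection budget and classifier are illustrative assumptions, not the paper's actual non-standard active learning scheme.

import numpy as np
from sklearn.svm import LinearSVC

def active_domain_adaptation(X_virtual, y_virtual, X_real_pool, y_real_oracle, budget=100):
    """Train on virtual-world samples, then add the most uncertain real samples.

    X_virtual, X_real_pool : (n, d) feature arrays (e.g. HOG descriptors).
    y_real_oracle          : labels revealed only for the queried real samples.
    Illustrative uncertainty-sampling loop, not the paper's method.
    """
    clf = LinearSVC(C=0.01).fit(X_virtual, y_virtual)

    # Query the real samples closest to the decision boundary (most uncertain).
    margins = np.abs(clf.decision_function(X_real_pool))
    query = np.argsort(margins)[:budget]

    # Retrain on the combination of virtual and queried real samples.
    X_comb = np.vstack([X_virtual, X_real_pool[query]])
    y_comb = np.concatenate([y_virtual, y_real_oracle[query]])
    return LinearSVC(C=0.01).fit(X_comb, y_comb)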
Address Alicante, Spain
Corporate Author Thesis
Publisher ACM DL Place of Publication New York, NY, USA Editor
Language English Summary Language English Original Title Virtual Worlds and Active Learning for Human Detection
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-0641-6 Medium
Area Expedition Conference ICMI
Notes ADAS Approved yes
Call Number ADAS @ adas @ VLP2011a Serial 1683
Permanent link to this record
 

 
Author Aura Hernandez-Sabate; Debora Gil
Title The Benefits of IVUS Dynamics for Retrieving Stable Models of Arteries Type Book Chapter
Year 2012 Publication Intravascular Ultrasound Abbreviated Journal
Volume Issue Pages 185-206
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Intech Place of Publication Editor Yasuhiro Honda
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-953-307-900-4 Medium
Area Expedition Conference
Notes IAM; ADAS Approved no
Call Number IAM @ iam @ HeG2012 Serial 1684
Permanent link to this record
 

 
Author Andrew Nolan; Daniel Serrano; Aura Hernandez-Sabate; Daniel Ponsa; Antonio Lopez
Title Obstacle mapping module for quadrotors on outdoor Search and Rescue operations Type Conference Article
Year 2013 Publication International Micro Air Vehicle Conference and Flight Competition Abbreviated Journal
Volume Issue Pages
Keywords UAV
Abstract Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAV), due to their limited payload capacity to carry advanced sensors. Unlike larger vehicles, MAV can only carry lightweight sensors, for instance a camera, which is our main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse PADE performance and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show that PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential application of PADE for MAV to perform safe autonomous navigation in unknown and unstructured environments.
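For context on the TTC baseline mentioned above: under pure forward motion, the time to collision of an image point can be estimated from optical flow as r / (dr/dt), where r is the point's distance to the focus of expansion (FOE) and dr/dt the radial flow magnitude. The sketch below illustrates only this classical baseline; the flow field and FOE location are assumed inputs, and it is not the PADE method proposed in the paper.

import numpy as np

def time_to_collision(flow, foe, eps=1e-6):
    """Per-pixel TTC (in frames) from a dense optical-flow field.

    flow : (H, W, 2) array of per-pixel flow vectors (u, v) between frames.
    foe  : (x, y) focus of expansion in pixel coordinates.
    Assumes pure forward translation, so the flow radiates from the FOE.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    rx, ry = xs - foe[0], ys - foe[1]      # vector from the FOE to each pixel
    r = np.sqrt(rx**2 + ry**2)             # distance to the FOE
    # Radial component of the flow, i.e. the expansion speed away from the FOE.
    v_rad = (flow[..., 0] * rx + flow[..., 1] * ry) / (r + eps)
    return r / (v_rad + eps)               # TTC = r / (dr/dt)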
Address Toulouse; France; September 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IMAV
Notes ADAS; 600.054; 600.057;IAM Approved no
Call Number Admin @ si @ NSH2013 Serial 2371
Permanent link to this record
 

 
Author Anastasios Doulamis; Nikolaos Doulamis; Marco Bertini; Jordi Gonzalez; Thomas B. Moeslund
Title Analysis and Retrieval of Tracked Events and Motion in Imagery Streams Type Miscellaneous
Year 2013 Publication ACM/IEEE International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Barcelona; October 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ DDB2013 Serial 2372
Permanent link to this record
 

 
Author H. Emrah Tasli; Cevahir Çigla; Theo Gevers; A. Aydin Alatan
Title Super pixel extraction via convexity induced boundary adaptation Type Conference Article
Year 2013 Publication 14th IEEE International Conference on Multimedia and Expo Abbreviated Journal
Volume Issue Pages 1-6
Keywords
Abstract This study presents an efficient super-pixel extraction algorithm with major contributions to the state of the art in terms of accuracy and computational complexity. Segmentation accuracy is improved through the use of a convexity-constrained geodesic distance, while computational efficiency is achieved by replacing complete region processing with a boundary adaptation idea. Starting from uniformly distributed, rectangular, equal-sized super-pixels, region boundaries are iteratively adapted to intensity edges by assigning boundary pixels to the most similar neighboring super-pixels. At each iteration, super-pixel regions are updated and hence progressively converge to compact pixel groups. Experimental results with state-of-the-art comparisons validate the performance of the proposed technique in terms of both accuracy and speed.
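To make the boundary-adaptation idea above concrete, the sketch below initialises an equal-sized rectangular grid and iteratively reassigns superpixel boundary pixels to the neighbouring superpixel with the closest mean intensity. It is a deliberately simplified, grayscale-only illustration that omits the paper's convexity-constrained geodesic distance; the grid step and iteration count are assumptions.

import numpy as np

def grid_superpixels(h, w, step):
    """Initial equal-sized rectangular superpixel labels."""
    ys, xs = np.mgrid[0:h, 0:w]
    return (ys // step) * ((w + step - 1) // step) + (xs // step)

def adapt_boundaries(image, step=20, n_iter=10):
    """Iteratively move superpixel boundary pixels to the most similar neighbour.

    image : (H, W) float grayscale array. Returns an (H, W) integer label map.
    """
    h, w = image.shape
    labels = grid_superpixels(h, w, step)
    for _ in range(n_iter):
        # Mean intensity of every superpixel under the current labelling.
        means = np.bincount(labels.ravel(), weights=image.ravel())
        means /= np.maximum(np.bincount(labels.ravel()), 1)
        new_labels = labels.copy()
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                neigh = {labels[y - 1, x], labels[y + 1, x], labels[y, x - 1], labels[y, x + 1]}
                if len(neigh) > 1:  # boundary pixel: touches another superpixel
                    new_labels[y, x] = min(neigh, key=lambda k: abs(image[y, x] - means[k]))
        labels = new_labels
    return labels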
Address San Jose; USA; July 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1945-7871 ISBN Medium
Area Expedition Conference ICME
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ TÇG2013 Serial 2367
Permanent link to this record
 

 
Author H. Emrah Tasli; Jan van Gemert; Theo Gevers
Title Spot the differences: from a photograph burst to the single best picture Type Conference Article
Year 2013 Publication 21ST ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages 729-732
Keywords
Abstract With the rise of the digital camera, people nowadays typically take several near-identical photos of the same scene to maximize the chances of a good shot. This paper proposes a user-friendly tool for exploring a personal photo gallery and for selecting, or even creating, the best shot of a scene from its multiple alternatives. This functionality is realized through a graphical user interface where the best viewpoint can be selected from a generated panorama of the scene. Once the viewpoint is selected, the user is able to explore possible alternatives coming from the other images. Using this tool, one can explore a photo gallery efficiently. Moreover, additional compositions from other images are also possible; with such compositions, one can go from a burst of photographs to the single best picture. Even playful compositions, in which a person is duplicated in the same image, are possible with the proposed tool.
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM-MM
Notes ALTRES;ISE Approved no
Call Number TGG2013 Serial 2368
Permanent link to this record
 

 
Author Sezer Karaoglu; Jan van Gemert; Theo Gevers
Title Con-text: text detection using background connectivity for fine-grained object classification Type Conference Article
Year 2013 Publication 21ST ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages 757-760
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM-MM
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ KGG2013 Serial 2369
Permanent link to this record
 

 
Author Eric Amiel
Title Visualization of Blood Vessels Type Report
Year 2005 Publication Rapport de Stage Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Université Paul Sabatier Toulouse III Thesis Bachelor's thesis
Publisher Université Paul Sabatier Toulouse III Place of Publication Toulouse Editor Enric Marti
Language French Summary Language French Original Title Visualisation de vaisseaux sanguins
Series Editor IUP Systèmes Intelligents Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM Approved no
Call Number IAM @ iam @ Ami2005 Serial 1690
Permanent link to this record
 

 
Author Debora Gil; Agnes Borras; Manuel Ballester; Francesc Carreras; Ruth Aris; Manuel Vazquez; Enric Marti; Ferran Poveda
Title MIOCARDIA: Integrating cardiac function and muscular architecture for a better diagnosis Type Conference Article
Year 2011 Publication 14th International Symposium on Applied Sciences in Biomedical and Communication Technologies Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Deep understanding of the myocardial structure of the heart would unravel crucial knowledge for clinical and medical procedures. The MIOCARDIA project is a multidisciplinary project in cooperation with l'Hospital de la Santa Creu i de Sant Pau, Clinica la Creu Blanca and the Barcelona Supercomputing Center. The ultimate goal of this project is to define a computational model of the myocardium. The model takes into account the deep interrelation between the anatomy and the mechanics of the heart. The paper explains the workflow of the MIOCARDIA project. It also introduces a multiresolution reconstruction technique based on DT-MRI streamlining for simplified global myocardial model generation. Our reconstructions can restore the most complex myocardial structures and provide evidence of a global helical organization.
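The DT-MRI streamlining mentioned above is commonly implemented by integrating the principal eigenvector field of the diffusion tensors with a fixed step while keeping a consistent direction sign. The sketch below shows that generic tracing step only; the tensor volume, step size and seed point are assumed inputs, and it is not the project's multiresolution reconstruction pipeline.

import numpy as np

def trace_streamline(tensors, seed, step=0.5, n_steps=200):
    """Trace one fibre streamline through a DT-MRI volume.

    tensors : (X, Y, Z, 3, 3) array of symmetric diffusion tensors.
    seed    : starting position in voxel coordinates, shape (3,).
    Follows the principal eigenvector of the nearest voxel; a minimal sketch
    with no interpolation or anisotropy-based stopping criterion.
    """
    pos = np.asarray(seed, dtype=float)
    prev_dir = None
    path = [pos.copy()]
    for _ in range(n_steps):
        ix = np.clip(np.round(pos).astype(int), 0, np.array(tensors.shape[:3]) - 1)
        w, v = np.linalg.eigh(tensors[tuple(ix)])
        direction = v[:, np.argmax(w)]              # principal diffusion direction
        if prev_dir is not None and np.dot(direction, prev_dir) < 0:
            direction = -direction                  # keep a consistent orientation
        pos = pos + step * direction
        prev_dir = direction
        path.append(pos.copy())
    return np.array(path)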
Address Barcelona; Spain
Corporate Author Association for Computing Machinery Thesis
Publisher Place of Publication Barcelona, Spain Editor Association for Computing Machinery
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-0913-4 Medium
Area Expedition Conference ISABEL
Notes IAM Approved no
Call Number IAM @ iam @ GGB2011 Serial 1691
Permanent link to this record
 

 
Author Ivo Everts; Jan van Gemert; Theo Gevers
Title Evaluation of Color STIPs for Human Action Recognition Type Conference Article
Year 2013 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 2850-2857
Keywords
Abstract This paper is concerned with recognizing realistic human actions in videos based on spatio-temporal interest points (STIPs). Existing STIP-based action recognition approaches operate on intensity representations of the image data. Because of this, these approaches are sensitive to disturbing photometric phenomena such as highlights and shadows. Moreover, valuable information is neglected by discarding chromaticity from the photometric representation. These issues are addressed by Color STIPs. Color STIPs are multi-channel reformulations of existing intensity-based STIP detectors and descriptors, for which we consider a number of chromatic representations derived from the opponent color space. This enhanced modeling of appearance improves the quality of subsequent STIP detection and description. Color STIPs are shown to substantially outperform their intensity-based counterparts on the challenging UCF Sports, UCF11 and UCF50 action recognition benchmarks. Moreover, the results show that color STIPs are currently the single best low-level feature choice for STIP-based approaches to human action recognition.
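The chromatic representations referred to above are derived from the opponent color space; the standard transform used in color-descriptor work is O1 = (R − G)/√2, O2 = (R + G − 2B)/√6, O3 = (R + G + B)/√3. The sketch below applies this transform to an RGB frame as a building block for multi-channel interest-point detection; it is background illustration only, not the paper's full Color STIP detector or descriptor.

import numpy as np

def rgb_to_opponent(rgb):
    """Convert an (H, W, 3) float RGB image to opponent color channels.

    O1 carries red-green opponency, O2 yellow-blue opponency, O3 intensity.
    Multi-channel STIP variants run detection and description on such channels
    instead of on intensity alone.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)
    o3 = (r + g + b) / np.sqrt(3.0)
    return np.stack([o1, o2, o3], axis=-1)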
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN Medium
Area Expedition Conference CVPR
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ EGG2013 Serial 2364
Permanent link to this record
 

 
Author Fares Alnajar; Theo Gevers; Roberto Valenti; Sennay Ghebreab
Title Calibration-free Gaze Estimation using Human Gaze Patterns Type Conference Article
Year 2013 Publication 15th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 137-144
Keywords
Abstract We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look [12]. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans in order to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators.
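The transformation step described above maps a viewer's initial gaze-point topology onto reference gaze patterns. A common way to fit such a mapping, assuming point correspondences are already known, is a least-squares similarity transform (classical Procrustes/Umeyama alignment), sketched below; the real method must also establish the correspondences, which this sketch takes as given.

import numpy as np

def fit_similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation) that
    maps src points onto dst points; both are (N, 2) arrays with assumed
    one-to-one correspondences (Procrustes/Umeyama alignment)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(2)
    if np.linalg.det(U @ Vt) < 0:               # avoid reflections
        D[1, 1] = -1.0
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def apply_transform(points, scale, R, t):
    """Map initial gaze estimates into the reference gaze-pattern space."""
    return scale * points @ R.T + t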
Address Sydney
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ AGV2013 Serial 2365
Permanent link to this record
 

 
Author Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers
Title Like Father, Like Son: Facial Expression Dynamics for Kinship Verification Type Conference Article
Year 2013 Publication 15th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 1497-1504
Keywords
Abstract Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics in this problem. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and we verify that it is indeed possible to recognize kinship by resemblance of facial expressions. The proposed method is tested on different kin relationships. On average, 72.89% verification accuracy is achieved on spontaneous smiles.
Address Sydney
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ DSG2013 Serial 2366
Permanent link to this record
 

 
Author Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
Title Current Challenges on Polyp Detection in Colonoscopy Videos: From Region Segmentation to Region Classification. A Pattern Recognition-based Approach Type Conference Article
Year 2011 Publication 2nd International Workshop on Medical Image Analysis and Description for Diagnosis Systems Abbreviated Journal
Volume Issue Pages 62-71
Keywords Medical Imaging, Colonoscopy, Pattern Recognition, Segmentation, Polyp Detection, Region Description, Machine Learning, Real-time.
Abstract In this paper we present our approach to real-time polyp detection in colonoscopy videos. Our method consists of three stages: Image Segmentation, Region Description and Image Classification. Taking into account the constraints of our project, we introduce a segmentation system based on a model of polyp appearance that we have defined after observing real colonoscopy videos. The output of this stage should ideally be a low number of regions, one of which covers the whole polyp region (if there is a polyp in the image). These regions are then described in terms of features and, through a machine learning scheme, classified according to the values they take for the features used in their description. Although we are still at an early stage of the project, we present some preliminary segmentation results indicating that we are heading in the right direction.
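The three-stage pipeline described above (segmentation, region description, classification) can be wired together generically as sketched below, with per-region shape and intensity features and an off-the-shelf classifier; the segmentation stage, the specific features and the classifier are assumptions for illustration, not the authors' polyp appearance model.

import numpy as np
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def describe_regions(image, mask):
    """Stage 2: one feature vector per candidate region of a binary mask."""
    feats = []
    for region in regionprops(label(mask), intensity_image=image):
        feats.append([region.area, region.eccentricity,
                      region.mean_intensity, region.solidity])
    return np.array(feats)

def train_region_classifier(region_features, region_labels):
    """Stage 3: generic supervised classifier over the region descriptors
    (stage 1, segmentation, is assumed to produce the candidate masks)."""
    return RandomForestClassifier(n_estimators=100).fit(region_features, region_labels)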
Address Rome, Italy
Corporate Author Thesis
Publisher SciTePress Place of Publication Editor Khalifa Djemal
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area 800 Expedition Conference MIAD
Notes MV;SIAI Approved no
Call Number IAM @ iam @ BSV2011a Serial 1695
Permanent link to this record
 

 
Author Petia Radeva; Jordi Vitria; Fernando Vilariño; Panagiota Spyridonos; Fernando Azpiroz; Juan Malagelada; Fosca de Iorio; Anna Accarino
Title Cascade analysis for intestinal contraction detection Type Patent
Year 2009 Publication US 2009/0284589 A1 Abbreviated Journal USPO
Volume Issue Pages 1-25
Keywords
Abstract A method and system for cascade analysis for intestinal contraction detection is provided, based on information extracted from image frames captured in vivo. The method and system also relate to the detection of turbid liquids in intestinal tracts; to the automatic detection of video image frames taken in the gastrointestinal tract whose field of view is obstructed by turbid media; and, more particularly, to the extraction of image data obstructed by turbid media.
Address
Corporate Author US Patent Office Thesis
Publisher US Patent Office Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR; MV;SIAI Approved no
Call Number IAM @ iam @ RVV2009 Serial 1700
Permanent link to this record
 

 
Author Panagiota Spyridonos; Fernando Vilariño; Jordi Vitria; Petia Radeva; Fernando Azpiroz; Juan Malagelada
Title Device, system and method for automatic detection of contractile activity in an image frame Type Patent
Year 2011 Publication US 2011/0044515 A1 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract A device, system and method for automatic detection of contractile activity of a body lumen in an image frame is provided, wherein image frames during contractile activity are captured and/or image frames including contractile activity are automatically detected, such as through pattern recognition and/or feature extraction to trace image frames including contractions, e.g., with wrinkle patterns. A manual procedure of annotation of contractions, e.g. tonic contractions in capsule endoscopy, may consist of the visualization of the whole video by a specialist, and the labeling of the contraction frames. Embodiments of the present invention may be suitable for implementation in an in vivo imaging system.
Address Pearl Cohen Zedek Latzer, LLP, 1500 Broadway 12th Floor, New York (NY) 10036 (US)
Corporate Author US Patent Office Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV;OR;MILAB;SIAI Approved no
Call Number IAM @ iam @ SVV2011 Serial 1701
Permanent link to this record