Author Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera
  Title Efficient pairwise classification using Local Cross Off strategy Type Conference Article
  Year 2012 Publication 25th Canadian Conference on Artificial Intelligence Abbreviated Journal  
  Volume 7310 Issue Pages 25-36  
  Keywords  
  Abstract The pairwise classification approach tends to perform better than other well-known approaches when dealing with multiclass classification problems. In the pairwise approach, however, the nuisance votes of many irrelevant classifiers may result in a wrong predicted class. To overcome this problem, a novel method, Local Crossing Off (LCO), is presented and evaluated in this paper. The proposed LCO system takes advantage of the nearest neighbor classification algorithm, because of its simplicity and speed, as well as the strength of two other powerful binary classifiers at discriminating between two classes. This paper provides a set of experimental results on 20 datasets using two base learners: Neural Networks and Support Vector Machines. The results show that the proposed technique not only achieves better classification accuracy but is also computationally more efficient for tackling classification problems that have a relatively large number of target classes.
  Address Toronto, Ontario
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-30352-4 Medium  
  Area Expedition Conference AI  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ BGE2012c Serial 2044  
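The record above only sketches the LCO idea in prose. As a point of reference, the following is a minimal, hypothetical illustration of the underlying one-vs-one voting scheme, with a k-nearest-neighbour step used to restrict voting to locally plausible classes; the function names, the scikit-learn base learner and the candidate-selection rule are assumptions, not the authors' code.

```python
# Hypothetical sketch: one-vs-one voting restricted to locally plausible
# classes (inspired by, but not identical to, the LCO strategy above).
from itertools import combinations
from collections import Counter

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC


def train_pairwise(X, y):
    """Train one binary classifier per pair of classes (one-vs-one)."""
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        models[(a, b)] = SVC().fit(X[mask], y[mask])
    return models


def predict_lco_like(x, X, y, models, k=15):
    """Vote only among classes seen in the k nearest training samples."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    idx = knn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
    candidates = set(y[idx])                     # locally plausible classes
    votes = Counter()
    for (a, b), clf in models.items():
        if a in candidates and b in candidates:  # cross off irrelevant pairs
            votes[clf.predict(x.reshape(1, -1))[0]] += 1
    # Fall back to the nearest neighbour's label if no pair voted.
    return votes.most_common(1)[0][0] if votes else y[idx][0]
```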
 

 
Author Sergio Vera; Debora Gil; Agnes Borras; F. Javier Sanchez; Frederic Perez; Marius G. Linguraru; Miguel Angel Gonzalez Ballester
  Title Computation and Evaluation of Medial Surfaces for Shape Representation of Abdominal Organs Type Book Chapter
  Year 2012 Publication Workshop on Computational and Clinical Applications in Abdominal Imaging Abbreviated Journal  
  Volume 7029 Issue Pages 223–230  
  Keywords medial manifolds; abdomen
  Abstract Medial representations are powerful tools for describing and parameterizing the volumetric shape of anatomical structures. Existing methods show excellent results when applied to 2D objects, but their quality drops across dimensions. This paper contributes to the computation of medial manifolds in two aspects. First, we provide a standard scheme for the computation of medial manifolds that avoids degenerate medial axis segments; second, we introduce an energy-based method whose performance is independent of the dimension. We evaluate the performance of our method quantitatively against existing approaches by applying them to synthetic shapes of known medial geometry. Finally, we show results on shape representation of multiple abdominal organs, exploring the use of medial manifolds for the representation of multi-organ relations.
  Address Toronto; Canada
  Corporate Author Thesis  
  Publisher Springer Link Place of Publication Berlin Editor H. Yoshida et al  
  Language English Summary Language English Original Title  
  Series Editor Series Title Lecture Notes in Computer Science Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-28556-1 Medium  
  Area Expedition Conference ABDI  
  Notes IAM;MV Approved no  
  Call Number IAM @ iam @ VGB2012 Serial 1834  
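The paper targets medial surfaces of 3D abdominal organs. Purely as a baseline illustration of what a medial representation is, the sketch below computes a 2D medial axis with scikit-image; it is not the authors' energy-based method, and the toy shape is invented.

```python
# 2D medial axis baseline (not the paper's energy-based 3D method).
import numpy as np
from skimage.morphology import medial_axis

# Toy binary shape: a filled rectangle with a notch.
shape = np.zeros((64, 64), dtype=bool)
shape[8:56, 16:48] = True
shape[28:36, 16:24] = False

skeleton, distance = medial_axis(shape, return_distance=True)
# 'skeleton' marks medial-axis pixels; 'distance' gives the radius function,
# i.e. the distance from each pixel to the shape boundary.
print("medial-axis pixels:", int(skeleton.sum()))
```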
 

 
Author Ishaan Gulrajani; Kundan Kumar; Faruk Ahmed; Adrien Ali Taiga; Francesco Visin; David Vazquez; Aaron Courville
  Title PixelVAE: A Latent Variable Model for Natural Images Type Conference Article
  Year 2017 Publication 5th International Conference on Learning Representations Abbreviated Journal  
  Volume Issue Pages  
  Keywords Deep Learning; Unsupervised Learning  
  Abstract Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and generate samples that preserve global structure but tend to suffer from image blurriness. PixelCNNs model sharp contours and details very well, but lack an explicit latent representation and have difficulty modeling large-scale structure in a computationally efficient way. In this paper, we present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. The resulting architecture achieves state-of-the-art log-likelihood on binarized MNIST. We extend PixelVAE to a hierarchy of multiple latent variables at different scales; this hierarchical model achieves competitive likelihood on 64x64 ImageNet and generates high-quality samples on LSUN bedrooms.  
  Address Toulon; France; April 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICLR  
  Notes ADAS; 600.085; 600.076; 601.281; 600.118 Approved no  
  Call Number ADAS @ adas @ GKA2017 Serial 2815  
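As background for the record above, the following PyTorch sketch shows the standard VAE objective (reconstruction plus KL term) that PixelVAE extends; the toy linear decoder is a stand-in, since the paper's actual decoder is a conditional PixelCNN and its hierarchy of latent variables is not reproduced here.

```python
# Hedged sketch of the VAE objective PixelVAE builds on; the real decoder is
# a conditional PixelCNN, which this toy linear decoder does not implement.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyVAE(nn.Module):
    def __init__(self, d_in=784, d_z=32):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)   # outputs mean and log-variance
        self.dec = nn.Linear(d_z, d_in)       # stand-in for a PixelCNN decoder

    def forward(self, x):
        # x: batch of flattened binary images with values in [0, 1].
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        logits = self.dec(z)
        recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (recon + kl) / x.size(0)        # negative ELBO per example
```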
 

 
Author Pau Rodriguez; Jordi Gonzalez; Jordi Cucurull; Josep M. Gonfaus; Xavier Roca
  Title Regularizing CNNs with Locally Constrained Decorrelations Type Conference Article
  Year 2017 Publication 5th International Conference on Learning Representations Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Toulon; France; April 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICLR  
  Notes ISE; 602.143; 600.119; 600.098 Approved no  
  Call Number Admin @ si @ RGC2017 Serial 2927  
 

 
Author Marçal Rusiñol; J. Chazalon; Jean-Marc Ogier
  Title Filtrage de descripteurs locaux pour l'amélioration de la détection de documents [Filtering local descriptors to improve document detection] Type Conference Article
  Year 2016 Publication Colloque International Francophone sur l'Écrit et le Document Abbreviated Journal  
  Volume Issue Pages  
  Keywords Local descriptors; mobile capture; document matching; keypoint selection  
  Abstract In this paper we propose an effective method aimed at reducing the amount of local descriptors to be indexed in a document matching framework. In an off-line training stage, the matching between the model document and incoming images is computed, retaining the local descriptors from the model that consistently produce good matches. We have evaluated this approach using the ICDAR2015 SmartDOC dataset, which contains nearly 25,000 images of documents captured by a mobile device. We have tested the performance of this filtering step using ORB and SIFT local detectors and descriptors. The results show an important gain both in the quality of the final matching and in time and space requirements.
  Address Toulouse; France; March 2016
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CIFED  
  Notes DAG; 600.084; 600.077 Approved no  
  Call Number Admin @ si @ RCO2016 Serial 2755  
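The abstract above describes keeping only model descriptors that repeatedly match well during off-line training. The sketch below is one plausible, simplified reading of that filtering step using OpenCV's ORB and a ratio test; the selection criterion (a minimum fraction of training images in which a keypoint matches) is an assumption, not the authors' exact rule.

```python
# Hypothetical sketch: keep only model keypoints that match reliably across
# several training captures of the same document.
import cv2
import numpy as np


def stable_keypoints(model_img, train_imgs, min_hit_ratio=0.8, ratio=0.75):
    """Return the model keypoints/descriptors that match well in most captures."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_m, des_m = orb.detectAndCompute(model_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    hits = np.zeros(len(kp_m))
    for img in train_imgs:
        _, des_t = orb.detectAndCompute(img, None)
        if des_t is None or len(des_t) < 2:
            continue
        # Lowe-style ratio test on the two best candidates per model descriptor.
        for pair in matcher.knnMatch(des_m, des_t, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                hits[pair[0].queryIdx] += 1
    keep = hits >= min_hit_ratio * len(train_imgs)
    return [kp for kp, ok in zip(kp_m, keep) if ok], des_m[keep]
```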
 

 
Author Andrew Nolan; Daniel Serrano; Aura Hernandez-Sabate; Daniel Ponsa; Antonio Lopez
  Title Obstacle mapping module for quadrotors on outdoor Search and Rescue operations Type Conference Article
  Year 2013 Publication International Micro Air Vehicle Conference and Flight Competition Abbreviated Journal  
  Volume Issue Pages  
  Keywords UAV  
  Abstract Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs), due to their limited payload capacity for advanced sensors. Unlike larger vehicles, MAVs can only carry lightweight sensors, such as a camera, which is our main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse PADE performance and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show that PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential of PADE for MAVs to perform safe autonomous navigation in unknown and unstructured environments.
  Address Toulouse; France; September 2013
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IMAV  
  Notes ADAS; 600.054; 600.057;IAM Approved no  
  Call Number Admin @ si @ NSH2013 Serial 2371  
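PADE itself is not specified in enough detail in the abstract to reproduce, but the Time To Collision baseline it is compared against has a standard form: for a fronto-parallel surface, the divergence of the optical flow field is approximately 2/TTC. The sketch below illustrates that baseline with OpenCV's Farneback flow; it is an illustration of the TTC idea, not the paper's implementation.

```python
# Illustrative sketch of the time-to-collision (TTC) baseline: for a
# fronto-parallel surface, flow divergence ~= 2 / TTC (in frames).
import cv2
import numpy as np


def ttc_from_flow(prev_gray, curr_gray, fps=30.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    du_dx = np.gradient(flow[..., 0], axis=1)
    dv_dy = np.gradient(flow[..., 1], axis=0)
    divergence = float(np.mean(du_dx + dv_dy))   # expansion rate per frame
    if divergence <= 1e-6:
        return np.inf                            # no expansion: not approaching
    return 2.0 / (divergence * fps)              # seconds to contact
```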
 

 
Author Zhengying Liu; Isabelle Guyon; Julio C. S. Jacques Junior; Meysam Madadi; Sergio Escalera; Adrien Pavao; Hugo Jair Escalante; Wei-Wei Tu; Zhen Xu; Sebastien Treguer
  Title AutoCV Challenge Design and Baseline Results Type Conference Article
  Year 2019 Publication La Conference sur l’Apprentissage Automatique Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract We present the design and beta tests of a new machine learning challenge called AutoCV (for Automated Computer Vision), which is the first event in a series of challenges we are planning on the theme of Automated Deep Learning. We target applications for which Deep Learning methods have had great success in the past few years, with the aim of pushing the state of the art in fully automated methods to design the architecture of neural networks and train them without any human intervention. The tasks are restricted to multi-label image classification problems, from domains including medical, aerial, people, object, and handwriting imaging. The images therefore vary widely in scale, texture, and structure. Raw data are provided (no features extracted), but all datasets are formatted in a uniform tensor manner (although images may have fixed or variable sizes within a dataset). The participants' code will be blind tested on a challenge platform in a controlled manner, with restrictions on training and test time and on memory. The challenge is part of the official selection of IJCNN 2019.
  Address Toulouse; France; July 2019
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ LGJ2019 Serial 3323  
 

 
Author R. Herault; Franck Davoine; Fadi Dornaika; Y. Grandvalet
  Title Simultaneous and robust face and facial action tracking Type Miscellaneous
  Year 2006 Publication 15eme Congres Francophone AFRIF–AFIA de Reconnaissance des Formes et Intelligence Artificielle (RFIA'06) Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Tours (France)
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number Admin @ si @ HDD2006 Serial 735  
 

 
Author P. Wang; V. Eglin; C. Garcia; C. Largeron; Josep Llados; Alicia Fornes
  Title A Novel Learning-free Word Spotting Approach Based on Graph Representation Type Conference Article
  Year 2014 Publication 11th IAPR International Workshop on Document Analysis and Systems Abbreviated Journal  
  Volume Issue Pages 207-211  
  Keywords  
  Abstract Effective information retrieval on handwritten document images has always been a challenging task. In this paper, we propose a novel handwritten word spotting approach based on graph representation. The presented model comprises both topological and morphological signatures of the handwriting. Skeleton-based graphs with Shape Context-labelled vertices are built for the connected components. Each word image is represented as a sequence of graphs. In order to be robust to handwriting variations, an exhaustive merging process based on the DTW alignment result is introduced into the similarity measure between word images. To keep the computational complexity manageable, an approximate graph edit distance approach using bipartite matching is employed for graph matching. The experiments on the George Washington dataset and the marriage records from the Barcelona Cathedral dataset demonstrate that the proposed approach outperforms state-of-the-art structural methods.
  Address Tours; France; April 2014
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4799-3243-6 Medium  
  Area Expedition Conference DAS  
  Notes DAG; 600.061; 602.006; 600.077 Approved no  
  Call Number Admin @ si @ WEG2014b Serial 2517  
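The abstract above mentions an approximate graph edit distance computed via bipartite matching. A minimal sketch of that general idea, assigning every node of one graph to a node of the other or to a deletion/insertion dummy at minimal total cost, is given below; the cost model is a placeholder and edge costs are ignored, so this is not the authors' exact formulation.

```python
# Hedged sketch of a bipartite approximation to graph edit distance on
# node labels only (edge costs omitted for brevity).
import numpy as np
from scipy.optimize import linear_sum_assignment

LARGE = 1e9  # effectively forbids an assignment


def approx_ged(labels1, labels2, sub_cost, node_cost=1.0):
    """labels1, labels2: node label lists; sub_cost(a, b): substitution cost."""
    n, m = len(labels1), len(labels2)
    big = np.zeros((n + m, n + m))
    big[:n, :m] = [[sub_cost(a, b) for b in labels2] for a in labels1]
    big[:n, m:] = LARGE
    np.fill_diagonal(big[:n, m:], node_cost)   # deleting a node of graph 1
    big[n:, :m] = LARGE
    np.fill_diagonal(big[n:, :m], node_cost)   # inserting a node of graph 2
    rows, cols = linear_sum_assignment(big)    # optimal bipartite assignment
    return big[rows, cols].sum()
```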
 

 
Author Marçal Rusiñol; J. Chazalon; Jean-Marc Ogier
  Title Combining Focus Measure Operators to Predict OCR Accuracy in Mobile-Captured Document Images Type Conference Article
  Year 2014 Publication 11th IAPR International Workshop on Document Analysis and Systems Abbreviated Journal  
  Volume Issue Pages 181 - 185  
  Keywords  
  Abstract Mobile document image acquisition is a new trend raising serious issues in business document processing workflows. Such a digitization procedure is unreliable and introduces many distortions, which must be detected as early as possible, on the mobile device, to avoid paying data transmission fees and losing information when a document that is only temporarily available cannot be re-captured later. In this context, out-of-focus blur is a major issue: users have no direct control over it, and it seriously degrades OCR recognition. In this paper, we concentrate on the estimation of focus quality, to ensure sufficient legibility of a document image for OCR processing. We propose two contributions to improve OCR accuracy prediction for mobile-captured document images. First, we present 24 focus measures, never tested on document images, which are fast to compute and require no training. Second, we show that a combination of those measures achieves state-of-the-art performance in terms of correlation with OCR accuracy. The resulting approach is fast, robust, and easy to implement on a mobile device. Experiments are performed on a public dataset, and precise details about the image processing are given.
  Address Tours; France; April 2014
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4799-3243-6 Medium  
  Area Expedition Conference DAS  
  Notes DAG; 601.223; 600.077 Approved no  
  Call Number Admin @ si @ RCO2014a Serial 2545  
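The paper combines 24 focus measures to predict OCR accuracy. The sketch below shows two classic, fast focus measures (variance of the Laplacian and Tenengrad) and a simple learned combination; the specific measures and the linear regressor are illustrative assumptions rather than the paper's actual set or fusion scheme.

```python
# Two classic focus measures and a simple learned combination (illustrative,
# not the paper's 24-measure set).
import cv2
import numpy as np
from sklearn.linear_model import LinearRegression


def focus_features(gray):
    lap_var = cv2.Laplacian(gray, cv2.CV_64F).var()   # variance of Laplacian
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    tenengrad = float(np.mean(gx ** 2 + gy ** 2))     # gradient energy
    return [lap_var, tenengrad]


def fit_ocr_accuracy_predictor(images, ocr_accuracies):
    """Combine focus measures to predict OCR accuracy on new captures."""
    X = np.array([focus_features(img) for img in images])
    return LinearRegression().fit(X, np.array(ocr_accuracies))
```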
 

 
Author David Fernandez; R. Manmatha; Josep Llados; Alicia Fornes
  Title Sequential Word Spotting in Historical Handwritten Documents Type Conference Article
  Year 2014 Publication 11th IAPR International Workshop on Document Analysis and Systems Abbreviated Journal  
  Volume Issue Pages 101 - 105  
  Keywords  
  Abstract In this work we present a handwritten word spotting approach that takes advantage of the a priori known order of appearance of the query words. Given an ordered sequence of query word instances, the proposed approach performs a sequence alignment with the words in the target collection. Although the alignment is quite sparse, i.e. the number of words in the database is much higher than in the query set, the overall performance is noticeably better than that of isolated word spotting. As application dataset, we use a collection of handwritten marriage licenses, taking advantage of the ordered index pages of family names.
  Address Tours; France; April 2014
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4799-3243-6 Medium  
  Area Expedition Conference DAS  
  Notes DAG; 600.061; 600.056; 602.006; 600.077 Approved no  
  Call Number Admin @ si @ FML2014 Serial 2462  
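The sequence alignment step described above can be illustrated with a small subsequence-DTW-style recursion: the ordered query words are aligned against the page words, and skips on either side are allowed. The cost function and the exact recursion are assumptions for illustration, not the authors' formulation.

```python
# Minimal DTW-style alignment between an ordered query-word sequence and the
# words of a page, given any pairwise word-dissimilarity function 'dist'.
import numpy as np


def align_sequences(query_words, page_words, dist):
    """Return the best total alignment cost of the query within the page."""
    n, m = len(query_words), len(page_words)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                       # the query may start anywhere on the page
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(query_words[i - 1], page_words[j - 1])
            D[i, j] = c + min(D[i - 1, j - 1],   # match
                              D[i - 1, j],       # skip a query word
                              D[i, j - 1])       # skip a page word
    return D[n, 1:].min()               # best ending position on the page
```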
 

 
Author Christophe Rigaud; Dimosthenis Karatzas; Jean-Christophe Burie; Jean-Marc Ogier
  Title Color descriptor for content-based drawing retrieval Type Conference Article
  Year 2014 Publication 11th IAPR International Workshop on Document Analysis and Systems Abbreviated Journal  
  Volume Issue Pages 267 - 271  
  Keywords  
  Abstract Human detection is an active field of research in computer vision. Extending it to human-like drawings, such as the main characters in comic book stories, is not trivial. Comics analysis is a very recent research field at the intersection of graphics, text, object and people recognition. The detection of the main comic characters is an essential step towards fully automatic comic book understanding. This paper presents a color-based approach for comic character retrieval using content-based drawing retrieval and color palettes.
  Address Tours; France; April 2014
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4799-3243-6 Medium  
  Area Expedition Conference DAS  
  Notes DAG; 600.056; 600.077 Approved no  
  Call Number Admin @ si @ RKB2014 Serial 2479  
 

 
Author Dimosthenis Karatzas; Sergi Robles; Lluis Gomez
  Title An on-line platform for ground truthing and performance evaluation of text extraction systems Type Conference Article
  Year 2014 Publication 11th IAPR International Workshop on Document Analysis and Systems Abbreviated Journal  
  Volume Issue Pages 242 - 246  
  Keywords  
  Abstract This paper presents a set of on-line software tools for creating ground truth and calculating performance evaluation metrics for text extraction tasks such as localization, segmentation and recognition. The platform supports the definition of comprehensive ground truth information at different text representation levels, while offering centralised management and quality control of the ground-truthing effort. It implements a range of state-of-the-art performance evaluation algorithms and offers functionality for the definition of evaluation scenarios, on-line calculation of various performance metrics and visualisation of the results. The presented platform, which comprises the backbone of the ICDAR 2011 (Challenge 1) and 2013 (Challenges 1 and 2) Robust Reading competitions, is now made available for public use.
  Address Tours; France; April 2014
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4799-3243-6 Medium  
  Area Expedition Conference DAS  
  Notes DAG; 600.056; 600.077 Approved no  
  Call Number Admin @ si @ KRG2014 Serial 2491  
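Platforms of this kind typically report localization quality via intersection-over-union matching between detected and ground-truth boxes. The sketch below shows such a metric in its simplest form; the 0.5 threshold and the greedy matching are common conventions, not details quoted from the paper.

```python
# Small sketch of an IoU-based localization metric of the kind such a
# platform computes (greedy matching, single IoU threshold).
def iou(a, b):
    """a, b: boxes as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter) if inter else 0.0


def precision_recall(detections, ground_truth, thr=0.5):
    """Greedy one-to-one matching of detections to ground-truth boxes."""
    matched, tp = set(), 0
    for d in detections:
        best = max(range(len(ground_truth)),
                   key=lambda i: iou(d, ground_truth[i]), default=None)
        if best is not None and best not in matched and iou(d, ground_truth[best]) >= thr:
            matched.add(best)
            tp += 1
    return tp / max(len(detections), 1), tp / max(len(ground_truth), 1)
```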
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
  Title Top-Down Deep Appearance Attention for Action Recognition Type Conference Article
  Year 2017 Publication 20th Scandinavian Conference on Image Analysis Abbreviated Journal  
  Volume 10269 Issue Pages 297-309  
  Keywords Action recognition; CNNs; Feature fusion  
  Abstract Recognizing human actions in videos is a challenging problem in computer vision. Recently, convolutional neural network based deep features have shown promising results for action recognition. In this paper, we investigate the problem of fusing deep appearance and motion cues for action recognition. We propose a video representation which combines deep appearance and motion based local convolutional features within the bag-of-deep-features framework. First, dense appearance and motion based local convolutional features are extracted from spatial (RGB) and temporal (flow) networks, respectively. Both visual cues are processed in parallel by constructing separate visual vocabularies for appearance and motion. A category-specific appearance map is then learned to modulate the weights of the deep motion features. The proposed representation is discriminative and binds the deep local convolutional features to their spatial locations. Experiments are performed on two challenging datasets: the JHMDB dataset with 21 action classes and the ACT dataset with 43 categories. The results clearly demonstrate that our approach outperforms both standard early and late feature fusion. Furthermore, our approach employs only action labels, without exploiting body part information, yet achieves performance competitive with state-of-the-art deep-feature-based approaches.
  Address Tromso; June 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SCIA  
  Notes LAMP; 600.109; 600.068; 600.120 Approved no  
  Call Number Admin @ si @ RKW2017b Serial 3039  
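The representation described above pools local convolutional features into a vocabulary-based histogram, with motion features weighted by a category-specific appearance map. The sketch below shows only the generic weighted bag-of-features encoding; how the appearance weights are learned is not reproduced here, and all names are illustrative assumptions.

```python
# Illustrative weighted bag-of-deep-features encoding (a loose sketch of the
# fusion idea above; the appearance-derived weights are assumed given).
import numpy as np
from sklearn.cluster import KMeans


def encode(video_motion_feats, vocab, weights):
    """Weighted histogram of one video's motion descriptors over a vocabulary.

    video_motion_feats: (N, D) local motion descriptors,
    weights: (N,) per-descriptor weights from the appearance map.
    """
    words = vocab.predict(video_motion_feats)
    hist = np.bincount(words, weights=weights, minlength=vocab.n_clusters)
    return hist / max(hist.sum(), 1e-8)


# Vocabulary built from motion descriptors pooled over training videos:
# train_feats = np.vstack([...])                 # assumed available
# vocab = KMeans(n_clusters=1000).fit(train_feats)
```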
 

 
Author Adela Barbulescu; Wenjuan Gong; Jordi Gonzalez; Thomas B. Moeslund; Xavier Roca
  Title 3D Human Pose Estimation Using 2D Body Part Detectors Type Conference Article
  Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 2484 - 2487  
  Keywords  
  Abstract Automatic 3D reconstruction of human poses from monocular images is a challenging and popular topic in the computer vision community, with a wide range of applications in multiple areas. Solutions for 3D pose estimation involve various learning approaches, such as support vector machines and Gaussian processes, but many encounter difficulties in cluttered scenarios and require additional input data, such as silhouettes, or controlled camera settings. We present a framework that is capable of estimating the 3D pose of a person from single images or monocular image sequences without requiring background information, and which is robust to camera variations. The framework models the non-linearity present in human pose estimation, as it benefits from flexible learning approaches, including a highly customizable 2D detector. Results on the HumanEva benchmark show how these components perform and how they influence the quality of the 3D pose estimates.
  Address Tsukuba, Japan
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium  
  Area Expedition Conference ICPR  
  Notes ISE Approved no  
  Call Number Admin @ si @ BGG2012 Serial 2172  