Author Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera
  Title Efficient pairwise classification using Local Cross Off strategy Type Conference Article
  Year 2012 Publication 25th Canadian Conference on Artificial Intelligence Abbreviated Journal  
  Volume 7310 Issue Pages 25-36
  Keywords  
  Abstract The pairwise classification approach tends to perform better than other well-known approaches when dealing with multiclass classification problems. In the pairwise approach, however, the nuisance votes of many irrelevant classifiers may result in a wrong predicted class. To overcome this problem, a novel method, Local Crossing Off (LCO), is presented and evaluated in this paper. The proposed LCO system takes advantage of the nearest neighbor classification algorithm, because of its simplicity and speed, as well as the strength of two other powerful binary classifiers to discriminate between two classes. This paper provides a set of experimental results on 20 datasets using two base learners: Neural Networks and Support Vector Machines. The results show that the proposed technique not only achieves better classification accuracy, but is also computationally more efficient for tackling classification problems with a relatively large number of target classes.
  Address Toronto, Ontario  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-30352-4 Medium  
  Area Expedition Conference AI  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ BGE2012c Serial 2044  
Permanent link to this record
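The record above combines a nearest-neighbor stage with pairwise (one-vs-one) classifiers. Below is a minimal sketch of one plausible reading of the Local Crossing Off idea, not the authors' implementation: a k-NN stage shortlists candidate classes for a query, and only the pairwise classifiers whose two classes are both shortlisted are allowed to vote. The shortlist size, the RBF-SVM base learner, and the fallback rule are assumptions.

```python
# Hypothetical sketch of the LCO idea: k-NN shortlists candidate classes,
# then only the pairwise SVMs among those candidates vote (assumed details).
from itertools import combinations
from collections import Counter
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def train_lco(X, y, k=5):
    """X: (n, d) features; y: (n,) integer labels (numpy arrays)."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    pairwise = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        pairwise[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])
    return knn, pairwise

def lco_predict(x, knn, pairwise, y_train, n_candidates=3):
    idx = knn.kneighbors([x], return_distance=False)[0]
    neighbor_labels = y_train[idx]
    candidates = {c for c, _ in Counter(neighbor_labels).most_common(n_candidates)}
    votes = Counter()
    for (a, b), clf in pairwise.items():
        if a in candidates and b in candidates:   # irrelevant pairs are "crossed off"
            votes[clf.predict([x])[0]] += 1
    # fall back to the plain k-NN decision if no pairwise classifier voted
    return votes.most_common(1)[0][0] if votes else Counter(neighbor_labels).most_common(1)[0][0]
```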
 

 
Author Angel Sappa; David Geronimo; Fadi Dornaika; Mohammad Rouhani; Antonio Lopez
  Title Moving object detection from mobile platforms using stereo data registration Type Book Chapter
  Year 2012 Publication Computational Intelligence paradigms in advanced pattern classification Abbreviated Journal  
  Volume 386 Issue Pages 25-37
  Keywords pedestrian detection  
  Abstract This chapter describes a robust approach for detecting moving objects from on-board stereo vision systems. It relies on a feature-point, quaternion-based registration, which avoids common problems that appear when computationally expensive iterative algorithms are used in dynamic environments. The proposed approach consists of three main stages. Initially, feature points are extracted and tracked through consecutive 2D frames. Then, a RANSAC-based approach is used for registering two point sets with known correspondences in 3D space. The computed 3D rigid displacement is used to map two consecutive 3D point clouds into the same coordinate system by means of the quaternion method. Finally, moving objects correspond to those areas with large 3D registration errors. Experimental results show the viability of the proposed approach for detecting moving objects such as vehicles or pedestrians in different urban scenarios.
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor Marek R. Ogiela; Lakhmi C. Jain  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1860-949X ISBN 978-3-642-24048-5 Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number Admin @ si @ SGD2012 Serial 2061  
Permanent link to this record
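As a rough illustration of the registration stage described above, the sketch below fits a rigid transform between two corresponded 3D point sets with the closed-form quaternion (Horn) method and flags points with large residuals as candidate moving points. It is a simplified stand-in, not the chapter's pipeline: the feature tracking is omitted, a RANSAC loop over correspondence subsets would normally wrap the fit, and the residual threshold is an assumed value.

```python
# Minimal quaternion-based rigid registration plus residual-based motion flagging.
import numpy as np

def fit_rigid_quaternion(P, Q):
    """Rigid transform (R, t) mapping points P (n,3) onto Q (n,3), Horn's method."""
    cp, cq = P.mean(0), Q.mean(0)
    S = (P - cp).T @ (Q - cq)                       # cross-covariance of centered sets
    sxx, sxy, sxz = S[0]; syx, syy, syz = S[1]; szx, szy, szz = S[2]
    N = np.array([
        [sxx + syy + szz, syz - szy,        szx - sxz,        sxy - syx],
        [syz - szy,       sxx - syy - szz,  sxy + syx,        szx + sxz],
        [szx - sxz,       sxy + syx,       -sxx + syy - szz,  syz + szy],
        [sxy - syx,       szx + sxz,        syz + szy,       -sxx - syy + szz]])
    w, x, y, z = np.linalg.eigh(N)[1][:, -1]        # eigenvector of the largest eigenvalue
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    return R, cq - R @ cp

def moving_mask(P, Q, R, t, threshold=0.3):
    """True where the 3D registration residual is large, i.e. candidate moving points."""
    return np.linalg.norm((R @ P.T).T + t - Q, axis=1) > threshold
```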
 

 
Author Klaus Broelemann; Anjan Dutta; Xiaoyi Jiang; Josep Llados
  Title Hierarchical Plausibility-Graphs for Symbol Spotting in Graphical Documents Type Book Chapter
  Year 2014 Publication Graphics Recognition. Current Trends and Challenges Abbreviated Journal  
  Volume 8746 Issue Pages 25-37
  Keywords  
  Abstract Graph representations of graphical documents often suffer from noise such as spurious nodes and edges and their discontinuity. In general these errors occur during low-level image processing, viz. binarization, skeletonization, vectorization, etc. Hierarchical graph representation is an efficient way to solve this kind of problem by hierarchically merging node-node and node-edge pairs depending on their distance. However, the creation of a hierarchical graph representing the graphical information often relies on hard distance thresholds to create the hierarchical nodes (next state) from the lower nodes (or states) of a graph. As a result, the representation often loses useful information. This paper introduces plausibilities for the nodes of the hierarchical graph as a function of distance and proposes a modified algorithm for matching subgraphs of the hierarchical graphs. The plausibility-annotated nodes help to improve the performance of the matching algorithm on two hierarchical structures. To show the potential of this approach, we conduct an experiment with the SESYD dataset.
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor Bart Lamiroy; Jean-Marc Ogier  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-662-44853-3 Medium  
  Area Expedition Conference  
  Notes DAG; 600.045; 600.056; 600.061; 600.077 Approved no  
  Call Number Admin @ si @ BDJ2014 Serial 2699  
Permanent link to this record
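The abstract above replaces hard distance thresholds with plausibilities that depend on distance. A minimal sketch of that idea follows, under the assumption that plausibility decays smoothly with inter-node distance; the exact function, its parameters, and the merge cutoff are not given in the record and are hypothetical here.

```python
# Hypothetical sketch: annotate candidate node merges with a plausibility that
# decays with distance, instead of a hard distance threshold.
import math

def merge_plausibility(dist, sigma=5.0):
    # assumed form: soft, monotonically decreasing in distance
    return math.exp(-(dist / sigma) ** 2)

def candidate_merges(node_ids, positions, sigma=5.0, min_plausibility=0.1):
    """Return (i, j, plausibility) for node pairs worth merging at the next level."""
    out = []
    ids = list(node_ids)
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            i, j = ids[a], ids[b]
            p = merge_plausibility(math.dist(positions[i], positions[j]), sigma)
            if p >= min_plausibility:
                out.append((i, j, p))
    return out
```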
 

 
Author Arnau Baro; Pau Riba; Jorge Calvo-Zaragoza; Alicia Fornes
  Title Optical Music Recognition by Recurrent Neural Networks Type Conference Article
  Year 2017 Publication 14th IAPR International Workshop on Graphics Recognition Abbreviated Journal  
  Volume Issue Pages 25-26
  Keywords Optical Music Recognition; Recurrent Neural Network; Long Short-Term Memory  
  Abstract Optical Music Recognition is the task of transcribing a music score into a machine-readable format. Many music scores are written on a single staff and can therefore be treated as a sequence. This work thus explores the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks for reading the music score sequentially, where the LSTM helps in keeping the context. For training, we have used a synthetic dataset of more than 40,000 images, labeled at the primitive level.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; 600.097; 601.302; 600.121 Approved no  
  Call Number Admin @ si @ BRC2017 Serial 3056  
Permanent link to this record
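A minimal PyTorch sketch of the kind of sequential reader described above, assuming fixed-height grayscale staff images read column by column with one prediction per column; the layer sizes and the number of primitive classes are placeholders, not the authors' architecture.

```python
# Sketch: a bidirectional LSTM that treats each image column as one time step.
import torch
import torch.nn as nn

class ColumnLSTMReader(nn.Module):
    def __init__(self, img_height=128, hidden=256, n_classes=60):  # n_classes is hypothetical
        super().__init__()
        self.lstm = nn.LSTM(input_size=img_height, hidden_size=hidden,
                            num_layers=2, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, images):               # images: (batch, height, width)
        cols = images.permute(0, 2, 1)        # (batch, width, height): one column per step
        out, _ = self.lstm(cols)
        return self.head(out)                 # (batch, width, n_classes) per-column logits
```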
 

 
Author Gisel Bastidas-Guacho; Patricio Moreno; Boris X. Vintimilla; Angel Sappa
  Title Application on the Loop of Multimodal Image Fusion: Trends on Deep-Learning Based Approaches Type Conference Article
  Year 2023 Publication 13th International Conference on Pattern Recognition Systems Abbreviated Journal  
  Volume 14234 Issue Pages 25–36
  Keywords  
  Abstract Multimodal image fusion allows the combination of information from different modalities, which is useful for tasks such as object detection, edge detection, and tracking, to name a few. Using the fused representation for applications results in better task performance. There are several image fusion approaches, which have been summarized in surveys. However, the existing surveys focus on image fusion approaches where the application on the loop of multimodal image fusion is not considered. On the contrary, this study summarizes deep learning-based multimodal image fusion for computer vision (e.g., object detection) and image processing applications (e.g., semantic segmentation), that is, approaches where the application module leverages the multimodal fusion process to enhance the final result. Firstly, we introduce image fusion and the existing general frameworks for image fusion tasks such as multifocus, multiexposure and multimodal. Then, we describe the multimodal image fusion approaches. Next, we review the state-of-the-art deep learning multimodal image fusion approaches for vision applications. Finally, we conclude our survey with the trends of task-driven multimodal image fusion.  
  Address Guayaquil; Ecuador; July 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICPRS  
  Notes MSIAU Approved no  
  Call Number Admin @ si @ BMV2023 Serial 3932  
Permanent link to this record
 

 
Author Jelena Gorbova; Egils Avots; Iiris Lusi; Mark Fishel; Sergio Escalera; Gholamreza Anbarjafari
  Title Integrating Vision and Language for First Impression Personality Analysis Type Journal Article
  Year 2018 Publication IEEE Multimedia Abbreviated Journal MULTIMEDIA  
  Volume 25 Issue 2 Pages 24-33
  Keywords  
  Abstract The authors present a novel methodology for analyzing integrated audiovisual signals and language to assess a person's personality. An evaluation of their proposed multimodal method using a job candidate screening system that predicted five personality traits from a short video demonstrates the method's effectiveness.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HUPBA; 602.133 Approved no  
  Call Number Admin @ si @ GAL2018 Serial 3124  
Permanent link to this record
 

 
Author Ajian Liu; Xuan Li; Jun Wan; Yanyan Liang; Sergio Escalera; Hugo Jair Escalante; Meysam Madadi; Yi Jin; Zhuoyuan Wu; Xiaogang Yu; Zichang Tan; Qi Yuan; Ruikun Yang; Benjia Zhou; Guodong Guo; Stan Z. Li
  Title Cross-ethnicity Face Anti-spoofing Recognition Challenge: A Review Type Journal Article
  Year 2020 Publication IET Biometrics Abbreviated Journal BIO  
  Volume 10 Issue 1 Pages 24-43
  Keywords  
  Abstract Face anti-spoofing is critical to prevent face recognition systems from a security breach. The biometrics community has achieved impressive progress recently due to the excellent performance of deep neural networks and the availability of large datasets. Although ethnic bias has been verified to severely affect the performance of face recognition systems, it still remains an open research problem in face anti-spoofing. Recently, a multi-ethnic face anti-spoofing dataset, CASIA-SURF CeFA, has been released with the goal of measuring ethnic bias. It is the largest cross-ethnicity face anti-spoofing dataset to date, covering 3 ethnicities, 3 modalities, 1,607 subjects, and 2D plus 3D attack types, and it is the first among the recently released face anti-spoofing datasets to include explicit ethnic labels. We organized the ChaLearn Face Anti-spoofing Attack Detection Challenge, which consists of single-modal (e.g., RGB) and multi-modal (e.g., RGB, Depth, Infrared (IR)) tracks around this novel resource to boost research aiming to alleviate the ethnic bias. Both tracks attracted 340 teams in the development stage, and finally 11 and 8 teams submitted their code in the single-modal and multi-modal face anti-spoofing recognition challenges, respectively. All the results were verified and re-run by the organizing team, and the results were used for the final ranking. This paper presents an overview of the challenge, including its design, evaluation protocol, and a summary of results. We analyze the top-ranked solutions and draw conclusions derived from the competition. In addition, we outline future work directions.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ LLW2020b Serial 3523  
Permanent link to this record
 

 
Author Dustin Carrion Ojeda; Hong Chen; Adrian El Baz; Sergio Escalera; Chaoyu Guan; Isabelle Guyon; Ihsan Ullah; Xin Wang; Wenwu Zhu
  Title NeurIPS’22 Cross-Domain MetaDL competition: Design and baseline results Type Conference Article
  Year 2022 Publication Understanding Social Behavior in Dyadic and Small Group Interactions Abbreviated Journal  
  Volume 191 Issue Pages 24-37
  Keywords  
  Abstract We present the design and baseline results for a new challenge in the ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on “cross-domain” meta-learning. Meta-learning aims to leverage experience gained from previous tasks to solve new tasks efficiently (i.e., with better performance, little training data, and/or modest computational resources). While previous challenges in the series focused on within-domain few-shot learning problems, with the aim of learning efficiently N-way k-shot tasks (i.e., N class classification problems with k training examples), this competition challenges the participants to solve “any-way” and “any-shot” problems drawn from various domains (healthcare, ecology, biology, manufacturing, and others), chosen for their humanitarian and societal impact. To that end, we created Meta-Album, a meta-dataset of 40 image classification datasets from 10 domains, from which we carve out tasks with any number of “ways” (within the range 2-20) and any number of “shots” (within the range 1-20). The competition is with code submission, fully blind-tested on the CodaLab challenge platform. The code of the winners will be open-sourced, enabling the deployment of automated machine learning solutions for few-shot image classification across several domains.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference PMLR  
  Notes HUPBA; no project mentioned Approved no
  Call Number Admin @ si @ CCB2022 Serial 3802  
Permanent link to this record
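The competition design above draws "any-way, any-shot" tasks from a meta-dataset. A minimal sketch of such episode sampling follows, with the way/shot ranges taken from the abstract; the dataset structure, label remapping, and query-set policy are assumptions.

```python
# Hypothetical sketch: sample one "any-way, any-shot" episode from a labeled dataset.
import random
from collections import defaultdict

def sample_episode(dataset, rng=random):
    """dataset: list of (image, label). Returns (support, query) lists of (image, new_label)."""
    by_class = defaultdict(list)
    for img, lbl in dataset:
        by_class[lbl].append(img)
    n_way = rng.randint(2, min(20, len(by_class)))    # "ways" in the range 2-20
    n_shot = rng.randint(1, 20)                        # "shots" in the range 1-20
    classes = rng.sample(list(by_class), n_way)
    support, query = [], []
    for new_lbl, cls in enumerate(classes):
        imgs = rng.sample(by_class[cls], min(len(by_class[cls]), n_shot + 1))
        support += [(img, new_lbl) for img in imgs[:n_shot]]
        query += [(img, new_lbl) for img in imgs[n_shot:]]   # may be empty for tiny classes
    return support, query
```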
 

 
Author Partha Pratim Roy; Umapada Pal; Josep Llados
  Title Seal Object Detection in Document Images using GHT of Local Component Shapes Type Conference Article
  Year 2010 Publication 10th ACM Symposium On Applied Computing Abbreviated Journal  
  Volume Issue Pages 23–27
  Keywords  
  Abstract Due to noise, overlapping text/signatures, and their multi-oriented nature, seal (stamp) objects are difficult to detect. This paper deals with the automatic detection of seals in documents with cluttered backgrounds. Here, a seal object is characterized by scale- and rotation-invariant spatial feature descriptors (distance and angular position) computed from the recognition results of individual connected components (characters). Recognition of multi-scale and multi-oriented components is done using a Support Vector Machine classifier. A Generalized Hough Transform (GHT) is used to detect the seal, and votes are cast for possible locations of the seal object in a document based on these spatial feature descriptors of component pairs. The peak of votes in the GHT accumulator validates the hypothesis locating the seal object in a document. Experimental results show that the method is efficient at locating seal instances of arbitrary shape and orientation in documents.
  Address Sierre, Switzerland  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SAC  
  Notes DAG Approved no  
  Call Number DAG @ dag @ RPL2010a Serial 1291  
Permanent link to this record
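The voting stage described above can be illustrated with a simplified Generalized Hough Transform sketch. This is not the paper's method: it votes from single recognized components using absolute angles (so it assumes a roughly known orientation), whereas the paper uses rotation-invariant descriptors over component pairs; the accumulator cell size is arbitrary.

```python
# Simplified GHT voting: recognized components vote for the seal reference point.
from collections import defaultdict
from math import cos, sin
import numpy as np

def build_r_table(training_components, reference_point):
    """R-table: component label -> list of (distance, angle) offsets to the seal center."""
    table = defaultdict(list)
    for label, (x, y) in training_components:
        dx, dy = reference_point[0] - x, reference_point[1] - y
        table[label].append((np.hypot(dx, dy), np.arctan2(dy, dx)))
    return table

def vote(detected_components, r_table, image_shape, cell=4):
    acc = np.zeros((image_shape[0] // cell, image_shape[1] // cell))
    for label, (x, y) in detected_components:
        for dist, ang in r_table.get(label, []):
            r, c = int(y + dist * sin(ang)) // cell, int(x + dist * cos(ang)) // cell
            if 0 <= r < acc.shape[0] and 0 <= c < acc.shape[1]:
                acc[r, c] += 1
    peak = np.unravel_index(acc.argmax(), acc.shape)      # accumulator peak = seal hypothesis
    return (peak[1] * cell, peak[0] * cell), acc
```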
 

 
Author S.Grau; Ana Puig; Sergio Escalera; Maria Salamo
  Title Intelligent Interactive Volume Classification Type Conference Article
  Year 2013 Publication Pacific Graphics Abbreviated Journal  
  Volume 32 Issue 7 Pages 23-28
  Keywords  
  Abstract This paper defines an intelligent and interactive framework to classify multiple regions of interest from the original data on demand, without requiring any preprocessing or previous segmentation. The proposed intelligent and interactive approach is divided into three stages: visualization, training, and testing. First, users visualize and label some samples directly on slices of the volume. Training and testing are based on a framework of Error-Correcting Output Codes and AdaBoost classifiers that learn to classify each region the user has painted. Later, at the testing stage, each classifier is directly applied to the rest of the samples and combined to perform multi-class labeling, which is used in the final rendering. We also parallelized the training stage using a GPU-based implementation to obtain rapid interaction and classification.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-3-905674-50-7 Medium  
  Area Expedition Conference PG  
  Notes HuPBA; 600.046;MILAB Approved no  
  Call Number Admin @ si @ GPE2013b Serial 2355  
Permanent link to this record
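A minimal sketch of the train/test stages using scikit-learn stand-ins for the ECOC-plus-AdaBoost framework mentioned above, applied to user-painted voxel samples. The feature extraction, the GPU parallelization, and the rendering integration are outside this sketch, and the code size and estimator counts are assumptions.

```python
# Sketch: ECOC over AdaBoost binary learners, trained on user-painted voxel samples.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OutputCodeClassifier

def train_from_labels(features, labels):
    """features: (n_voxels, n_features) from painted slices; labels: region ids."""
    base = AdaBoostClassifier(n_estimators=50)
    ecoc = OutputCodeClassifier(base, code_size=2.0, random_state=0)
    return ecoc.fit(features, labels)

def classify_volume(model, volume_features):
    """Label every remaining voxel; the result feeds the final rendering stage."""
    return model.predict(volume_features)
```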
 

 
Author Xavier Perez Sala; Laura Igual; Sergio Escalera; Cecilio Angulo
  Title Uniform Sampling of Rotations for Discrete and Continuous Learning of 2D Shape Models Type Book Chapter
  Year 2012 Publication Vision Robotics: Technologies for Machine Learning and Vision Applications Abbreviated Journal  
  Volume Issue 2 Pages 23-42
  Keywords  
  Abstract Different methodologies for uniform sampling over the rotation group, SO(3), for building unbiased 2D shape models from 3D objects are introduced and reviewed in this chapter. State-of-the-art non-uniform sampling approaches are discussed, and uniform sampling methods using Euler angles and quaternions are introduced. Moreover, since the presented work is oriented toward model-building applications, it is not limited to general discrete methods for obtaining uniform 3D rotations, but also addresses the continuous case that arises in Procrustes Analysis.
  Address  
  Corporate Author Thesis  
  Publisher IGI-Global Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB;HuPBA Approved no  
  Call Number Admin @ si @ PIE2012 Serial 2064  
Permanent link to this record
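One of the quaternion-based uniform sampling schemes of the kind reviewed above can be sketched as follows: normalized 4D Gaussian samples are uniformly distributed on the 3-sphere and therefore induce uniform rotations on SO(3). The conversion to rotation matrices is standard; the sample count in the usage line is arbitrary.

```python
# Uniform rotation sampling on SO(3) via unit quaternions.
import numpy as np

def random_quaternions(n, seed=None):
    rng = np.random.default_rng(seed)
    q = rng.normal(size=(n, 4))                      # isotropic 4D Gaussian
    return q / np.linalg.norm(q, axis=1, keepdims=True)   # uniform on S^3

def quaternion_to_matrix(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

# e.g. project a 3D object under uniformly sampled views to build a 2D shape model
rotations = [quaternion_to_matrix(q) for q in random_quaternions(1000, seed=0)]
```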
 

 
Author Ariel Amato; Ivan Huerta; Mikhail Mozerov; Xavier Roca; Jordi Gonzalez
  Title Moving Cast Shadows Detection Methods for Video Surveillance Applications Type Book Chapter
  Year 2014 Publication Augmented Vision and Reality Abbreviated Journal  
  Volume 6 Issue Pages 23-47
  Keywords  
  Abstract Moving cast shadows are a major concern for a broad range of vision-based surveillance applications because they make the object classification task considerably more difficult. Several shadow detection methods have been reported in the literature during the last years. They are mainly divided into two domains: one usually works with static images, whereas the second uses image sequences, namely video content. Although both cases can be analyzed analogously, there is a difference in the application field. In the first case, shadow detection methods can be exploited in order to obtain additional geometric and semantic cues about the shape and position of the casting object ('shape from shadows') as well as the localization of the light source. In the second case, the main purpose is usually change detection, scene matching, or surveillance (usually in a background subtraction context). Shadows can in fact negatively modify the shape and color of the target object and therefore affect the performance of scene analysis and interpretation in many applications. This chapter mainly reviews shadow detection methods and their taxonomies related to the second case, thus focusing on those shadows which are associated with moving objects (moving shadows).
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2190-5916 ISBN 978-3-642-37840-9 Medium  
  Area Expedition Conference  
  Notes ISE; 605.203; 600.049; 302.018; 302.012; 600.078 Approved no  
  Call Number Admin @ si @ AHM2014 Serial 2223  
Permanent link to this record
 

 
Author Jose M. Armingol; Jorge Alfonso; Nourdine Aliane; Miguel Clavijo; Sergio Campos-Cordobes; Arturo de la Escalera; Javier del Ser; Javier Fernandez; Fernando Garcia; Felipe Jimenez; Antonio Lopez; Mario Mata
  Title Environmental Perception for Intelligent Vehicles Type Book Chapter
  Year 2018 Publication Intelligent Vehicles. Enabling Technologies and Future Developments Abbreviated Journal  
  Volume Issue Pages 23–101
  Keywords Computer vision; laser techniques; data fusion; advanced driver assistance systems; traffic monitoring systems; intelligent vehicles  
  Abstract Environmental perception represents, because of its complexity, a challenge for Intelligent Transport Systems: a great variety of situations and elements can occur in road environments and must be handled by these systems. So far there is a variety of solutions as regards sensors and methods, and the precision, complexity, cost, and computational load achieved by these works differ. In this chapter some systems based on computer vision and laser techniques are presented. Fusion methods are also introduced in order to provide advanced and reliable perception systems.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @AAA2018 Serial 3046  
Permanent link to this record
 

 
Author Oscar Argudo; Marc Comino; Antonio Chica; Carlos Andujar; Felipe Lumbreras
  Title Segmentation of aerial images for plausible detail synthesis Type Journal Article
  Year 2018 Publication Computers & Graphics Abbreviated Journal CG  
  Volume 71 Issue Pages 23-34
  Keywords Terrain editing; Detail synthesis; Vegetation synthesis; Terrain rendering; Image segmentation  
  Abstract The visual enrichment of digital terrain models with plausible synthetic detail requires the segmentation of aerial images into a suitable collection of categories. In this paper we present a complete pipeline for segmenting high-resolution aerial images into a user-defined set of categories distinguishing e.g. terrain, sand, snow, water, and different types of vegetation. This segmentation-for-synthesis problem implies that per-pixel categories must be established according to the algorithms chosen for rendering the synthetic detail. This precludes the definition of a universal set of labels and hinders the construction of large training sets. Since artists might choose to add new categories on the fly, the whole pipeline must be robust against unbalanced datasets, and fast in both training and inference. Under these constraints, we analyze the contribution of common per-pixel descriptors, and compare the performance of state-of-the-art supervised learning algorithms. We report the findings of two user studies. The first one was conducted to analyze human accuracy when manually labeling aerial images. The second user study compares detailed terrains built using different segmentation strategies, including official land cover maps. These studies demonstrate that our approach can be used to turn digital elevation models into fully-featured, detailed terrains with minimal authoring effort.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0097-8493 ISBN Medium  
  Area Expedition Conference  
  Notes MSIAU; 600.086; 600.118 Approved no  
  Call Number Admin @ si @ ACC2018 Serial 3147  
Permanent link to this record
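A minimal sketch of a per-pixel segmentation stage consistent with the pipeline described above, using simple color and local-statistics descriptors and a class-balanced random forest; the specific descriptors, window size, and classifier are assumptions, not necessarily those compared in the paper.

```python
# Sketch: per-pixel descriptors + supervised classification of user-labeled pixels.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def per_pixel_descriptors(rgb):
    """rgb: (H, W, 3) float image -> (H*W, n_features) descriptor matrix."""
    feats = [rgb[..., c] for c in range(3)]
    gray = rgb.mean(axis=2)
    mean = uniform_filter(gray, size=9)
    sq_mean = uniform_filter(gray ** 2, size=9)
    feats += [mean, np.sqrt(np.maximum(sq_mean - mean ** 2, 0))]   # local mean and std
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_and_segment(rgb, labeled_mask):
    """labeled_mask: (H, W) ints, 0 = unlabeled, >0 = user-defined category."""
    X = per_pixel_descriptors(rgb)
    y = labeled_mask.reshape(-1)
    clf = RandomForestClassifier(n_estimators=100, class_weight="balanced", n_jobs=-1)
    clf.fit(X[y > 0], y[y > 0])               # train only on labeled pixels
    return clf.predict(X).reshape(labeled_mask.shape)
```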
 

 
Author Sergio Escalera; Marti Soler; Stephane Ayache; Umut Guçlu; Jun Wan; Meysam Madadi; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon
  Title ChaLearn Looking at People: Inpainting and Denoising Challenges Type Book Chapter
  Year 2019 Publication The Springer Series on Challenges in Machine Learning Abbreviated Journal  
  Volume Issue Pages 23-44
  Keywords  
  Abstract Dealing with incomplete information is a well-studied problem in the context of machine learning and computational intelligence. However, in the context of computer vision, the problem has only been studied in specific scenarios (e.g., certain types of occlusions in specific types of images), although it is common to have incomplete information in visual data. This chapter describes the design of an academic competition focusing on the inpainting of images and video sequences that was part of the competition program of WCCI2018 and had a satellite event collocated with ECCV2018. The ChaLearn Looking at People Inpainting Challenge aimed at advancing the state of the art in visual inpainting by promoting the development of methods for recovering missing and occluded information from images and video. Three tracks were proposed in which visual inpainting might be helpful but still challenging: human body pose estimation, text overlay removal, and fingerprint denoising. This chapter describes the design of the challenge, which includes the release of three novel datasets, and the description of evaluation metrics, baselines, and evaluation protocol. The results of the challenge are analyzed and discussed in detail, and conclusions derived from this event are outlined.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; no proj Approved no  
  Call Number Admin @ si @ ESA2019 Serial 3327  
Permanent link to this record