Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Michael Felsberg; J.Laaksonen
  Title Deep semantic pyramids for human attributes and action recognition Type Conference Article
  Year 2015 Publication Image Analysis, Proceedings of the 19th Scandinavian Conference, SCIA 2015 Abbreviated Journal
  Volume 9127 Issue Pages 341-353  
  Keywords Action recognition; Human attributes; Semantic pyramids  
  Abstract Describing persons and their actions is a challenging problem due to variations in pose, scale and viewpoint in real-world images. Recently, the semantic pyramids approach [1] for pose normalization has been shown to provide excellent results for gender and action recognition. The performance of the semantic pyramids approach relies on robust image description and is therefore limited by its use of shallow local features. In the context of object recognition [2] and object detection [3], convolutional neural networks (CNNs), or deep features, have been shown to improve performance over conventional shallow features.
We propose deep semantic pyramids for human attributes and action recognition. The method works by constructing spatial pyramids based on CNN features of different part locations. These pyramids are then combined to obtain a single semantic representation. We validate our approach on the Berkeley and 27 Human Attributes datasets for attribute classification. For action recognition, we perform experiments on two challenging datasets: Willow and PASCAL VOC 2010. The proposed deep semantic pyramids provide significant gains of 17.2%, 13.9%, 24.3% and 22.6% over the standard shallow semantic pyramids on the Berkeley, 27 Human Attributes, Willow and PASCAL VOC 2010 datasets, respectively. Our results also show that deep semantic pyramids outperform conventional CNNs based on the full bounding box of the person. Finally, we compare our approach with state-of-the-art methods and show a gain in performance over the best methods in the literature.
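The core representation step can be illustrated with a short Python sketch (not the authors' code): per-part CNN feature maps are pooled over a small spatial pyramid and the resulting vectors are concatenated into one descriptor. The part regions, pyramid levels and use of max pooling are illustrative assumptions.

import numpy as np

def spatial_pyramid_pool(feat_map, levels=(1, 2)):
    # Max-pool a CNN feature map (H x W x C) over 1x1 and 2x2 grids and concatenate.
    h, w, c = feat_map.shape
    pooled = []
    for n in levels:
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feat_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1], :]
                pooled.append(cell.max(axis=(0, 1)))
    return np.concatenate(pooled)

def deep_semantic_pyramid(part_feat_maps):
    # Concatenate pyramid-pooled CNN features of each part (e.g. face, upper body, full body).
    return np.concatenate([spatial_pyramid_pool(f) for f in part_feat_maps])

# Usage with dummy conv5-like feature maps for three hypothetical parts;
# in the paper's setting the resulting vector would be fed to a linear SVM.
parts = [np.random.rand(13, 13, 256) for _ in range(3)]
representation = deep_semantic_pyramid(parts)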
 
  Address Copenhagen; Denmark; June 2015
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-319-19664-0 Medium  
  Area Expedition Conference SCIA
  Notes LAMP; 600.068; 600.079; ADAS Approved no
  Call Number Admin @ si @ KRW2015b Serial 2672  
 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
  Title Top-Down Deep Appearance Attention for Action Recognition Type Conference Article
  Year 2017 Publication 20th Scandinavian Conference on Image Analysis Abbreviated Journal  
  Volume 10269 Issue Pages 297-309  
  Keywords Action recognition; CNNs; Feature fusion  
  Abstract Recognizing human actions in videos is a challenging problem in computer vision. Recently, convolutional neural network (CNN) based deep features have shown promising results for action recognition. In this paper, we investigate the problem of fusing deep appearance and motion cues for action recognition. We propose a video representation which combines deep appearance and motion based local convolutional features within the bag-of-deep-features framework. Firstly, dense deep appearance and motion based local convolutional features are extracted from spatial (RGB) and temporal (flow) networks, respectively. Both visual cues are processed in parallel by constructing separate visual vocabularies for appearance and motion. A category-specific appearance map is then learned to modulate the weights of the deep motion features. The proposed representation is discriminative and binds the deep local convolutional features to their spatial locations. Experiments are performed on two challenging datasets: the JHMDB dataset with 21 action classes and the ACT dataset with 43 categories. The results clearly demonstrate that our approach outperforms both standard approaches of early and late feature fusion. Further, our approach employs only action labels, without exploiting body part information, yet achieves competitive performance compared to state-of-the-art deep-feature-based approaches.
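A minimal Python sketch of the bag-of-deep-features idea described above, assuming precomputed dense appearance and motion convolutional descriptors and k-means vocabularies; the per-location attention values below stand in for the learned category-specific appearance map and are generated randomly for illustration.

import numpy as np
from sklearn.cluster import KMeans

def bag_of_deep_features(descriptors, vocab, weights=None):
    # Weighted histogram of visual-word assignments (one descriptor per spatial location).
    words = vocab.predict(descriptors)
    hist = np.zeros(vocab.n_clusters)
    w = np.ones(len(words)) if weights is None else weights
    np.add.at(hist, words, w)
    return hist / (hist.sum() + 1e-8)

# Dummy data: 200 spatial locations, 256-D appearance and motion features.
app = np.random.rand(200, 256)
mot = np.random.rand(200, 256)
app_vocab = KMeans(n_clusters=64, n_init=4).fit(app)
mot_vocab = KMeans(n_clusters=64, n_init=4).fit(mot)

# Hypothetical per-location attention for one action class (learned in the paper).
attention = np.random.rand(200)

video_repr = np.concatenate([
    bag_of_deep_features(app, app_vocab),
    bag_of_deep_features(mot, mot_vocab, weights=attention),  # motion words modulated by appearance attention
])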
  Address Tromso; June 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SCIA
  Notes LAMP; 600.109; 600.068; 600.120 Approved no  
  Call Number Admin @ si @ RKW2017b Serial 3039  
 
Author Mario Rojas; David Masip; Jordi Vitria
  Title Predicting Dominance Judgements Automatically: A Machine Learning Approach. Type Conference Article
  Year 2011 Publication IEEE International Workshop on Social Behavior Analysis Abbreviated Journal  
  Volume Issue Pages 939-944  
  Keywords  
  Abstract The number of multimodal devices that surround us is growing every day. In this context, human interaction and communication have become a focus of attention and a hot topic of research. A crucial element in human relations is the evaluation of individuals with respect to facial traits, the so-called first impression. Appearance-based studies have suggested that personality can be expressed through appearance and that observers may use such information to form judgments. In the context of rapid facial evaluation, certain personality traits seem to have a more pronounced effect on the relations and perceptions inside groups. The perception of dominance has been shown to be an active part of social roles at different stages of life, and even to play a part in mate selection. The aim of this paper is to study to what extent this information is learnable from the point of view of computer science. Specifically, we intend to determine whether judgments of dominance can be learned by machine learning techniques. We implement two different descriptors in order to assess this. The first is the histogram of oriented gradients (HOG), and the second is a probabilistic appearance descriptor based on the frequencies of grouped binary tests. State-of-the-art classification rules validate the performance of both descriptors with respect to the prediction task. Experimental results show that machine learning techniques can predict judgments of dominance rather accurately (accuracies up to 90%) and that the HOG descriptor appropriately characterizes the information necessary for such a task.
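The HOG-plus-classifier part of the pipeline can be sketched in a few lines of Python (illustrative only: the dataset, face alignment and the paper's second, probabilistic descriptor are omitted, and the crop size and classifier choice are assumptions).

import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hog_descriptor(face_gray):
    # face_gray: 2-D grayscale face crop, e.g. 128x128.
    return hog(face_gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Dummy stand-ins for face crops and binary dominance judgements.
faces = np.random.rand(40, 128, 128)
labels = np.random.randint(0, 2, size=40)

X = np.array([hog_descriptor(f) for f in faces])
scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())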
  Address Santa Barbara, CA  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4244-9140-7 Medium  
  Area Expedition Conference SBA
  Notes OR;MV Approved no  
  Call Number Admin @ si @ RMV2011b Serial 1760  
 
Author Carles Fernandez; Jordi Gonzalez
  Title Ontology for Semantic Integration in a Cognitive Surveillance System Type Conference Article
  Year 2007 Publication Semantic Multimedia, 2nd International Conference on Semantics and Digital Media Technologies Abbreviated Journal  
  Volume 4816 Issue Pages 263–263  
  Keywords  
  Abstract  
  Address Genova (Italy)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SAMT’07
  Notes ISE Approved no  
  Call Number ISE @ ise @ FeG2007 Serial 919  
 
Author Onur Ferhat; Arcadi Llanza; Fernando Vilariño
  Title Gaze interaction for multi-display systems using natural light eye-tracker Type Conference Article
  Year 2015 Publication 2nd International Workshop on Solutions for Automatic Gaze Data Analysis Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Bielefeld; Germany; September 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SAGA
  Notes MV;SIAI Approved no  
  Call Number Admin @ si @ FLV2015b Serial 2676  
 
Author Partha Pratim Roy; Umapada Pal; Josep Llados
  Title Seal Object Detection in Document Images using GHT of Local Component Shapes Type Conference Article
  Year 2010 Publication 10th ACM Symposium On Applied Computing Abbreviated Journal  
  Volume Issue Pages 23–27  
  Keywords  
  Abstract Due to noise, overlapping text/signatures and their multi-oriented nature, seal (stamp) objects are difficult to detect. This paper deals with automatic detection of seals in documents with cluttered backgrounds. Here, a seal object is characterized by scale- and rotation-invariant spatial feature descriptors (distance and angular position) computed from the recognition results of individual connected components (characters). Recognition of multi-scale and multi-oriented components is done using a Support Vector Machine classifier. A Generalized Hough Transform (GHT) is used to detect the seal: votes are cast for possible locations of the seal object in a document based on the spatial feature descriptors of component pairs. The peak of votes in the GHT accumulator validates the hypothesis and locates the seal object in the document. Experimental results show that the method efficiently locates seal instances of arbitrary shape and orientation in documents.
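A simplified Python sketch of the GHT voting step (not the authors' implementation): each recognised component votes for a seal centre using the displacement stored for its character label in an R-table built from a reference seal. Component recognition with the SVM and rotation/scale handling are omitted.

import numpy as np
from collections import defaultdict

def build_r_table(model_components, model_center):
    # model_components: list of (label, (x, y)) from a reference seal image.
    r_table = defaultdict(list)
    for label, (x, y) in model_components:
        r_table[label].append((model_center[0] - x, model_center[1] - y))
    return r_table

def ght_vote(components, r_table, image_shape):
    # Accumulate centre votes from the recognised components of a query document.
    acc = np.zeros(image_shape)
    for label, (x, y) in components:
        for dx, dy in r_table.get(label, []):
            cx, cy = int(x + dx), int(y + dy)
            if 0 <= cx < image_shape[0] and 0 <= cy < image_shape[1]:
                acc[cx, cy] += 1
    return acc

# The accumulator peak gives the hypothesised seal location:
# acc = ght_vote(recognised_components, r_table, doc.shape)
# peak = np.unravel_index(acc.argmax(), acc.shape)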
  Address Sierre, Switzerland  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SAC
  Notes DAG Approved no  
  Call Number DAG @ dag @ RPL2010a Serial 1291  
 
Author Laura Igual; Santiago Segui; Jordi Vitria; Fernando Azpiroz; Petia Radeva
  Title Sparse Bayesian Feature Selection Applied to Intestinal Motility Analysis Type Conference Article
  Year 2007 Publication XVI Congreso Argentino de Bioingenieria Abbreviated Journal  
  Volume Issue Pages 467–470  
  Keywords  
  Abstract  
  Address San Juan (Argentina)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SABI
  Notes MILAB;OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ ISV2007b Serial 896  
 
Author Jaume Gibert; Ernest Valveny
  Title Graph Embedding based on Nodes Attributes Representatives and a Graph of Words Representation. Type Conference Article
  Year 2010 Publication 13th International Workshop on Structural and Syntactic Pattern Recognition and 8th International Workshop on Statistical Pattern Recognition Abbreviated Journal
  Volume 6218 Issue Pages 223–232  
  Keywords  
  Abstract Although graph embedding has recently been used to extend statistical pattern recognition techniques to the graph domain, existing embeddings are often computationally expensive as they rely on classical graph-based operations. In this paper we present a new way to embed graphs into vector spaces: the information stored in the original graph is first encapsulated in another graph representation obtained by clustering the attributes of the graphs to be processed. This new representation makes the association of graphs to vectors an easy step, by simply arranging both the node attributes and the adjacency matrix in the form of vectors. To test our method, we use two different databases of graphs whose node attributes are of different nature. A comparison with a reference method shows that this new embedding achieves better classification rates while being much faster.
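The embedding can be sketched as follows, under stated assumptions: node attributes are clustered into k representatives ("words"), and each graph is encoded as the histogram of word occurrences plus the flattened word-to-word adjacency counts, giving a fixed-length vector regardless of graph size. This illustrates the general idea rather than the paper's exact construction.

import numpy as np
from sklearn.cluster import KMeans

def embed_graph(node_attrs, edges, kmeans):
    # node_attrs: (n_nodes, d) array; edges: list of (i, j) node-index pairs.
    k = kmeans.n_clusters
    words = kmeans.predict(node_attrs)                 # representative id per node
    hist = np.bincount(words, minlength=k).astype(float)
    adj = np.zeros((k, k))
    for i, j in edges:
        adj[words[i], words[j]] += 1
        adj[words[j], words[i]] += 1
    return np.concatenate([hist, adj.ravel()])

# Vocabulary learned from node attributes pooled over a training set of graphs:
# kmeans = KMeans(n_clusters=16).fit(np.vstack(all_training_node_attrs))
# vec = embed_graph(attrs, edges, kmeans)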
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor E.R. Hancock; R.C. Wilson; T. Windeatt; I. Ulusoy; F. Escolano
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-14979-5 Medium  
  Area Expedition Conference S+SSPR
  Notes DAG Approved no  
  Call Number DAG @ dag @ GiV2010 Serial 1416  
 
Author Youssef El Rhabi; Simon Loic; Brun Luc; Josep Llados; Felipe Lumbreras
  Title Information Theoretic Rotationwise Robust Binary Descriptor Learning Type Conference Article
  Year 2016 Publication Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) Abbreviated Journal  
  Volume Issue Pages 368-378  
  Keywords  
  Abstract In this paper, we propose a new data-driven approach for binary descriptor selection. In order to draw a clear analysis of common designs, we present a general information-theoretic selection paradigm. It encompasses several standard binary descriptor construction schemes, including a recent state-of-the-art one named BOLD. We pursue the same endeavor to increase the stability of the produced descriptors with respect to rotations. To achieve this goal, we have designed a novel offline selection criterion which is better adapted to the online matching procedure. The effectiveness of our approach is demonstrated on two standard datasets, where our descriptor is compared to BOLD and to several classical descriptors. In particular, it emerges that our approach can match or exceed the performance of BOLD while relying on descriptors that are half as long. Such an improvement can be important for real-time applications.
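As a purely illustrative sketch of offline, information-theoretic selection of binary tests, one could keep tests whose responses are informative (high entropy over training patches) yet stable when the same patches are rotated. The scoring rule below is an assumption made for illustration, not the paper's criterion.

import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-8, 1 - 1e-8)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def select_tests(bits, bits_rotated, n_keep=256):
    # bits, bits_rotated: (n_patches, n_candidate_tests) binary responses computed
    # on original and rotated versions of the same training patches.
    informativeness = binary_entropy(bits.mean(axis=0))   # per-test entropy
    instability = (bits != bits_rotated).mean(axis=0)     # flip rate under rotation
    score = informativeness - instability
    return np.argsort(score)[::-1][:n_keep]               # indices of the selected tests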
  Address Mérida; Mexico; November 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference S+SSPR
  Notes DAG; ADAS; 600.097; 600.086 Approved no  
  Call Number Admin @ si @ RLL2016 Serial 2871  
 
Author Sounak Dey; Anguelos Nicolaou; Josep Llados; Umapada Pal
  Title Local Binary Pattern for Word Spotting in Handwritten Historical Document Type Conference Article
  Year 2016 Publication Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) Abbreviated Journal  
  Volume Issue Pages 574-583  
  Keywords Local binary patterns; Spatial sampling; Learning-free; Word spotting; Handwritten; Historical document analysis; Large-scale data  
  Abstract Digital libraries store images which can be highly degraded, and to index this kind of image we resort to word spotting as our information retrieval system. Information retrieval for handwritten document images is more challenging due to the difficulties of complex layout analysis, large variations of writing styles, and degradation or low quality of historical manuscripts. This paper presents a simple, innovative, learning-free method for word spotting in large-scale historical documents, combining Local Binary Patterns (LBP) and spatial sampling. This method offers three advantages: firstly, it operates in a completely learning-free paradigm, which is very different from unsupervised learning methods; secondly, the computational time is significantly low because the LBP features are very fast to compute; and thirdly, the method can be used in scenarios where annotations are not available. Finally, we compare the results of our proposed retrieval method with other methods in the literature, and we obtain the best results within the learning-free paradigm.
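A learning-free sketch along these lines (the grid layout and LBP parameters are illustrative choices): each word image is described by concatenated LBP histograms over a spatial grid, and retrieval is a nearest-neighbour ranking of those descriptors.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_spatial_descriptor(word_img, grid=(2, 4), P=8, R=1):
    # word_img: 2-D grayscale word image; uniform LBP histograms over a grid of cells.
    lbp = local_binary_pattern(word_img, P, R, method="uniform")
    n_bins = P + 2
    h, w = lbp.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                       j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

def spot(query_img, candidate_word_imgs):
    # Rank candidate word images by descriptor distance to the query.
    q = lbp_spatial_descriptor(query_img)
    dists = [np.linalg.norm(q - lbp_spatial_descriptor(w)) for w in candidate_word_imgs]
    return np.argsort(dists)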
  Address Merida; Mexico; December 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference S+SSPR
  Notes DAG; 600.097; 602.006; 603.053 Approved no  
  Call Number Admin @ si @ DNL2016 Serial 2876  
 
Author Juan Ignacio Toledo; Sebastian Sudholt; Alicia Fornes; Jordi Cucurull; A. Fink; Josep Llados
  Title Handwritten Word Image Categorization with Convolutional Neural Networks and Spatial Pyramid Pooling Type Conference Article
  Year 2016 Publication Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) Abbreviated Journal  
  Volume 10029 Issue Pages 543-552  
  Keywords Document image analysis; Word image categorization; Convolutional neural networks; Named entity detection  
  Abstract The extraction of relevant information from historical document collections is one of the key steps in making these documents available for access and search. The usual approach combines transcription and grammars in order to extract semantically meaningful entities. In this paper, we describe a new method to obtain word categories directly from non-preprocessed handwritten word images. The method can be used to extract information directly, as an alternative to transcription, and can thus serve as a first step in any kind of syntactical analysis. The approach is based on Convolutional Neural Networks with a Spatial Pyramid Pooling layer to deal with the different shapes of the input images. We performed the experiments on a historical marriage record dataset, obtaining promising results.
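A minimal PyTorch sketch of the idea: a small convolutional network whose spatial pyramid pooling layer turns variable-size word images into a fixed-length vector before the category classifier. The layer sizes and pyramid levels are illustrative assumptions, not those used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPWordClassifier(nn.Module):
    def __init__(self, n_categories, levels=(1, 2, 4)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.levels = levels
        pooled_dim = 64 * sum(l * l for l in levels)   # fixed length for any input size
        self.classifier = nn.Linear(pooled_dim, n_categories)

    def forward(self, x):                              # x: (B, 1, H, W), any H and W
        f = self.features(x)
        pooled = [F.adaptive_max_pool2d(f, l).flatten(1) for l in self.levels]
        return self.classifier(torch.cat(pooled, dim=1))

# Example: word images of different widths map to the same-size logits vector.
# logits = SPPWordClassifier(n_categories=6)(torch.rand(1, 1, 60, 200))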
  Address Merida; Mexico; December 2016  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-3-319-49054-0 Medium  
  Area Expedition Conference S+SSPR
  Notes DAG; 600.097; 602.006 Approved no  
  Call Number Admin @ si @ TSF2016 Serial 2877  
 
Author Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades
  Title Spotting Symbol over Graphical Documents Via Sparsity in Visual Vocabulary Type Book Chapter
  Year 2016 Publication Recent Trends in Image Processing and Pattern Recognition Abbreviated Journal  
  Volume 709 Issue Pages  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference RTIP2R
  Notes DAG Approved no  
  Call Number Admin @ si @ HTR2016 Serial 2956  
 
Author Arnau Ramisa; David Aldavert; Shrihari Vasudevan; Ricardo Toledo; Ramon Lopez de Mantaras
  Title The IIIA30 Mobile Robot Object Recognition Dataset Type Conference Article
  Year 2011 Publication 11th Portuguese Robotics Open Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Object perception is a key capability for making mobile robots able to perform high-level tasks. However, research aimed at addressing the constraints and limitations encountered in a mobile robotics scenario, such as low image resolution, motion blur or tight computational constraints, is still very scarce. In order to facilitate future research in this direction, in this work we present an object detection and recognition dataset acquired using a mobile robotic platform. As a baseline for the dataset, we evaluated the cascade-of-weak-classifiers object detection method of Viola and Jones.
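For reference, running such a cascade baseline with OpenCV looks roughly as follows; object_cascade.xml and robot_frame.png are hypothetical file names, and the cascade itself would be trained offline with OpenCV's cascade training tools.

import cv2

cascade = cv2.CascadeClassifier("object_cascade.xml")           # hypothetical trained cascade
frame = cv2.imread("robot_frame.png", cv2.IMREAD_GRAYSCALE)     # one low-resolution robot frame
detections = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in detections:
    print(f"object candidate at x={x}, y={y}, w={w}, h={h}")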
  Address Lisboa  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference Robotica
  Notes RV;ADAS Approved no  
  Call Number Admin @ si @ RAV2011 Serial 1777  
 
Author Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias
  Title Scene Representations for Autonomous Driving: an approach based on polygonal primitives Type Conference Article
  Year 2015 Publication 2nd Iberian Robotics Conference ROBOT2015 Abbreviated Journal  
  Volume 417 Issue Pages 503-515  
  Keywords Scene reconstruction; Point cloud; Autonomous vehicles  
  Abstract In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro-scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
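One step of such a pipeline can be sketched in Python under simplifying assumptions: a dominant plane is found in the point cloud with RANSAC and its inliers are summarised by a single large polygon (the convex hull of the points projected onto the plane). The original method iterates a process of this kind to build the full list of macro-scale primitives; the thresholds and helper names below are illustrative.

import numpy as np
from scipy.spatial import ConvexHull

def ransac_plane(points, iters=200, thr=0.05):
    # points: (N, 3) array; returns (normal, origin) of the best plane and its inlier indices.
    best_inliers, best_model = np.array([], dtype=int), None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        d = np.abs((points - p[0]) @ n)                 # point-to-plane distances
        inliers = np.where(d < thr)[0]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (n, p[0])
    return best_model, best_inliers

def plane_polygon(points, normal, origin):
    # Convex-hull polygon of the inlier points expressed in 2-D plane coordinates.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-9:
        u = np.array([1.0, 0.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(normal, u)
    uv = np.c_[(points - origin) @ u, (points - origin) @ v]
    return uv[ConvexHull(uv).vertices]

# (normal, origin), inliers = ransac_plane(cloud)
# polygon = plane_polygon(cloud[inliers], normal, origin)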
 
  Address Lisboa; Portugal; November 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ROBOT
  Notes ADAS; 600.076; 600.086 Approved no  
  Call Number Admin @ si @ OSS2015a Serial 2662  
 
Author J.Poujol; Cristhian A. Aguilera-Carrasco; E.Danos; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa
  Title Visible-Thermal Fusion based Monocular Visual Odometry Type Conference Article
  Year 2015 Publication 2nd Iberian Robotics Conference ROBOT2015 Abbreviated Journal  
  Volume 417 Issue Pages 517-528  
  Keywords Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion.  
  Abstract The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, in the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectra are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular visible spectrum and the monocular infrared spectrum are also provided, showing the validity of the proposed approach.
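The first fusion strategy mentioned above can be sketched with PyWavelets (a minimal, single-level illustration; the wavelet choice and the coefficient fusion rules are assumptions, not necessarily those used in the paper).

import numpy as np
import pywt

def dwt_fuse(visible, thermal, wavelet="db2"):
    # visible, thermal: registered single-channel images as float arrays of the same size.
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible, wavelet)
    cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(thermal, wavelet)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # keep the strongest detail coefficient
    fused = ((cA_v + cA_t) / 2.0,
             (pick(cH_v, cH_t), pick(cV_v, cV_t), pick(cD_v, cD_t)))
    return pywt.idwt2(fused, wavelet)

# fused = dwt_fuse(gray_visible_frame, lwir_frame)   # then fed to the monocular VO pipeline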
 
  Address Lisboa; Portugal; November 2015  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2194-5357 ISBN 978-3-319-27145-3 Medium  
  Area Expedition Conference ROBOT
  Notes ADAS; 600.076; 600.086 Approved no  
  Call Number Admin @ si @ PAD2015 Serial 2663  