Author: Sergio Escalera; Jordi Gonzalez; Xavier Baro; Jamie Shotton
Title: Guest Editor Introduction to the Special Issue on Multimodal Human Pose Recovery and Behavior Analysis    Type: Journal Article
Year: 2016    Publication: IEEE Transactions on Pattern Analysis and Machine Intelligence    Abbreviated Journal: TPAMI
Volume: 38    Pages: 1489-1491
Abstract: The sixteen papers in this special section focus on human pose recovery and behavior analysis (HuPBA), one of the most challenging topics in computer vision, pattern analysis, and machine learning. It is of critical importance for application areas that include gaming, computer interaction, human-robot interaction, security, commerce, assistive technologies and rehabilitation, sports, sign language recognition, and driver assistance technology, to mention just a few. In essence, HuPBA requires dealing with the articulated nature of the human body, changes in appearance due to clothing, and the inherent problems of cluttered scenes, such as background artifacts, occlusions, and illumination changes. These papers represent the most recent research in this field, including new methods considering still images, image sequences, depth data, stereo vision, 3D vision, audio, and IMUs, among others.
Notes: HuPBA; ISE; MV    Approved: no
Call Number: Admin @ si @    Serial: 2851
 

 
Author: Ole Vilhelm-Larsen; Petia Radeva; Enric Marti
Title: Guidelines for choosing optimal parameters of elasticity for snakes    Type: Book Chapter
Year: 1995    Publication: Computer Analysis of Images and Patterns    Abbreviated Journal: LNCS
Volume: 970    Pages: 106-113
Abstract: This paper proposes guidance for choosing and using the elasticity parameters of a snake in order to obtain a precise segmentation. A new two-step procedure is defined based on upper and lower bounds on the parameters. Formulas by which these bounds can be calculated for real images, where parts of the contour may be missing, are presented. Experiments on the segmentation of bone structures in X-ray images have verified the usefulness of the new procedure.
Series Title: Lecture Notes in Computer Science    Abbreviated Series Title: LNCS
Notes: MILAB; IAM    Approved: no
Call Number: IAM @ iam @ LRM1995b    Serial: 1558
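
The chapter's two-step procedure for bounding the elasticity parameters is not reproduced here; as a loosely related illustration, the sketch below fits a snake with scikit-image's active_contour and simply sweeps the elasticity weight alpha over an assumed range, on a stand-in image with an assumed initial contour.

import numpy as np
from skimage import data, filters
from skimage.segmentation import active_contour

# Stand-in image (the chapter works with X-ray images of bone structures).
image = data.coins()
smoothed = filters.gaussian(image, sigma=3)

# Assumed initial contour: a circle given as (row, col) coordinates.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 40 * np.sin(s), 220 + 40 * np.cos(s)])

# The chapter derives upper and lower bounds for the elasticity parameter;
# here we only sweep an assumed range and report the fitted contours.
for alpha in (0.005, 0.015, 0.05):            # elasticity weight
    snake = active_contour(smoothed, init, alpha=alpha, beta=0.1)
    print(f"alpha={alpha}: fitted contour with {len(snake)} points")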
 

 
Author: David Geronimo; Antonio Lopez; Daniel Ponsa; Angel Sappa
Title: Haar Wavelets and Edge Orientation Histograms for On-Board Pedestrian Detection    Type: Conference Article
Year: 2007    Publication: 3rd Iberian Conference on Pattern Recognition and Image Analysis, LNCS 4477
Volume: 1    Pages: 418-425
Keywords: Pedestrian detection
Address: Girona (Spain)
Editor: J. Marti et al.
Notes: ADAS    Approved: no
Call Number: ADAS @ adas @ GLP2007a    Serial: 805
 

 
Author: Carola Figueroa Flores; Bogdan Raducanu; David Berga; Joost Van de Weijer
Title: Hallucinating Saliency Maps for Fine-Grained Image Classification for Limited Data Domains    Type: Conference Article
Year: 2021    Publication: 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Volume: 4    Pages: 163-171
Abstract: (arXiv:2007.12562) Most saliency methods are evaluated on their ability to generate saliency maps, not on their functionality in a complete vision pipeline such as image classification. In this paper, we propose an approach that does not require explicit saliency maps to improve image classification; instead, they are learned implicitly during the training of an end-to-end image classification task. We show that our approach obtains results similar to the case when the saliency maps are provided explicitly. Combining RGB data with saliency maps represents a significant advantage for object recognition, especially when training data is limited. We validate our method on several datasets for fine-grained classification tasks (Flowers, Birds and Cars). In addition, we show that our saliency estimation method, which is trained without any saliency ground-truth data, obtains competitive results on a real-image saliency benchmark (Toronto) and outperforms deep saliency models on synthetic images (SID4VAM).
Address: Virtual; February 2021
Conference: VISAPP
Notes: LAMP    Approved: no
Call Number: Admin @ si @ FRB2021c    Serial: 3540
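
To make the idea of implicitly learned ("hallucinated") saliency concrete, here is a minimal PyTorch sketch, not the authors' architecture: an auxiliary head predicts a saliency map from shared features and re-weights them before classification, so the saliency branch is trained only by the classification loss. All layer sizes are assumptions; 102 is simply the number of classes in the Oxford Flowers benchmark.

import torch
import torch.nn as nn

class SaliencyHallucinationNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(                     # toy feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.saliency_head = nn.Conv2d(64, 1, 1)           # hallucinated saliency map
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        feats = self.backbone(x)
        sal = torch.sigmoid(self.saliency_head(feats))     # no saliency ground truth needed
        attended = feats * sal                              # re-weight RGB features
        return self.classifier(attended), sal

model = SaliencyHallucinationNet(num_classes=102)
logits, saliency = model(torch.randn(2, 3, 224, 224))
print(logits.shape, saliency.shape)                        # (2, 102), (2, 1, 224, 224)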
 

 
Author: Josep Llados; Jaime Lopez-Krahe; Enric Marti
Title: Hand drawn document understanding using the straight line Hough transform and graph matching    Type: Conference Article
Year: 1996    Publication: Proceedings of the 13th International Conference on Pattern Recognition (ICPR'96)
Volume: 2    Pages: 497-501
Abstract: This paper presents a system to understand hand-drawn architectural drawings in a CAD environment. The procedure is to identify in a floor plan the building elements, stored in a library of patterns, and their spatial relationships. The vectorized input document and the patterns to recognize are represented by attributed graphs. To recognize the patterns, we apply a structural approach based on subgraph isomorphism techniques. In spite of their value, graph matching techniques do not adequately recognize those building elements characterized by hatching patterns, i.e. walls. Here we focus on the recognition of hatching patterns and develop a method based on the straight line Hough transform in order to detect the regions filled in with parallel straight lines. This allows not only filling patterns to be recognized, but also reduces the computational load associated with the subgraph isomorphism computation. The result is that the document can be redrawn by editing all the recognized patterns.
Place of Publication: Vienna, Austria
Notes: DAG; IAM    Approved: no
Call Number: IAM @ iam @ LLM1996    Serial: 1579
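
As an illustration of the hatching cue described in the abstract above, the following sketch (a toy example, not the paper's pipeline) runs scikit-image's straight-line Hough transform on a synthetic drawing and groups the detected peaks by orientation; many peaks sharing one angle indicate a region filled with parallel strokes.

import numpy as np
from skimage.transform import hough_line, hough_line_peaks

# Synthetic binary "drawing": seven parallel vertical strokes, a crude hatching.
img = np.zeros((120, 120), dtype=bool)
for c in range(10, 110, 15):
    img[20:100, c] = True

h, theta, d = hough_line(img)                    # accumulator, angles, distances
_, angles, dists = hough_line_peaks(h, theta, d)

# Parallel lines share (almost) the same angle, so a large cluster of peaks
# at one orientation suggests a hatched (wall) region.
rounded = np.round(np.degrees(angles)).astype(int)
values, counts = np.unique(rounded, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))   # e.g. {0: 7}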
 

 
Author: Alicia Fornes; Sergio Escalera; Josep Llados; Gemma Sanchez; Joan Mas
Title: Hand Drawn Symbol Recognition by Blurred Shape Model Descriptor and a Multiclass Classifier    Type: Book Chapter
Year: 2008    Publication: Graphics Recognition: Recent Advances and New Opportunities
Volume: 5046    Pages: 30-40
Editor: W. Liu, J. Llados, J.M. Ogier
Abbreviated Series Title: LNCS
Notes: DAG; HUPBA; MILAB    Approved: no
Call Number: BCNPCL @ bcnpcl @ FEL2008    Serial: 989
 

 
Author: Enric Marti; Jordi Regincos; Jaime Lopez-Krahe; Juan J. Villanueva
Title: Hand line drawing interpretation as three-dimensional objects    Type: Journal Article
Year: 1993    Publication: Signal Processing – Intelligent systems for signal and image understanding
Volume: 32    Issue: 1-2    Pages: 91-110
Keywords: Line drawing interpretation; line labelling; scene analysis; man-machine interaction; CAD input; line extraction
Abstract: In this paper we present a technique to interpret hand line drawings as objects in a three-dimensional space. The object domain considered is based on planar surfaces with straight edges; concretely, on an extension of the Origami world to hidden lines. The line drawing represents the object under orthographic projection and is sensed using a scanner. Our method is structured in two modules: feature extraction and feature interpretation. In the first, image processing techniques are applied under certain tolerance margins to detect lines and junctions in the hand line drawing. The feature interpretation module is founded on line labelling techniques using a labelled junction dictionary. A labelling algorithm is proposed here. It uses relaxation techniques to reduce the number of labels that are incompatible with the junction dictionary, so that the convergence of solutions can be accelerated. We formulate some labelling hypotheses tending to eliminate elements in two sets of labelled interpretations: those which are compatible with the dictionary but do not correspond to three-dimensional objects, and those which represent objects unlikely to be specified by means of a line drawing. New entities arise in the line drawing as a result of the extension of the Origami world. These are defined to state the assumptions of our method as well as to clarify the proposed algorithms. This technique is framed in a project aimed at implementing a system to create 3D objects in order to improve man-machine interaction in CAD systems.
Publisher: Elsevier North-Holland, Inc.    Place of Publication: Amsterdam, The Netherlands
ISSN: 0165-1684
Notes: IAM; ISE    Approved: no
Call Number: IAM @ iam @ MRL1993    Serial: 1611
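
The labelling-by-relaxation step mentioned in the abstract can be illustrated with a toy discrete-relaxation loop. The junction "dictionary" below is a made-up fragment, not the Origami-world dictionary used in the paper; the sketch only shows the mechanism of repeatedly discarding line labels that have no compatible partner at any junction.

# Each line starts with every candidate label.
labels = {"l1": {"+", "-", ">"}, "l2": {"+", "-", ">"}, "l3": {"+", "-", ">"}}

# Junctions list the lines they join and the (toy) allowed label combinations.
junctions = [
    {"lines": ("l1", "l2"), "allowed": {("+", "+"), ("-", ">")}},
    {"lines": ("l2", "l3"), "allowed": {("+", "-")}},
]

changed = True
while changed:                                    # relax until label sets stabilize
    changed = False
    for j in junctions:
        a, b = j["lines"]
        keep_a = {la for la in labels[a]
                  if any((la, lb) in j["allowed"] for lb in labels[b])}
        keep_b = {lb for lb in labels[b]
                  if any((la, lb) in j["allowed"] for la in labels[a])}
        if keep_a != labels[a] or keep_b != labels[b]:
            labels[a], labels[b] = keep_a, keep_b
            changed = True

print(labels)   # each line keeps only labels consistent with the junction dictionary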
 

 
Author: Razieh Rastgoo; Kourosh Kiani; Sergio Escalera
Title: Hand pose aware multimodal isolated sign language recognition    Type: Journal Article
Year: 2020    Publication: Multimedia Tools and Applications    Abbreviated Journal: MTAP
Volume: 80    Pages: 127-163
Abstract: Isolated hand sign language recognition from video is a challenging research area in computer vision. Some of the most important challenges in this area include dealing with hand occlusion, fast hand movement, illumination changes, or background complexity. While most of the state-of-the-art results in the field have been achieved using deep learning-based models, the previous challenges are not completely solved. In this paper, we propose a hand pose aware model for isolated hand sign language recognition using deep learning approaches from two input modalities, RGB and depth videos. Four spatial feature types (pixel-level, flow, deep hand, and hand pose features), fused from both visual modalities, are input to an LSTM for temporal sign recognition. While we use Optical Flow (OF) for flow information in RGB video inputs, Scene Flow (SF) is used for depth video inputs. By including hand pose features, we show a consistent performance improvement of the sign language recognition model. To the best of our knowledge, this is the first time that these discriminant spatiotemporal features, benefiting from hand pose estimation and multi-modal inputs, are fused for isolated hand sign language recognition. We perform a step-by-step analysis of the impact on recognition performance of the hand pose features, different combinations of the spatial features, and different recurrent models, especially LSTM and GRU. Results on four public datasets confirm that the proposed model outperforms the current state-of-the-art models on the Montalbano II, MSR Daily Activity 3D, and CAD-60 datasets with relative accuracy improvements of 1.64%, 6.5%, and 7.6%, respectively. Furthermore, our model obtains competitive results on the isoGD dataset, with a margin of only 0.22% below the current state-of-the-art model.
Notes: HUPBA; no project mentioned    Approved: no
Call Number: Admin @ si @ RKE2020    Serial: 3524
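
A minimal PyTorch sketch of the fusion-then-LSTM idea from the abstract above (not the authors' implementation): per-frame feature vectors from several modalities are concatenated and fed to an LSTM that classifies the sign. All dimensions are assumptions; 63 stands for 21 hand keypoints with 3 coordinates each.

import torch
import torch.nn as nn

class FusionLSTMClassifier(nn.Module):
    def __init__(self, pixel_dim=512, flow_dim=256, pose_dim=63, hidden=256, num_signs=250):
        super().__init__()
        self.lstm = nn.LSTM(pixel_dim + flow_dim + pose_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_signs)

    def forward(self, pixel_feats, flow_feats, pose_feats):
        # each input: (batch, time, feature_dim)
        fused = torch.cat([pixel_feats, flow_feats, pose_feats], dim=-1)
        _, (h_n, _) = self.lstm(fused)
        return self.head(h_n[-1])                # classify from the last hidden state

model = FusionLSTMClassifier()
logits = model(torch.randn(4, 32, 512), torch.randn(4, 32, 256), torch.randn(4, 32, 63))
print(logits.shape)                              # (4, 250)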
 

 
Author: Razieh Rastgoo; Kourosh Kiani; Sergio Escalera
Title: Hand sign language recognition using multi-view hand skeleton    Type: Journal Article
Year: 2020    Publication: Expert Systems With Applications    Abbreviated Journal: ESWA
Volume: 150    Pages: 113336
Keywords: Multi-view hand skeleton; Hand sign language recognition; 3DCNN; Hand pose estimation; RGB video; Hand action recognition
Abstract: Hand sign language recognition from video is a challenging research area in computer vision, whose performance is affected by hand occlusion, fast hand movement, illumination changes, or background complexity, just to mention a few. In recent years, deep learning approaches have achieved state-of-the-art results in the field, though the previous challenges are not completely solved. In this work, we propose a novel deep learning-based pipeline architecture for efficient automatic hand sign language recognition using a Single Shot Detector (SSD), a 2D Convolutional Neural Network (2DCNN), a 3D Convolutional Neural Network (3DCNN), and Long Short-Term Memory (LSTM) from RGB input videos. We use a CNN-based model which estimates the 3D hand keypoints from 2D input frames. After that, we connect these estimated keypoints to build the hand skeleton using the midpoint algorithm. In order to obtain a more discriminative representation of hands, we project the 3D hand skeleton onto three view surface images. We further employ the heatmap image of detected keypoints as input for refinement in a stacked fashion. We apply 3DCNNs on the stacked hand features, including pixel-level, multi-view hand skeleton, and heatmap features, to extract discriminant local spatio-temporal features from these stacked inputs. The outputs of the 3DCNNs are fused and fed to an LSTM to model the long-term dynamics of hand sign gestures. Analyzing 2DCNNs vs. 3DCNNs with different numbers of stacked inputs into the network, we demonstrate that 3DCNNs better capture the spatio-temporal dynamics of hands. To the best of our knowledge, this is the first time that this multi-modal and multi-view set of hand skeleton features is applied for hand sign language recognition. Furthermore, we present a new large-scale hand sign language dataset, namely RKS-PERSIANSIGN, including 10,000 RGB videos of 100 Persian sign words. Evaluation results of the proposed model on three datasets, NYU, First-Person, and RKS-PERSIANSIGN, indicate that our model outperforms state-of-the-art models in hand sign language recognition, hand pose estimation, and hand action recognition.
Notes: HuPBA; no proj    Approved: no
Call Number: Admin @ si @ RKE2020a    Serial: 3411
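
One concrete step in the abstract above is the projection of the 3D hand skeleton onto three view planes. The NumPy sketch below illustrates that step only, with random stand-in keypoints and an assumed 64x64 view size; it is not the authors' code.

import numpy as np

keypoints_3d = np.random.rand(21, 3)          # 21 hand keypoints, (x, y, z) in [0, 1)

def rasterize(points_2d, size=64):
    """Draw 2D keypoints into a small binary view image."""
    img = np.zeros((size, size), dtype=np.uint8)
    px = np.clip((points_2d * (size - 1)).astype(int), 0, size - 1)
    img[px[:, 1], px[:, 0]] = 1
    return img

front = rasterize(keypoints_3d[:, [0, 1]])    # drop z -> XY view
top   = rasterize(keypoints_3d[:, [0, 2]])    # drop y -> XZ view
side  = rasterize(keypoints_3d[:, [1, 2]])    # drop x -> YZ view
views = np.stack([front, top, side])          # stacked input for a 3DCNN-style model
print(views.shape)                            # (3, 64, 64)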
 

 
Author: Ernest Valveny; Enric Marti
Title: Hand-drawn symbol recognition in graphic documents using deformable template matching and a Bayesian framework    Type: Conference Article
Year: 2000    Publication: Proceedings of the 15th International Conference on Pattern Recognition
Volume: 2    Pages: 239-242
Abstract: Hand-drawn symbols can take many different and distorted shapes relative to their ideal representation. Very flexible methods are therefore needed to handle unconstrained drawings. We propose here to extend our previous work on hand-drawn symbol recognition based on a Bayesian framework and deformable template matching. This approach provides enough flexibility to fit distorted shapes in the drawing while keeping fidelity to the ideal shape of the symbol. In this work, we define the similarity measure between an image and a symbol based on the distance from every pixel in the image to the lines in the symbol. Matching is carried out using an implementation of the EM algorithm. Thus, we can improve recognition rates and computation time with respect to our previous formulation based on a simulated annealing algorithm.
ISBN: 0-7695-0750-6
Notes: DAG; IAM    Approved: no
Call Number: IAM @ iam @ VAM2000    Serial: 1656
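
The pixel-to-line distance that defines the similarity measure in the abstract above can be computed with a distance transform. The sketch below is an illustration with assumed synthetic shapes, not the paper's Bayesian matching: it measures the mean distance from the drawn pixels to the nearest pixel of an ideal symbol line.

import numpy as np
from scipy.ndimage import distance_transform_edt

def pixel_to_symbol_distance(image, symbol_mask):
    """Mean distance from the image's 'on' pixels to the symbol's lines (lower = better fit)."""
    # distance_transform_edt assigns, to every non-zero pixel, the distance to
    # the nearest zero pixel; inverting the mask makes the symbol's lines the
    # zero set, so the result holds the distance to the nearest symbol pixel.
    dist_to_symbol = distance_transform_edt(~symbol_mask)
    return dist_to_symbol[image].mean()

image = np.zeros((50, 50), dtype=bool); image[25, 5:45] = True      # drawn stroke
symbol = np.zeros((50, 50), dtype=bool); symbol[27, 5:45] = True    # ideal line, 2 px away
print(pixel_to_symbol_distance(image, symbol))                      # 2.0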
 

 
Author: Juan Ignacio Toledo; Sounak Dey; Alicia Fornes; Josep Llados
Title: Handwriting Recognition by Attribute Embedding and Recurrent Neural Networks    Type: Conference Article
Year: 2017    Publication: 14th International Conference on Document Analysis and Recognition
Pages: 1038-1043
Abstract: Handwriting recognition consists in obtaining the transcription of a text image. Recent word spotting methods based on attribute embedding have shown good performance when recognizing words. However, they are holistic methods in the sense that they recognize the word as a whole (i.e. they find the closest word in the lexicon to the word image). Consequently, these kinds of approaches are not able to deal with out-of-vocabulary words, which are common in historical manuscripts, and they cannot be extended to recognize text lines. In order to address these issues, in this paper we propose a handwriting recognition method that adapts attribute embedding to sequence learning. Concretely, the method learns the attribute embedding of patches of word images with a convolutional neural network. Then, these embeddings are presented as a sequence to a recurrent neural network that produces the transcription. We obtain promising results even without the use of any kind of dictionary or language model.
Conference: ICDAR
Notes: DAG; 600.097; 601.225; 600.121    Approved: no
Call Number: Admin @ si @ TDF2017    Serial: 3055
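
A rough PyTorch sketch of the patch-embedding-then-RNN pipeline outlined in the abstract above (layer sizes, character-set size, and the pooling strategy are all assumptions, and the attribute-embedding training objective is omitted): a small CNN turns a word image into a horizontal sequence of embeddings, and a bidirectional GRU maps them to per-step character logits that could then be trained with, e.g., a CTC loss.

import torch
import torch.nn as nn

class PatchEmbedRNN(nn.Module):
    def __init__(self, embed_dim=128, hidden=128, num_chars=80):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, embed_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)))           # collapse height, keep width
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_chars)    # per-step character logits

    def forward(self, x):                              # x: (batch, 1, H, W)
        feats = self.cnn(x)                            # (batch, embed_dim, 1, W')
        seq = feats.squeeze(2).transpose(1, 2)         # (batch, W', embed_dim)
        out, _ = self.rnn(seq)
        return self.out(out)                           # (batch, W', num_chars)

logits = PatchEmbedRNN()(torch.randn(2, 1, 64, 256))
print(logits.shape)                                    # (2, 128, 80)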
 

 
Author: Volkmar Frinken; Andreas Fischer; Carlos David Martinez Hinarejos
Title: Handwriting Recognition in Historical Documents using Very Large Vocabularies    Type: Conference Article
Year: 2013    Publication: 2nd International Workshop on Historical Document Imaging and Processing
Pages: 67-72
Abstract: Language models are used in automatic transcription systems to resolve ambiguities. This is done by limiting the vocabulary of words that can be recognized as well as estimating the n-gram probabilities of the words in the given text. In the context of historical documents, non-unified spelling and the limited amount of written text pose a substantial problem for the selection of the recognizable vocabulary as well as for the computation of the word probabilities. For the transcription of historical Spanish text, we propose in this paper to keep the corpus for the n-gram limited to a sample of the target text, but to expand the vocabulary with words gathered from external resources. We analyze the performance of such a transcription system with different sizes of external vocabularies and demonstrate the applicability and the significant increase in recognition accuracy of using up to 300 thousand external words.
Address: Washington; USA; August 2013
ISBN: 978-1-4503-2115-0
Conference: HIP
Notes: DAG; 600.056; 600.045; 600.061; 602.006; 602.101    Approved: no
Call Number: Admin @ si @ FFM2013    Serial: 2296
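
The core idea in the abstract above, a small in-domain corpus for the n-gram statistics combined with an externally expanded vocabulary, can be sketched in a few lines. This is only an illustration with placeholder text and an add-k smoothed bigram; the paper's actual language model and vocabularies are not reproduced.

from collections import Counter

sample_text = "en un lugar de la mancha de cuyo nombre no quiero acordarme".split()
external_vocabulary = {"hidalgo", "lanza", "astillero"}    # assumed external word list

vocabulary = set(sample_text) | external_vocabulary        # expanded recognizable lexicon
bigrams = Counter(zip(sample_text, sample_text[1:]))       # statistics from the sample only
unigrams = Counter(sample_text)

def bigram_prob(w1, w2, k=1.0):
    """Add-k smoothed bigram probability over the expanded vocabulary."""
    if w2 not in vocabulary:
        return 0.0                                         # word is not recognizable at all
    return (bigrams[(w1, w2)] + k) / (unigrams[w1] + k * len(vocabulary))

print(bigram_prob("de", "la"), bigram_prob("de", "hidalgo"))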
 

 
Author: V. Chapaprieta; Ernest Valveny
Title: Handwritten Digit Recognition Using Point Distribution Models    Type: Miscellaneous
Year: 2001    Publication: Proceedings of the IX Spanish Symposium on Pattern Recognition and Image Analysis
Volume: 1    Pages: 49-54
Notes: DAG    Approved: no
Call Number: DAG @ dag @ ChV2001    Serial: 83
 

 
Author: Arnau Baro; Alicia Fornes; Carles Badal
Title: Handwritten Historical Music Recognition by Sequence-to-Sequence with Attention Mechanism    Type: Conference Article
Year: 2020    Publication: 17th International Conference on Frontiers in Handwriting Recognition
Abstract: Despite decades of research in Optical Music Recognition (OMR), the recognition of old handwritten music scores remains a challenge because of the variabilities in the handwriting styles, paper degradation, lack of standard notation, etc. Therefore, the research in OMR systems adapted to the particularities of old manuscripts is crucial to accelerate the conversion of music scores existing in archives into digital libraries, fostering the dissemination and preservation of our music heritage. In this paper we explore the adaptation of sequence-to-sequence models with attention mechanism (used in translation and handwritten text recognition) and the generation of specific synthetic data for recognizing old music scores. The experimental validation demonstrates that our approach is promising, especially when compared with long short-term memory neural networks.
Address: Virtual ICFHR; September 2020
Conference: ICFHR
Notes: DAG; 600.140; 600.121    Approved: no
Call Number: Admin @ si @ BFB2020    Serial: 3448
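
To illustrate the sequence-to-sequence-with-attention component named in this record (a workshop version of the same work follows below), here is a minimal PyTorch sketch of one attention-decoder step over stand-in encoder states. The vocabulary size and dimensions are assumptions, and the image encoder, training loop, and synthetic-data generation from the paper are omitted.

import torch
import torch.nn as nn

class AttnDecoder(nn.Module):
    def __init__(self, enc_dim=256, hidden=256, vocab=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.attn = nn.Linear(enc_dim + hidden, 1)       # additive attention score
        self.gru = nn.GRU(enc_dim + hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def step(self, prev_token, h, enc_states):
        # enc_states: (batch, T, enc_dim) produced by some image encoder
        emb = self.embed(prev_token)                                       # (batch, hidden)
        scores = self.attn(torch.cat(
            [enc_states, emb.unsqueeze(1).expand(-1, enc_states.size(1), -1)], dim=-1))
        context = (torch.softmax(scores, dim=1) * enc_states).sum(dim=1)   # (batch, enc_dim)
        out, h = self.gru(torch.cat([emb, context], dim=-1).unsqueeze(1), h)
        return self.out(out.squeeze(1)), h

dec = AttnDecoder()
enc_states = torch.randn(2, 50, 256)                  # stand-in encoder outputs
h = torch.zeros(1, 2, 256)
logits, h = dec.step(torch.tensor([1, 1]), h, enc_states)
print(logits.shape)                                   # (2, 128): next music-symbol distribution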
 

 
Author: Arnau Baro; Carles Badal; Pau Torras; Alicia Fornes
Title: Handwritten Historical Music Recognition through Sequence-to-Sequence with Attention Mechanism    Type: Conference Article
Year: 2022    Publication: 3rd International Workshop on Reading Music Systems (WoRMS2021)
Pages: 55-59
Keywords: Optical Music Recognition; Digits; Image Classification
Abstract: Despite decades of research in Optical Music Recognition (OMR), the recognition of old handwritten music scores remains a challenge because of the variabilities in the handwriting styles, paper degradation, lack of standard notation, etc. Therefore, the research in OMR systems adapted to the particularities of old manuscripts is crucial to accelerate the conversion of music scores existing in archives into digital libraries, fostering the dissemination and preservation of our music heritage. In this paper we explore the adaptation of sequence-to-sequence models with attention mechanism (used in translation and handwritten text recognition) and the generation of specific synthetic data for recognizing old music scores. The experimental validation demonstrates that our approach is promising, especially when compared with long short-term memory neural networks.
Address: July 23, 2021, Alicante (Spain)
Conference: WoRMS
Notes: DAG; 600.121; 600.162; 602.230; 600.140    Approved: no
Call Number: Admin @ si @ BBT2022    Serial: 3734