Author Jaume Gibert; Ernest Valveny
  Title Graph Embedding based on Nodes Attributes Representatives and a Graph of Words Representation. Type Conference Article
  Year 2010 Publication 13th International Workshop on Structural and Syntactic Pattern Recognition and 8th International Workshop on Statistical Pattern Recognition Abbreviated Journal  
  Volume 6218 Issue Pages 223–232  
  Keywords  
  Abstract Although graph embedding has recently been used to extend statistical pattern recognition techniques to the graph domain, some existing embeddings are usually computationally expensive as they rely on classical graph-based operations. In this paper we present a new way to embed graphs into vector spaces by first encapsulating the information stored in the original graph under another graph representation, obtained by clustering the attributes of the graphs to be processed. This new representation makes the association of graphs to vectors an easy step: both the node attributes and the adjacency matrix are simply arranged in the form of vectors. To test our method, we use two different databases of graphs whose node attributes are of different nature. A comparison with a reference method shows that this new embedding is better in terms of classification rates, while being much faster.  
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor In E.R. Hancock, R.C. Wilson, T. Windeatt, I. Ulusoy and F. Escolano,  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-14979-5 Medium  
  Area Expedition Conference S+SSPR  
  Notes DAG Approved no  
  Call Number DAG @ dag @ GiV2010 Serial 1416  
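The embedding described in this abstract can be sketched as follows. This is a minimal illustration only: the nearest-representative assignment, the histogram layout, and all names are assumptions, not the paper's exact procedure.

```python
import numpy as np

def embed_graph(node_attrs, adjacency, representatives):
    """Embed a graph as a fixed-length vector (hypothetical sketch).

    node_attrs:      (n, d) array of node attribute vectors.
    adjacency:       (n, n) 0/1 adjacency matrix.
    representatives: (k, d) array of attribute cluster centres.
    Returns a vector of length k + k*k: per-representative node counts
    followed by the flattened representative-level edge counts.
    """
    # Assign every node to its nearest attribute representative.
    dists = np.linalg.norm(node_attrs[:, None, :] - representatives[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    k = len(representatives)
    # Histogram of node assignments (the "node attributes" part of the vector).
    node_hist = np.bincount(labels, minlength=k).astype(float)
    # Count edges between each pair of representative classes
    # (the "adjacency matrix" part of the vector).
    edge_hist = np.zeros((k, k))
    for i, j in zip(*np.nonzero(adjacency)):
        edge_hist[labels[i], labels[j]] += 1
    return np.concatenate([node_hist, edge_hist.ravel()])
```

With k representatives, every graph maps to the same R^(k+k^2) space regardless of its number of nodes, which is what makes statistical classifiers directly applicable.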
 

 
Author Mohamed Ilyes Lakhal; Hakan Cevikalp; Sergio Escalera
  Title CRN: End-to-end Convolutional Recurrent Network Structure Applied to Vehicle Classification Type Conference Article
  Year 2018 Publication 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications Abbreviated Journal  
  Volume 5 Issue Pages 137-144  
  Keywords Vehicle Classification; Deep Learning; End-to-end Learning  
  Abstract Vehicle type classification is considered to be a central part of Intelligent Traffic Systems. In recent years, deep learning methods have emerged as the state of the art in many computer vision tasks. In this paper, we present a novel yet simple deep learning framework for the vehicle type classification problem. We propose an end-to-end trainable system that combines a convolutional neural network for feature extraction with a recurrent neural network as a classifier. The recurrent network structure is used to handle various types of feature inputs, and at the same time can produce either a single class prediction or a set of class predictions. In order to assess the effectiveness of our solution, we have conducted a set of experiments on two public datasets, obtaining state-of-the-art results. In addition, we also report results on the newly released MIO-TCD dataset.  
  Address Funchal; Madeira; Portugal; January 2018  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference VISAPP  
  Notes HUPBA Approved no  
  Call Number Admin @ si @ LCE2018a Serial 3094  
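The convolution-for-features, recurrence-for-classification structure in this abstract can be sketched with a toy forward pass. All shapes, weights, and the single-layer RNN are illustrative assumptions, not the CRN architecture itself.

```python
import numpy as np

def conv_features(image, filters):
    """Toy convolutional feature extractor: valid 3x3 correlation + ReLU.
    Each feature map becomes one time-step vector for the recurrent part."""
    h, w = image.shape
    maps = []
    for f in filters:                      # filters: (n_f, 3, 3)
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                out[i, j] = np.sum(image[i:i+3, j:j+3] * f)
        maps.append(np.maximum(out, 0).ravel())
    return np.stack(maps)                  # (n_f, (h-2)*(w-2))

def rnn_classify(seq, Wx, Wh, Wo):
    """Plain recurrent pass over the feature sequence; the last hidden
    state is mapped to class probabilities with a softmax."""
    h = np.zeros(Wh.shape[0])
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h)
    scores = Wo @ h
    e = np.exp(scores - scores.max())
    return e / e.sum()
```

Feeding the feature maps as a sequence is what lets a single recurrent head consume a variable number of feature inputs, as the abstract describes.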
 

 
Author Eduardo Aguilar; Bhalaji Nagarajan; Rupali Khatun; Marc Bolaños; Petia Radeva
  Title Uncertainty Modeling and Deep Learning Applied to Food Image Analysis Type Conference Article
  Year 2020 Publication 13th International Joint Conference on Biomedical Engineering Systems and Technologies Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Recently, computer vision approaches, especially those assisted by deep learning techniques, have shown unexpected advancements that practically solve problems never before imagined to be automatized, such as face recognition or automated driving. However, food image recognition has received comparatively little attention in the Computer Vision community. In this project, we review the field of food image analysis and focus on how to combine it with two challenging research lines: deep learning and uncertainty modeling. After discussing our methodology to advance in this direction, we comment on the potential research, social, and economic impact of research on food image analysis.  
  Address Valletta; Malta; February 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BIODEVICES  
  Notes MILAB Approved no  
  Call Number Admin @ si @ ANK2020 Serial 3526  
 

 
Author Diego Alejandro Cheda; Daniel Ponsa; Antonio Lopez
  Title Camera Egomotion Estimation in the ADAS Context Type Conference Article
  Year 2010 Publication 13th International IEEE Annual Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 1415–1420  
  Keywords  
  Abstract Camera-based Advanced Driver Assistance Systems (ADAS) have concentrated many research efforts in the last decades. Proposals based on monocular cameras require knowledge of the camera pose with respect to the environment in order to achieve efficient and robust performance. A common assumption in such systems is to consider the road as planar, and the camera pose with respect to it as approximately known. However, in real situations, the camera pose varies over time due to the vehicle movement, the road slope, and irregularities on the road surface. Thus, the changes in camera position and orientation (i.e., the egomotion) are critical information that must be estimated at every frame to avoid poor performance. This work focuses on egomotion estimation from a monocular camera in the ADAS context. We review and compare egomotion methods with simulated and real ADAS-like sequences. Based on the results of our experiments, we show which of the considered nonlinear and linear algorithms have the best performance in this domain.  
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ CPL2010 Serial 1425  
 

 
Author M. Ivasic-Kos; M. Pobar; Jordi Gonzalez
  Title Active Player Detection in Handball Videos Using Optical Flow and STIPs Based Measures Type Conference Article
  Year 2019 Publication 13th International Conference on Signal Processing and Communication Systems Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In handball videos recorded during training, multiple players are present in the scene at the same time. Although they all might move and interact, not all players contribute to the currently relevant exercise or practice the given handball techniques. The goal of this experiment is to automatically determine the players in training footage who perform given handball techniques and are therefore considered active. It is a very challenging task, for which a precise object detector is needed that can handle cluttered scenes with poor illumination, with many players present at different sizes and distances from the camera, partially occluded, or moving fast. To determine which of the detected players are active, additional information is needed about the level of player activity. Since many handball actions are characterized by considerable changes in speed, position, and variations in the player's appearance, we propose using spatio-temporal interest points (STIPs) and optical flow (OF). Therefore, we propose an active player detection method combining the YOLO object detector with two activity measures based on STIPs and OF. The performance of the proposed method and activity measures is evaluated on a custom handball video dataset acquired during handball training lessons.  
  Address Gold Coast; Australia; December 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICSPCS2  
  Notes ISE; 600.098; 600.119 Approved no  
  Call Number Admin @ si @ IPG2019 Serial 3415  
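The combination of detections with a flow-based activity measure can be sketched as below. The mean-flow-magnitude score and the threshold rule are simplified assumptions; the paper's actual STIP- and OF-based measures are more elaborate.

```python
import numpy as np

def activity_scores(flow, boxes):
    """Mean optical-flow magnitude inside each detected player box.

    flow:  (H, W, 2) dense optical-flow field (dx, dy per pixel).
    boxes: list of (x0, y0, x1, y1) detections, e.g. from an object detector.
    Returns one activity score per box; higher means more movement.
    """
    mag = np.hypot(flow[..., 0], flow[..., 1])
    scores = []
    for x0, y0, x1, y1 in boxes:
        patch = mag[y0:y1, x0:x1]
        scores.append(float(patch.mean()) if patch.size else 0.0)
    return scores

def active_players(flow, boxes, threshold):
    """Indices of detections whose activity measure exceeds the threshold."""
    s = activity_scores(flow, boxes)
    return [i for i, v in enumerate(s) if v > threshold]
```

A player standing still yields near-zero flow inside their box and is filtered out, while a player executing a technique scores high.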
 

 
Author Roberto Morales; Juan Quispe; Eduardo Aguilar
  Title Exploring multi-food detection using deep learning-based algorithms Type Conference Article
  Year 2023 Publication 13th International Conference on Pattern Recognition Systems Abbreviated Journal  
  Volume Issue Pages 1-7  
  Keywords  
  Abstract People are becoming increasingly concerned about their diet, whether for disease prevention, medical treatment or other purposes. In meals served in restaurants, schools or public canteens, it is not easy to identify the ingredients and/or the nutritional information they contain. Currently, technological solutions based on deep learning models have facilitated the recording and tracking of food consumed based on the recognition of the main dish present in an image. Considering that there may sometimes be multiple foods served on the same plate, food analysis should be treated as a multi-class object detection problem. EfficientDet and YOLOv5 are object detection algorithms that have demonstrated high mAP and real-time performance on general domain data. However, these models have not been evaluated and compared on public food datasets. Unlike general domain objects, foods have more challenging features inherent in their nature that increase the complexity of detection. In this work, we performed a performance evaluation of EfficientDet and YOLOv5 on three public food datasets: UNIMIB2016, UECFood256 and ChileanFood64. From the results obtained, it can be seen that YOLOv5 provides a significant difference in terms of both mAP and response time compared to EfficientDet in all datasets. Furthermore, YOLOv5 outperforms the state-of-the-art on UECFood256, achieving an improvement of more than 4% in terms of mAP@.50 over the best reported result.  
  Address Guayaquil; Ecuador; July 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICPRS  
  Notes MILAB Approved no  
  Call Number Admin @ si @ MQA2023 Serial 3843  
 

 
Author Gisel Bastidas-Guacho; Patricio Moreno; Boris X. Vintimilla; Angel Sappa
  Title Application on the Loop of Multimodal Image Fusion: Trends on Deep-Learning Based Approaches Type Conference Article
  Year 2023 Publication 13th International Conference on Pattern Recognition Systems Abbreviated Journal  
  Volume 14234 Issue Pages 25–36  
  Keywords  
  Abstract Multimodal image fusion allows the combination of information from different modalities, which is useful for tasks such as object detection, edge detection, and tracking, to name a few. Using the fused representation for applications results in better task performance. There are several image fusion approaches, which have been summarized in surveys. However, the existing surveys focus on image fusion approaches where the application on the loop of multimodal image fusion is not considered. On the contrary, this study summarizes deep learning-based multimodal image fusion for computer vision (e.g., object detection) and image processing applications (e.g., semantic segmentation), that is, approaches where the application module leverages the multimodal fusion process to enhance the final result. Firstly, we introduce image fusion and the existing general frameworks for image fusion tasks such as multifocus, multiexposure and multimodal. Then, we describe the multimodal image fusion approaches. Next, we review the state-of-the-art deep learning multimodal image fusion approaches for vision applications. Finally, we conclude our survey with the trends of task-driven multimodal image fusion.  
  Address Guayaquil; Ecuador; July 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICPRS  
  Notes MSIAU Approved no  
  Call Number Admin @ si @ BMV2023 Serial 3932  
 

 
Author David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin
  Title Virtual Worlds and Active Learning for Human Detection Type Conference Article
  Year 2011 Publication 13th International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages 393-400  
  Keywords Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning  
  Abstract Image based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is a labour-intensive manual step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs as well, on real-world images, as one obtained by training on manually labelled real-world samples. To do that, we cast the problem as one of domain adaptation, assuming that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Ultimately, our human model is learnt from the combination of virtual and real world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.  
  Address Alicante, Spain  
  Corporate Author Thesis  
  Publisher ACM DL Place of Publication New York, NY, USA Editor  
  Language English Summary Language English Original Title Virtual Worlds and Active Learning for Human Detection  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-0641-6 Medium  
  Area Expedition Conference ICMI  
  Notes ADAS Approved yes  
  Call Number ADAS @ adas @ VLP2011a Serial 1683  
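The virtual-plus-real training loop in this abstract can be sketched with a toy linear classifier. The perceptron stand-in, the uncertainty-based query rule, and every name below are illustrative assumptions; they are not the paper's detector or its active learning technique.

```python
import numpy as np

def train_linear(X, y, epochs=100, lr=0.1):
    """Tiny perceptron-style trainer standing in for the human-detection classifier."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:       # misclassified -> update
                w += lr * yi * xi
    return w

def adapt(virtual_X, virtual_y, real_X, real_y, budget):
    """Train on automatically labelled virtual samples, query the `budget`
    most uncertain real-world samples (scores closest to the decision
    boundary), then retrain on the combined virtual + real set."""
    w = train_linear(virtual_X, virtual_y)
    scores = real_X @ w
    picked = np.argsort(np.abs(scores))[:budget]
    X = np.vstack([virtual_X, real_X[picked]])
    y = np.concatenate([virtual_y, real_y[picked]])
    return train_linear(X, y), picked
```

The point of the budget is that only the few queried real samples need manual labels; everything else comes labelled for free from the virtual world.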
 

 
Author Ruth Aylett; Ginevra Castellano; Bogdan Raducanu; Ana Paiva; Marc Hanheide
  Title Long-term socially perceptive and interactive robot companions: challenges and future perspectives Type Conference Article
  Year 2011 Publication 13th International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages 323-326  
  Keywords human-robot interaction, multimodal interaction, social robotics  
  Abstract This paper gives a brief overview of the challenges for multimodal perception and generation applied to robot companions located in human social environments. It reviews the current position in both perception and generation, outlines the immediate technical challenges, and goes on to consider the extra issues raised by embodiment and social context. Finally, it briefly discusses the impact of systems that must function continually over months rather than just for a few hours.  
  Address Alicante  
  Corporate Author Thesis  
  Publisher ACM Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-0641-6 Medium  
  Area Expedition Conference ICMI  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ ACR2011 Serial 1888  
 

 
Author Carlo Gatta; Simone Balocco; Francesco Ciompi; R. Hemetsberger; Oriol Rodriguez-Leor; Petia Radeva
  Title Real-time gating of IVUS sequences based on motion blur analysis: Method and quantitative validation Type Conference Article
  Year 2010 Publication 13th International Conference on Medical Image Computing and Computer-Assisted Intervention Abbreviated Journal  
  Volume II Issue Pages 59-67  
  Keywords  
  Abstract Intravascular Ultrasound (IVUS) is an image-guided technique for cardiovascular diagnosis, providing cross-sectional images of vessels. During the acquisition, the catheter is pulled back (pullback) at a constant speed in order to acquire spatially subsequent images of the artery. However, during this procedure, the twisting of the heart produces a swinging fluctuation of the probe position along the vessel axis. In this paper we propose a real-time gating algorithm based on the analysis of motion blur variations during the IVUS sequence. Quantitative tests performed on an in-vitro ground-truth database show that our method is superior to state-of-the-art algorithms in both computational speed and accuracy.  
  Address  
  Corporate Author Thesis  
  Publisher Springer-Verlag Berlin Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference MICCAI  
  Notes MILAB Approved no  
  Call Number BCNPCL @ bcnpcl @ GBC2010 Serial 1447  
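A gating scheme driven by motion blur can be sketched as below. The Laplacian-variance sharpness proxy and the local-maximum rule are simplifying assumptions for illustration, not the paper's actual blur measure.

```python
import numpy as np

def blur_measure(frame):
    """Sharpness proxy: variance of a discrete Laplacian.
    Lower values mean more motion blur, i.e. faster probe/vessel motion."""
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def gate_frames(frames):
    """Keep frames at local maxima of sharpness over the sequence,
    i.e. the instants of least cardiac motion (a rough stand-in for gating)."""
    m = [blur_measure(f) for f in frames]
    return [i for i in range(1, len(m) - 1) if m[i] > m[i - 1] and m[i] > m[i + 1]]
```

Because only a per-frame scalar is computed, this style of gating can run as the pullback is acquired, which matches the real-time claim of the abstract.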
 

 
Author Dani Rowe; Ignasi Rius; Jordi Gonzalez; Juan J. Villanueva
  Title Robust Particle Filtering for Object Tracking Type Miscellaneous
  Year 2005 Publication 13th International Conference on Image Analysis and Processing (ICIAP 2005), LNCS 3617: 1158–1165, ISBN 3-540-28869-4 Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Cagliari (Italy)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number ISE @ ise @ RRG2005e Serial 577  
 

 
Author Jon Almazan; David Fernandez; Alicia Fornes; Josep Llados; Ernest Valveny
  Title A Coarse-to-Fine Approach for Handwritten Word Spotting in Large Scale Historical Documents Collection Type Conference Article
  Year 2012 Publication 13th International Conference on Frontiers in Handwriting Recognition Abbreviated Journal  
  Volume Issue Pages 453-458  
  Keywords  
  Abstract In this paper we propose an approach for word spotting in handwritten document images. We state the problem from a focused retrieval perspective, i.e. locating instances of a query word in a large scale dataset of digitized manuscripts. We combine two approaches, namely one based on word segmentation and a segmentation-free one. The first approach uses a hashing strategy to coarsely prune word images that are unlikely to be instances of the query word. This process is fast but has low precision due to the errors introduced in the segmentation step. The regions containing candidate words are sent to the second process, based on a state-of-the-art technique from the visual object detection field. This discriminative model represents the appearance of the query word and computes a similarity score. In this way we propose a coarse-to-fine approach that achieves a compromise between efficiency and accuracy. The validation of the model is shown using a collection of old handwritten manuscripts. We observe a substantial improvement in precision over the previously proposed method, with a low increase in computational cost.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4673-2262-1 Medium  
  Area Expedition Conference ICFHR  
  Notes DAG Approved no  
  Call Number DAG @ dag @ AFF2012 Serial 1983  
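The coarse-prune-then-fine-score pattern can be sketched generically. The random-hyperplane hash and cosine scoring below are illustrative stand-ins, not the paper's hashing strategy or discriminative model.

```python
import numpy as np

def coarse_hash(desc, planes):
    """Binary hash of a word-image descriptor via random hyperplanes (LSH-style)."""
    return tuple((desc @ planes.T > 0).astype(int))

def spot(query_desc, candidates, planes, top=5):
    """Coarse step: keep only candidates whose hash matches the query's,
    cheaply pruning unlikely word images.
    Fine step: rank the survivors by cosine similarity to the query."""
    q_hash = coarse_hash(query_desc, planes)
    survivors = [i for i, d in enumerate(candidates) if coarse_hash(d, planes) == q_hash]

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    ranked = sorted(survivors, key=lambda i: -cos(query_desc, candidates[i]))
    return ranked[:top]
```

The expensive fine model only ever sees the hash-matched survivors, which is where the efficiency/accuracy compromise comes from.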
 

 
Author Marçal Rusiñol; Josep Llados
  Title The Role of the Users in Handwritten Word Spotting Applications: Query Fusion and Relevance Feedback Type Conference Article
  Year 2012 Publication 13th International Conference on Frontiers in Handwriting Recognition Abbreviated Journal  
  Volume Issue Pages 55-60  
  Keywords  
  Abstract In this paper we present the importance of including the user in the loop in a handwritten word spotting framework. Several off-the-shelf query fusion and relevance feedback strategies have been tested in the handwritten word spotting context. The increase in terms of precision when the user is included in the loop is assessed using two datasets of historical handwritten documents and a baseline word spotting approach based on a bag-of-visual-words model.  
  Address Bari, Italy  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4673-2262-1 Medium  
  Area Expedition Conference ICFHR  
  Notes DAG Approved no  
  Call Number Admin @ si @ RuL2012 Serial 2054  
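Off-the-shelf query fusion and relevance feedback of the kind this abstract tests can be sketched on bag-of-visual-words descriptors. The mean fusion and Rocchio-style update below are generic textbook strategies chosen for illustration; the paper evaluates several such strategies.

```python
import numpy as np

def fuse_queries(query_descs):
    """Query fusion: merge several query examples into one descriptor (mean)."""
    return np.mean(query_descs, axis=0)

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One round of Rocchio-style relevance feedback: move the query toward
    the user's relevant results and away from the non-relevant ones."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

def rank(query, corpus):
    """Rank corpus descriptors by descending cosine similarity to the query."""
    sims = corpus @ query / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(query) + 1e-12)
    return list(np.argsort(-sims))
```

Marking even one result as relevant reshapes the query vector, which is the mechanism behind the precision gains the abstract reports when the user is in the loop.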
 

 
Author Volkmar Frinken; Markus Baumgartner; Andreas Fischer; Horst Bunke
  Title Semi-Supervised Learning for Cursive Handwriting Recognition using Keyword Spotting Type Conference Article
  Year 2012 Publication 13th International Conference on Frontiers in Handwriting Recognition Abbreviated Journal  
  Volume Issue Pages 49-54  
  Keywords  
  Abstract State-of-the-art handwriting recognition systems are learning-based systems that require large sets of training data. The creation of training data, and consequently the creation of a well-performing recognition system, therefore requires a substantial amount of human work. This can be reduced with semi-supervised learning, which uses unlabelled text lines for training as well. Current approaches estimate the correct transcription of the unlabelled data via handwriting recognition, which is not only extremely demanding in terms of computational cost but also requires a good model of the target language. In this paper, we propose a different approach that makes use of keyword spotting, which is significantly faster and does not need any language model. In a set of experiments we demonstrate its superiority over existing approaches.  
  Address Bari, Italy  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  DOI 10.1109/ICFHR.2012.268 ISSN ISBN 978-1-4673-2262-1 Medium  
  Area Expedition Conference ICFHR  
  Notes DAG Approved no  
  Call Number Admin @ si @ FBF2012 Serial 2055  
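The semi-supervised expansion loop can be sketched as pseudo-labelling via a spotting score. The `spot_score` interface, the lexicon construction, and the threshold rule are all illustrative assumptions, not the paper's system.

```python
def expand_training_set(labelled, unlabelled, spot_score, threshold):
    """Semi-supervised expansion sketch: a keyword-spotting confidence
    function (spot_score(line, word) -> float) pseudo-labels unlabelled
    text lines with the lexicon word it spots most confidently.  Lines
    spotted above the threshold join the training set; no full recognition
    pass and no language model are needed.

    labelled:   list of (line_id, word) pairs with known transcriptions.
    unlabelled: list of line ids without transcriptions.
    """
    lexicon = {w for _, w in labelled}
    new = list(labelled)
    for line in unlabelled:
        best = max(lexicon, key=lambda w: spot_score(line, w))
        if spot_score(line, best) >= threshold:
            new.append((line, best))
    return new
```

Lines where no lexicon word is spotted confidently are simply left out, so low-quality pseudo-labels never enter training.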
 

 
Author Emanuel Indermühle; Volkmar Frinken; Horst Bunke
  Title Mode Detection in Online Handwritten Documents using BLSTM Neural Networks Type Conference Article
  Year 2012 Publication 13th International Conference on Frontiers in Handwriting Recognition Abbreviated Journal  
  Volume Issue Pages 302-307  
  Keywords  
  Abstract Mode detection in online handwritten documents refers to the process of distinguishing different types of content, such as text, formulas, diagrams, or tables, from one another. In this paper a new approach to mode detection is proposed that uses bidirectional long short-term memory (BLSTM) neural networks. The BLSTM neural network is a type of recurrent neural network that has been successfully applied in speech and handwriting recognition. In this paper we show that it has the potential to significantly outperform traditional methods for mode detection, which are usually based on stroke classification. As a further advantage over previous approaches, the proposed system is trainable and does not rely on user-defined heuristics. Moreover, it can be easily adapted to new or additional types of modes by just providing the system with new training data.  
  Address Bari, Italy  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4673-2262-1 Medium  
  Area Expedition Conference ICFHR  
  Notes DAG Approved no  
  Call Number Admin @ si @ IFB2012 Serial 2056  