Author David Geronimo; Joan Serrat; Antonio Lopez; Ramon Baldrich
Title Traffic sign recognition for computer vision project-based learning Type Journal Article
Year 2013 Publication IEEE Transactions on Education Abbreviated Journal T-EDUC
Volume 56 Issue 3 Pages 364-371
Keywords traffic signs
Abstract This paper presents a graduate course project on computer vision. The aim of the project is to detect and recognize traffic signs in video sequences recorded by an on-board vehicle camera. This is a demanding problem, given that traffic sign recognition is one of the most challenging problems for driving assistance systems. Equally, it is motivating for the students given that it is a real-life problem. Furthermore, it gives them the opportunity to appreciate the difficulty of real-world vision problems and to assess the extent to which this problem can be solved by modern computer vision and pattern classification techniques taught in the classroom. The learning objectives of the course are introduced, as are the constraints imposed on its design, such as the diversity of students' background and the amount of time they and their instructors dedicate to the course. The paper also describes the course contents, schedule, and how the project-based learning approach is applied. The outcomes of the course are discussed, including both the students' marks and their personal feedback.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0018-9359 ISBN Medium
Area Expedition Conference
Notes ADAS; CIC Approved no
Call Number Admin @ si @ GSL2013; ADAS @ adas @ Serial 2160
Permanent link to this record
 

 
Author Aura Hernandez-Sabate; Meritxell Joanpere; Nuria Gorgorio; Lluis Albarracin
Title Mathematics learning opportunities when playing a Tower Defense Game Type Journal
Year 2015 Publication International Journal of Serious Games Abbreviated Journal IJSG
Volume 2 Issue 4 Pages 57-71
Keywords Tower Defense game; learning opportunities; mathematics; problem solving; game design
Abstract A qualitative research study is presented herein with the purpose of identifying mathematics learning opportunities in students between 10 and 12 years old while playing a commercial version of a Tower Defense game. These learning opportunities are understood as mathematicisable moments of the game and involve the establishment of relationships between the game and mathematical problem solving. Based on the analysis of these mathematicisable moments, we conclude that the game can promote problem-solving processes and learning opportunities that can be associated with different mathematical contents that appear in mathematics curricula, though it seems that teachers or new game elements might be needed to facilitate these processes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.076 Approved no
Call Number Admin @ si @ HJG2015 Serial 2730
Permanent link to this record
 

 
Author Shanxin Yuan; Guillermo Garcia-Hernando; Bjorn Stenger; Gyeongsik Moon; Ju Yong Chang; Kyoung Mu Lee; Pavlo Molchanov; Jan Kautz; Sina Honari; Liuhao Ge; Junsong Yuan; Xinghao Chen; Guijin Wang; Fan Yang; Kai Akiyama; Yang Wu; Qingfu Wan; Meysam Madadi; Sergio Escalera; Shile Li; Dongheui Lee; Iason Oikonomidis; Antonis Argyros; Tae-Kyun Kim
Title Depth-Based 3D Hand Pose Estimation: From Current Achievements to Future Goals Type Conference Article
Year 2018 Publication 31st IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 2636 - 2645
Keywords Three-dimensional displays; Task analysis; Pose estimation; Two dimensional displays; Joints; Training; Solid modeling
Abstract In this paper, we strive to answer two questions: What is the current state of 3D hand pose estimation from depth images? And, what are the next challenges that need to be tackled? Following the successful Hands In the Million Challenge (HIM2017), we investigate the top 10 state-of-the-art methods on three tasks: single frame 3D pose estimation, 3D hand tracking, and hand pose estimation during object interaction. We analyze the performance of different CNN structures with regard to hand shape, joint visibility, view point and articulation distributions. Our findings include: (1) isolated 3D hand pose estimation achieves low mean errors (10 mm) in the view point range of [70, 120] degrees, but it is far from being solved for extreme view points; (2) 3D volumetric representations outperform 2D CNNs, better capturing the spatial structure of the depth data; (3) Discriminative methods still generalize poorly to unseen hand shapes; (4) While joint occlusions pose a challenge for most methods, explicit modeling of structure constraints can significantly narrow the gap between errors on visible and occluded joints.
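The per-viewpoint error analysis mentioned in the findings can be reproduced in spirit with a few lines of NumPy. The sketch below bins per-frame mean joint errors by view angle; the array shapes, the 10-degree bin width and the random toy data are illustrative assumptions, not the HIM2017 evaluation code.

```python
import numpy as np

def mean_error_by_viewpoint(pred, gt, viewpoints, bins=np.arange(0, 181, 10)):
    """Mean Euclidean joint error (mm) per camera-viewpoint bin.

    pred, gt   : (N, J, 3) arrays of predicted / ground-truth joints in mm.
    viewpoints : (N,) array of view angles in degrees.
    """
    per_frame = np.linalg.norm(pred - gt, axis=-1).mean(axis=1)  # (N,) mean error per frame
    idx = np.digitize(viewpoints, bins) - 1                      # bin index per frame
    return {f"{bins[i]}-{bins[i + 1]} deg": per_frame[idx == i].mean()
            for i in range(len(bins) - 1) if np.any(idx == i)}

# Toy usage with random data; a real evaluation would use the challenge annotations.
rng = np.random.default_rng(0)
gt = rng.normal(size=(100, 21, 3)) * 30
pred = gt + rng.normal(scale=5, size=gt.shape)
views = rng.uniform(0, 180, size=100)
print(mean_error_by_viewpoint(pred, gt, views))
```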
Address Salt Lake City; USA; June 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ YGS2018 Serial 3115
Permanent link to this record
 

 
Author Marta Diez-Ferrer; Debora Gil; Elena Carreño; Susana Padrones; Samantha Aso
Title Positive Airway Pressure-Enhanced CT to Improve Virtual Bronchoscopic Navigation Type Journal Article
Year 2017 Publication Journal of Thoracic Oncology Abbreviated Journal JTO
Volume 12 Issue 1S Pages S596-S597
Keywords Thorax CT; diagnosis; Peripheral Pulmonary Nodule
Abstract A main weakness of virtual bronchoscopic navigation (VBN) is the unsuccessful segmentation of distal branches approaching peripheral pulmonary nodules (PPN). The CT scan acquisition protocol is pivotal for a segmentation covering the utmost periphery. We hypothesize that the application of continuous positive airway pressure (CPAP) during CT acquisition could improve the visualization and segmentation of peripheral bronchi. The purpose of the present pilot study is to compare the quality of segmentations under four CT acquisition modes: inspiration (INSP), expiration (EXP) and both with CPAP (INSP-CPAP and EXP-CPAP).
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.096; 600.075; 600.145 Approved no
Call Number Admin @ si @ DGC2017a Serial 2883
Permanent link to this record
 

 
Author Anders Skaarup Johansen; Kamal Nasrollahi; Sergio Escalera; Thomas B. Moeslund
Title Who Cares about the Weather? Inferring Weather Conditions for Weather-Aware Object Detection in Thermal Images Type Journal Article
Year 2023 Publication Applied Sciences Abbreviated Journal AS
Volume 13 Issue 18 Pages
Keywords thermal; object detection; concept drift; conditioning; weather recognition
Abstract Deployments of real-world object detection systems often experience a degradation in performance over time due to concept drift. Systems that leverage thermal cameras are especially susceptible because the respective thermal signatures of objects and their surroundings are highly sensitive to environmental changes. In this study, two types of weather-aware latent conditioning methods are investigated. The proposed methods aim to guide two object detectors (YOLOv5 and Deformable DETR) to become weather-aware. This is achieved by leveraging an auxiliary branch that predicts weather-related information while conditioning intermediate layers of the object detector. While the proposed conditioning methods do not directly improve the accuracy of the baseline detectors, it can be observed that conditioned networks manage to extract a weather-related signal from the thermal images, resulting in a decreased miss rate at the cost of increased false positives. The extracted signal appears noisy and is thus challenging to regress accurately. This is most likely a result of the qualitative nature of the thermal sensor; thus, further work is needed to identify an ideal method for optimizing the conditioning branch, as well as to further improve the accuracy of the system.
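As a rough illustration of the latent-conditioning idea described in the abstract, the PyTorch sketch below shows a FiLM-style block: an auxiliary head predicts weather logits from an intermediate feature map, and an embedding of the (predicted or ground-truth) weather class scales and shifts that map. Module names, channel sizes and the number of weather classes are invented for the example; this is not the authors' YOLOv5/Deformable DETR integration.

```python
import torch
import torch.nn as nn

class WeatherConditioner(nn.Module):
    """Auxiliary weather head + feature modulation (simplified sketch)."""

    def __init__(self, channels: int, n_weather: int = 4):
        super().__init__()
        self.weather_head = nn.Sequential(              # predicts weather from features
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, n_weather),
        )
        self.gamma = nn.Embedding(n_weather, channels)  # per-class scale
        self.beta = nn.Embedding(n_weather, channels)   # per-class shift

    def forward(self, feats, weather_label=None):
        logits = self.weather_head(feats)               # auxiliary supervision target
        cls = weather_label if weather_label is not None else logits.argmax(dim=1)
        g = self.gamma(cls)[:, :, None, None]
        b = self.beta(cls)[:, :, None, None]
        return feats * (1 + g) + b, logits              # conditioned features + weather logits

# Usage: insert between two backbone stages of a detector.
x = torch.randn(2, 256, 32, 32)
cond = WeatherConditioner(channels=256)
feats, weather_logits = cond(x, weather_label=torch.tensor([0, 2]))
```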
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA Approved no
Call Number Admin @ si @ SNE2023 Serial 3983
Permanent link to this record
 

 
Author Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla
Title Multi-Image Super-Resolution for Thermal Images Type Conference Article
Year 2022 Publication 17th International Conference on Computer Vision Theory and Applications (VISAPP 2022) Abbreviated Journal
Volume 4 Issue Pages 635-642
Keywords Thermal Images; Multi-view; Multi-frame; Super-Resolution; Deep Learning; Attention Block
Abstract This paper proposes a novel CNN architecture for the multi-thermal-image super-resolution problem. In the proposed scheme, the multiple images are synthetically generated by downsampling and slightly shifting the given image; noise is also added to each of these synthesized images. The proposed architecture uses two attention-block paths to extract high-frequency details, taking advantage of the large amount of information extracted from multiple images of the same scene. Experimental results are provided, showing that the proposed scheme outperforms state-of-the-art approaches.
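The synthetic multi-image generation step the abstract describes (downsample, slightly shift, add noise) can be prototyped as below. The shift range, scale factor and noise level are arbitrary placeholder values, not the settings used in the paper.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift, zoom

def make_lr_stack(hr, n_views=4, scale=0.5, max_shift=1.0, noise_std=0.01, seed=0):
    """Generate several noisy, slightly shifted low-resolution copies of one image."""
    rng = np.random.default_rng(seed)
    stack = []
    for _ in range(n_views):
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)          # sub-pixel shift
        shifted = subpixel_shift(hr, (dy, dx), order=3, mode="reflect")
        lr = zoom(shifted, scale, order=3)                           # downsample
        lr += rng.normal(scale=noise_std, size=lr.shape)             # additive noise
        stack.append(lr)
    return np.stack(stack)

hr = np.random.rand(128, 128)      # stand-in for a high-resolution thermal frame
lrs = make_lr_stack(hr)
print(lrs.shape)                   # (4, 64, 64)
```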
Address Online; Feb 6-8, 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes MSIAU; 601.349 Approved no
Call Number Admin @ si @ RSV2022a Serial 3690
Permanent link to this record
 

 
Author Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla; Riad I. Hammoud
Title A Novel Domain Transfer-Based Approach for Unsupervised Thermal Image Super-Resolution Type Journal Article
Year 2022 Publication Sensors Abbreviated Journal SENS
Volume 22 Issue 6 Pages 2254
Keywords Thermal image super-resolution; unsupervised super-resolution; thermal images; attention module; semiregistered thermal images
Abstract This paper presents a domain transfer strategy to tackle the limitations of low-resolution thermal sensors and generate higher-resolution images of reasonable quality. The proposed technique employs a CycleGAN architecture and uses a ResNet as an encoder in the generator, along with an attention module and a novel loss function. The network is trained on a multi-resolution thermal image dataset acquired with three different thermal sensors. Benchmarking results on the 2nd CVPR-PBVS-2021 thermal image super-resolution challenge show better performance than state-of-the-art methods. The code of this work is available online.
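As a loose sketch of the generator family the abstract describes (a ResNet-style encoder plus an attention module), the PyTorch snippet below assembles residual blocks gated by SE-style channel attention and a pixel-shuffle upsampler. It is a minimal stand-in, not the authors' network: the CycleGAN training loop, the second generator, the discriminators and the novel loss are all omitted, and every layer size is an assumption.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style attention gate (a stand-in for the paper's attention module)."""
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))          # (N, C) channel weights
        return x * w[:, :, None, None]

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.att = ChannelAttention(ch)

    def forward(self, x):
        return x + self.att(self.body(x))

class TinySRGenerator(nn.Module):
    """ResNet-style encoder + attention blocks + pixel-shuffle x2 upsampling."""
    def __init__(self, ch=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)               # thermal = 1 channel
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.up = nn.Sequential(nn.Conv2d(ch, ch * 4, 3, padding=1),
                                nn.PixelShuffle(2), nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        return self.up(self.body(self.head(x)))

print(TinySRGenerator()(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 1, 64, 64])
```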
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MSIAU Approved no
Call Number Admin @ si @ RSV2022b Serial 3688
Permanent link to this record
 

 
Author Oriol Pujol; Petia Radeva
Title Texture Segmentation by Statistical Deformable Models Type Journal
Year 2004 Publication International Journal of Image and Graphics Abbreviated Journal IJIG
Volume 4 Issue 3 Pages 433-452
Keywords Texture segmentation; parametric active contours; statistical snakes
Abstract Deformable models have gained much popularity due to their ability to include high-level knowledge of the application domain into low-level image processing. Still, most proposed active contour models do not sufficiently profit from the application information and are too generalized, leading to non-optimal final results of segmentation, tracking or 3D reconstruction processes. In this paper we propose a new deformable model defined in a statistical framework to segment objects of natural scenes. We perform supervised learning of the local appearance of the textured objects and construct a feature space using a set of co-occurrence matrix measures. Linear Discriminant Analysis allows us to obtain an optimal reduced feature space where a mixture model is applied to construct a likelihood map. Instead of using a heuristic potential field, our active model is deformed on a regularized version of the likelihood map in order to segment objects characterized by the same texture pattern. Tests on synthetic images, natural scenes and medical images show the advantages of our statistical deformable model.
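A condensed, hedged sketch of the pipeline the abstract outlines: grey-level co-occurrence features per patch, an LDA projection, and a Gaussian mixture whose log-likelihood scores form the map on which an active contour would then be deformed. Patch size, GLCM settings and the omission of the final snake step are simplifications relative to the paper; inputs are assumed to be uint8 grayscale.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

PROPS = ("contrast", "homogeneity", "energy", "correlation")

def glcm_features(patch):
    """Co-occurrence statistics of one uint8 grayscale patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean() for p in PROPS])

def likelihood_map(image, train_patches, train_labels, target_label, patch=16):
    """Log-likelihood of each image block under a GMM fitted to the target texture."""
    train_labels = np.asarray(train_labels)
    X = np.array([glcm_features(p) for p in train_patches])
    lda = LinearDiscriminantAnalysis(n_components=1).fit(X, train_labels)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(
        lda.transform(X[train_labels == target_label]))
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = gmm.score_samples(lda.transform([glcm_features(block)]))[0]
    return out  # regularize (e.g. Gaussian smoothing) before deforming a snake on it
```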
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB;HuPBA Approved no
Call Number BCNPCL @ bcnpcl @ PuR2004a Serial 505
Permanent link to this record
 

 
Author Bojana Gajic; Eduard Vazquez; Ramon Baldrich
Title Evaluation of Deep Image Descriptors for Texture Retrieval Type Conference Article
Year 2017 Publication Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) Abbreviated Journal
Volume Issue Pages 251-257
Keywords Texture Representation; Texture Retrieval; Convolutional Neural Networks; Psychophysical Evaluation
Abstract The increasing complexity learnt in the layers of a Convolutional Neural Network has proven to be of great help for the task of classification. The topic has received great attention in recently published literature. Nonetheless, just a handful of works study low-level representations, commonly associated with lower layers. In this paper, we explore recent findings which conclude, counterintuitively, that the last layer of the VGG convolutional network is the best at describing a low-level property such as texture. To shed some light on this issue, we propose a psychophysical experiment to evaluate the adequacy of different layers of the VGG network for texture retrieval. The results obtained suggest that, whereas the last convolutional layer is a good choice for a specific classification task, it might not be the best choice as a texture descriptor, showing very poor performance on texture retrieval. Intermediate layers show the best performance, combining basic filters, as in the primary visual cortex, with a degree of higher-level information to describe more complex textures.
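A minimal way to compare VGG layers as texture descriptors, in the spirit of the evaluation above: pool activations from a chosen convolutional layer and rank a gallery by cosine similarity. The layer indices, the global-average pooling and the random tensors are illustrative assumptions, not the paper's psychophysical protocol.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Indices into vgg16.features roughly corresponding to the last ReLU of conv blocks 1-5.
LAYER_IDS = {"conv1": 3, "conv2": 8, "conv3": 15, "conv4": 22, "conv5": 29}

# Downloads pretrained weights on first use; pass weights=None for a random-weight dry run.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

@torch.no_grad()
def describe(images, layer="conv3"):
    """Global-average-pooled activations of one VGG layer as texture descriptors."""
    feats = vgg[: LAYER_IDS[layer] + 1](images)        # (N, C, H, W)
    return F.normalize(feats.mean(dim=(2, 3)), dim=1)  # (N, C), unit length

def retrieve(query, gallery, layer="conv3"):
    """Indices of gallery images sorted by cosine similarity to the query."""
    q, g = describe(query, layer), describe(gallery, layer)
    return (g @ q.T).squeeze(1).argsort(descending=True)

# Usage with random tensors standing in for normalized texture crops.
query, gallery = torch.randn(1, 3, 224, 224), torch.randn(20, 3, 224, 224)
print(retrieve(query, gallery, layer="conv3")[:5])
```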
Address Porto, Portugal; 27 February – 1 March 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISIGRAPP
Notes CIC; 600.087 Approved no
Call Number Admin @ si @ Serial 3710
Permanent link to this record
 

 
Author Antonio Clavelli; Dimosthenis Karatzas; Josep Llados; Mario Ferraro; Giuseppe Boccignone
Title Towards Modelling an Attention-Based Text Localization Process Type Conference Article
Year 2013 Publication 6th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 7887 Issue Pages 296-303
Keywords text localization; visual attention; eye guidance
Abstract This note introduces a visual attention model of text localization in real-world scenes. The core of the model, built upon the proto-object concept, is discussed. It is shown how such a dynamic mid-level representation of the scene can be derived in the framework of an action-perception loop engaging salience, text information value computation, and eye guidance mechanisms. Preliminary results that compare model-generated scanpaths with those eye-tracked from human subjects are presented.
Address Madeira; Portugal; June 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-38627-5 Medium
Area Expedition Conference IbPRIA
Notes DAG Approved no
Call Number Admin @ si @ CKL2013 Serial 2291
Permanent link to this record
 

 
Author Christophe Rigaud; Dimosthenis Karatzas; Joost Van de Weijer; Jean-Christophe Burie; Jean-Marc Ogier
Title Automatic text localisation in scanned comic books Type Conference Article
Year 2013 Publication Proceedings of the International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume Issue Pages 814-819
Keywords Text localization; comics; text/graphic separation; complex background; unstructured document
Abstract Comic books constitute an important cultural heritage asset in many countries. Digitization combined with subsequent document understanding enables direct content-based search, as opposed to metadata-only search (e.g. album title or author name). Few studies have been done in this direction. In this work we detail a novel approach for automatic text localization in scanned comic book pages, an essential step towards fully automatic comic book understanding. We focus on speech text, as it is semantically important and represents the majority of the text present in comics. The approach is compared with existing text localization methods found in the literature and results are presented.
Address Barcelona; February 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes DAG; CIC; 600.056 Approved no
Call Number Admin @ si @ RKW2013b Serial 2261
Permanent link to this record
 

 
Author David Fernandez; Josep Llados; Alicia Fornes
Title A graph-based approach for segmenting touching lines in historical handwritten documents Type Journal Article
Year 2014 Publication International Journal on Document Analysis and Recognition Abbreviated Journal IJDAR
Volume 17 Issue 3 Pages 293-312
Keywords Text line segmentation; Handwritten documents; Document image processing; Historical document analysis
Abstract Text line segmentation in handwritten documents is an important task in the recognition of historical documents. Handwritten document images contain text lines with multiple orientations, touching and overlapping characters between consecutive text lines and different document structures, making line segmentation a difficult task. In this paper, we present a new approach for handwritten text line segmentation solving the problems of touching components, curvilinear text lines and horizontally overlapping components. The proposed algorithm formulates line segmentation as finding the central path in the area between two consecutive lines. This is solved as a graph traversal problem. A graph is constructed using the skeleton of the image. Then, a path-finding algorithm is used to find the optimum path between text lines. The proposed algorithm has been evaluated on a comprehensive dataset consisting of five databases: ICDAR2009, ICDAR2013, UMD, the George Washington and the Barcelona Marriages Database. The proposed method outperforms the state-of-the-art considering the different types and difficulties of the benchmarking data.
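The central idea, finding a minimum-cost path through the gap between two consecutive text lines, can be prototyped as a dynamic program on a pixel-grid graph; this is a simplification of the paper's skeleton-based graph, with ink pixels given a high traversal cost. The band bounds and cost weights are arbitrary choices for the sketch.

```python
import numpy as np

def separating_path(binary, band):
    """Cheapest left-to-right path through a horizontal band of a binary page image.

    binary : 2D array, True where there is ink.
    band   : (row_start, row_end) bounding the gap between two consecutive text lines.
    Returns one row index per column (the separating curve).
    """
    cost = binary[band[0]:band[1], :].astype(float) * 1000.0 + 1.0  # ink is expensive
    h, w = cost.shape
    acc = cost.copy()                       # accumulated cost, filled column by column
    back = np.zeros((h, w), dtype=int)      # best predecessor row for each cell
    for x in range(1, w):
        for y in range(h):
            lo, hi = max(0, y - 1), min(h, y + 2)      # 8-connected moves: up / straight / down
            prev = np.argmin(acc[lo:hi, x - 1]) + lo
            back[y, x] = prev
            acc[y, x] = cost[y, x] + acc[prev, x - 1]
    path = np.empty(w, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for x in range(w - 1, 0, -1):           # backtrack the optimal path
        path[x - 1] = back[path[x], x]
    return path + band[0]
```

In the paper the graph is built on the image skeleton rather than on a dense pixel grid, which makes the search much sparser.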
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1433-2833 ISBN Medium
Area Expedition Conference
Notes DAG; 600.056; 600.061; 602.006; 600.077 Approved no
Call Number Admin @ si @ FLF2014 Serial 2459
Permanent link to this record
 

 
Author David Aldavert; Marçal Rusiñol
Title Manuscript text line detection and segmentation using second-order derivatives analysis Type Conference Article
Year 2018 Publication 13th IAPR International Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 293 - 298
Keywords text line detection; text line segmentation; text region detection; second-order derivatives
Abstract In this paper, we explore the use of second-order derivatives to detect text lines in handwritten document images. Taking advantage of the fact that the second derivative gives a minimum response when a dark linear element over a bright background has the same orientation as the filter, we use this operator to create a map with the local orientation and strength of putative text lines in the document. Then, we detect line segments by selecting and merging the filter responses that have a similar orientation and scale. Finally, text lines are found by merging the segments that are within the same text region. The proposed segmentation algorithm is learning-free while showing a performance similar to state-of-the-art methods on publicly available datasets.
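The oriented second-derivative response map the abstract relies on can be built from Gaussian-smoothed Hessian components: the second directional derivative along an angle theta is Ixx·cos²θ + 2·Ixy·cosθ·sinθ + Iyy·sin²θ. The sketch below scans a set of orientations and keeps, per pixel, the strongest response measured across the candidate line direction together with that direction; the filter scale, the number of angles and the sign convention (dark ink on a bright background) are assumptions for illustration and may differ from the paper's filter design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def text_line_map(gray, sigma=4.0, n_angles=12):
    """Per-pixel strength and orientation of dark elongated structures (candidate text lines).

    gray : 2D float array, dark ink on a bright background.
    Returns (strength, theta): strength is the second derivative measured across the best
    line orientation theta (radians); large values indicate a likely line at that angle.
    """
    # Gaussian-smoothed Hessian components (axis 0 = rows/y, axis 1 = cols/x).
    ixx = gaussian_filter(gray, sigma, order=(0, 2))
    iyy = gaussian_filter(gray, sigma, order=(2, 0))
    ixy = gaussian_filter(gray, sigma, order=(1, 1))
    strength = np.full(gray.shape, -np.inf)
    theta = np.zeros(gray.shape)
    for t in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        c, s = np.cos(t + np.pi / 2), np.sin(t + np.pi / 2)   # direction across the line
        resp = ixx * c * c + 2.0 * ixy * c * s + iyy * s * s  # 2nd directional derivative
        better = resp > strength
        strength[better], theta[better] = resp[better], t
    return strength, theta
```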
Address Viena; Austria; April 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.084; 600.129; 302.065; 600.121 Approved no
Call Number Admin @ si @ AlR2018a Serial 3104
Permanent link to this record
 

 
Author Oscar Argudo; Marc Comino; Antonio Chica; Carlos Andujar; Felipe Lumbreras
Title Segmentation of aerial images for plausible detail synthesis Type Journal Article
Year 2018 Publication Computers & Graphics Abbreviated Journal CG
Volume 71 Issue Pages 23-34
Keywords Terrain editing; Detail synthesis; Vegetation synthesis; Terrain rendering; Image segmentation
Abstract The visual enrichment of digital terrain models with plausible synthetic detail requires the segmentation of aerial images into a suitable collection of categories. In this paper we present a complete pipeline for segmenting high-resolution aerial images into a user-defined set of categories distinguishing e.g. terrain, sand, snow, water, and different types of vegetation. This segmentation-for-synthesis problem implies that per-pixel categories must be established according to the algorithms chosen for rendering the synthetic detail. This precludes the definition of a universal set of labels and hinders the construction of large training sets. Since artists might choose to add new categories on the fly, the whole pipeline must be robust against unbalanced datasets, and fast on both training and inference. Under these constraints, we analyze the contribution of common per-pixel descriptors, and compare the performance of state-of-the-art supervised learning algorithms. We report the findings of two user studies. The first one was conducted to analyze human accuracy when manually labeling aerial images. The second user study compares detailed terrains built using different segmentation strategies, including official land cover maps. These studies demonstrate that our approach can be used to turn digital elevation models into fully-featured, detailed terrains with minimal authoring efforts.
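A bare-bones version of the per-pixel descriptor plus supervised classifier stage discussed above, using raw colour and multi-scale Gaussian-smoothed luminance as features and a random forest. The feature set and classifier settings are placeholders, not the combination the study evaluates or recommends.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb):
    """Stack per-pixel descriptors: colour channels plus multi-scale smoothed luminance."""
    gray = rgb.mean(axis=2)
    chans = [rgb[..., i] for i in range(3)]
    chans += [gaussian_filter(gray, s) for s in (1, 2, 4, 8)]
    return np.stack(chans, axis=-1).reshape(-1, len(chans))

def train_and_segment(train_img, train_labels, test_img):
    """train_labels: per-pixel integer category map with the same H x W as train_img."""
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(pixel_features(train_img), train_labels.ravel())
    pred = clf.predict(pixel_features(test_img))
    return pred.reshape(test_img.shape[:2])

# Toy usage with random data; real inputs would be orthophoto tiles and label maps.
rng = np.random.default_rng(1)
img = rng.random((64, 64, 3))
labels = (img[..., 1] > 0.5).astype(int)
print(train_and_segment(img, labels, rng.random((64, 64, 3))).shape)
```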
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0097-8493 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.086; 600.118 Approved no
Call Number Admin @ si @ ACC2018 Serial 3147
Permanent link to this record
 

 
Author Estefania Talavera; Mariella Dimiccoli; Marc Bolaños; Maedeh Aghaei; Petia Radeva
Title R-clustering for egocentric video segmentation Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference , ibPRIA 2015 Abbreviated Journal
Volume 9117 Issue Pages 327-336
Keywords Temporal video segmentation; Egocentric videos; Clustering
Abstract In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean-change detector and agglomerative clustering (AC) within an energy-minimization framework. Given the tendency of most AC methods to oversegment video sequences when clustering their frames, we combine the clustering with a concept drift detection technique (ADWIN) that has rigorous performance guarantees. ADWIN serves as a statistical upper bound for the clustering-based video segmentation. We integrate both techniques in an energy-minimization framework that serves to disambiguate the decisions of both techniques and to complete the segmentation, taking into account the temporal continuity of the video frame descriptors. We present experiments over egocentric sets of more than 13,000 images acquired with different wearable cameras, showing that our method outperforms state-of-the-art clustering methods.
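A stripped-down sketch of the combination the abstract describes: agglomerative clustering over frame descriptors proposes segment boundaries, and a simple sliding-window mean-change test (standing in here for ADWIN, whose API differs between libraries) keeps only the boundaries supported by a statistically visible change. The energy-minimization fusion and the temporal constraints of the actual method are omitted; the window size and threshold are arbitrary.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def mean_change_points(signal, win=15, thresh=2.0):
    """Indices where the windowed mean shifts by more than `thresh` pooled std devs."""
    changes = []
    for t in range(win, len(signal) - win):
        a, b = signal[t - win:t], signal[t:t + win]
        pooled = np.sqrt((a.var() + b.var()) / 2) + 1e-8
        if abs(a.mean() - b.mean()) / pooled > thresh:
            changes.append(t)
    return np.array(changes)

def segment_video(descriptors, n_segments=10, win=15):
    """descriptors: (n_frames, d) array of per-frame features, in temporal order."""
    labels = AgglomerativeClustering(n_clusters=n_segments).fit_predict(descriptors)
    cluster_bounds = np.where(np.diff(labels) != 0)[0] + 1        # AC boundary proposals
    stat_bounds = mean_change_points(descriptors.mean(axis=1), win=win)
    # Keep AC boundaries that lie close to a statistically detected mean change.
    keep = [b for b in cluster_bounds
            if stat_bounds.size and np.min(np.abs(stat_bounds - b)) <= win]
    return np.array(keep)

# Toy usage: 200 frames of 16-D descriptors with an artificial change at frame 100.
rng = np.random.default_rng(0)
desc = np.vstack([rng.normal(0, 1, (100, 16)), rng.normal(3, 1, (100, 16))])
print(segment_video(desc, n_segments=4))
```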
Address Santiago de Compostela; Spain; June 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium
Area Expedition Conference IbPRIA
Notes MILAB Approved no
Call Number Admin @ si @ TDB2015 Serial 2597
Permanent link to this record