Author Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas
Title (down) Cutting Sayre's Knot: Reading Scene Text without Segmentation. Application to Utility Meters Type Conference Article
Year 2018 Publication 13th IAPR International Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 97-102
Keywords Robust Reading; End-to-end Systems; CNN; Utility Meters
Abstract In this paper we present a segmentation-free system for reading text in natural scenes. A CNN architecture is trained in an end-to-end manner, and is able to directly output readings without any explicit text localization step. In order to validate our proposal, we focus on the specific case of reading utility meters. We present our results in a large dataset of images acquired by different users and devices, so text appears in any location, with different sizes, fonts and lengths, and the images present several distortions such as dirt, illumination highlights or blur.
Address Vienna; Austria; April 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.084; 600.121; 600.129 Approved no
Call Number Admin @ si @ GRK2018 Serial 3102
Permanent link to this record
 

 
Author Debora Gil; Petia Radeva
Title (down) Curvature Vector Flow to Assure Convergent Deformable Models for Shape Modelling Type Book Chapter
Year 2003 Publication Energy Minimization Methods In Computer Vision And Pattern Recognition Abbreviated Journal LNCS
Volume 2683 Issue Pages 357-372
Keywords Initial condition; Convex shape; Non convex analysis; Increase; Segmentation; Gradient; Standard; Standards; Concave shape; Flow models; Tracking; Edge detection; Curvature
Abstract Poor convergence to concave shapes is a main limitation of snakes as a standard segmentation and shape modelling technique. The gradient of the external energy of the snake represents a force that pushes the snake into concave regions, as its internal energy increases when new inflexion points are created. In spite of the improvement of the external energy by the gradient vector flow technique, highly non-convex shapes still cannot be obtained. In the present paper, we develop a new external energy based on the geometry of the curve to be modelled. By tracking back the deformation of a curve that evolves by minimum curvature flow, we construct a distance map that encapsulates the natural way of adapting to non-convex shapes. The gradient of this map, which we call curvature vector flow (CVF), is capable of attracting a snake towards any contour, whatever its geometry. Our experiments show that any initial snake condition converges to the curve to be modelled in optimal time.
Address
Corporate Author Thesis
Publisher Springer, Berlin Place of Publication Lisbon, PORTUGAL Editor Springer, B.
Language Summary Language Original Title
Series Editor Series Title Lecture Notes in Computer Science Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 3-540-40498-8 Medium
Area Expedition Conference
Notes IAM;MILAB Approved no
Call Number IAM @ iam @ GIR2003b Serial 1535
 

 
Author Debora Gil; Petia Radeva
Title (down) Curvature based Distance Maps Type Report
Year 2003 Publication CVC Technical Report Abbreviated Journal
Volume Issue 70 Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Computer Vision Center Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM;MILAB Approved no
Call Number IAM @ iam @ GIR2003a Serial 1534
 

 
Author Iban Berganzo-Besga; Hector A. Orengo; Felipe Lumbreras; Aftab Alam; Rosie Campbell; Petrus J Gerrits; Jonas Gregorio de Souza; Afifa Khan; Maria Suarez Moreno; Jack Tomaney; Rebecca C Roberts; Cameron A Petrie
Title (down) Curriculum learning-based strategy for low-density archaeological mound detection from historical maps in India and Pakistan Type Journal Article
Year 2023 Publication Scientific Reports Abbreviated Journal ScR
Volume 13 Issue Pages 11257
Keywords
Abstract This paper presents two algorithms for the large-scale automatic detection and instance segmentation of potential archaeological mounds on historical maps. Historical maps present a unique source of information for the reconstruction of ancient landscapes. The last 100 years have seen unprecedented landscape modifications with the introduction and large-scale implementation of mechanised agriculture, channel-based irrigation schemes, and urban expansion to name but a few. Historical maps offer a window onto disappearing landscapes where many historical and archaeological elements that no longer exist today are depicted. The algorithms focus on the detection and shape extraction of mound features with high probability of being archaeological settlements, mounds being one of the most commonly documented archaeological features to be found in the Survey of India historical map series, although not necessarily recognised as such at the time of surveying. Mound features with high archaeological potential are most commonly depicted through hachures or contour-equivalent form-lines, therefore, an algorithm has been designed to detect each of those features. Our proposed approach addresses two of the most common issues in archaeological automated survey, the low-density of archaeological features to be detected, and the small amount of training data available. It has been applied to all types of maps available of the historic 1″ to 1-mile series, thus increasing the complexity of the detection. Moreover, the inclusion of synthetic data, along with a Curriculum Learning strategy, has allowed the algorithm to better understand what the mound features look like. Likewise, a series of filters based on topographic setting, form, and size have been applied to improve the accuracy of the models. 
The resulting algorithms have a recall value of 52.61% and a precision of 82.31% for the hachure mounds, and a recall value of 70.80% and a precision of 70.29% for the form-line mounds, which allowed the detection of nearly 6000 mound features over an area of 470,500 km2, the largest such approach to have ever been applied. If we restrict our focus to the maps most similar to those used in the algorithm training, we reach recall values greater than 60% and precision values greater than 90%. This approach has shown the potential to implement an adaptive algorithm that allows, after a small amount of retraining with data detected from a new map, a better general mound feature detection in the same map.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MSIAU Approved no
Call Number Admin @ si @ BOL2023 Serial 3976
 

 
Author Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
Title (down) Current Challenges on Polyp Detection in Colonoscopy Videos: From Region Segmentation to Region Classification. A Pattern Recognition-based Approach Type Conference Article
Year 2011 Publication 2nd International Workshop on Medical Image Analysis and Description for Diagnosis Systems Abbreviated Journal
Volume Issue Pages 62-71
Keywords Medical Imaging; Colonoscopy; Pattern Recognition; Segmentation; Polyp Detection; Region Description; Machine Learning; Real-time
Abstract In this paper we present our approach to real-time polyp detection in colonoscopy videos. Our method consists of three stages: Image Segmentation, Region Description and Image Classification. Taking into account the constraints of our project, we introduce our segmentation system, which is based on a model of polyp appearance that we have defined after observing real videos from colonoscopy procedures. The output of this stage will ideally be a small number of regions, one of which should cover the whole polyp region (if there is one in the image). These regions will be described in terms of features and, as a result of a machine learning scheme, classified based on the values they take for the several features used in their description. Although we are still in the early stages of the project, we present some preliminary segmentation results that indicate we are moving in the right direction.
Address Rome, Italy
Corporate Author Thesis
Publisher SciTePress Place of Publication Editor Djemal, Khalifa
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area 800 Expedition Conference MIAD
Notes MV;SIAI Approved no
Call Number IAM @ iam @ BSV2011a Serial 1695
 

 
Author Robert Benavente; Laura Igual; Fernando Vilariño
Title (down) Current Challenges in Computer Vision Type Book Whole
Year 2008 Publication Proceedings of the Third Internal Workshop Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-936529-0-6 Medium
Area Expedition Conference CVCRD
Notes MILAB;CIC;SIAI Approved no
Call Number BCNPCL @ bcnpcl @ BIV2008 Serial 1110
 

 
Author Md. Mostafa Kamal Sarker; Mohammed Jabreel; Hatem A. Rashwan; Syeda Furruka Banu; Antonio Moreno; Petia Radeva; Domenec Puig
Title (down) CuisineNet: Food Attributes Classification using Multi-scale Convolution Network. Type Miscellaneous
Year 2018 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Diversity of food and its attributes represents the culinary habits of peoples from different countries. Thus, this paper addresses the problem of identifying the food culture of people around the world and its flavor by classifying two main food attributes, cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used for weighting the feature results from different scales. In addition, a joint loss function based on Negative Log Likelihood (NLL) is used to fit the model probability to multi-labeled classes for the multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, so-called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 score on the validation and test sets, outperforming the state-of-the-art models.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ KJR2018 Serial 3235
 

 
Author Md. Mostafa Kamal Sarker; Mohammed Jabreel; Hatem A. Rashwan; Syeda Furruka Banu; Petia Radeva; Domenec Puig
Title (down) CuisineNet: Food Attributes Classification using Multi-scale Convolution Network Type Conference Article
Year 2018 Publication 21st International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume Issue Pages 365-372
Keywords
Abstract Diversity of food and its attributes represents the culinary habits of peoples from different countries. Thus, this paper addresses the problem of identifying the food culture of people around the world and its flavor by classifying two main food attributes, cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used for weighting the feature results from different scales. In addition, a joint loss function based on Negative Log Likelihood (NLL) is used to fit the model probability to multi-labeled classes for the multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, so-called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 score on the validation and test sets, outperforming the state-of-the-art models.
Address Roses; Catalonia; October 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIA
Notes MILAB; no menciona Approved no
Call Number Admin @ si @ SJR2018 Serial 3113
 

 
Author O. Rodriguez; J. Mauri; E Fernandez-Nofrerias; J. Lopez; A. Tovar; V. Valle; David Rotger; Petia Radeva
Title (down) Cuantificación tridimensional de la longitud de segmentos coronarios a partir de secuencias de ecografía intracoronaria [Three-dimensional quantification of coronary segment length from intracoronary ultrasound sequences] Type Journal
Year 2003 Publication Revista Española de Cardiología (IF: 0.959), 56(2), Congreso de las Enfermedades Cardiovasculares Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Sevilla (Spain)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number BCNPCL @ bcnpcl @ RMF2003g Serial 414
 

 
Author Marwa Dhiaf; Mohamed Ali Souibgui; Kai Wang; Yuyang Liu; Yousri Kessentini; Alicia Fornes; Ahmed Cheikh Rouhou
Title (down) CSSL-MHTR: Continual Self-Supervised Learning for Scalable Multi-script Handwritten Text Recognition Type Miscellaneous
Year 2023 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Self-supervised learning has recently emerged as a strong alternative in document analysis. These approaches are now capable of learning high-quality image representations and overcoming the limitations of supervised methods, which require a large amount of labeled data. However, these methods are unable to capture new knowledge in an incremental fashion, where data is presented to the model sequentially, which is closer to the realistic scenario. In this paper, we explore the potential of continual self-supervised learning to alleviate the catastrophic forgetting problem in handwritten text recognition, as an example of sequence recognition. Our method consists of adding intermediate layers called adapters for each task, and efficiently distilling knowledge from the previous model while learning the current task. Our proposed framework is efficient in both computation and memory complexity. To demonstrate its effectiveness, we evaluate our method by transferring the learned model to diverse text recognition downstream tasks, including Latin and non-Latin scripts. As far as we know, this is the first application of continual self-supervised learning to handwritten text recognition. We attain state-of-the-art performance on English, Italian and Russian scripts, whilst adding only a few parameters per task. The code and trained models will be publicly available.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG Approved no
Call Number Admin @ si @ DSW2023 Serial 3851
 

 
Author Reuben Dorent; Aaron Kujawa; Marina Ivory; Spyridon Bakas; Nikola Rieke; Samuel Joutard; Ben Glocker; Jorge Cardoso; Marc Modat; Kayhan Batmanghelich; Arseniy Belkov; Maria Baldeon Calisto; Jae Won Choi; Benoit M. Dawant; Hexin Dong; Sergio Escalera; Yubo Fan; Lasse Hansen; Mattias P. Heinrich; Smriti Joshi; Victoriya Kashtanova; Hyeon Gyu Kim; Satoshi Kondo; Christian N. Kruse; Susana K. Lai-Yuen; Hao Li; Han Liu; Buntheng Ly; Ipek Oguz; Hyungseob Shin; Boris Shirokikh; Zixian Su; Guotai Wang; Jianghao Wu; Yanwu Xu; Kai Yao; Li Zhang; Sebastien Ourselin
Title (down) CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation Type Journal Article
Year 2023 Publication Medical Image Analysis Abbreviated Journal MIA
Volume 83 Issue Pages 102628
Keywords Domain Adaptation; Segmentation; Vestibular Schwannoma
Abstract Domain Adaptation (DA) has recently raised strong interests in the medical imaging community. While a large variety of DA techniques has been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA. The challenge's goal is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are performed using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore, we created an unsupervised cross-modality segmentation benchmark. The training set provides annotated ceT1 (N=105) and unpaired non-annotated hrT2 (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 as provided in the testing set (N=137). A total of 16 teams submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice – VS:88.4%; Cochleas:85.7%) and close to full supervision (median Dice – VS:92.5%; Cochleas:87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source image.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA Approved no
Call Number Admin @ si @ DKI2023 Serial 3706
 

 
Author Naveen Onkarappa; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Angel Sappa
Title (down) Cross-spectral Stereo Correspondence using Dense Flow Fields Type Conference Article
Year 2014 Publication 9th International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume 3 Issue Pages 613-617
Keywords Cross-spectral Stereo Correspondence; Dense Optical Flow; Infrared and Visible Spectrum
Abstract This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the usage of a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained showing the validity of the proposed approach.
Address Lisboa; Portugal; January 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes ADAS; 600.055; 600.076 Approved no
Call Number Admin @ si @ OAV2014 Serial 2477
 

 
Author Cristhian A. Aguilera-Carrasco; Angel Sappa; Cristhian Aguilera; Ricardo Toledo
Title (down) Cross-Spectral Local Descriptors via Quadruplet Network Type Journal Article
Year 2017 Publication Sensors Abbreviated Journal SENS
Volume 17 Issue 4 Pages 873
Keywords
Abstract This paper presents a novel CNN-based architecture, referred to as Q-Net, to learn local feature descriptors that are useful for matching image patches from two different spectral bands. Given correctly matched and non-matching cross-spectral image pairs, a quadruplet network is trained to map input image patches to a common Euclidean space, regardless of the input spectral band. Our approach is inspired by the recent success of triplet networks in the visible spectrum, but adapted for cross-spectral scenarios, where, for each matching pair, there are always two possible non-matching patches: one for each spectrum. Experimental evaluations on a public cross-spectral VIS-NIR dataset shows that the proposed approach improves the state-of-the-art. Moreover, the proposed technique can also be used in mono-spectral settings, obtaining a similar performance to triplet network descriptors, but requiring less training data.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.086; 600.118 Approved no
Call Number Admin @ si @ ASA2017 Serial 2914
 

 
Author M. Cruz; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa
Title (down) Cross-spectral image registration and fusion: an evaluation study Type Conference Article
Year 2015 Publication 2nd International Conference on Machine Vision and Machine Learning Abbreviated Journal
Volume Issue Pages
Keywords multispectral imaging; image registration; data fusion; infrared and visible spectra
Abstract This paper presents a preliminary study on the registration and fusion of cross-spectral imaging. The objective is to evaluate the validity of widely used computer vision approaches when they are applied at different spectral bands. In particular, we are interested in merging images from the infrared (both long wave infrared: LWIR and near infrared: NIR) and visible spectrum (VS). Experimental results with different data sets are presented.
Address Barcelona; July 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MVML
Notes ADAS; 600.076 Approved no
Call Number Admin @ si @ CAV2015 Serial 2629
 

 
Author Michael Teutsch; Angel Sappa; Riad I. Hammoud
Title (down) Cross-Spectral Image Processing Type Book Chapter
Year 2022 Publication Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision Abbreviated Journal
Volume Issue Pages 23-34
Keywords
Abstract Although this book is on IR computer vision and its main focus lies on IR image and video processing and analysis, a special attention is dedicated to cross-spectral image processing due to the increasing number of publications and applications in this domain. In these cross-spectral frameworks, IR information is used together with information from other spectral bands to tackle some specific problems by developing more robust solutions. Tasks considered for cross-spectral processing are for instance dehazing, segmentation, vegetation index estimation, or face recognition. This increasing number of applications is motivated by cross- and multi-spectral camera setups available already on the market like for example smartphones, remote sensing multispectral cameras, or multi-spectral cameras for automotive systems or drones. In this chapter, different cross-spectral image processing techniques will be reviewed together with possible applications. Initially, image registration approaches for the cross-spectral case are reviewed: the registration stage is the first image processing task, which is needed to align images acquired by different sensors within the same reference coordinate system. Then, recent cross-spectral image colorization approaches, which are intended to colorize infrared images for different applications are presented. Finally, the cross-spectral image enhancement problem is tackled by including guided super resolution techniques, image dehazing approaches, cross-spectral filtering and edge detection. Figure 3.1 illustrates cross-spectral image processing stages as well as their possible connections. Table 3.1 presents some of the available public cross-spectral datasets generally used as reference data to evaluate cross-spectral image registration, colorization, enhancement, or exploitation results.
Address
Corporate Author Thesis
Publisher Springer Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title SLCV
Series Volume Series Issue Edition
ISSN ISBN 978-3-031-00698-2 Medium
Area Expedition Conference
Notes MSIAU; MACO Approved no
Call Number Admin @ si @ TSH2022b Serial 3805