Records
Author Ole Vilhelm-Larsen; Petia Radeva; Enric Marti
Title Guidelines for choosing optimal parameters of elasticity for snakes Type Book Chapter
Year 1995 Publication Computer Analysis Of Images And Patterns Abbreviated Journal LNCS
Volume 970 Issue Pages 106-113
Keywords
Abstract This paper proposes guidance on choosing and using the elasticity parameters of a snake in order to obtain a precise segmentation. A new two-step procedure is defined based on upper and lower bounds on the parameters. Formulas by which these bounds can be calculated for real images, where parts of the contour may be missing, are presented. Experiments on segmentation of bone structures in X-ray images have verified the usefulness of the new procedure.
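For reference, the classical snake energy whose elasticity weights the paper bounds is sketched below; this is the standard Kass-Witkin-Terzopoulos formulation, and the chapter's exact notation may differ:

E_{snake} = \int_0^1 \tfrac{1}{2}\big( \alpha(s)\,|v_s(s)|^2 + \beta(s)\,|v_{ss}(s)|^2 \big) + E_{ext}\big(v(s)\big)\, ds

where v(s) = (x(s), y(s)) is the contour, \alpha(s) weighs elasticity (tension), \beta(s) weighs rigidity (bending), and the upper and lower bounds discussed in the paper constrain \alpha and \beta.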
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Lecture Notes in Computer Science Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB;IAM Approved no
Call Number IAM @ iam @ LRM1995b Serial 1558
Permanent link to this record
 

 
Author Xavier Soria; Edgar Riba; Angel Sappa
Title Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection Type Conference Article
Year 2020 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper proposes a Deep Learning based edge detector, which is inspired by both HED (Holistically-Nested Edge Detection) and Xception networks. The proposed approach generates thin edge-maps that are plausible to the human eye; it can be used in any edge detection task without a prior training or fine-tuning process. As a second contribution, a large dataset with carefully annotated edges has been generated. This dataset has been used for training the proposed approach as well as the state-of-the-art algorithms used for comparison. Quantitative and qualitative evaluations have been performed on different benchmarks, showing improvements of the proposed method when the F-measure at ODS and OIS is considered.
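As a rough illustration of the evaluation protocol mentioned above, the following sketch computes a simplified F-measure at ODS (one threshold for the whole dataset) and OIS (best threshold per image); the official benchmarks additionally match boundaries within a pixel tolerance, which is omitted here:

import numpy as np

def f_measure(pred, gt, thr):
    # Pixel-wise F-measure of a soft edge map `pred` against a binary ground truth `gt`.
    p = pred >= thr
    tp = np.logical_and(p, gt > 0).sum()
    precision = tp / max(p.sum(), 1)
    recall = tp / max((gt > 0).sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

def ods_ois(preds, gts, thresholds=np.linspace(0.01, 0.99, 99)):
    # ODS: best dataset-wide threshold; OIS: best threshold chosen per image.
    ods = max(np.mean([f_measure(p, g, t) for p, g in zip(preds, gts)]) for t in thresholds)
    ois = np.mean([max(f_measure(p, g, t) for t in thresholds) for p, g in zip(preds, gts)])
    return ods, ois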
Address Aspen; USA; March 2020
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WACV
Notes MSIAU; 600.130; 601.349; 600.122 Approved no
Call Number Admin @ si @ SRS2020 Serial 3434
Permanent link to this record
 

 
Author Iban Berganzo-Besga; Hector A. Orengo; Felipe Lumbreras; Aftab Alam; Rosie Campbell; Petrus J Gerrits; Jonas Gregorio de Souza; Afifa Khan; Maria Suarez Moreno; Jack Tomaney; Rebecca C Roberts; Cameron A Petrie
Title Curriculum learning-based strategy for low-density archaeological mound detection from historical maps in India and Pakistan Type Journal Article
Year 2023 Publication Scientific Reports Abbreviated Journal ScR
Volume 13 Issue Pages 11257
Keywords
Abstract This paper presents two algorithms for the large-scale automatic detection and instance segmentation of potential archaeological mounds on historical maps. Historical maps present a unique source of information for the reconstruction of ancient landscapes. The last 100 years have seen unprecedented landscape modifications with the introduction and large-scale implementation of mechanised agriculture, channel-based irrigation schemes, and urban expansion, to name but a few. Historical maps offer a window onto disappearing landscapes where many historical and archaeological elements that no longer exist today are depicted. The algorithms focus on the detection and shape extraction of mound features with a high probability of being archaeological settlements, mounds being one of the most commonly documented archaeological features to be found in the Survey of India historical map series, although not necessarily recognised as such at the time of surveying. Mound features with high archaeological potential are most commonly depicted through hachures or contour-equivalent form-lines; therefore, an algorithm has been designed to detect each of those features. Our proposed approach addresses two of the most common issues in automated archaeological survey: the low density of archaeological features to be detected and the small amount of training data available. It has been applied to all types of maps available in the historic 1″ to 1-mile series, thus increasing the complexity of the detection. Moreover, the inclusion of synthetic data, along with a Curriculum Learning strategy, has allowed the algorithm to better understand what the mound features look like. Likewise, a series of filters based on topographic setting, form, and size have been applied to improve the accuracy of the models. The resulting algorithms have a recall value of 52.61% and a precision of 82.31% for the hachure mounds, and a recall value of 70.80% and a precision of 70.29% for the form-line mounds, which allowed the detection of nearly 6000 mound features over an area of 470,500 km2, the largest such approach to have ever been applied. If we restrict our focus to the maps most similar to those used in the algorithm training, we reach recall values greater than 60% and precision values greater than 90%. This approach has shown the potential to implement an adaptive algorithm that allows, after a small amount of retraining with data detected from a new map, better general mound-feature detection in that map.
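As a rough, framework-agnostic illustration of the Curriculum Learning strategy mentioned above (start training on easier synthetic samples and progressively mix in harder real map samples), here is a minimal sketch; the linear schedule and batch composition are assumptions, not the paper's actual training recipe:

import random

def curriculum_batches(synthetic, real, n_epochs=10, batch_size=32):
    # Yield batches whose share of harder real-map samples grows with the epoch.
    for epoch in range(n_epochs):
        real_fraction = epoch / max(n_epochs - 1, 1)      # 0.0 -> 1.0 over training
        n_real = min(int(batch_size * real_fraction), len(real))
        n_synt = min(batch_size - n_real, len(synthetic))
        batch = random.sample(real, n_real) + random.sample(synthetic, n_synt)
        random.shuffle(batch)
        yield epoch, batch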
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MSIAU Approved no
Call Number Admin @ si @ BOL2023 Serial 3976
Permanent link to this record
 

 
Author Ozan Caglayan; Walid Aransa; Yaxing Wang; Marc Masana; Mercedes Garcia-Martinez; Fethi Bougares; Loic Barrault; Joost Van de Weijer
Title Does Multimodality Help Human and Machine for Translation and Image Captioning? Type Conference Article
Year 2016 Publication 1st conference on machine translation Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
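For reference, a minimal sketch of corpus-level BLEU (one of the two automatic metrics cited above) computed with NLTK; the toy sentences and tokenisation are placeholders, not the challenge's evaluation pipeline:

from nltk.translate.bleu_score import corpus_bleu

# One list of (possibly several) tokenised references per hypothesis.
references = [[["the", "cat", "sits", "on", "the", "mat"]]]
hypotheses = [["the", "cat", "sits", "on", "a", "mat"]]

print("BLEU:", corpus_bleu(references, hypotheses))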
Address Berlin; Germany; August 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WMT
Notes LAMP; 600.106 ; 600.068 Approved no
Call Number Admin @ si @ CAW2016 Serial 2761
Permanent link to this record
 

 
Author Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla; Chenyang Wang; Junjun Jiang; Xianming Liu; Zhiwei Zhong; Dai Bin; Li Ruodi; Li Shengye
Title Thermal Image Super-Resolution Challenge Results-PBVS 2023 Type Conference Article
Year 2023 Publication Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal
Volume Issue Pages 470-478
Keywords
Abstract This paper presents the results of two tracks from the fourth Thermal Image Super-Resolution (TISR) challenge, held at the Perception Beyond the Visible Spectrum (PBVS) 2023 workshop. Track-1 uses the same thermal image dataset as previous challenges, with 951 training images and 50 validation images at each resolution. In this track, two evaluations were conducted: the first consists of generating an SR image from an HR noisy thermal image downsampled by four, and the second consists of generating an SR image from a mid-resolution image and comparing it with its semi-registered HR image (acquired with another camera). The results of Track-1 outperformed those from last year's challenge. On the other hand, Track-2 uses a newly acquired dataset consisting of 160 registered visible and thermal images of the same scenario for training and 30 validation images. This year, more than 150 teams participated in the challenge tracks, demonstrating the community's ongoing interest in this topic.
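A minimal sketch of the downsample-by-four evaluation described above, using OpenCV and scikit-image; the file name, interpolation choices, and the bicubic stand-in for an actual SR model are assumptions for illustration only:

import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = cv2.imread("hr_thermal.png", cv2.IMREAD_GRAYSCALE)               # ground-truth HR image
lr = cv2.resize(hr, (hr.shape[1] // 4, hr.shape[0] // 4),
                interpolation=cv2.INTER_AREA)                          # downsampled input
sr = cv2.resize(lr, (hr.shape[1], hr.shape[0]),
                interpolation=cv2.INTER_CUBIC)                         # stand-in for the SR output

print("PSNR:", peak_signal_noise_ratio(hr, sr, data_range=255))
print("SSIM:", structural_similarity(hr, sr, data_range=255))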
Address Vancouver; Canada; June 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes MSIAU Approved no
Call Number Admin @ si @ RSV2023 Serial 3914
Permanent link to this record
 

 
Author Aniol Lidon; Xavier Giro; Marc Bolaños; Petia Radeva; Markus Seidl; Matthias Zeppelzauer
Title UPC-UB-STP @ MediaEval 2015 diversity task: iterative reranking of relevant images Type Conference Article
Year 2015 Publication 2015 MediaEval Retrieving Diverse Images Task Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper presents the results of the UPC-UB-STP team in the 2015 MediaEval Retrieving Diverse Images Task. The goal of the challenge is to provide a ranked list of Flickr photos for a predefined set of queries. Our approach first generates a ranking of images based on a query-independent estimation of their relevance. Only the top results are kept and are iteratively re-ranked based on their intra-similarity to introduce diversity.
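A minimal sketch of the iterative re-ranking step described above (keep the most relevant images, then greedily select the candidate that balances relevance against similarity to what is already chosen); the MMR-style scoring and the trade-off weight are assumptions, not the team's exact formulation:

def diversity_rerank(candidates, relevance, similarity, k=50, lam=0.7):
    # candidates: image ids; relevance[i]: relevance score of candidate i;
    # similarity[i][j]: visual similarity between candidates i and j.
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < k:
        def score(i):
            max_sim = max((similarity[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * max_sim
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return [candidates[i] for i in selected]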
Address Wurzen; Germany; September 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MediaEval
Notes MILAB Approved no
Call Number Admin @ si @ LGB2016 Serial 2793
Permanent link to this record
 

 
Author Dimosthenis Karatzas; Sergi Robles; Joan Mas; Farshad Nourbakhsh; Partha Pratim Roy
Title ICDAR 2011 Robust Reading Competition – Challenge 1: Reading Text in Born-Digital Images (Web and Email) Type Conference Article
Year 2011 Publication 11th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 1485-1490
Keywords
Abstract This paper presents the results of the first Challenge of ICDAR 2011 Robust Reading Competition. Challenge 1 is focused on the extraction of text from born-digital images, specifically from images found in Web pages and emails. The challenge was organized in terms of three tasks that look at different stages of the process: text localization, text segmentation and word recognition. In this paper we present the results of the challenge for all three tasks, and make an open call for continuous participation outside the context of ICDAR 2011.
Address Beijing, China
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1520-5363 ISBN 978-1-4577-1350-7 Medium
Area Expedition Conference ICDAR
Notes DAG Approved no
Call Number Admin @ si @ KRM2011 Serial 1793
Permanent link to this record
 

 
Author Spencer Low; Oliver Nina; Angel Sappa; Erik Blasch; Nathan Inkawhich
Title Multi-Modal Aerial View Object Classification Challenge Results-PBVS 2023 Type Conference Article
Year 2023 Publication Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal
Volume Issue Pages 412-421
Keywords
Abstract This paper presents the findings and results of the third edition of the Multi-modal Aerial View Object Classification (MAVOC) challenge in a detailed and comprehensive manner. The challenge consists of two tracks. The primary aim of both tracks is to encourage research into building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) imagery. Participating teams are encouraged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge demonstrated the feasibility of combining both modalities, the 2022 challenge expanded on the capability of multi-modal models. The 2023 challenge introduces a refined version of the UNICORN dataset and demonstrates the significant improvements made. The 2023 challenge adopts an updated UNIfied CO-incident Optical and Radar for recognitioN (UNICORN V2) dataset and competition format. Two tasks are featured: SAR classification and SAR + EO classification. In addition to measuring the accuracy of models, we also introduce out-of-distribution measures to encourage model robustness. The majority of this paper is dedicated to discussing the top performing methods and evaluating their performance on our blind test set. It is worth noting that all of the top ten teams outperformed the ResNet-50 baseline. The top team for SAR classification achieved a 173% performance improvement over the baseline, while the top team for SAR + EO classification achieved a 175% improvement.
Address Vancouver; Canada; June 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes MSIAU Approved no
Call Number Admin @ si @ LNS2023b Serial 3915
Permanent link to this record
 

 
Author Marçal Rusiñol; J. Chazalon; Katerine Diaz
Title Augmented Songbook: an Augmented Reality Educational Application for Raising Music Awareness Type Journal Article
Year 2018 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 77 Issue 11 Pages 13773-13798
Keywords Augmented reality; Document image matching; Educational applications
Abstract This paper presents the development of an Augmented Reality mobile application which aims at sensitizing young children to abstract concepts of music, such as musical notation or the idea of rhythm. Recent studies in Augmented Reality for education suggest that such technologies have multiple benefits for students, including younger ones. As document image acquisition and processing gain maturity on mobile platforms, we explore how it is possible to build a markerless and real-time application to augment physical documents with didactic animations and interactive virtual content. Given a standard image processing pipeline, we compare the performance of different local descriptors at two key stages of the process. Results suggest alternatives to SIFT local descriptors, regarding both result quality and computational efficiency, for document model identification as well as perspective transform estimation. All experiments are performed on an original and public dataset we introduce here.
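A minimal OpenCV sketch of the two pipeline stages compared in the paper (local-descriptor matching for document model identification, followed by perspective-transform estimation); ORB is used here only as one possible SIFT alternative, and the file names are placeholders:

import cv2
import numpy as np

model = cv2.imread("songbook_page.png", cv2.IMREAD_GRAYSCALE)   # registered document model
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)    # live camera capture

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(model, None)
kp2, des2 = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)         # perspective transform for overlaying content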
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; ADAS; 600.084; 600.121; 600.118; 600.129 Approved no
Call Number Admin @ si @ RCD2018 Serial 2996
Permanent link to this record
 

 
Author Henry Velesaca; Steven Araujo; Patricia Suarez; Angel Sanchez; Angel Sappa
Title Off-the-Shelf Based System for Urban Environment Video Analytics Type Conference Article
Year 2020 Publication 27th International Conference on Systems, Signals and Image Processing Abbreviated Journal
Volume Issue Pages
Keywords greenhouse gases; carbon footprint; object detection; object tracking; website framework; off-the-shelf video analytics
Abstract This paper presents the design and implementation details of a system built by using off-the-shelf algorithms for urban video analytics. The system allows connection to public video surveillance camera networks to obtain the information necessary to generate statistics from urban scenarios (e.g., number of vehicles, types of cars, direction, number of persons, etc.). The obtained information could be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided, showing the validity and utility of the proposed approach.
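A rough sketch of the kind of modular pipeline described above (stream ingestion, per-frame detection, simple counting); detect_objects is a hypothetical stand-in for whichever off-the-shelf detector and tracker modules are plugged in, and the stream URL is a placeholder:

import cv2
from collections import Counter

def detect_objects(frame):
    # Hypothetical placeholder for an off-the-shelf detector (e.g., a pretrained CNN).
    # Should return a list of class labels such as 'car', 'bus', 'person'.
    return []

stats = Counter()
cap = cv2.VideoCapture("rtsp://camera.example/stream")   # public surveillance stream (placeholder)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    stats.update(detect_objects(frame))
cap.release()
print(stats)   # per-class counts feeding traffic and carbon-footprint statistics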
Address Virtual IWSSIP
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IWSSIP
Notes MSIAU; 600.130; 601.349; 600.122 Approved no
Call Number Admin @ si @ VAS2020 Serial 3429
Permanent link to this record
 

 
Author Eugenio Alcala; Laura Sellart; Vicenc Puig; Joseba Quevedo; Jordi Saludes; David Vazquez; Antonio Lopez
Title Comparison of two non-linear model-based control strategies for autonomous vehicles Type Conference Article
Year 2016 Publication 24th Mediterranean Conference on Control and Automation Abbreviated Journal
Volume Issue Pages 846-851
Keywords Autonomous Driving; Control
Abstract This paper presents a comparison of two nonlinear model-based control strategies for autonomous cars. A control-oriented vehicle model based on a bicycle model is used. The two control strategies use a model-reference approach. Using this approach, the error dynamics model is developed. Both controllers receive as input the longitudinal, lateral and orientation errors, generating as control outputs the steering angle and the velocity of the vehicle. The first control approach is based on a non-linear control law that is designed by means of the Lyapunov direct approach. The second approach is based on a sliding-mode control that defines a set of sliding surfaces over which the error trajectories will converge. The main advantage of the sliding-mode control technique is its robustness against non-linearities and parametric uncertainties in the model. However, the main drawback of first-order sliding mode is chattering, so a high-order sliding mode control has been implemented. To test and compare the proposed control strategies, different path-following scenarios are used in simulation.
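For reference, a textbook first-order sliding surface over the lateral error illustrates the sliding-mode idea mentioned above; this generic formulation is not necessarily the exact set of surfaces used in the paper:

s = \dot{e}_y + \lambda\, e_y, \qquad \lambda > 0

with the control input chosen so that the reaching condition s\,\dot{s} \le -\eta\,|s| (\eta > 0) holds, which drives the lateral error e_y to zero along s = 0; higher-order sliding modes act on derivatives of s to attenuate chattering.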
Address Athens; Greece; June 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MED
Notes ADAS; 600.085; 600.082; 600.076 Approved no
Call Number ADAS @ adas @ ASP2016 Serial 2750
Permanent link to this record
 

 
Author Fernando Barrera; Felipe Lumbreras; Angel Sappa
Title Multimodal Template Matching based on Gradient and Mutual Information using Scale-Space Type Conference Article
Year 2010 Publication 17th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 2749–2752
Keywords
Abstract This paper presents the combined use of gradient and mutual information for infrared and intensity template matching. We propose to combine: (i) feature matching in a multiresolution context and (ii) information propagation through scale-space representations. Our method consists of combining mutual information with a gradient-based shape descriptor and propagating them following a coarse-to-fine strategy. The main contributions of this work are: to offer a theoretical formulation towards multimodal stereo matching; to show that gradient and mutual information can be reinforced while they are propagated between consecutive levels; and to show that they are valid cost functions in multimodal template matching. Comparisons are presented showing the improvements and viability of the proposed approach.
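A minimal NumPy sketch of the mutual-information term used as a matching cost between an infrared template and an intensity patch; the histogram binning is an assumption, and the gradient-based shape descriptor and scale-space propagation are not shown:

import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    # MI between two equally sized image patches via their joint histogram.
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))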
Address Hong-Kong
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1522-4880 ISBN 978-1-4244-7992-4 Medium
Area Expedition Conference ICIP
Notes ADAS Approved no
Call Number ADAS @ adas @ BLS2010 Serial 1358
Permanent link to this record
 

 
Author Minesh Mathew; Ruben Tito; Dimosthenis Karatzas; R.Manmatha; C.V. Jawahar
Title Document Visual Question Answering Challenge 2020 Type Conference Article
Year 2020 Publication 33rd IEEE Conference on Computer Vision and Pattern Recognition – Short paper Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper presents the results of the Document Visual Question Answering Challenge organized as part of the “Text and Documents in the Deep Learning Era” workshop at CVPR 2020. The challenge introduces a new problem: Visual Question Answering on document images. The challenge comprised two tasks. The first task concerns asking questions about a single document image. The second task is set up as a retrieval task where the question is posed over a collection of images. For task 1, a new dataset is introduced comprising 50,000 question-answer pairs defined over 12,767 document images. For task 2, another dataset has been created comprising 20 questions over 14,362 document images which share the same document template.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes DAG; 600.121 Approved no
Call Number Admin @ si @ MTK2020 Serial 3558
Permanent link to this record
 

 
Author Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla; Jin Kim; Dogun Kim; Zhihao Li; Yingchun Jian; Bo Yan; Leilei Cao; Fengliang Qi; Hongbin Wang; Rongyuan Wu; Lingchen Sun; Yongqiang Zhao; Lin Li; Kai Wang; Yicheng Wang; Xuanming Zhang; Huiyuan Wei; Chonghua Lv; Qigong Sun; Xiaolin Tian; Zhuang Jia; Jiakui Hu; Chenyang Wang; Zhiwei Zhong; Xianming Liu; Junjun Jiang
Title Thermal Image Super-Resolution Challenge Results – PBVS 2022 Type Conference Article
Year 2022 Publication IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) Abbreviated Journal
Volume Issue Pages 418-426
Keywords
Abstract This paper presents results from the third Thermal Image Super-Resolution (TISR) challenge organized in the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop. The challenge uses the same thermal image dataset as the first two challenges, with 951 training images and 50 validation images at each resolution. A set of 20 images was kept aside for testing. The evaluation tasks were to measure the PSNR and SSIM between the SR image and the ground truth (the HR noisy thermal image downsampled by four), and also to measure the PSNR and SSIM between the SR image and the semi-registered HR image (acquired with another camera). The results outperformed those from last year's challenge, improving both evaluation metrics. This year, almost 100 teams registered for the challenge, showing the community's interest in this hot topic.
Address New Orleans; USA; June 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes MSIAU; no menciona Approved no
Call Number Admin @ si @ RSV2022c Serial 3775
Permanent link to this record
 

 
Author Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla; Sabari Nathan; Priya Kansal; Armin Mehri; Parichehr Behjati Ardakani; A.Dalal; A.Akula; D.Sharma; S.Pandey; B.Kumar; J.Yao; R.Wu; K.Feng; N.Li; Y.Zhao; H.Patel; V. Chudasama; K.Pjajapati; A.Sarvaiya; K.Upla; K.Raja; R.Ramachandra; C.Bush; F.Almasri; T.Vandamme; O.Debeir; N.Gutierrez; Q.Nguyen; W.Beksi
Title Thermal Image Super-Resolution Challenge – PBVS 2021 Type Conference Article
Year 2021 Publication Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal
Volume Issue Pages 4359-4367
Keywords
Abstract This paper presents results from the second Thermal Image Super-Resolution (TISR) challenge organized in the framework of the Perception Beyond the Visible Spectrum (PBVS) 2021 workshop. For this second edition, the same thermal image dataset considered during the first challenge has been used; only the mid-resolution (MR) and high-resolution (HR) sets have been considered. The dataset consists of 951 training images and 50 testing images for each resolution. A set of 20 images for each resolution is kept aside for evaluation. The two evaluation methodologies proposed for the first challenge are also considered on this occasion. The first evaluation task consists of measuring the PSNR and SSIM between the obtained SR image and the corresponding ground truth (i.e., the HR thermal image downsampled by four). The second evaluation also consists of measuring the PSNR and SSIM, but in this case considers the ×2 SR obtained from the given MR thermal image; this evaluation compares the SR image with the semi-registered HR image, which has been acquired with another camera. The results outperformed those from the first challenge, thus showing an improvement in both evaluation metrics.
Address Virtual; June 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes MSIAU; 600.130; 600.122 Approved no
Call Number Admin @ si @ RSV2021 Serial 3581
Permanent link to this record