Author Fernando Vilariño; Panagiota Spyridonos; Jordi Vitria; Fernando Azpiroz; Petia Radeva
Title Automatic Detection of Intestinal Juices in Wireless Capsule Video Endoscopy Type Conference Article
Year 2006 Publication 18th International Conference on Pattern Recognition Abbreviated Journal
Volume 4 Issue Pages 719-722
Keywords Clinical diagnosis; Endoscopes; Fluids and secretions; Gabor filters; Hospitals; Image sequence analysis; Intestines; Lighting; Shape; Visualization
Abstract Wireless capsule video endoscopy is a novel and challenging clinical technique, whose major reported drawback is the large amount of time needed for video visualization. In this paper, we propose a method for rejecting the parts of the video that are not valid for analysis, by means of automatic detection of intestinal juices. We applied Gabor filters to characterize the bubble-like shape of intestinal juices in fasting patients. Our method achieves a significant reduction in visualization time, with no relevant loss of valid frames. The proposed approach is easily extensible to other image analysis scenarios where the described pattern of bubbles can be found.
Address Hong Kong
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1051-4651 ISBN 0-7695-2521-0 Medium
Area 800 Expedition Conference ICPR
Notes MV;OR;MILAB;SIAI Approved no
Call Number BCNPCL @ bcnpcl @ VSV2006b; IAM @ iam @ VSV2006g Serial 727
Permanent link to this record
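The record above characterizes the bubble-like pattern of intestinal juices with Gabor filters. As a minimal sketch of the idea, here is a plain-Python Gabor filter bank; the kernel size, wavelength, and bandwidth values are illustrative assumptions, not the paper's parameters:

```python
import math

def gabor_kernel(size, theta, lam, sigma, gamma=0.5, psi=0.0):
    """Real (cosine) Gabor kernel: a Gaussian envelope modulated by a
    sinusoid oriented at angle theta, responding to oriented texture."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates by theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(env * math.cos(2 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel

def filter_bank(size=9, n_orientations=4, lam=4.0, sigma=2.0):
    """Bank of Gabor kernels at evenly spaced orientations; convolving an
    image with the bank yields the multi-orientation texture responses."""
    return [gabor_kernel(size, k * math.pi / n_orientations, lam, sigma)
            for k in range(n_orientations)]
```

Because bubbles present edges at all orientations, a high combined response across the bank is what flags the frothy, non-valid frames.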
 

 
Author Eduardo Aguilar; Petia Radeva
Title Class-Conditional Data Augmentation Applied to Image Classification Type Conference Article
Year 2019 Publication 18th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 11679 Issue Pages 182-192
Keywords CNNs; Data augmentation; Deep learning; Epistemic uncertainty; Image classification; Food recognition
Abstract Image classification is widely researched in the literature, where models based on Convolutional Neural Networks (CNNs) have provided the best results. When data are scarce, CNN models tend to overfit. To deal with this, traditional data augmentation techniques are often applied, such as affine transformations or color-balance adjustments. However, we argue that some data augmentation techniques may be more appropriate for some classes than for others. In order to select the techniques that work best for a particular class, we propose to explore the epistemic uncertainty of the samples within each class. From our experiments, we observe that when data augmentation is applied class-conditionally, we improve the results in terms of accuracy and also reduce the overall epistemic uncertainty. To summarize, in this paper we propose a class-conditional data augmentation procedure that allows us to obtain better results and improve the robustness of the classification in the face of model uncertainty.
Address Salerno; Italy; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CAIP
Notes MILAB; no proj Approved no
Call Number Admin @ si @ AgR2019 Serial 3366
Permanent link to this record
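One way to read the procedure in the record above: estimate per-class epistemic uncertainty (e.g., via the predictive entropy of several stochastic forward passes, as in MC dropout), then keep, for each class, the augmentation technique that minimizes it. A hedged sketch, not the authors' implementation; the function names and toy numbers are illustrative:

```python
import math

def predictive_entropy(mc_probs):
    """Entropy of the mean of T stochastic forward passes --
    a common proxy for epistemic uncertainty."""
    n_classes = len(mc_probs[0])
    mean = [sum(p[k] for p in mc_probs) / len(mc_probs)
            for k in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)

def select_augmentations(uncertainty):
    """uncertainty[c][t]: mean epistemic uncertainty of class c when
    augmentation t is applied; keep the technique minimizing it per class."""
    return {c: min(techs, key=techs.get) for c, techs in uncertainty.items()}
```

Agreement across the stochastic passes gives zero entropy (confident model), while disagreement raises it, so the per-class minimum points at the augmentation the model handles best.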
 

 
Author Estefania Talavera; Nicolai Petkov; Petia Radeva
Title Unsupervised Routine Discovery in Egocentric Photo-Streams Type Conference Article
Year 2019 Publication 18th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 11678 Issue Pages 576-588
Keywords Routine discovery; Lifestyle; Egocentric vision; Behaviour analysis
Abstract The routine of a person is defined by the occurrence of activities throughout different days, and can directly affect the person's health. In this work, we address the recognition of routine-related days. To do so, we rely on egocentric images, which are recorded by a wearable camera and allow monitoring the life of the user from a first-person perspective. We propose an unsupervised model that identifies routine-related days, following an outlier detection approach. We test the proposed framework over a total of 72 days in the form of photo-streams, covering around 2 weeks of the life of 5 different camera wearers. Our model achieves an average of 76% accuracy and 68% weighted F-score over all the users. Thus, we show that our framework is able to recognise routine-related days and opens the door to the understanding of the behaviour of people.
Address Salerno; Italy; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CAIP
Notes MILAB; no proj Approved no
Call Number Admin @ si @ TPR2019a Serial 3367
Permanent link to this record
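The outlier-detection framing in the record above can be illustrated with a deliberately simple stand-in: describe each day by a feature vector and flag as non-routine the days whose distance to the mean day exceeds a threshold. The feature vectors and threshold rule here are illustrative assumptions, not the paper's model:

```python
def routine_days(day_features, k=1.5):
    """Flag days as routine (True) when their Euclidean distance to the
    mean day descriptor stays within k times the average distance --
    a toy outlier-detection rule standing in for the paper's model."""
    n = len(day_features)
    d = len(day_features[0])
    mean = [sum(f[j] for f in day_features) / n for j in range(d)]
    dists = [sum((f[j] - mean[j]) ** 2 for j in range(d)) ** 0.5
             for f in day_features]
    thresh = k * (sum(dists) / n)
    return [dist <= thresh for dist in dists]
```

Days clustered around the mean descriptor form the routine; the isolated day is the outlier the method singles out.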
 

 
Author Mohammad Rouhani; Angel Sappa
Title Implicit B-Spline Fitting Using the 3L Algorithm Type Conference Article
Year 2011 Publication 18th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 893-896
Keywords
Abstract
Address Brussels, Belgium
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes ADAS Approved no
Call Number Admin @ si @ RoS2011a; ADAS @ adas @ Serial 1782
Permanent link to this record
 

 
Author Lichao Zhang; Abel Gonzalez-Garcia; Joost Van de Weijer; Martin Danelljan; Fahad Shahbaz Khan
Title Learning the Model Update for Siamese Trackers Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 4009-4018
Keywords
Abstract Siamese approaches address the visual tracking problem by extracting an appearance template from the current frame, which is used to localize the target in the next frame. In general, this template is linearly combined with the accumulated template from the previous frame, resulting in an exponential decay of information over time. While such an approach to updating has led to improved results, its simplicity limits the potential gain likely to be obtained by learning to update. Therefore, we propose to replace the handcrafted update function with a method which learns to update. We use a convolutional neural network, called UpdateNet, which, given the initial template, the accumulated template, and the template of the current frame, aims to estimate the optimal template for the next frame. The UpdateNet is compact and can easily be integrated into existing Siamese trackers. We demonstrate the generality of the proposed approach by applying it to two Siamese trackers, SiamFC and DaSiamRPN. Extensive experiments on the VOT2016, VOT2018, LaSOT, and TrackingNet datasets demonstrate that our UpdateNet effectively predicts the new target template, outperforming the standard linear update. On the large-scale TrackingNet dataset, our UpdateNet improves the results of DaSiamRPN with an absolute gain of 3.9% in terms of success score.
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes LAMP; 600.109; 600.141; 600.120 Approved no
Call Number Admin @ si @ ZGW2019 Serial 3295
Permanent link to this record
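For context, the handcrafted baseline that UpdateNet replaces in the record above is the running-average template update, which decays past information exponentially:

```python
def linear_update(accumulated, current, gamma=0.1):
    """Standard linear template update used by many Siamese trackers:
    an exponential moving average of the accumulated template and the
    template extracted from the current frame."""
    return [(1 - gamma) * a + gamma * c
            for a, c in zip(accumulated, current)]
```

After n frames, a feature seen only in frame 1 survives with weight (1 - gamma)^n, which is exactly the exponential decay of information the paper points out; UpdateNet learns this combination instead of fixing it.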
 

 
Author Ali Furkan Biten; R. Tito; Andres Mafla; Lluis Gomez; Marçal Rusiñol; C.V. Jawahar; Ernest Valveny; Dimosthenis Karatzas
Title Scene Text Visual Question Answering Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 4291-4301
Keywords
Abstract Current visual question answering datasets do not consider the rich semantic information conveyed by text within an image. In this work, we present a new dataset, ST-VQA, that aims to highlight the importance of exploiting high-level semantic information present in images as textual cues in the Visual Question Answering process. We use this dataset to define a series of tasks of increasing difficulty for which reading the scene text in the context provided by the visual information is necessary to reason and generate an appropriate answer. We propose a new evaluation metric for these tasks to account both for reasoning errors and for shortcomings of the text recognition module. In addition, we put forward a series of baseline methods, which provide further insight into the newly released dataset, and set the scene for further research.
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes DAG; 600.129; 600.135; 601.338; 600.121 Approved no
Call Number Admin @ si @ BTM2019b Serial 3285
Permanent link to this record
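The evaluation metric the record above alludes to is, to the best of our knowledge, Average Normalized Levenshtein Similarity (ANLS), which softens exact-match scoring so near-misses of the text recognition module still earn partial credit. A sketch under that assumption (the 0.5 threshold is the commonly used value, assumed here):

```python
def levenshtein(a, b):
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def anls(answer, gt, tau=0.5):
    """Normalized Levenshtein similarity with a cut-off: answers whose
    normalized distance exceeds tau score zero, otherwise 1 - distance."""
    nl = levenshtein(answer, gt) / max(len(answer), len(gt), 1)
    return 1 - nl if nl < tau else 0.0
```

Averaging this score over the ground-truth answers of each question gives partial credit for slightly misrecognized scene text while still zeroing out unrelated answers.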
 

 
Author Axel Barroso-Laguna; Edgar Riba; Daniel Ponsa; Krystian Mikolajczyk
Title Key.Net: Keypoint Detection by Handcrafted and Learned CNN Filters Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 5835-5843
Keywords
Abstract We introduce a novel approach for the keypoint detection task that combines handcrafted and learned CNN filters within a shallow multi-scale architecture. Handcrafted filters provide anchor structures for learned filters, which localize, score and rank repeatable features. A scale-space representation is used within the network to extract keypoints at different levels. We design a loss function to detect robust features that exist across a range of scales and to maximize the repeatability score. Our Key.Net model is trained on data synthetically created from ImageNet and evaluated on the HPatches benchmark. Results show that our approach outperforms state-of-the-art detectors in terms of repeatability, matching performance and complexity.
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes MSIAU; 600.122 Approved no
Call Number Admin @ si @ BRP2019 Serial 3290
Permanent link to this record
 

 
Author Hamed H. Aghdam; Abel Gonzalez-Garcia; Joost Van de Weijer; Antonio Lopez
Title Active Learning for Deep Detection Neural Networks Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 3672-3680
Keywords
Abstract The cost of drawing object bounding boxes (i.e., labeling) for millions of images is prohibitively high. For instance, labeling pedestrians in a regular urban image could take 35 seconds on average. Active learning aims to reduce the cost of labeling by selecting only those images that are informative for improving the detection network's accuracy. In this paper, we propose a method to perform active learning of object detectors based on convolutional neural networks. We propose a new image-level scoring process to rank unlabeled images for their automatic selection, which clearly outperforms classical scores. The proposed method can be applied to videos and sets of still images. In the former case, temporal selection rules can complement our scoring process. As a relevant use case, we extensively study the performance of our method on the task of pedestrian detection. Overall, the experiments show that the proposed method performs better than random selection.
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes ADAS; LAMP; 600.124; 600.109; 600.141; 600.120; 600.118 Approved no
Call Number Admin @ si @ AGW2019 Serial 3321
Permanent link to this record
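The selection loop described in the record above reduces, at its core, to scoring unlabeled images and sending the top-ranked ones to annotators. A schematic sketch; the scoring function itself, which is the paper's contribution, is abstracted away here as a precomputed dictionary:

```python
def select_for_labeling(image_scores, budget):
    """Rank unlabeled images by an informativeness score and return the
    `budget` highest-scoring ones for human annotation."""
    ranked = sorted(image_scores, key=image_scores.get, reverse=True)
    return ranked[:budget]
```

Random selection corresponds to shuffling instead of sorting, which is the baseline the paper's image-level scores are shown to beat.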
 

 
Author Felipe Codevilla; Eder Santana; Antonio Lopez; Adrien Gaidon
Title Exploring the Limitations of Behavior Cloning for Autonomous Driving Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 9328-9337
Keywords
Abstract Driving requires reacting to a wide variety of complex environment conditions and agent behaviors. Explicitly modeling each possible scenario is unrealistic. In contrast, imitation learning can, in theory, leverage data from large fleets of human-driven cars. Behavior cloning in particular has been successfully used to learn simple visuomotor policies end-to-end, but scaling to the full spectrum of driving behaviors remains an unsolved problem. In this paper, we propose a new benchmark to experimentally investigate the scalability and limitations of behavior cloning. We show that behavior cloning leads to state-of-the-art results, executing complex lateral and longitudinal maneuvers, even in unseen environments, without being explicitly programmed to do so. However, we confirm some limitations of the behavior cloning approach: some well-known limitations (e.g., dataset bias and overfitting), new generalization issues (e.g., dynamic objects and the lack of causal modeling), and training instabilities, all requiring further research before behavior cloning can graduate to real-world driving. The code, dataset, benchmark, and agent studied in this paper can be found on GitHub.
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes ADAS; 600.124; 600.118 Approved no
Call Number Admin @ si @ CSL2019 Serial 3322
Permanent link to this record
 

 
Author David Berga; Xose R. Fernandez-Vidal; Xavier Otazu; Xose M. Pardo
Title SID4VAM: A Benchmark Dataset with Synthetic Images for Visual Attention Modeling Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 8788-8797
Keywords
Abstract A benchmark of saliency model performance on a synthetic image dataset is provided. Model performance is evaluated through saliency metrics, as well as the influence of model inspiration and consistency with human psychophysics. SID4VAM is composed of 230 synthetic images with known salient regions. Images were generated with 15 distinct types of low-level features (e.g. orientation, brightness, color, size) in a target-distractor pop-out type of synthetic pattern. We used Free-Viewing and Visual Search task instructions and 7 feature contrasts for each feature category. Our study reveals that state-of-the-art Deep Learning saliency models do not perform well with synthetic pattern images; instead, models with Spectral/Fourier inspiration outperform the others in saliency metrics and are more consistent with human psychophysical experimentation. This study proposes a new way to evaluate saliency models in the forthcoming literature, accounting for synthetic images with uniquely low-level feature contexts, distinct from previous eye tracking image datasets.
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes NEUROBIT; 600.128 Approved no
Call Number Admin @ si @ BFO2019b Serial 3372
Permanent link to this record
 

 
Author P. Andreeva; Maya Dimitrova; Petia Radeva
Title Data Mining Learning Models and Algorithms for Medical Applications Type Book Chapter
Year 2004 Publication 18th Conference Systems for Automation of Engineering and Research (SEAR 2004) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Varna (Bulgaria)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number BCNPCL @ bcnpcl @ ADR2004 Serial 474
Permanent link to this record
 

 
Author Patricia Suarez; Angel Sappa; Dario Carpio; Henry Velesaca; Francisca Burgos; Patricia Urdiales
Title Deep Learning Based Shrimp Classification Type Conference Article
Year 2022 Publication 17th International Symposium on Visual Computing Abbreviated Journal
Volume 13598 Issue Pages 36–45
Keywords Pigmentation; Color space; Light weight network
Abstract This work proposes a novel approach based on deep learning to address the classification of shrimp (Penaeus vannamei) into two classes, according to their level of pigmentation accepted by shrimp commerce. The main goal of this study is to support the shrimp industry in terms of pricing and processing. An efficient CNN architecture is proposed to perform image classification through a program that could be deployed either on mobile devices or on fixed supports in the shrimp supply chain. The proposed approach is a lightweight model that uses shrimp images in the HSV color space. A simple pipeline shows the most important stages performed to determine the class to which each shrimp belongs, based on its pigmentation. For the experiments, a database of shrimp images acquired with mobile devices of various brands and models has been used. The results obtained with the images in the RGB and HSV color spaces allow for testing the effectiveness of the proposed model.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ISVC
Notes MSIAU; no proj Approved no
Call Number Admin @ si @ SAC2022 Serial 3772
Permanent link to this record
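The HSV preprocessing mentioned in the record above (converting shrimp images from RGB before classification) can be done with Python's standard colorsys module; the function name and pixel values here are illustrative:

```python
import colorsys

def rgb_image_to_hsv(pixels):
    """Convert a list of (r, g, b) pixels with components in [0, 1]
    to (h, s, v) -- the color space the proposed classifier uses,
    which separates hue from intensity and eases pigmentation analysis."""
    return [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]
```

Working in HSV lets the pigmentation level be read mostly from the hue and saturation channels, largely independently of illumination captured by the value channel.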
 

 
Author Bhalaji Nagarajan; Ricardo Marques; Marcos Mejia; Petia Radeva
Title Class-conditional Importance Weighting for Deep Learning with Noisy Labels Type Conference Article
Year 2022 Publication 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications Abbreviated Journal
Volume 5 Issue Pages 679-686
Keywords Noisy Labeling; Loss Correction; Class-conditional Importance Weighting; Learning with Noisy Labels
Abstract Large-scale, accurate labels are very important for training Deep Neural Networks and ensuring high performance. However, it is very expensive to create a clean dataset, since the process usually relies on human annotation. For this reason, the labelling process is made cheaper at the cost of obtaining noisy labels. Learning with Noisy Labels is an active and, at the same time, very challenging area of research. Recent advances in self-supervised learning and robust loss functions have helped to advance noisy label research. In this paper, we propose a loss correction method that relies on dynamic weights computed based on the model training. We extend the existing Contrast to Divide algorithm coupled with DivideMix using a new class-conditional weighting scheme. We validate the method using standard noise experiments and achieve encouraging results.
Address Virtual; February 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes MILAB; no menciona Approved no
Call Number Admin @ si @ NMM2022 Serial 3798
Permanent link to this record
 

 
Author Patricia Suarez; Dario Carpio; Angel Sappa
Title Depth Map Estimation from a Single 2D Image Type Conference Article
Year 2023 Publication 17th International Conference on Signal-Image Technology & Internet-Based Systems Abbreviated Journal
Volume Issue Pages 347-353
Keywords
Abstract This paper presents an innovative architecture based on a Cycle Generative Adversarial Network (CycleGAN) for the synthesis of high-quality depth maps from monocular images. The proposed architecture leverages a diverse set of loss functions, including cycle consistency, contrastive, identity, and least square losses, to facilitate the generation of depth maps that exhibit realism and high fidelity. A notable feature of the approach is its ability to synthesize depth maps from grayscale images without the need for paired training data. Extensive comparisons with different state-of-the-art methods show the superiority of the proposed approach in both quantitative metrics and visual quality. This work addresses the challenge of depth map synthesis and offers significant advancements in the field.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference SITIS
Notes MSIAU Approved no
Call Number Admin @ si @ SCS2023b Serial 4009
Permanent link to this record
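Of the losses listed in the record above, the cycle-consistency term is the one that makes unpaired training possible: mapping an image to the other domain and back should reproduce it. A minimal L1 version on flat pixel vectors, illustrative rather than the paper's exact formulation:

```python
def cycle_consistency_loss(x, x_reconstructed):
    """Mean absolute error between an input and its reconstruction after
    a full cycle (e.g., grayscale -> depth -> grayscale); driving this
    toward zero is what removes the need for paired training data."""
    return sum(abs(a - b) for a, b in zip(x, x_reconstructed)) / len(x)
```

In a CycleGAN-style setup this term is computed in both directions and added to the adversarial, identity, and least-square terms the abstract enumerates.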
 

 
Author Rafael E. Rivadeneira; Henry Velesaca; Angel Sappa
Title Object Detection in Very Low-Resolution Thermal Images through a Guided-Based Super-Resolution Approach Type Conference Article
Year 2023 Publication 17th International Conference on Signal-Image Technology & Internet-Based Systems Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This work proposes a novel approach that integrates super-resolution techniques with off-the-shelf object detection methods to tackle the problem of handling very low-resolution thermal images. The suggested approach begins by enhancing the low-resolution (LR) thermal images through a guided super-resolution strategy, leveraging a high-resolution (HR) visible spectrum image. Subsequently, object detection is performed on the high-resolution thermal image. The experimental results demonstrate substantial improvements over both baseline scenarios: performing object detection on the LR thermal image alone, and performing it on the up-sampled LR thermal image. Moreover, the proposed approach proves highly valuable in camouflaged scenarios, where objects might remain undetected in visible spectrum images.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference SITIS
Notes MSIAU Approved no
Call Number Admin @ si @ RVS2023 Serial 4010
Permanent link to this record