Georg Langs, Petia Radeva, David Rotger, & Francesc Carreras. (2004). Explorative Building of 3D Vessel Tree Models.
|
Firat Ismailoglu, Ida G. Sprinkhuizen-Kuyper, Evgueni Smirnov, Sergio Escalera, & Ralf Peeters. (2015). Fractional Programming Weighted Decoding for Error-Correcting Output Codes. In Multiple Classifier Systems, Proceedings of the 12th International Workshop, MCS 2015 (pp. 38–50). Springer International Publishing.
Abstract: Introducing weights in the decoding phase of Error-Correcting Output Code (ECOC) designs has attracted considerable interest as a way to increase classification performance. In this work, we present a method for ECOC designs that focuses on increasing the hypothesis margin on the data samples given a base classifier. In doing so, we implicitly reward base classifiers with high performance and penalize those with low performance. The resulting objective function is of the fractional programming type, and we solve it with Dinkelbach's algorithm. Tests on well-known UCI datasets show that the presented method is superior to unweighted decoding and that it outperforms state-of-the-art weighted decoding methods in most of the performed experiments.
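Dinkelbach's algorithm, mentioned in this abstract, reduces maximization of a ratio f(x)/g(x) (with g > 0) to a sequence of parametric subproblems. The sketch below illustrates the generic method on a toy one-dimensional objective; the grid, functions, and solver are illustrative only, not the paper's actual ECOC objective.

```python
def dinkelbach(f, g, argmax_solver, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) (g > 0) via Dinkelbach's parametric method."""
    lam = 0.0
    x = None
    for _ in range(max_iter):
        # Solve the parametric subproblem: max_x f(x) - lam * g(x).
        x = argmax_solver(lam)
        val = f(x) - lam * g(x)
        if abs(val) < tol:  # optimal ratio reached
            break
        lam = f(x) / g(x)   # update the ratio estimate
    return x, lam

# Toy example: maximize (x + 1) / (x**2 + 1) over a grid of candidates.
# The continuous optimum is x = sqrt(2) - 1 with ratio (sqrt(2) + 1) / 2.
xs = [i / 1000.0 for i in range(-3000, 3001)]
f = lambda x: x + 1.0
g = lambda x: x * x + 1.0
solver = lambda lam: max(xs, key=lambda x: f(x) - lam * g(x))
x_star, ratio = dinkelbach(f, g, solver)
```

Each iteration raises the ratio estimate `lam`, and the subproblem value reaching zero certifies optimality; on a finite grid the loop terminates exactly.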
|
Naila Murray, Luca Marchesotti, & Florent Perronnin. (2012). Learning to Rank Images using Semantic and Aesthetic Labels. In 23rd British Machine Vision Conference (pp. 110.1–110.10).
Abstract: Most works on image retrieval from text queries have addressed the problem of retrieving semantically relevant images. However, the ability to assess the aesthetic quality of an image is an increasingly important differentiating factor for search engines. In this work, given a semantic query, we are interested in retrieving images that are semantically relevant and score highly in terms of aesthetics/visual quality. We use large-margin classifiers and rankers to learn statistical models capable of ordering images based on aesthetic and semantic information. In particular, we compare two families of approaches: the first attempts to learn a single ranker that takes both semantic and aesthetic information into account, while the second learns separate semantic and aesthetic models. We carry out a quantitative and qualitative evaluation on a recently published large-scale dataset and show that the second family of techniques significantly outperforms the first.
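Large-margin rankers of the kind mentioned in this abstract are typically trained on preference pairs with a pairwise hinge loss. A minimal RankSVM-style sketch on synthetic data follows; the features, pair generation, and hyperparameters are all hypothetical, not the paper's setup.

```python
import random

# Pairwise large-margin ranker (RankSVM-style) trained by subgradient
# descent on hinge losses over preference pairs. Data is synthetic.
random.seed(0)
dim = 3
w = [0.0] * dim

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Build preference pairs (x_pos should rank above x_neg) from a
# hidden "true" scoring direction.
true_w = [1.0, -0.5, 2.0]
pairs = []
for _ in range(200):
    a = [random.uniform(-1, 1) for _ in range(dim)]
    b = [random.uniform(-1, 1) for _ in range(dim)]
    pairs.append((a, b) if score(true_w, a) > score(true_w, b) else (b, a))

lr, lam = 0.1, 1e-3
for epoch in range(50):
    for xp, xn in pairs:
        margin = score(w, xp) - score(w, xn)
        grad = [lam * wi for wi in w]          # L2 regularization term
        if margin < 1.0:                        # hinge loss is active
            grad = [g - (p - n) for g, p, n in zip(grad, xp, xn)]
        w = [wi - lr * g for wi, g in zip(w, grad)]

# Fraction of training pairs ordered correctly by the learned ranker.
acc = sum(score(w, xp) > score(w, xn) for xp, xn in pairs) / len(pairs)
```

A "single ranker" in the paper's first family would concatenate semantic and aesthetic features into one such score; the second family would train two separate models and combine their outputs.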
|
Roberto Morales, Juan Quispe, & Eduardo Aguilar. (2023). Exploring multi-food detection using deep learning-based algorithms. In 13th International Conference on Pattern Recognition Systems (pp. 1–7).
Abstract: People are becoming increasingly concerned about their diet, whether for disease prevention, medical treatment or other purposes. In meals served in restaurants, schools or public canteens, it is not easy to identify the ingredients and/or the nutritional information they contain. Currently, technological solutions based on deep learning models have facilitated the recording and tracking of food consumed based on the recognition of the main dish present in an image. Considering that there may be multiple foods served on the same plate, food analysis should be treated as a multi-class object detection problem. EfficientDet and YOLOv5 are object detection algorithms that have demonstrated high mAP and real-time performance on general domain data. However, these models have not been evaluated and compared on public food datasets. Unlike general domain objects, foods have more challenging features inherent in their nature that increase the complexity of detection. In this work, we evaluated the performance of EfficientDet and YOLOv5 on three public food datasets: UNIMIB2016, UECFood256 and ChileanFood64. The results show that YOLOv5 significantly outperforms EfficientDet in both mAP and response time on all datasets. Furthermore, YOLOv5 outperforms the state-of-the-art on UECFood256, achieving an improvement of more than 4% in terms of mAP@.50 over the best reported result.
|
Gisel Bastidas-Guacho, Patricio Moreno, Boris X. Vintimilla, & Angel Sappa. (2023). Application on the Loop of Multimodal Image Fusion: Trends on Deep-Learning Based Approaches. In 13th International Conference on Pattern Recognition Systems (Vol. 14234, pp. 25–36).
Abstract: Multimodal image fusion allows the combination of information from different modalities, which is useful for tasks such as object detection, edge detection, and tracking, to name a few. Using the fused representation for applications results in better task performance. There are several image fusion approaches, which have been summarized in surveys. However, the existing surveys focus on image fusion approaches where the application on the loop of multimodal image fusion is not considered. On the contrary, this study summarizes deep learning-based multimodal image fusion for computer vision (e.g., object detection) and image processing applications (e.g., semantic segmentation), that is, approaches where the application module leverages the multimodal fusion process to enhance the final result. Firstly, we introduce image fusion and the existing general frameworks for image fusion tasks such as multifocus, multiexposure and multimodal. Then, we describe the multimodal image fusion approaches. Next, we review the state-of-the-art deep learning multimodal image fusion approaches for vision applications. Finally, we conclude our survey with the trends of task-driven multimodal image fusion.
|
Maria Vanrell, Felipe Lumbreras, A. Pujol, Ramon Baldrich, Josep Llados, & Juan J. Villanueva. (2001). Colour Normalisation Based on Background Information.
|
Jose Luis Alba, A. Pujol, & Juan J. Villanueva. (2001). Separating Geometry from Texture to Improve Face Analysis.
|
A. Pujol, Juan J. Villanueva, & Jose Luis Alba. (2001). Efficient Computation of Face Shape Similarity Using Distance Transform Eigendecomposition and Valleys.
|
Sonia Baeza, Debora Gil, Carles Sanchez, Guillermo Torres, Ignasi Garcia Olive, Ignasi Guasch, et al. (2023). Radiomic Virtual Biopsy for the Histological Diagnosis of Pulmonary Nodules – Intermediate Results of the Radiolung Project. In SEPAR.
|
Esmitt Ramirez, Carles Sanchez, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2018). Image-Based Bronchial Anatomy Codification for Biopsy Guiding in Video Bronchoscopy. In OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis (Vol. 11041, LNCS).
Abstract: Bronchoscopy examinations allow biopsy of pulmonary nodules with minimum risk for the patient. Even for experienced bronchoscopists, it is difficult to guide the bronchoscope to the most distal lesions and obtain an accurate diagnosis. This paper presents an image-based codification of the bronchial anatomy for bronchoscopy biopsy guiding. The 3D anatomy of each patient is codified as a binary tree with nodes representing bronchial levels and edges labeled using their position on images projecting the 3D anatomy from a set of branching points. The paths from the root to the leaves provide a codification of navigation routes with spatially consistent labels according to the anatomy observed in video bronchoscopy explorations. We evaluate our labeling approach as a guiding system in terms of the number of bronchial levels correctly codified, as well as the number of label-based instructions correctly supplied, using generalized mixed models and computer-generated data. Results obtained for three independent observers prove the consistency and reproducibility of our guiding system. We believe that our codification based on the viewer's projection could serve as a foundation for the navigation process in Virtual Bronchoscopy systems.
Keywords: Biopsy guiding; Bronchoscopy; Lung biopsy; Intervention guiding; Airway codification
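The tree codification this abstract describes, where root-to-leaf paths yield labelled navigation routes, can be sketched as follows; the node names and the L/R edge labels are hypothetical stand-ins for the paper's image-based labels.

```python
# Minimal sketch of a binary airway tree whose edges carry labels;
# each root-to-leaf path concatenates into a navigation route code.
class Node:
    def __init__(self, name, left=None, right=None):
        self.name = name
        self.left = left    # edge to left child, implicitly labelled 'L'
        self.right = right  # edge to right child, implicitly labelled 'R'

def routes(node, path=""):
    """Enumerate (leaf name, route code) pairs for all root-to-leaf paths."""
    if node.left is None and node.right is None:
        return [(node.name, path)]
    out = []
    if node.left:
        out += routes(node.left, path + "L")
    if node.right:
        out += routes(node.right, path + "R")
    return out

# Hypothetical three-level bronchial tree.
tree = Node("trachea",
            Node("left_main", Node("LUL"), Node("LLL")),
            Node("right_main", Node("RUL"), Node("RLL")))
print(routes(tree))
# [('LUL', 'LL'), ('LLL', 'LR'), ('RUL', 'RL'), ('RLL', 'RR')]
```

Each route code is a sequence of guiding instructions ("take the left branch, then the right branch, …"), which is the sense in which the paper counts label-based instructions correctly supplied.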
|
Md. Mostafa Kamal Sarker, Hatem A. Rashwan, Farhan Akram, Syeda Furruka Banu, Adel Saleh, Vivek Kumar Singh, et al. (2018). SLSDeep: Skin Lesion Segmentation Based on Dilated Residual and Pyramid Pooling Networks. In 21st International Conference on Medical Image Computing & Computer Assisted Intervention (Vol. 2, pp. 21–29).
Abstract: Skin lesion segmentation (SLS) in dermoscopic images is a crucial task for automated diagnosis of melanoma. In this paper, we present a robust deep learning SLS model, called SLSDeep, structured as an encoder-decoder network. The encoder is constructed from dilated residual layers, while the decoder uses a pyramid pooling network followed by three convolution layers. Unlike traditional methods employing a cross-entropy loss, we investigated a loss function combining Negative Log Likelihood (NLL) and End Point Error (EPE) to accurately segment melanoma regions with sharp boundaries. The robustness of the proposed model was evaluated on two public databases, ISBI 2016 and 2017, for the skin lesion analysis towards melanoma detection challenge. The proposed model outperforms the state-of-the-art methods in terms of segmentation accuracy. Moreover, it is capable of segmenting more than 100 images of size 384x384 per second on a recent GPU.
|
J. Mauri, E. Fernandez-Nofrerias, E. Esplugas, A. Cequier, David Rotger, Ricardo Toledo, et al. (2000). Intracoronary Ultrasound: Computer Navigation through the Image Data Cube.
|
David Vazquez, Antonio Lopez, Daniel Ponsa, & Javier Marin. (2011). Cool world: domain adaptation of virtual and real worlds for human detection using active learning. In NIPS Domain Adaptation Workshop: Theory and Application. Granada, Spain.
Abstract: Image-based human detection is of paramount interest for different applications. The most promising human detectors rely on discriminatively learnt classifiers, i.e., trained with labelled samples. However, labelling is a labour-intensive task, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, in Marin et al. we proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera and the same type of scenario. Accordingly, in Vazquez et al. we cast the problem as one of supervised domain adaptation. In doing so, we assume that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we use an active learning technique. Thus, ultimately our human model is learnt from the combination of virtual- and real-world labelled samples, which, to the best of our knowledge, had not been done before. Here, we term this combined space cool world. In this extended abstract we summarize our proposal and include quantitative results from Vazquez et al. showing its validity.
Keywords: Pedestrian Detection; Virtual; Domain Adaptation; Active Learning
|
Oriol Pujol, Petia Radeva, & Jordi Vitria. (2005). Traffic sign recognition using an adaptive boosting multiclass framework.
|
Martha Mackay, Fernando Alonso, Pere Salamero, Xavier Baro, Jordi Gonzalez, & Sergio Escalera. (2015). Care and caring: future proofing the new demographics. In 6th International Carers Conference.
Abstract: With an ageing population, the issue of care provision is becoming increasingly important. The simple aspiration of the majority of older people is to live safely and well at home. Housing will be part of health and care integration in the coming years and decades. A higher proportion of people will have to rely on informal care through family, friends, neighbours and others who provide care to an older person in need of assistance (around 80% of care across the EU). These carers do not usually have a formal status and are usually unpaid. We need to ensure that all disabled or chronically ill people can get the help they need without overburdening their families. The physical and emotional stress placed on carers is one of the dangers that this dependency can bring. To prevent carer burnout, it is necessary to provide new solutions that are affordable and user-friendly for families and caregivers.
|