Juan Borrego-Carazo, Carles Sanchez, David Castells, Jordi Carrabina, & Debora Gil. (2023). BronchoPose: an analysis of data and model configuration for vision-based bronchoscopy pose estimation. CMPB - Computer Methods and Programs in Biomedicine, 228, 107241.
Abstract: Vision-based bronchoscopy (VB) models require the registration of the virtual lung model with the frames from the video bronchoscopy to provide effective guidance during the biopsy. The registration can be achieved by either tracking the position and orientation of the bronchoscopy camera or by calibrating its deviation from the pose (position and orientation) simulated in the virtual lung model. Recent advances in neural networks and temporal image processing have provided new opportunities for guided bronchoscopy. However, such progress has been hindered by the lack of comparative experimental conditions.
In the present paper, we share a novel synthetic dataset allowing for a fair comparison of methods. Moreover, this paper investigates several neural network architectures for the learning of temporal information at different levels of subject personalization. In order to improve orientation measurement, we also present a standardized comparison framework and a novel metric for camera orientation learning. Results on the dataset show that the proposed metric and architectures, as well as the standardized conditions, provide notable improvements to current state-of-the-art camera pose estimation in video bronchoscopy.
Keywords: Videobronchoscopy guiding; Deep learning; Architecture optimization; Datasets; Standardized evaluation framework; Pose estimation
Juan Borrego-Carazo, Carles Sanchez, David Castells, Jordi Carrabina, & Debora Gil. (2022). A benchmark for the evaluation of computational methods for bronchoscopic navigation. IJCARS - International Journal of Computer Assisted Radiology and Surgery, 17(1).
Juan Andrade, T. Alejandra Vidal, & A. Sanfeliu. (2005). Multirobot C-SLAM: Simultaneous localization, control, and mapping.
Juan Andrade, T. Alejandra Vidal, & A. Sanfeliu. (2005). Stochastic state estimation for simultaneous localization and map building in mobile robotics. In Vedran Kordic, Aleksandar Lazinica, and Munir Merdan (Eds.), Cutting Edge Robotics, Advanced Robotic Systems Press, 3.3:223–242.
Juan Andrade, T. Alejandra Vidal, & A. Sanfeliu. (2005). Unscented transformation of vehicle states in SLAM.
Juan Andrade, & F. Thomas. (2006). Wire-Based Tracking using Mutual Information.
Juan Andrade, & A. Sanfeliu. (2005). The effects of partial observability when building fully correlated maps. IEEE Transactions on Robotics, 21(4):771–777 (IF: 1.486).
Juan A. Carvajal Ayala, Dennis Romero, & Angel Sappa. (2016). Fine-tuning based deep convolutional networks for lepidopterous genus recognition. In 21st Ibero American Congress on Pattern Recognition (pp. 467–475). LNCS.
Abstract: This paper describes an image classification approach oriented to identify specimens of lepidopterous insects at Ecuadorian ecological reserves. This work seeks to contribute to studies in the area of biology about genera of butterflies and also to facilitate the registration of unrecognized specimens. The proposed approach is based on the fine-tuning of three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented, reaching a recognition accuracy above 92%.
Josep M. Gonfaus, Xavier Boix, Joost Van de Weijer, Andrew Bagdanov, Joan Serrat, & Jordi Gonzalez. (2010). Harmony Potentials for Joint Classification and Segmentation. In 23rd IEEE Conference on Computer Vision and Pattern Recognition (pp. 3280–3287).
Abstract: Hierarchical conditional random fields have been successfully applied to object segmentation. One reason is their ability to incorporate contextual information at different scales. However, these models do not allow multiple labels to be assigned to a single node. At higher scales in the image, this yields an oversimplified model, since multiple classes can reasonably be expected to appear within one region. This simplified model especially limits the impact that observations at larger scales may have on the CRF model. Neglecting the information at larger scales is undesirable, since class-label estimates based on these scales are more reliable than at smaller, noisier scales. To address this problem, we propose a new potential, called the harmony potential, which can encode any possible combination of class labels. We propose an effective sampling strategy that renders the underlying optimization problem tractable. Results show that our approach obtains state-of-the-art results on two challenging datasets: Pascal VOC 2009 and MSRC-21.
Josep M. Gonfaus, Theo Gevers, Arjan Gijsenij, Xavier Roca, & Jordi Gonzalez. (2012). Edge Classification using Photo-Geometric Features. In 21st International Conference on Pattern Recognition (pp. 1497–1500).
Abstract: Edges are caused by several imaging cues such as shadow, material and illumination transitions. Classification methods have been proposed which are solely based on photometric information, ignoring geometry to classify the physical nature of edges in images. In this paper, the aim is to present a novel strategy to handle both photometric and geometric information for edge classification. Photometric information is obtained through the use of quasi-invariants while geometric information is derived from the orientation and contrast of edges. Different combination frameworks are compared with a new principled approach that captures both information into the same descriptor. From large scale experiments on different datasets, it is shown that, in addition to photometric information, the geometry of edges is an important visual cue to distinguish between different edge types. It is concluded that by combining both cues the performance improves by more than 7% for shadows and highlights.
Josep M. Gonfaus, Marco Pedersoli, Jordi Gonzalez, Andrea Vedaldi, & Xavier Roca. (2015). Factorized appearances for object detection. CVIU - Computer Vision and Image Understanding, 138, 92–101.
Abstract: Deformable object models capture variations in an object’s appearance that can be represented as image deformations. Other effects such as out-of-plane rotations, three-dimensional articulations, and self-occlusions are often captured by considering mixture of deformable models, one per object aspect. A more scalable approach is representing instead the variations at the level of the object parts, applying the concept of a mixture locally. Combining a few part variations can in fact cheaply generate a large number of global appearances.
A limited version of this idea was proposed by Yang and Ramanan [1] for human pose detection. In this paper we apply it to the task of generic object category detection and extend it in several ways. First, we propose a model for the relationship between part appearances more general than the tree of Yang and Ramanan [1], which is more suitable for generic categories. Second, we treat part locations as well as their appearance as latent variables, so that training does not need part annotations but only the object bounding boxes. Third, we modify the weakly-supervised learning of Felzenszwalb et al. and Girshick et al. [2], [3] to handle a significantly more complex latent structure.
Our model is evaluated on standard object detection benchmarks and is found to improve over existing approaches, yielding state-of-the-art results for several object categories.
Keywords: Object recognition; Deformable part models; Learning and sharing parts; Discovering discriminative parts
Josep M. Gonfaus. (2009). Semantic Segmentation of Images Using Random Ferns (Vol. 132). Master's thesis, Bellaterra, Barcelona.
Josep M. Gonfaus. (2012). Towards Deep Image Understanding: From pixels to semantics (Jordi Gonzalez, & Theo Gevers, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Understanding the content of the images is one of the greatest challenges of computer vision. Recognition of objects appearing in images, identifying and interpreting their actions are the main purposes of Image Understanding. This thesis seeks to identify what is present in a picture by categorizing and locating all the objects in the scene.
Images are composed of pixels, and one possibility consists of assigning an object category to each pixel, which is commonly known as semantic segmentation. By incorporating information as a contextual cue, we are able to resolve the ambiguity between categories at the pixel level. We propose three levels of scale in order to resolve such ambiguity.
Another possibility to represent the objects is the object detection task. In this case, the aim is to recognize and localize the whole object by accurately placing a bounding box around it. We present two new approaches. The first one is focused on improving the object representation of deformable part models with the concept of factorized appearances. The second approach addresses the issue of reducing the computational cost for multi-class recognition. The results given have been validated on several commonly used datasets, reaching international recognition and state-of-the-art performance within the field.
Josep Llados, Horst Bunke, & Enric Marti. (1997). Using Cyclic String Matching to Find Rotational and Reflectional Symmetries in Shapes. In Intelligent Robots: Sensing, Modeling and Planning (pp. 164–179). World Scientific Press.
Note: Dagstuhl Workshop.
Josep Llados, & Young-Bin Kwon. (2004). Graphics Recognition. Recent Advances and Perspectives.