|
Carles Sanchez, Antonio Esteban Lansaque, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2017). Towards a Videobronchoscopy Localization System from Airway Centre Tracking. In 12th International Conference on Computer Vision Theory and Applications (pp. 352–359).
Abstract: Bronchoscopists use fluoroscopy to guide flexible bronchoscopy to the lesion to be biopsied without any kind of incision. Since fluoroscopy is an X-ray-based imaging technique, it increases the risk of developmental problems and cancer in exposed subjects, so minimizing radiation is crucial. Alternative guiding systems such as electromagnetic navigation require specific equipment, increase the cost of the clinical procedure and still require fluoroscopy. In this paper we propose an image-based guiding system that relies on the extraction of airway centres from intra-operative videos. Such anatomical landmarks are matched to the airway centreline extracted from a pre-planned CT to indicate the best path to the nodule. We present a feasibility study of our navigation system using simulated bronchoscopic videos and a multi-expert validation of landmark extraction in 3 intra-operative ultrathin explorations.
Keywords: Video-bronchoscopy; Lung cancer diagnosis; Airway lumen detection; Region tracking; Guided bronchoscopy navigation
|
|
|
Mariella Dimiccoli, Cathal Gurrin, David J. Crandall, Xavier Giro, & Petia Radeva. (2018). Introduction to the special issue: Egocentric Vision and Lifelogging. JVCIR - Journal of Visual Communication and Image Representation, 55, 352–353.
|
|
|
Francisco Javier Orozco, Xavier Roca, & Jordi Gonzalez. (2008). Real-Time Gaze Tracking with Appearance-Based Models. MVAP - Machine Vision and Applications, 20(6), 353–364.
Abstract: Psychological evidence has emphasized the importance of eye gaze analysis in human-computer interaction and emotion interpretation. To this end, current image analysis algorithms take into consideration eyelid and iris motion detection using colour information and edge detectors. However, eye movement is fast and hence difficult to track precisely and robustly. Instead, our proposed method describes eyelid and iris movements as continuous variables using appearance-based tracking. This approach combines the strengths of adaptive appearance models, optimization methods and backtracking techniques. Thus, in the proposed method textures are learned on-line from near-frontal images, and illumination changes, occlusions and fast movements are managed. The method achieves real-time performance by combining two appearance-based trackers with a backtracking algorithm, one for eyelid estimation and another for iris estimation. These contributions represent a significant advance towards a reliable gaze motion description for HCI and expression analysis, where the strengths of complementary methodologies are combined to avoid using high-quality images, colour information, texture training, camera settings and other time-consuming processes.
Keywords: Eyelid and iris tracking; Appearance models; Blinking; Iris saccade; Real-time gaze tracking
|
|
|
Patricia Suarez, Angel Sappa, & Boris X. Vintimilla. (2018). Vegetation Index Estimation from Monospectral Images. In 15th International Conference on Image Analysis and Recognition (Vol. 10882, pp. 353–362). LNCS.
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the near-infrared and red radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band is required for its computation. In the current work the NDVI index is estimated from the red channel alone by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single-level structure, which at the final layer combines the results of convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. The discriminative model then estimates the probability that a given NDVI index came from the training dataset rather than from the generator. Experimental results with a large set of real images show that a Conditional GAN single-level model is an acceptable approach to estimating the NDVI index.
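The two-band definition quoted in the abstract translates directly into code. Below is a minimal sketch of the classical NDVI computation that the CGAN learns to approximate from the red channel alone; the function and array names are ours, for illustration only.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Classical two-band NDVI: (NIR - Red) / (NIR + Red), valued in [-1, 1]."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    out = np.zeros_like(red)
    # Guard against division by zero on pixels with no radiance.
    np.divide(nir - red, nir + red, out=out, where=(nir + red) != 0)
    return out
```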
|
|
|
Ernest Valveny, & Philippe Dosch. (2004). Performance Evaluation of Symbol Recognition. In S. Marinai & A. Dengel (Eds.), Document Analysis Systems (Vol. 3163, pp. 354–365). LNCS. Springer Berlin Heidelberg.
|
|
|
Fadi Dornaika, & Angel Sappa. (2007). Improving Appearance-Based 3D Face Tracking Using Sparse Stereo Data. In J. Braz, A. Ranchordas, H. Araujo, & J. Jorge (Eds.), Advances in Computer Graphics and Computer Vision (pp. 354–366). Springer Verlag.
|
|
|
Susana Alvarez, Anna Salvatella, Maria Vanrell, & Xavier Otazu. (2010). 3D Texton Spaces for color-texture retrieval. In A. C. Campilho & M. S. Kamel (Eds.), 7th International Conference on Image Analysis and Recognition (Vol. 6111, pp. 354–363). LNCS. Springer Berlin Heidelberg.
Abstract: Color and texture are visual cues of a different nature, and their integration into a useful visual descriptor is not an easy problem. One way to combine both features is to compute spatial texture descriptors independently on each color channel. Another is to do the integration at the descriptor level, in which case the problem of normalizing both cues arises. In this paper we solve the latter problem by fusing color and texture through distances in texton spaces. Textons are the attributes of image blobs and are responsible for texture discrimination as defined in Julesz's Texton theory. We describe them in two low-dimensional and uniform spaces, namely shape and color. The dissimilarity between color texture images is computed by combining the distances in these two spaces. Following this approach, we propose our TCD descriptor, which outperforms current state-of-the-art methods in the two approaches mentioned above: early combination with LBP and late combination with MPEG-7. This is shown in an image retrieval experiment over a highly diverse texture dataset from Corel.
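The late fusion of cues the abstract describes reduces to combining two distances. A schematic sketch, assuming Euclidean distances and an illustrative 50/50 weighting (neither is specified in the abstract):

```python
import numpy as np

def texture_dissimilarity(shape_a, shape_b, color_a, color_b, w=0.5):
    """Fuse color and texture by combining distances measured in two
    low-dimensional texton spaces (shape and color). The weight w and
    the Euclidean metric are illustrative assumptions."""
    d_shape = np.linalg.norm(np.asarray(shape_a) - np.asarray(shape_b))
    d_color = np.linalg.norm(np.asarray(color_a) - np.asarray(color_b))
    return w * d_shape + (1.0 - w) * d_color
```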
|
|
|
Gioacchino Vino, & Angel Sappa. (2013). Revisiting Harris Corner Detector Algorithm: a Gradual Thresholding Approach. In 10th International Conference on Image Analysis and Recognition (Vol. 7950, pp. 354–363). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents an adaptive thresholding approach intended to increase the number of detected corners while reducing the number of detections corresponding to noisy data. The proposed approach builds on the classical Harris corner detector and overcomes the difficulty of finding a general threshold that works well for all the images in a given data set by means of a novel adaptive thresholding scheme. Initially, two thresholds are used to discern between strong corners and flat regions. Then, a region-based criterion is used to discriminate between weak corners and noisy points in the midway interval. Experimental results show that the proposed approach has a better capability to reject false corners and, at the same time, to detect weak ones. Comparisons with the state of the art are provided, showing the validity of the proposed approach.
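A minimal sketch of the two-threshold idea on top of OpenCV's Harris response; the local-maximum test below is a stand-in for the paper's region-based criterion, which the abstract does not fully specify:

```python
import cv2
import numpy as np

def gradual_harris(gray, t_low=0.01, t_high=0.1, win=5):
    """Two-threshold Harris selection (illustrative sketch)."""
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    r_max = resp.max()
    strong = resp > t_high * r_max             # confident corners
    midway = (resp > t_low * r_max) & ~strong  # weak corner or noise?
    # Keep a midway point only if it is the maximum of the Harris
    # response within its win x win neighbourhood.
    local_max = resp >= cv2.dilate(resp, np.ones((win, win), np.uint8))
    return strong | (midway & local_max)
```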
|
|
|
Yuhua Luo, Francisco Jose Perales, & Juan J. Villanueva. (1992). An Automatic Rotoscopy System for Human Motion Based on a Biomechanic Graphical Model. Computers & Graphics, 16(4), 355–362.
|
|
|
A. Sanfeliu, & Juan J. Villanueva. (2005). An approach of visual motion analysis. PRL - Pattern Recognition Letters, 26(3), 355–368.
|
|
|
Marco Pedersoli, Jordi Gonzalez, Xu Hu, & Xavier Roca. (2014). Toward Real-Time Pedestrian Detection Based on a Deformable Template Model. TITS - IEEE Transactions on Intelligent Transportation Systems, 15(1), 355–364.
Abstract: Most advanced driving assistance systems already include pedestrian detection systems. Unfortunately, there is still a tradeoff between precision and real-time performance. For reliable detection, an excellent precision-recall tradeoff is needed to detect as many pedestrians as possible while, at the same time, avoiding too many false alarms; in addition, very fast computation is needed for fast reactions to dangerous situations. Recently, novel approaches based on deformable templates have been proposed, since these show a reasonable detection performance although they are computationally too expensive for real-time operation. In this paper, we present a system for pedestrian detection based on a hierarchical multiresolution part-based model. The proposed system achieves state-of-the-art detection accuracy due to the local deformations of the parts while exhibiting a speedup of more than one order of magnitude due to a fast coarse-to-fine inference technique. Moreover, our system explicitly infers the level of resolution available so that the detection of small examples is feasible with a very reduced computational cost. We conclude this contribution by presenting how a graphics-processing-unit-optimized implementation of our proposed system is suitable for real-time pedestrian detection in terms of both accuracy and speed.
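The coarse-to-fine inference that yields the speedup can be sketched generically: score all windows with a cheap low-resolution model and evaluate the expensive part-based model only on the most promising ones. The interface below (score callables, keep fraction) is our illustration, not the paper's implementation:

```python
import numpy as np

def coarse_to_fine_detect(image, score_coarse, score_fine, stride=8, keep=0.05):
    """Score every window with a cheap low-resolution model, then run
    the expensive part-based model only on the top `keep` fraction of
    locations. score_coarse/score_fine are caller-supplied callables."""
    h, w = image.shape[:2]
    coords = [(y, x) for y in range(0, h, stride) for x in range(0, w, stride)]
    coarse = np.array([score_coarse(image, y, x) for y, x in coords])
    top = np.argsort(coarse)[::-1][: max(1, int(keep * len(coords)))]
    return [(coords[i], score_fine(image, *coords[i])) for i in top]
```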
|
|
|
Alejandro Gonzalez Alzate, Gabriel Villalonga, Jiaolong Xu, David Vazquez, Jaume Amores, & Antonio Lopez. (2015). Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection. In IEEE Intelligent Vehicles Symposium IV2015 (pp. 356–361).
Abstract: Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and a strong multi-view classifier) affects performance, both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a modality that is only recently starting to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the performance, the fusion of visible spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but is built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated ones recently proposed, such as the use of convolutional neural networks for feature representation, to further improve the accuracy.
Keywords: Pedestrian Detection
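As a simplified baseline in the spirit of the multimodal component (not the paper's multiview random forest of local experts), RGB appearance features and LIDAR depth features can be fused by concatenation before training a forest; all names, shapes and the random placeholder arrays below are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Early-fusion baseline: concatenate per-window appearance (RGB) and
# depth (LIDAR) descriptors and train a single random forest.
rgb_feats = np.random.rand(1000, 64)     # e.g. HOG-like descriptors
depth_feats = np.random.rand(1000, 32)   # e.g. depth-map statistics
labels = np.random.randint(0, 2, 1000)   # pedestrian vs. background

fused = np.hstack([rgb_feats, depth_feats])
clf = RandomForestClassifier(n_estimators=100).fit(fused, labels)
print(clf.score(fused, labels))  # training accuracy, illustration only
```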
|
|
|
Debora Gil, & Petia Radeva. (2003). Curvature Vector Flow to Assure Convergent Deformable Models for Shape Modelling. In Energy Minimization Methods in Computer Vision and Pattern Recognition (Vol. 2683, pp. 357–372). LNCS. Lisbon, Portugal: Springer Berlin Heidelberg.
Abstract: Poor convergence to concave shapes is a main limitation of snakes as a standard segmentation and shape modelling technique. The gradient of the external energy of the snake represents a force that pushes the snake into concave regions, as its internal energy increases when new inflection points are created. In spite of the improvement of the external energy by the gradient vector flow technique, highly non-convex shapes still cannot be obtained. In the present paper, we develop a new external energy based on the geometry of the curve to be modelled. By tracking back the deformation of a curve that evolves by minimum curvature flow, we construct a distance map that encapsulates the natural way of adapting to non-convex shapes. The gradient of this map, which we call curvature vector flow (CVF), is capable of attracting a snake towards any contour, whatever its geometry. Our experiments show that any initial snake condition converges to the curve to be modelled in optimal time.
Keywords: Initial condition; Convex shape; Non-convex analysis; Segmentation; Gradient; Concave shape; Flow models; Tracking; Edge detection; Curvature
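The construction admits a compact summary (our notation, with the sign chosen so the force points toward the target contour): the target curve gamma is evolved by curvature flow, arrival times define a distance map, and its negative gradient is the CVF external force.

```latex
% Target curve evolved by (minimum) curvature flow:
\frac{\partial \gamma}{\partial t} = \kappa\,\mathbf{n},
% arrival times define a distance map:
\qquad D(x) = t \ \text{ for } x \in \gamma_t,
% whose negative gradient is the external snake force:
\qquad \mathrm{CVF}(x) = -\nabla D(x).
```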
|
|
|
Pau Rodriguez, Josep M. Gonfaus, Guillem Cucurull, Xavier Roca, & Jordi Gonzalez. (2018). Attend and Rectify: A Gated Attention Mechanism for Fine-Grained Recovery. In 15th European Conference on Computer Vision (Vol. 11212, pp. 357–372). LNCS.
Abstract: We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition. It learns to attend to lower-level feature activations without requiring part annotations and uses these activations to update and rectify the output likelihood distribution. In contrast to other approaches, the proposed mechanism is modular, architecture-independent and efficient in terms of both parameters and computation required. Experiments show that networks augmented with our approach systematically improve their classification accuracy and become more robust to clutter. As a result, Wide Residual Networks augmented with our proposal surpass the state-of-the-art classification accuracies on CIFAR-10, the Adience gender recognition task, Stanford Dogs, and UEC Food-100.
Keywords: Deep Learning; Convolutional Neural Networks; Attention
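A minimal PyTorch sketch of the idea (attend to a lower-level feature map, predict class scores from the attended features, and gate their contribution to the output logits); the layer shapes and the gating form are our assumptions, not the published module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttention(nn.Module):
    """Illustrative gated attention over a lower-level feature map."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.att = nn.Conv2d(channels, 1, kernel_size=1)   # spatial attention
        self.cls = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.gate = nn.Linear(channels, 1)                 # confidence gate

    def forward(self, feats: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        a = F.softmax(self.att(feats).view(b, -1), dim=1).view(b, 1, h, w)
        scores = (self.cls(feats) * a).sum(dim=(2, 3))     # attended class scores
        g = torch.sigmoid(self.gate((feats * a).sum(dim=(2, 3))))
        return logits + g * scores                         # rectified output
```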
|
|
|
Sergi Garcia Bordils, Andres Mafla, Ali Furkan Biten, Oren Nuriel, Aviad Aberdam, Shai Mazor, et al. (2022). Out-of-Vocabulary Challenge Report. In Proceedings European Conference on Computer Vision Workshops (Vol. 13804, pp. 359–375). LNCS.
Abstract: This paper presents the final results of the Out-Of-Vocabulary 2022 (OOV) challenge. The OOV contest introduces an important aspect that is not commonly studied by Optical Character Recognition (OCR) models, namely the recognition of scene text instances unseen at training time. The competition compiles a collection of public scene text datasets comprising 326,385 images with 4,864,405 scene text instances, thus covering a wide range of data distributions. A new and independent validation and test set is formed with scene text instances that are out of vocabulary at training time. The competition was structured around two tasks: end-to-end and cropped scene text recognition. A thorough analysis of results from baselines and different participants is presented. Interestingly, current state-of-the-art models show a significant performance gap under the newly studied setting. We conclude that the OOV dataset proposed in this challenge will be an essential resource for developing scene text models that achieve more robust and generalized predictions.
|
|