Patricia Marquez, Debora Gil, & Aura Hernandez-Sabate. (2012). Error Analysis for Lucas-Kanade Based Schemes. In 9th International Conference on Image Analysis and Recognition (Vol. 7324, pp. 184–191). LNCS. Springer-Verlag Berlin Heidelberg.
Abstract: Optical flow is a valuable tool for motion analysis in medical imaging sequences. A reliable application requires determining the accuracy of the computed optical flow. This is a major challenge given the absence of ground truth in medical sequences. This paper presents an error analysis of Lucas-Kanade schemes in terms of intrinsic design errors and numerical stability of the algorithm. Our analysis provides a confidence measure that is naturally correlated with the accuracy of the flow field. Our experiments show the higher predictive value of our confidence measure compared to existing measures.
Keywords: Optical flow, Confidence measure, Lucas-Kanade, Cardiac Magnetic Resonance
|
Fernando Barrera, Felipe Lumbreras, & Angel Sappa. (2012). Evaluation of Similarity Functions in Multimodal Stereo. In 9th International Conference on Image Analysis and Recognition (Vol. 7324, pp. 320–329). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents an evaluation framework for multimodal stereo matching, which allows the performance of four similarity functions to be compared. Additionally, it presents details of a multimodal stereo head that supplies thermal infrared and color images, as well as aspects of its calibration and rectification. The pipeline includes a novel method for disparity selection, which is suitable for evaluating the similarity functions. Finally, a benchmark for comparing different initializations of the proposed framework is presented. Similarity functions are based on mutual information, gradient orientation and scale space representations. Their evaluation is performed using two metrics: i) disparity error, and ii) number of correct matches on planar regions. In addition to the proposed evaluation, the current paper also shows that 3D sparse representations can be recovered from such a multimodal stereo head.
|
Carles Sanchez, F. Javier Sanchez, Antoni Rosell, & Debora Gil. (2012). An illumination model of the trachea appearance in videobronchoscopy images. In Image Analysis and Recognition (Vol. 7325, pp. 313–320). LNCS. Springer Berlin Heidelberg.
Abstract: Videobronchoscopy is a medical imaging technique that allows interactive navigation inside the respiratory pathways. This imaging modality provides realistic images and allows minimally invasive procedures. Tracheal procedures are routine interventions that require assessment of the percentage of obstructed pathway for injury (stenosis) detection. Visual assessment in videobronchoscopic sequences requires high expertise in tracheal anatomy and is prone to human error.
This paper introduces an automatic method for estimating the percentage of tracheal stenosis in videobronchoscopic images. We look for tracheal rings, whose deformation determines the degree of obstruction. For ring extraction, we present a ring detector based on an illumination and appearance model. This model allows us to parametrise the ring detection. Finally, we can infer optimal estimation parameters for any video resolution.
Keywords: Bronchoscopy, tracheal ring, stenosis assessment, trachea appearance model, segmentation
|
Ricard Borras, Agata Lapedriza, & Laura Igual. (2012). Depth Information in Human Gait Analysis: An Experimental Study on Gender Recognition. In 9th International Conference on Image Analysis and Recognition (Vol. 7325, pp. 98–105). LNCS. Springer Berlin Heidelberg.
Abstract: This work presents DGait, a new gait database acquired with a depth camera. This database contains videos from 53 subjects walking in different directions. The intent of this database is to provide a public set to explore whether depth can be used as an additional information source for gait classification purposes. Each video is labelled according to subject, gender and age. Furthermore, for each subject and view point, we provide the initial and final frames of an entire walk cycle. Moreover, we perform gait-based gender classification experiments with the DGait database, in order to illustrate the usefulness of depth information for this purpose. In our experiments, we extract 2D and 3D gait features based on shape descriptors, and compare the performance of these features for gender identification, using a kernel SVM. The obtained results show that depth can be an information source of great relevance for gait classification problems.
|
Laura Igual, Joan Carles Soliva, Roger Gimeno, Sergio Escalera, Oscar Vilarroya, & Petia Radeva. (2012). Automatic Internal Segmentation of Caudate Nucleus for Diagnosis of Attention Deficit Hyperactivity Disorder. In 9th International Conference on Image Analysis and Recognition (Vol. 7325, pp. 222–229). LNCS. Springer Berlin Heidelberg.
Abstract: Studies on volumetric brain Magnetic Resonance Imaging (MRI) have shown neuroanatomical abnormalities in pediatric Attention-Deficit/Hyperactivity Disorder (ADHD). In particular, diminished right caudate volume is one of the most replicated findings among ADHD samples in morphometric MRI studies. In this paper, we propose a fully automatic method for internal caudate nucleus segmentation based on machine learning. Moreover, the ratio between the right caudate body volume and the bilateral caudate body volume is applied in an ADHD diagnostic test. We separately validate the automatic internal segmentation of the caudate into head and body structures and the diagnostic test using real data from ADHD and control subjects. As a result, we show accurate internal caudate segmentation and similar performance between the proposed automatic diagnostic test and manual annotation.
|
Albert Clapes, Miguel Reyes, & Sergio Escalera. (2012). User Identification and Object Recognition in Clutter Scenes Based on RGB-Depth Analysis. In 7th Conference on Articulated Motion and Deformable Objects (Vol. 7378, pp. 1–11). LNCS. Springer Berlin Heidelberg.
Abstract: We propose an automatic system for user identification and object recognition based on multi-modal RGB-Depth data analysis. We model an RGBD environment by learning a pixel-based background Gaussian distribution. Then, user and object candidate regions are detected and recognized online using robust statistical approaches over RGBD descriptions. Finally, the system stores the history of user-object assignments, which is especially useful for surveillance scenarios. The system has been evaluated on a novel data set containing different indoor/outdoor scenarios, objects, and users, showing accurate recognition and better performance than standard state-of-the-art approaches.
|
Wenjuan Gong, Jordi Gonzalez, Joao Manuel R. S. Tavares, & Xavier Roca. (2012). A New Image Dataset on Human Interactions. In 7th Conference on Articulated Motion and Deformable Objects (Vol. 7378, pp. 204–209). LNCS. Springer Berlin Heidelberg.
Abstract: This article describes a new still-image dataset dedicated to interactions between people. Human action recognition from still images has been a hot topic recently, but most work addresses actions performed by a single person, such as running, walking, riding a bike or phoning, with no interaction between people in one image. The dataset collected in this paper concentrates on interactions between two people, aiming to explore this new topic in the research area of action recognition from still images.
|
Sergio Escalera. (2012). Human Behavior Analysis From Depth Maps. In F.J. Perales, R.B. Fisher, & T.B. Moeslund (Eds.), 7th Conference on Articulated Motion and Deformable Objects (Vol. 7378, pp. 282–292). LNCS. Springer Berlin Heidelberg.
Abstract: Pose Recovery (PR) and Human Behavior Analysis (HBA) have been a main focus of interest from the beginnings of Computer Vision and Machine Learning. PR and HBA were originally addressed by the analysis of still images and image sequences. More recent strategies consisted of Motion Capture technology (MOCAP), based on the synchronization of multiple cameras in controlled environments; and the analysis of depth maps from Time-of-Flight (ToF) technology, based on range image recording from distance sensor measurements. Recently, with the appearance of the multi-modal RGBD information provided by the low cost Kinect™ sensor (from RGB and Depth, respectively), classical methods for PR and HBA have been redefined, and new strategies have been proposed. In this paper, the recent contributions and future trends of multi-modal RGBD data analysis for PR and HBA are reviewed and discussed.
|
Yainuvis Socarras, David Vazquez, Antonio Lopez, David Geronimo, & Theo Gevers. (2012). Improving HOG with Image Segmentation: Application to Human Detection. In J. Blanc-Talon et al. (Eds.), 11th International Conference on Advanced Concepts for Intelligent Vision Systems (Vol. 7517, pp. 178–189). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we improve the histogram of oriented gradients (HOG), a core descriptor of state-of-the-art object detection, by the use of higher-level information coming from image segmentation. The idea is to re-weight the descriptor while computing it, without increasing its size. The benefits of the proposal are two-fold: (i) to improve the performance of the detector by enriching the descriptor information, and (ii) to take advantage of the information of image segmentation, which in fact is likely to be used in other stages of the detection system, such as candidate generation or refinement.
We test our technique on the INRIA person dataset, which was originally developed to test HOG, embedding it in a human detection system. The well-known mean-shift segmentation method (from smaller to larger super-pixels) and different methods to re-weight the original descriptor (constant, region-luminance, color- or texture-dependent) have been evaluated. We achieve a performance improvement of 4.47% in detection rate through the use of differences of color between contour pixel neighborhoods as the re-weighting function.
Keywords: Segmentation, Pedestrian Detection
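The core idea of the paper above, re-weighting the gradient votes of a HOG cell with segmentation-derived factors while keeping the descriptor size unchanged, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `weighted_hog_cell`, the 9-bin unsigned-orientation layout, and the per-pixel weight map are assumptions for the sketch.

```python
import math

def weighted_hog_cell(patch, weight, bins=9):
    """Orientation histogram of one HOG cell with per-pixel re-weighting.

    patch  : 2-D list of grey values (one cell of the detection window)
    weight : 2-D list of re-weighting factors, e.g. derived from an image
             segmentation; an all-ones map reduces to the plain HOG cell
    bins   : number of unsigned-orientation bins over [0, 180) degrees
    """
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):          # skip the border, central differences
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            b = int(ang / (180.0 / bins)) % bins
            hist[b] += weight[y][x] * mag  # re-weighted vote, same descriptor size
    return hist
```

For example, a purely vertical step edge votes only into the 0-degree bin, and doubling the segmentation weights doubles those votes without changing the histogram length.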
|
Ekaterina Zaytseva, Santiago Segui, & Jordi Vitria. (2012). Sketchable Histograms of Oriented Gradients for Object Detection. In 17th Iberoamerican Conference on Pattern Recognition (Vol. 7441, pp. 374–381). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we investigate a new representation approach for visual object recognition. The new representation, called sketchable-HoG, extends the classical histogram of oriented gradients (HoG) feature by adding two different aspects: the stability of the majority orientation and the continuity of gradient orientations. In this way, the sketchable-HoG locally characterizes the complexity of an object model and introduces global structure information while still keeping simplicity, compactness and robustness. We evaluated the proposed image descriptor on the publicly available Caltech 101 dataset. The obtained results outperform the classical HoG descriptor as well as other descriptors reported in the literature.
|
Marina Alberti, Simone Balocco, Xavier Carrillo, J. Mauri, & Petia Radeva. (2012). Automatic Non-Rigid Temporal Alignment of IVUS Sequences. In 15th International Conference on Medical Image Computing and Computer-Assisted Intervention (Vol. 1, pp. 642–650). Springer Berlin Heidelberg.
Abstract: Clinical studies on atherosclerosis regression/progression performed by Intravascular Ultrasound analysis require the alignment of pullbacks of the same patient before and after clinical interventions. In this paper, a methodology for the automatic alignment of IVUS sequences based on the Dynamic Time Warping technique is proposed. The method is adapted to the specific IVUS alignment task by applying the non-rigid alignment technique to multidimensional morphological signals, and by introducing a sliding window approach together with a regularization term. To show the effectiveness of our method, an extensive validation is performed both on synthetic data and in-vivo IVUS sequences. The proposed method is robust to stent deployment and post dilation surgery and reaches an alignment error of approximately 0.7 mm for in-vivo data, which is comparable to the inter-observer variability.
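The Dynamic Time Warping technique underlying the alignment method above can be sketched on 1-D signals as follows. This is a generic DTW sketch with an optional band constraint standing in for the paper's sliding-window idea, not the authors' multidimensional, regularized formulation; the function name `dtw_align` and the absolute-difference cost are assumptions.

```python
def dtw_align(a, b, window=None):
    """Align two 1-D signals with dynamic time warping.

    window : optional band half-width limiting |i - j| (None = unconstrained),
             a simplified stand-in for a sliding-window constraint.
    Returns (cost, path), where path is a list of matched (i, j) index pairs.
    """
    n, m = len(a), len(b)
    w = max(window or max(n, m), abs(n - m))  # band must reach the last cell
    INF = float("inf")
    # D[i][j] = minimal cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            d = abs(a[i - 1] - b[j - 1])
            D[i][j] = d + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Backtrack the optimal warping path from the last cell
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    path.reverse()
    return D[n][m], path
```

A signal aligned against a time-stretched copy of itself warps onto it at zero cost, which is the property that lets pullbacks of different lengths be matched frame to frame.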
|
Sergio Vera, Miguel Angel Gonzalez Ballester, & Debora Gil. (2012). Optimal Medial Surface Generation for Anatomical Volume Representations. In H. Yoshida, D. Hawkes, & M. W. Vannier (Eds.), Abdominal Imaging. Computational and Clinical Applications (Vol. 7601, pp. 265–273). LNCS. Springer Berlin Heidelberg.
Abstract: Medial representations are a widely used technique in abdominal organ shape representation and parametrization. Those methods require good medial manifolds as a starting point. Any medial surface used to parametrize a volume should be simple enough to allow easy manipulation and complete enough to allow an accurate reconstruction of the volume. Obtaining good-quality medial surfaces is still a problem with current iterative thinning methods. This forces the usage of generic, pre-calculated medial templates that are adapted to the final shape at the cost of a drop in volume reconstruction accuracy.
This paper describes an operator for the generation of medial structures that produces clean and complete manifolds well suited for further use in medial representations of abdominal organ volumes. While the generated surfaces are simpler than those obtained by thinning, experiments show high performance in volume reconstruction and preservation of the main branching topology of the medial surface.
Keywords: Medial surface representation, volume reconstruction
|
Hamdi Dibeklioglu, Theo Gevers, & Albert Ali Salah. (2012). Are You Really Smiling at Me? Spontaneous versus Posed Enjoyment Smiles. In 12th European Conference on Computer Vision (Vol. 7574, pp. 525–538). LNCS. Springer Berlin Heidelberg.
Abstract: Smiling is an indispensable element of nonverbal social interaction. Moreover, automatic distinction between spontaneous and posed expressions is important for visual analysis of social signals. Therefore, in this paper, we propose a method to distinguish between spontaneous and posed enjoyment smiles by using the dynamics of eyelid, cheek, and lip corner movements. The discriminative power of these movements, and the effect of different fusion levels, are investigated on multiple databases. Our results improve the state-of-the-art. We also introduce the largest spontaneous/posed enjoyment smile database collected to date, and report new empirical and conceptual findings on smile dynamics. The collected database consists of 1240 samples of 400 subjects. Moreover, it has the unique property of covering an age range from 8 to 76 years. Large scale experiments on the new database indicate that eyelid dynamics are highly relevant for smile classification, and there are age-related differences in smile dynamics.
|
Ivo Everts, Jan van Gemert, & Theo Gevers. (2012). Per-patch Descriptor Selection using Surface and Scene Properties. In 12th European Conference on Computer Vision (Vol. 7577, pp. 172–186). LNCS. Springer Berlin Heidelberg.
Abstract: Local image descriptors are generally designed for describing all possible image patches. Such patches may be subject to complex variations in appearance due to incidental object, scene and recording conditions. Because of this, a single best descriptor for accurate image representation under all conditions does not exist. Therefore, we propose to automatically select from a pool of descriptors the one that is best suited based on object surface and scene properties. These properties are measured on the fly from a single image patch through a set of attributes. Attributes are input to a classifier which selects the best descriptor. Our experiments on a large dataset of colored object patches show that the proposed selection method outperforms the best single descriptor and a priori combinations of the descriptor pool.
|
Jose Manuel Alvarez, Theo Gevers, Y. LeCun, & Antonio Lopez. (2012). Road Scene Segmentation from a Single Image. In 12th European Conference on Computer Vision (Vol. 7578, pp. 376–389). LNCS. Springer Berlin Heidelberg.
Abstract: Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding.
In this paper, we use a convolutional neural network based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm relies on generating training labels by applying an algorithm trained on a general image dataset to classify on-board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off-line) and current (on-line) information are combined to detect road areas in single images.
From quantitative and qualitative experiments, conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning 3D scene layout from noisy labels and provide a relative improvement of 7% compared to the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and yields a relative improvement of 8% compared to the baseline. Finally, the improvement is even larger when acquired and current information from a single image are combined.
Keywords: road detection
|