Sergio Escalera, Josep Moya, Laura Igual, Veronica Violant, & Maria Teresa Anguera. (2012). Análisis Comportamental Automatizado de TDAH: la Influencia de la Variable Motivación. In IPSI – Cosmocaixa, Jornadas "Empremtes del present, efectes en la psicoanàlisi, la cultura i la societat".
|
Laura Igual, Joan Carles Soliva, Antonio Hernandez, Sergio Escalera, Oscar Vilarroya, & Petia Radeva. (2012). A Supervised Graph-cut Deformable Model for Brain MRI Segmentation. Deformation models: tracking, animation and applications. In Computational Vision and Biomechanics. LNCS. Springer Netherlands.
|
Angel Sappa, & George A. Triantafyllidis. (2012). Computer Graphics and Imaging.
|
Theo Gevers, Arjan Gijsenij, Joost Van de Weijer, & J.M. Geusebroek. (2012). Color in Computer Vision: Fundamentals and Applications. The Wiley-IS&T Series in Imaging Science and Technology.
|
Mario Hernandez, Joao Sanchez, & Jordi Vitria. (2012). Selected papers from Iberian Conference on Pattern Recognition and Image Analysis (Vol. 45).
|
Ernest Valveny, Robert Benavente, Agata Lapedriza, Miquel Ferrer, Jaume Garcia, & Gemma Sanchez. (2012). Adaptation of a computer programming course to the EHEA requirements: evaluation five years later (Vol. 37).
|
Michal Drozdzal, Petia Radeva, Santiago Segui, Laura Igual, Carolina Malagelada, Fernando Azpiroz, et al. (2012). System and method for automatic detection of in vivo contraction video sequences.
Publication date: 2012/3/8
|
Marçal Rusiñol, Lluis Pere de las Heras, Joan Mas, Oriol Ramos Terrades, Dimosthenis Karatzas, Anjan Dutta, et al. (2012). CVC-UAB's participation in the Flowchart Recognition Task of CLEF-IP 2012. In Conference and Labs of the Evaluation Forum.
|
Miguel Angel Bautista, Antonio Hernandez, Victor Ponce, Xavier Perez Sala, Xavier Baro, Oriol Pujol, et al. (2012). Probability-based Dynamic Time Warping for Gesture Recognition on RGB-D data. In 21st International Conference on Pattern Recognition International Workshop on Depth Image Analysis (Vol. 7854, pp. 126–135). Springer Berlin Heidelberg.
Abstract: Dynamic Time Warping (DTW) is commonly used in gesture recognition to handle the variable temporal length of gestures. In the DTW framework, a set of gesture patterns is compared one by one against a possibly infinite test sequence, and a query gesture category is recognized if a warping cost below a certain threshold is found within the test sequence. However, a single sample per gesture category, or even a set of isolated samples, may not encode the variability of that category. In this paper, a probability-based DTW for gesture recognition is proposed. Different samples of the same gesture pattern, obtained from RGB-Depth data, are used to build a Gaussian-based probabilistic model of the gesture, and the DTW cost is adapted accordingly to this model. The proposed approach is tested in a challenging scenario, where the probability-based DTW shows better performance than state-of-the-art approaches for gesture recognition on RGB-D data.
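A probability-based DTW along these lines can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the diagonal-covariance Gaussian per model step, the negative log-likelihood cost, and the open-begin/open-end alignment against the test sequence are all assumptions of the example.

```python
import numpy as np

def gaussian_model(samples):
    # samples: list of aligned gesture sequences, each of shape (T, D).
    # Assumption for the sketch: samples are pre-aligned to a common length T.
    stack = np.stack(samples)             # (N, T, D)
    mu = stack.mean(axis=0)               # per-step mean, (T, D)
    sigma = stack.std(axis=0) + 1e-6      # per-step std (diagonal covariance)
    return mu, sigma

def prob_cost(x, mu_t, sigma_t):
    # Negative log-likelihood of frame x under the Gaussian of model step t
    return 0.5 * np.sum(((x - mu_t) / sigma_t) ** 2
                        + np.log(2 * np.pi * sigma_t ** 2))

def prob_dtw(model, seq):
    # Open-begin/open-end DTW of the Gaussian gesture model against a
    # (possibly long) test sequence; returns the best warping cost found.
    mu, sigma = model
    T, L = mu.shape[0], seq.shape[0]
    D = np.full((T + 1, L + 1), np.inf)
    D[0, :] = 0.0                         # the gesture may start anywhere
    for t in range(1, T + 1):
        for l in range(1, L + 1):
            c = prob_cost(seq[l - 1], mu[t - 1], sigma[t - 1])
            D[t, l] = c + min(D[t - 1, l], D[t, l - 1], D[t - 1, l - 1])
    return D[T, 1:].min()                 # the gesture may end anywhere
```

In the paper's setting a gesture would be detected wherever this cost falls below a threshold; the sketch simply returns the minimum cost.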
|
Miguel Reyes, Albert Clapes, Luis Felipe Mejia, Jose Ramirez, Juan R Revilla, & Sergio Escalera. (2012). Posture Analysis and Range of Movement Estimation using Depth Maps. In 21st International Conference on Pattern Recognition International Workshop on Depth Image Analysis (Vol. 7854, pp. 97–105). Springer Berlin Heidelberg.
Abstract: The World Health Organization estimates that 80% of the world's population is affected by back pain at some point in their lives. Current practices for analyzing back problems are expensive, subjective, and invasive. In this work, we propose a novel tool for posture and range-of-movement estimation based on the analysis of 3D information from depth maps. Given a set of keypoints defined by the user, RGB and depth data are aligned, the depth surface is reconstructed, keypoints are matched using a novel point-to-point fitting procedure, and accurate measurements of posture, spinal curvature, and range of movement are computed. The system provides precise and reliable measurements, making it useful for posture reeducation to prevent musculoskeletal disorders such as back pain, as well as for tracking the posture evolution of patients in rehabilitation treatments.
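A minimal building block of range-of-movement estimation from 3D keypoints is the angle subtended at a joint. The helper below is a hypothetical illustration of that step only, not the paper's point-to-point fitting procedure:

```python
import numpy as np

def joint_angle(a, b, c):
    # Angle at keypoint b (in degrees) formed by 3D keypoints a-b-c,
    # e.g. shoulder-elbow-wrist when estimating elbow range of movement.
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny floating-point excursions outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

Tracking this angle over a sequence of depth-derived keypoints gives a per-joint range-of-movement estimate.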
|
Antonio Hernandez, Miguel Angel Bautista, Xavier Perez Sala, Victor Ponce, Xavier Baro, Oriol Pujol, et al. (2012). BoVDW: Bag-of-Visual-and-Depth-Words for Gesture Recognition. In 21st International Conference on Pattern Recognition.
Abstract: We present a Bag-of-Visual-and-Depth-Words (BoVDW) model for gesture recognition, an extension of the Bag-of-Visual-Words (BoVW) model that benefits from the multimodal fusion of visual and depth features. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late-fusion fashion. The method is integrated into a continuous gesture recognition pipeline, where the Dynamic Time Warping (DTW) algorithm performs prior segmentation of gestures. Results on public datasets, within our gesture recognition pipeline, show better performance than a standard BoVW model.
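The two generic ingredients the BoVDW model builds on, bag-of-words quantization of local descriptors and score-level late fusion, can be sketched as follows. This is illustrative only: vocabulary learning, the actual RGB/depth descriptors, and the classifiers are omitted.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    # Assign each local descriptor to its nearest visual word (Euclidean)
    # and return a normalized word-count histogram for the image/gesture.
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def late_fusion(scores_rgb, scores_depth, alpha=0.5):
    # Late fusion: combine per-class scores of independently trained
    # RGB and depth classifiers by a weighted average.
    return alpha * scores_rgb + (1 - alpha) * scores_depth
```

Each modality would yield its own histogram and classifier; fusion happens only at the score level, which is what "late fusion" refers to in the abstract.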
|
Carles Sanchez, Debora Gil, Antoni Rosell, Albert Andaluz, & F. Javier Sanchez. (2013). Segmentation of Tracheal Rings in Videobronchoscopy combining Geometry and Appearance. In Sebastiano Battiato and José Braz (Ed.), Proceedings of the International Conference on Computer Vision Theory and Applications (Vol. 1, pp. 153–161). LNCS. Portugal: SciTePress.
Abstract: Videobronchoscopy is a medical imaging technique that allows interactive navigation inside the respiratory pathways and minimally invasive interventions. Tracheal procedures are ordinary interventions that require measuring the percentage of obstructed pathway for injury (stenosis) assessment. Visual assessment of stenosis in videobronchoscopic sequences requires high expertise in tracheal anatomy and is prone to human error. Accurate detection of tracheal rings is the basis for automated estimation of the size of a stenosed trachea. Processing videobronchoscopic images acquired in the operating room is a challenging task due to the wide range of artifacts and acquisition conditions. We present a geometric-appearance model of tracheal rings for their detection in videobronchoscopic videos. Experiments on sequences acquired in the operating room show performance close to inter-observer variability.
Keywords: Video-bronchoscopy, tracheal ring segmentation, trachea geometric and appearance model
|
Santiago Segui, Michal Drozdzal, Fernando Vilariño, Carolina Malagelada, Fernando Azpiroz, Petia Radeva, et al. (2012). Categorization and Segmentation of Intestinal Content Frames for Wireless Capsule Endoscopy. TITB - IEEE Transactions on Information Technology in Biomedicine, 16(6), 1341–1352.
Abstract: Wireless capsule endoscopy (WCE) is a device that allows direct visualization of the gastrointestinal tract with minimal discomfort for the patient, but at the price of a large amount of screening time. In order to reduce this time, several works have proposed to automatically remove all frames showing intestinal content. These methods label frames as {intestinal content – clear} without discriminating between types of content (with different physiological meanings) or the portion of the image covered. In addition, since the presence of intestinal content has been identified as an indicator of intestinal motility, its accurate quantification is of potential clinical relevance. In this paper, we present a method for the robust detection and segmentation of intestinal content in WCE images, together with its further discrimination between turbid liquid and bubbles. Our proposal is a twofold system. First, frames presenting intestinal content are detected by a support vector machine classifier using color and texture information. Second, intestinal content frames are segmented into {turbid, bubbles, clear} regions. We show a detailed validation using a large dataset. Our system outperforms previous methods and, for the first time, discriminates between turbid and bubble media.
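The first stage of such a system rests on per-frame color and texture features fed to a classifier. The sketch below is a hypothetical feature extractor, 8-bin per-channel color histograms plus mean gradient magnitude as a crude texture cue, not the paper's actual descriptors; its output vector would be the input to a support vector machine.

```python
import numpy as np

def frame_features(rgb):
    # rgb: (H, W, 3) uint8 frame from the capsule video.
    # Color cue: normalized 8-bin histogram per channel (24 values).
    color = np.concatenate([
        np.histogram(rgb[..., c], bins=8, range=(0, 256), density=True)[0]
        for c in range(3)
    ])
    # Texture cue: mean gradient magnitude of the grayscale image (1 value).
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    texture = np.array([np.hypot(gx, gy).mean()])
    return np.concatenate([color, texture])  # 25-D vector for the classifier
```

A binary SVM trained on such vectors would implement the {intestinal content – clear} decision of the first stage; the second-stage segmentation is not sketched here.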
|
Anjan Dutta, Jaume Gibert, Josep Llados, Horst Bunke, & Umapada Pal. (2012). Combination of Product Graph and Random Walk Kernel for Symbol Spotting in Graphical Documents. In 21st International Conference on Pattern Recognition (pp. 1663–1666).
Abstract: This paper explores the use of the product graph for spotting symbols in graphical documents. The product graph is used to find candidate subgraphs or components in the input graph containing paths similar to the query graph. The acute angle between two edges and their length ratio are considered as the node labels. In a second step, each candidate subgraph in the input graph is assigned a distance measure computed by a random walk kernel; this distance is the minimum of the distances from the component to all components of the model graph. The distance measure is then used to eliminate dissimilar components. The remaining neighboring components are grouped, and the grouped zone is considered a retrieval zone of a symbol similar to the queried one. The entire method works online, i.e., it needs no preprocessing step. This paper reports the initial results of the method, which are very encouraging.
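The tensor-product graph and a geometric random walk kernel can be sketched generically as follows. This is an illustrative version, not the paper's formulation (exact node labels stand in for the angle/length-ratio labels, and the decay parameter lam must keep the walk series convergent):

```python
import numpy as np

def product_adjacency(A1, A2, labels1, labels2):
    # Tensor (direct) product graph: node (i, j) is kept only when the
    # labels of i and j match; edges pair compatible edges of both graphs.
    n1, n2 = len(A1), len(A2)
    keep = np.array([labels1[i] == labels2[j]
                     for i in range(n1) for j in range(n2)])
    Ax = np.kron(A1, A2) * np.outer(keep, keep)
    return Ax

def random_walk_kernel(Ax, lam=0.1):
    # Geometric random walk kernel: weighted count of all common walks,
    #   k = 1^T (I - lam * Ax)^{-1} 1,
    # valid when lam times the spectral radius of Ax is below 1.
    n = Ax.shape[0]
    M = np.linalg.inv(np.eye(n) - lam * Ax)
    return M.sum()
```

In a spotting setting, a low kernel-derived distance between a candidate component and the model graph keeps the candidate; high distances eliminate it.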
|
Klaus Broelemann, Anjan Dutta, Xiaoyi Jiang, & Josep Llados. (2012). Hierarchical graph representation for symbol spotting in graphical document images. In Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR International Workshop (Vol. 7626, pp. 529–538). LNCS. Springer Berlin Heidelberg.
Abstract: Symbol spotting can be defined as locating a given query symbol in a large collection of graphical documents. In this paper we present a hierarchical graph representation for symbols. This representation allows graph matching methods to deal with low-level vectorization errors and, thus, to perform robust symbol spotting. To show the potential of this approach, we conduct an experiment with the SESYD dataset.
|