|
Pau Baiget, Carles Fernandez, Xavier Roca, & Jordi Gonzalez. (2007). Automatic Learning of Conceptual Knowledge for the Interpretation of Human Behavior in Video Sequences. In 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.) LNCS 4477:507–514.
|
|
|
Carles Fernandez, Jordi Gonzalez, & Xavier Roca. (2010). Automatic Learning of Background Semantics in Generic Surveilled Scenes. In 11th European Conference on Computer Vision (Vol. 6313, pp. 678–692). LNCS. Springer Berlin Heidelberg.
Abstract: Advanced surveillance systems for behavior recognition in outdoor traffic scenes depend strongly on the particular configuration of the scenario. Scene-independent trajectory analysis techniques statistically infer semantics in locations where motion occurs, and such inferences are typically limited to abnormality detection. Thus, it is interesting to design contributions that automatically categorize more specific semantic regions. State-of-the-art approaches for unsupervised scene labeling exploit trajectory data to segment areas like sources, sinks, or waiting zones. Our method, in addition, incorporates scene-independent knowledge to assign more meaningful labels like crosswalks, sidewalks, or parking spaces. First, a spatiotemporal scene model is obtained from trajectory analysis. Subsequently, a so-called GI-MRF inference process reinforces spatial coherence and incorporates taxonomy-guided smoothness constraints. Our method achieves automatic and effective labeling of conceptual regions in urban scenarios, and is robust to tracking errors. Experimental validation on 5 surveillance databases has been conducted to assess the generality and accuracy of the segmentations. The resulting scene models are used for model-based behavior analysis.
|
|
|
Ignasi Rius, Jordi Gonzalez, Mikhail Mozerov, & Xavier Roca. (2008). Automatic Learning of 3D Pose Variability in Walking Performances for Gait Analysis. International Journal for Computational Vision and Biomechanics, 33–43.
|
|
|
Fernando Vilariño, Gerard Lacey, Jiang Zhou, Hugh Mulcahy, & Stephen Patchett. (2007). Automatic Labeling of Colonoscopy Video for Cancer Detection. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA) (pp. 290–297).
|
|
|
Jordi Gonzalez, Javier Varona, Xavier Roca, & Juan J. Villanueva. (2003). Automatic Keyframing of Human Actions for Computer Animation.
|
|
|
Wenjuan Gong, Andrew Bagdanov, Xavier Roca, & Jordi Gonzalez. (2010). Automatic Key Pose Selection for 3D Human Action Recognition. In 6th International Conference on Articulated Motion and Deformable Objects (Vol. 6169, pp. 290–299). Springer Verlag.
Abstract: This article describes a novel approach to the modeling of human actions in 3D. The method we propose is based on a “bag of poses” model that represents human actions as histograms of key-pose occurrences over the course of a video sequence. Actions are first represented as 3D poses using a sequence of 36 direction cosines corresponding to the angles 12 joints form with the world coordinate frame in an articulated human body model. These pose representations are then projected to three-dimensional, action-specific principal eigenspaces which we refer to as aSpaces. We introduce a method for key-pose selection based on a local-motion energy optimization criterion, and we show that this method is more stable and more resistant to noisy data than other key-pose selection criteria for action recognition.
|
|
|
Francesco Ciompi, Oriol Pujol, Simone Balocco, Xavier Carrillo, J. Mauri, & Petia Radeva. (2011). Automatic Key Frames Detection in Intravascular Ultrasound Sequences. In MICCAI 2011 Workshop on Computing and Visualization for Intra Vascular Imaging.
Abstract: We present a method for the automatic detection of key frames in Intravascular Ultrasound (IVUS) sequences. The key frames are markers delimiting morphological changes along the vessel. The aim of defining key frames is two-fold: (1) they allow the content of the pullback to be summarized into a few representative frames; (2) they represent the basis for the automatic detection of clinical events in IVUS. The proposed approach achieved a compression ratio of 0.016 with respect to the original sequence and an average inter-frame distance of 61.76 frames, minimizing the number of missed clinical events.
|
|
|
Ellen J.L. Brunenberg, Oriol Pujol, Bart M. Ter Haar Romeny, & Petia Radeva. (2006). Automatic IVUS Segmentation of Atherosclerotic Plaque with Stop & Go Snake.
|
|
|
Jose Antonio Rodriguez, Gemma Sanchez, & Josep Llados. (2006). Automatic Interpretation of Proofreading Sketches.
|
|
|
Laura Igual, Joan Carles Soliva, Roger Gimeno, Sergio Escalera, Oscar Vilarroya, & Petia Radeva. (2012). Automatic Internal Segmentation of Caudate Nucleus for Diagnosis of Attention Deficit Hyperactivity Disorder. In 9th International Conference on Image Analysis and Recognition (Vol. 7325, pp. 222–229). LNCS.
Abstract: (Poster)
Studies on volumetric brain Magnetic Resonance Imaging (MRI) showed neuroanatomical abnormalities in pediatric Attention-Deficit/Hyperactivity Disorder (ADHD). In particular, diminished right caudate volume is one of the most replicated findings among ADHD samples in morphometric MRI studies. In this paper, we propose a fully-automatic method for internal caudate nucleus segmentation based on machine learning. Moreover, the ratio between the right caudate body volume and the bilateral caudate body volume is applied in an ADHD diagnostic test. We separately validate the automatic internal segmentation of the caudate into head and body structures and the diagnostic test using real data from ADHD and control subjects. As a result, we show accurate internal caudate segmentation and similar performance between the proposed automatic diagnostic test and manual annotation.
|
|
|
Marçal Rusiñol, R. Roset, Josep Llados, & C. Montaner. (2011). Automatic Index Generation of Digitized Map Series by Coordinate Extraction and Interpretation. e-Perimetron, 219–229.
Abstract: By means of computer vision algorithms, scanned images of maps are processed in order to extract relevant geographic information from printed coordinate pairs. The meaningful information is then transformed into georeferencing information for each single map sheet, and the complete set is compiled to produce a graphical index sheet for the map series along with relevant metadata. The whole process is fully automated and trained to attain maximum effectiveness and throughput.
|
|
|
Marçal Rusiñol, R. Roset, Josep Llados, & C. Montaner. (2011). Automatic Index Generation of Digitized Map Series by Coordinate Extraction and Interpretation. In Proceedings of the Sixth International Workshop on Digital Technologies in Cartographic Heritage.
|
|
|
Victoria Ruiz, Angel Sanchez, Jose F. Velez, & Bogdan Raducanu. (2019). Automatic Image-Based Waste Classification. In International Work-Conference on the Interplay Between Natural and Artificial Computation. From Bioinspired Systems and Biomedical Applications to Machine Learning (Vol. 11487, pp. 422–431). LNCS.
Abstract: The management of solid waste in large urban environments has become a complex problem due to the increasing amount of waste generated every day by citizens and companies. Current Computer Vision and Deep Learning techniques can help in the automatic detection and classification of waste types for further recycling tasks. In this work, we use the TrashNet dataset to train and compare different deep learning architectures for automatic classification of garbage types. In particular, several Convolutional Neural Network (CNN) architectures were compared: VGG, Inception, and ResNet. The best classification results were obtained using a combined Inception-ResNet model that achieved 88.6% accuracy. These are the best results obtained with the considered dataset.
Keywords: Computer Vision; Deep learning; Convolutional neural networks; Waste classification
|
|
|
Sergio Escalera, Josep Moya, Laura Igual, Veronica Violant, & Maria Teresa Anguera. (2012). Automatic Human Behavior Analysis in ADHD. In Eunethydis 2nd International ADHD Conference.
|
|