|
F. Javier Sanchez, Jorge Bernal, Cristina Sanchez Montes, Cristina Rodriguez de Miguel, & Gloria Fernandez Esparrach. (2017). Bright spot regions segmentation and classification for specular highlights detection in colonoscopy videos. MVAP - Machine Vision and Applications, 1–20.
Abstract: A novel specular highlights detection method in colonoscopy videos is presented. The method is based on a model of appearance defining specular highlights as bright spots which are highly contrasted with respect to adjacent regions. Our approach proposes two stages: segmentation, and then classification of bright spot regions. The former defines a set of candidate regions obtained through a region growing process with local maxima as initial region seeds. This process creates a tree structure which keeps track, at each growing iteration, of the region frontier contrast; final regions provided depend on restrictions over contrast value. Non-specular regions are filtered through a classification stage performed by a linear SVM classifier using model-based features from each region. We introduce a new validation database with more than 25,000 regions along with their corresponding pixel-wise annotations. We perform a comparative study against other approaches. Results show that our method is superior to other approaches, with our segmented regions being closer to actual specular regions in the image. Finally, we also present how our methodology can be used to obtain an accurate prediction of polyp histology.
Keywords: Specular highlights; bright spot regions segmentation; region classification; colonoscopy
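The region-growing idea described in the abstract above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the paper builds a tree over successive growing iterations and selects regions by frontier-contrast restrictions, whereas this sketch uses a simple intensity-tolerance flood fill from a local-maximum seed (the names `bright_spot_region`, `dilate4`, and the `tol` parameter are mine):

```python
import numpy as np

def dilate4(mask):
    """One step of 4-connected binary dilation, via array shifts."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def bright_spot_region(img, seed, tol):
    """Grow a candidate region from a local-maximum seed by accepting
    neighbours within `tol` of the seed intensity, then report the
    contrast between the region and its final frontier."""
    region = np.zeros(img.shape, dtype=bool)
    region[seed] = True
    while True:
        accept = dilate4(region) & ~region & (img >= img[seed] - tol)
        if not accept.any():
            break
        region |= accept
    frontier = dilate4(region) & ~region
    contrast = img[region].mean() - img[frontier].mean()
    return region, contrast

# toy frame: a 3x3 bright spot with a brighter peak at its centre
img = np.zeros((9, 9))
img[3:6, 3:6] = 0.5
img[4, 4] = 1.0
region, contrast = bright_spot_region(img, (4, 4), tol=0.6)
# region covers the 3x3 spot; contrast = mean(region) - mean(frontier) = 5/9
```

A real pipeline would then feed per-region features (such as this contrast value) to the linear SVM stage to reject non-specular bright spots.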
|
|
|
Lu Yu, Lichao Zhang, Joost Van de Weijer, Fahad Shahbaz Khan, Yongmei Cheng, & C. Alejandro Parraga. (2018). Beyond Eleven Color Names for Image Understanding. MVAP - Machine Vision and Applications, 29(2), 361–373.
Abstract: Color description is one of the fundamental problems of image understanding. One of the popular ways to represent colors is by means of color names. Most existing work on color names focuses on only the eleven basic color terms of the English language. This could be limiting the discriminative power of these representations, and representations based on more color names are expected to perform better. However, there exists no clear strategy to choose additional color names. We collect a dataset of 28 additional color names. To ensure that the resulting color representation has high discriminative power, we propose a method to order the additional color names according to their complementary nature with the basic color names. This allows us to compute color name representations with high discriminative power of arbitrary length. In the experiments we show that these new color name descriptors outperform the existing color name descriptor on the tasks of visual tracking, person re-identification, and image classification.
Keywords: Color name; Discriminative descriptors; Image classification; Re-identification; Tracking
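To make the color-name representation above concrete, here is a minimal sketch of a color-name histogram descriptor. It is not the paper's method: the paper learns soft, data-driven pixel-to-name mappings, while this sketch hard-assigns each pixel to its nearest prototype; the RGB prototype values for the eleven basic terms are illustrative choices of mine:

```python
import numpy as np

# illustrative RGB prototypes for the 11 basic English color terms
COLOR_NAMES = {
    "black": (0, 0, 0), "blue": (0, 0, 255), "brown": (139, 69, 19),
    "grey": (128, 128, 128), "green": (0, 128, 0), "orange": (255, 165, 0),
    "pink": (255, 192, 203), "purple": (128, 0, 128), "red": (255, 0, 0),
    "white": (255, 255, 255), "yellow": (255, 255, 0),
}

def color_name_descriptor(pixels, prototypes=COLOR_NAMES):
    """Hard-assign each RGB pixel to its nearest prototype and return a
    normalized color-name histogram over the prototype vocabulary."""
    names = list(prototypes)
    centers = np.array([prototypes[n] for n in names], float)   # (K, 3)
    px = np.asarray(pixels, float).reshape(-1, 3)               # (N, 3)
    d = ((px[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (N, K)
    hist = np.bincount(d.argmin(1), minlength=len(names))
    return names, hist / hist.sum()

names, h = color_name_descriptor([(250, 5, 5), (0, 0, 0), (255, 0, 0)])
# two pixels land in the "red" bin, one in the "black" bin
```

Extending the vocabulary, as the paper proposes, amounts to adding well-chosen prototypes, which lengthens the descriptor and increases its discriminative power.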
|
|
|
Fahad Shahbaz Khan, Joost Van de Weijer, Muhammad Anwer Rao, Andrew Bagdanov, Michael Felsberg, & Jorma. (2018). Scale coding bag of deep features for human attribute and action recognition. MVAP - Machine Vision and Applications, 29(1), 55–71.
Abstract: Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
Keywords: Action recognition; Attribute recognition; Bag of deep features
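The contrast the abstract above draws between scale-invariant pooling and scale coding can be sketched in a few lines. This is a toy illustration, not the paper's encoder: the scale bins and the mean-pooling choice are mine, and real local features would come from a convolutional network:

```python
import numpy as np

def scale_coded_pooling(features, scales,
                        bins=((0, 16), (16, 48), (48, np.inf))):
    """Pool local features separately per scale range and concatenate,
    instead of averaging all scales into a single scale-invariant
    vector (illustrative 'scale coding'; the bin edges are made up)."""
    feats = np.asarray(features, float)
    s = np.asarray(scales, float)
    parts = []
    for lo, hi in bins:
        sel = (s >= lo) & (s < hi)
        parts.append(feats[sel].mean(0) if sel.any()
                     else np.zeros(feats.shape[1]))
    return np.concatenate(parts)

# 4 local features of dim 2, computed at patch sizes 8, 12, 32, 64
desc = scale_coded_pooling([[1, 0], [3, 0], [0, 2], [0, 4]],
                           [8, 12, 32, 64])
# desc = [2, 0, 0, 2, 0, 4]: scale information survives in the layout
```

Collapsing all four features into one mean would give [1, 1.5], discarding which patterns appeared at which scale; the concatenated encoding keeps that information explicit.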
|
|
|
Albert Clapes, Alex Pardo, Oriol Pujol, & Sergio Escalera. (2018). Action detection fusing multiple Kinects and a WIMU: an application to in-home assistive technology for the elderly. MVAP - Machine Vision and Applications, 29(5), 765–788.
Abstract: We present a vision-inertial system which combines two RGB-Depth devices together with a wearable inertial movement unit in order to detect activities of daily living. From multi-view videos, we extract dense trajectories enriched with a histogram of normals description computed from the depth cue and bag them into multi-view codebooks. During the later classification step, a multi-class support vector machine with an RBF-χ2 kernel combines the descriptions at kernel level. In order to perform action detection from the videos, a sliding window approach is utilized. On the other hand, we extract accelerations, rotation angles, and jerk features from the inertial data collected by the wearable placed on the user’s dominant wrist. During gesture spotting, dynamic time warping is applied and the aligning costs to a set of pre-selected gesture sub-classes are thresholded to determine possible detections. The outputs of the two modules are combined in a late-fusion fashion. The system is validated in a real-case scenario with elderly users from an elder home. Learning-based fusion results improve on those from the single modalities, demonstrating the success of such a multimodal approach.
Keywords: Multimodal activity detection; Computer vision; Inertial sensors; Dense trajectories; Dynamic time warping; Assistive technology
|
|
|
A. Pujol, A.F. Sole, Daniel Ponsa, X. Varona, & Juan J. Villanueva. (1999). Satellite Image Segmentation Through Rotational Invariant Feature Eigenvector Projection.
|
|
|
Petia Radeva, A.F. Sole, Antonio Lopez, & Joan Serrat. (1999). Detecting Nets of Linear Structures in Satellite Images.
|
|
|
Marta Nuñez-Garcia, Sonja Simpraga, M.Angeles Jurado, Maite Garolera, Roser Pueyo, & Laura Igual. (2015). FADR: Functional-Anatomical Discriminative Regions for rest fMRI Characterization. In Machine Learning in Medical Imaging, Proceedings of 6th International Workshop, MLMI 2015, Held in Conjunction with MICCAI 2015 (pp. 61–68).
|
|
|
Diego Porres. (2021). Discriminator Synthesis: On reusing the other half of Generative Adversarial Networks. In Machine Learning for Creativity and Design, NeurIPS Workshop.
Abstract: Generative Adversarial Networks have long since revolutionized the world of computer vision and, tied to it, the world of art. Arduous efforts have gone into fully utilizing and stabilizing training so that outputs of the Generator network have the highest possible fidelity, but little has gone into using the Discriminator after training is complete. In this work, we propose to use the latter and show a way to use the features it has learned from the training dataset to both alter an image and generate one from scratch. We name this method Discriminator Dreaming, and the full code can be found at this https URL.
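The core mechanic of Discriminator Dreaming, gradient ascent on an input image to amplify features the discriminator has learned, can be caricatured without any deep-learning framework. This sketch is a heavily simplified stand-in: the stand-in "feature" is a fixed random linear filter `w` with an analytic gradient, whereas the actual method backpropagates through a trained discriminator network (see the code link in the entry above):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in "discriminator feature": a fixed linear filter; in the real
# method this would be an activation deep inside the trained discriminator
w = rng.normal(size=(8, 8))

def dream_step(img, lr=0.1):
    """One gradient-ascent step maximizing the feature response
    (w * img).sum(); for a linear feature the gradient is simply w."""
    return img + lr * w

img = np.zeros((8, 8))
before = (w * img).sum()
for _ in range(50):
    img = dream_step(img)
after = (w * img).sum()
# the feature response strictly increases at every step
```

Starting from noise rather than zeros gives the "generate from scratch" variant the abstract mentions; starting from a photograph gives the image-alteration variant.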
|
|
|
Josep Llados, & Enric Marti. (1999). A graph-edit algorithm for hand-drawn graphical document recognition and their automatic introduction into CAD systems. Machine Graphics & Vision, 8, 195–211.
|
|
|
Hassan Ahmed Sial, Ramon Baldrich, Maria Vanrell, & Dimitris Samaras. (2020). Light Direction and Color Estimation from Single Image with Deep Regression. In London Imaging Conference.
Abstract: We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects with similar constraints to the SID dataset; (b) we define a deep architecture trained on the mentioned dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions of the Multi-Illumination dataset, and, in this way, we also prove that our trained model achieves good performance when it is applied to real scenes.
|
|
|
Josep Llados, & Young-Bin Kwon. (2004). Graphics Recognition. Recent Advances and Perspectives.
|
|
|
F. Lopez, J.M. Valiente, Ramon Baldrich, & Maria Vanrell. (2005). Fast surface grading using color statistics in the CIELab space. In LNCS 1: 666–673.
|
|
|
Alicia Fornes, Josep Llados, Oriol Ramos Terrades, & Marçal Rusiñol. (2016). La Visió per Computador com a Eina per a la Interpretació Automàtica de Fonts Documentals [Computer Vision as a Tool for the Automatic Interpretation of Documentary Sources]. Lligall, Revista Catalana d'Arxivística, 20–46.
|
|