|
Jürgen Brauer, Wenjuan Gong, Jordi Gonzalez, & Michael Arens. (2011). On the Effect of Temporal Information on Monocular 3D Human Pose Estimation. In 2nd IEEE International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams (pp. 906–913).
Abstract: We address the task of estimating 3D human poses from monocular camera sequences. Many works make use of multiple consecutive frames to estimate the 3D pose in a frame. Although such an approach should ease the pose estimation task substantially, since multiple consecutive frames in principle allow 2D projection ambiguities to be resolved, it has not yet been investigated systematically how much 3D pose estimates improve when multiple consecutive frames are used instead of single-frame information. In this paper we analyze the difference in quality of 3D pose estimates based on different numbers of consecutive frames from which 2D pose estimates are available. We validate the use of temporal information on two major, different approaches to human pose estimation: modeling and learning approaches. The results of our experiments show that both learning and modeling approaches benefit from using multiple frames instead of single-frame input, but that the benefit is small when the 2D pose estimates are highly precise.
|
|
|
G.D. Evangelidis, Ferran Diego, Joan Serrat, & Antonio Lopez. (2011). Slice Matching for Accurate Spatio-Temporal Alignment. In ICCV Workshop on Visual Surveillance.
Abstract: Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly- or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise and thus allows their differences to be highlighted automatically. This primarily aims at visual surveillance, but the method can be adopted as is by other related video applications, such as object transfer (augmented reality) or high dynamic range video. We build upon a slice matching scheme to first synchronize the sequences, while we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare with related previous works.
Keywords: video alignment
|
|
|
Gemma Roig, Xavier Boix, F. de la Torre, Joan Serrat, & C. Vilella. (2011). Hierarchical CRF with product label spaces for parts-based Models. In IEEE Conference on Automatic Face and Gesture Recognition (pp. 657–664).
Abstract: Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction and image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRFs) have been successfully embedded into a discriminative parts-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex part interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset.
Keywords: Shape; Computational modeling; Principal component analysis; Random variables; Color; Upper bound; Facial features
|
|
|
Fahad Shahbaz Khan, Joost Van de Weijer, Andrew Bagdanov, & Maria Vanrell. (2011). Portmanteau Vocabularies for Multi-Cue Image Representation. In 25th Annual Conference on Neural Information Processing Systems.
Abstract: We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.
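The compound-word construction described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation (their information-theoretic compression step is omitted): each local feature carries a color-word id and a shape-word id, and the pair indexes a product ("portmanteau") vocabulary.

```python
from collections import Counter

def portmanteau_histogram(color_words, shape_words, n_color, n_shape):
    """Bag-of-words histogram over a product vocabulary.

    Each local feature i has a color-word id and a shape-word id;
    their pair is a compound ("portmanteau") word, indexed into a
    vocabulary of size n_color * n_shape, then L1-normalized.
    """
    counts = Counter(c * n_shape + s
                     for c, s in zip(color_words, shape_words))
    total = sum(counts.values())
    return [counts[w] / total for w in range(n_color * n_shape)]
```

For example, three features with color words `[0, 0, 1]` and shape words `[1, 0, 1]` fall into compound words 1, 0 and 3 of a 2×2 product vocabulary, giving the normalized histogram `[1/3, 1/3, 0, 1/3]`.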
|
|
|
Naila Murray, Sandra Skaff, Luca Marchesotti, & Florent Perronnin. (2011). Towards Automatic Concept Transfer. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering (pp. 167–176). ACM Press.
Abstract: This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.
Keywords: chromatic modeling, color concepts, color transfer, concept transfer
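In one dimension the Earth Mover's Distance used above for mapping between chromatic models has a closed form: the L1 distance between cumulative histograms. A toy sketch under that assumption (the paper works with multi-dimensional chromatic models, so this 1-D version is only illustrative):

```python
def emd_1d(p, q, bin_width=1.0):
    """Earth Mover's Distance between two normalized 1-D histograms.

    In 1-D the EMD reduces to the L1 distance between the two
    cumulative histograms, scaled by the bin width.
    """
    assert len(p) == len(q)
    work, cp, cq = 0.0, 0.0, 0.0
    for pi, qi in zip(p, q):
        cp += pi            # running mass of p
        cq += qi            # running mass of q
        work += abs(cp - cq)
    return work * bin_width
```

Moving all mass across two bins, e.g. `emd_1d([1, 0, 0], [0, 0, 1])`, costs 2.0; identical histograms cost 0.0.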
|
|
|
Jordi Roca, C. Alejandro Parraga, & Maria Vanrell. (2011). Categorical Focal Colours are Structurally Invariant Under Illuminant Changes. In European Conference on Visual Perception (p. 196). Perception 40.
Abstract: The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical “focal” colours and their interrelations are under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003 Proceedings of the National Academy of Sciences of the USA 100 9085–9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adaptation state as a 3D interconnected structure (graph) in Lab space. The graph nodes were the subject's focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better under illuminant changes than the structures of the physical “ideal” colours.
|
|
|
Ruth Aylett, Ginevra Castellano, Bogdan Raducanu, Ana Paiva, & Marc Hanheide. (2011). Long-term socially perceptive and interactive robot companions: challenges and future perspectives. In 13th International Conference on Multimodal Interaction (pp. 323–326). ACM.
Abstract: This paper gives a brief overview of the challenges of multimodal perception and generation applied to robot companions located in human social environments. It reviews the current position in both perception and generation and the immediate technical challenges, and goes on to consider the extra issues raised by embodiment and social context. Finally, it briefly discusses the impact of systems that must function continually over months rather than just for a few hours.
Keywords: human-robot interaction, multimodal interaction, social robotics
|
|
|
Antonio Hernandez, Carlos Primo, & Sergio Escalera. (2011). Automatic user interaction correction via Multi-label Graph cuts. In 1st IEEE International Workshop on Human Interaction in Computer Vision (HICV), ICCV 2011 (pp. 1276–1281).
Abstract: Most image segmentation applications require user interaction in order to achieve accurate results. However, users want to reach the desired segmentation accuracy while reducing the effort of manual labelling. In this work, we extend the standard multi-label α-expansion Graph Cut algorithm so that it analyzes the user's interaction in order to modify the object model and improve the final segmentation of objects. The approach is inspired by the fact that fast user interactions may introduce pixel errors that confuse object and background. Our results with different degrees of user interaction and input errors show the high performance of the proposed approach on a multi-label human limb segmentation problem compared with the classical α-expansion algorithm.
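The energy minimized by multi-label α-expansion is typically a sum of per-pixel data costs and a Potts smoothness term over neighbouring pixels. A minimal sketch of evaluating that energy (the costs, neighbourhood and λ here are hypothetical; the paper's contribution of updating the object model from user interaction is not reproduced):

```python
def potts_energy(labels, unary, neighbors, lam=1.0):
    """Energy of a multi-label segmentation.

    labels:    label id per pixel
    unary:     unary[i][l] = data cost of assigning label l to pixel i
    neighbors: list of (i, j) neighbouring pixel pairs
    lam:       Potts penalty paid whenever neighbours disagree
    """
    data = sum(unary[i][labels[i]] for i in range(len(labels)))
    smooth = sum(lam for i, j in neighbors if labels[i] != labels[j])
    return data + smooth
```

On a toy 3-pixel chain with costs `unary = [[0, 5], [4, 1], [5, 0]]` and neighbours `[(0, 1), (1, 2)]`, the labeling `[0, 1, 1]` has data cost 0 + 1 + 0 and one label discontinuity, for a total energy of 2.0; α-expansion searches for the labeling minimizing this quantity.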
|
|
|
Miguel Reyes, Gabriel Dominguez, & Sergio Escalera. (2011). Feature Weighting in Dynamic Time Warping for Gesture Recognition in Depth Data. In 1st IEEE Workshop on Consumer Depth Cameras for Computer Vision (pp. 1182–1188).
Abstract: We present a gesture recognition approach for depth video data based on a novel Feature Weighting approach within the Dynamic Time Warping framework. Depth features from human joints are compared through video sequences using Dynamic Time Warping, and weights are assigned to features based on inter- and intra-class gesture variability. Feature Weighting in Dynamic Time Warping is then applied to recognize the begin-end boundaries of gestures in data sequences. The results obtained recognizing several gestures in depth data show high performance compared with the classical Dynamic Time Warping approach.
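The role of the feature weights inside the DTW recursion can be sketched as follows. This is a generic weighted-DTW illustration under a weighted L1 local cost, not the authors' exact formulation, and their inter-/intra-class weight learning is not reproduced:

```python
def weighted_dtw(seq_a, seq_b, weights):
    """DTW distance between two sequences of feature vectors,
    with a per-feature weight applied in the local cost."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    # D[i][j] = cost of best alignment of seq_a[:i] with seq_b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = sum(w * abs(a - b)
                       for w, a, b in zip(weights, seq_a[i - 1], seq_b[j - 1]))
            # standard DTW step: insertion, deletion, or match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

With 1-D "features", aligning `[(0,), (1,), (2,)]` against `[(0,), (2,)]` under unit weight costs 1.0; doubling the weight doubles the distance, which is how down-weighting unstable joints reduces their influence on the match.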
|
|
|
Michal Drozdzal, Santiago Segui, Petia Radeva, Jordi Vitria, & Laura Igual. (2011). System and Method for Displaying Motility Events in an in Vivo Image Stream.
|
|
|
Marçal Rusiñol, R. Roset, Josep Llados, & C. Montaner. (2011). Automatic Index Generation of Digitized Map Series by Coordinate Extraction and Interpretation. In Proceedings of the Sixth International Workshop on Digital Technologies in Cartographic Heritage.
|
|
|
Jaime Moreno, & Xavier Otazu. (2011). Image compression algorithm based on Hilbert scanning of embedded quadTrees: an introduction of the Hi-SET coder. In IEEE International Conference on Multimedia and Expo (pp. 1–6).
Abstract: In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It represents an image as an embedded bitstream along a fractal scanning function. Embedding is an important feature of modern image compression algorithms; as Salomon notes [1, p. 614], another, perhaps unique, feature is that the best possible quality is achieved for the number of bits input to the decoder at any point during decoding. Hi-SET also possesses this latter feature. Furthermore, the coder is based on a quadtree partition strategy which, applied to image transforms such as the discrete cosine or wavelet transform, yields energy clustering in both frequency and space. The coding algorithm is composed of three general steps and uses just a list of significant pixels. The proposed coder is implemented for gray-scale and color image compression. Hi-SET compressed images are, on average, 6.20 dB better than those obtained by other compression techniques based on Hilbert scanning. Moreover, Hi-SET improves image quality by 1.39 dB and 1.00 dB in gray-scale and color compression, respectively, when compared with the JPEG2000 coder.
|
|
|
Jaime Moreno, & Xavier Otazu. (2011). Image coder based on Hilbert scanning of embedded quadTrees. In Data Compression Conference (p. 470).
Abstract: In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It represents an image as an embedded bitstream along a fractal scanning function. Embedding is an important feature of modern image compression algorithms; as Salomon notes [1, p. 614], another, perhaps unique, feature is that the best possible quality is achieved for the number of bits input to the decoder at any point during decoding. Hi-SET also possesses this latter feature. Furthermore, the coder is based on a quadtree partition strategy which, applied to image transforms such as the discrete cosine or wavelet transform, yields energy clustering in both frequency and space. The coding algorithm is composed of three general steps and uses just a list of significant pixels.
|
|
|
Jorge Bernal, Fernando Vilariño, & F. Javier Sanchez. (2011). Towards Intelligent Systems for Colonoscopy. In Paul Miskovitz (Ed.), Colonoscopy (Vol. 1, pp. 257–282). Intech.
Abstract: In this chapter we present tools that can be used to build intelligent systems for colonoscopy.
The idea is to add significant value to the colonoscopy procedure by using methods based on computer vision and artificial intelligence. Intelligent systems are already being used to assist in other medical interventions.
|
|
|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2011). Depth of Valleys Accumulation Algorithm for Object Detection. In 14th Congrès Català en Intel·ligencia Artificial (Vol. 1, pp. 71–80).
Abstract: This work aims at detecting which regions of an image contain objects by using information about the intensity of valleys, which appear to surround objects in images where the light source lies in the same direction as the camera. We present our depth of valleys accumulation method, which consists of two stages: first, the definition of the depth of valleys image, which combines the output of a ridges and valleys detector with the morphological gradient to measure how deep a point is inside a valley; and second, an algorithm that marks as interior to objects those points of the image that lie inside complete or incomplete boundaries in the depth of valleys image. To evaluate the performance of our method we have tested it on several application domains. Our results on object region identification are promising, especially in the field of polyp detection in colonoscopy videos, and we also show its applicability in different areas.
Keywords: Object Recognition, Object Region Identification, Image Analysis, Image Processing
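The morphological gradient used in the depth of valleys image is dilation minus erosion. A minimal gray-scale sketch with a 3×3 structuring element (the ridge/valley detector it is combined with in the paper is not reproduced here):

```python
def morphological_gradient(img):
    """3x3 morphological gradient of a gray-scale image.

    For each pixel: (max over the 3x3 window) - (min over the 3x3
    window), i.e. dilation minus erosion with a square structuring
    element. Border pixels use the clipped part of the window.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - 1), min(h, y + 2))
                      for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = max(window) - min(window)
    return out
```

Flat regions give zero response, while intensity valleys and edges give large values, which is the cue the accumulation algorithm sums along object boundaries.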
|
|