|
Javier Vazquez, Maria Vanrell, & Robert Benavente. (2010). Color names as a constraint for Computer Vision problems. In Proceedings of The CREATE 2010 Conference (pp. 324–328).
Abstract: Computer vision problems are usually ill-posed, so constraining the gamut of possible solutions is a necessary step. Many constraints for different problems have been developed over the years. In this paper, we present a different way of constraining some of these problems: the use of color names. In particular, we focus on segmentation, representation, and constancy.
|
|
|
Fahad Shahbaz Khan, Joost Van de Weijer, & Maria Vanrell. (2010). Who Painted this Painting? In Proceedings of The CREATE 2010 Conference (pp. 329–333).
|
|
|
Shida Beigpour, & Joost Van de Weijer. (2010). Photo-Realistic Color Alteration for Architecture and Design. In Proceedings of The CREATE 2010 Conference (pp. 84–88).
Abstract: As color is a strong stimulus we receive from the exterior world, choosing the right color can prove crucial in creating the desired architecture and design. We propose a framework to apply a realistic color change to both objects and their illuminating lights in snapshots of architectural designs, in order to visualize and choose the right color before actually applying the change in the real world. The proposed framework is based on the laws of physics in order to achieve realistic and physically plausible results.
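The abstract does not name a specific reflection model; one common physics-based choice for this kind of recoloring is the dichromatic model, in which each pixel mixes a body (diffuse) term carrying the object color with a specular term carrying the illuminant color. A minimal sketch under that assumption follows; the function, the white-illuminant choice, and the least-squares estimation step are illustrative, not taken from the paper:

    import numpy as np

    def recolor_dichromatic(img, mask, old_rgb, new_rgb):
        # Assumes I = m_b * body + m_s * illum per pixel: estimate the
        # body/specular magnitudes for the masked object, then swap the
        # body color while keeping shading and highlights.
        img = img.astype(float)
        body = np.asarray(old_rgb, float)
        body = body / np.linalg.norm(body)
        illum = np.ones(3) / np.sqrt(3.0)        # assumed white illuminant
        A = np.stack([body, illum], axis=1)      # 3x2 mixing matrix
        coeff, *_ = np.linalg.lstsq(A, img[mask].T, rcond=None)
        m_b, m_s = np.clip(coeff, 0.0, None)
        new_body = np.asarray(new_rgb, float)
        new_body = new_body / np.linalg.norm(new_body)
        out = img.copy()
        out[mask] = np.outer(m_b, new_body) + np.outer(m_s, illum)
        return np.clip(out, 0, 255).astype(np.uint8)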
|
|
|
Ariel Amato, Angel Sappa, Alicia Fornes, Felipe Lumbreras, & Josep Llados. (2013). Divide and Conquer: Atomizing and Parallelizing a Task in a Mobile Crowdsourcing Platform. In 2nd International ACM Workshop on Crowdsourcing for Multimedia (pp. 21–22).
Abstract: In this paper we present some conclusions about the advantages of an efficient task formulation when a crowdsourcing platform is used. In particular, we show how task atomization and distribution can help to obtain results in an efficient way. Our proposal is based on recursively splitting the original task into a set of smaller and simpler tasks. As a result, both more accurate and faster solutions are obtained. Our evaluation is performed on a set of ancient documents that need to be digitized.
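A minimal sketch of the recursive atomization idea; the Task interface (the size and split methods) is hypothetical, not the platform's actual API:

    def atomize(task, max_size):
        # Recursively split `task` until every piece is small enough to
        # hand to a single crowd worker.
        if task.size() <= max_size:
            return [task]
        left, right = task.split()   # e.g. top/bottom halves of a page
        return atomize(left, max_size) + atomize(right, max_size)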
|
|
|
Josep Llados. (2006). Computer Vision: Progress of Research and Development (J. Llados, Ed.).
|
|
|
Robert Benavente, Laura Igual, & Fernando Vilariño. (2008). Current Challenges in Computer Vision.
|
|
|
Francesco Ciompi, Oriol Pujol, Simone Balocco, Xavier Carrillo, J. Mauri, & Petia Radeva. (2011). Automatic Key Frames Detection in Intravascular Ultrasound Sequences. In MICCAI 2011 Workshop on Computing and Visualization for Intravascular Imaging.
Abstract: We present a method for the automatic detection of key frames in Intravascular Ultrasound (IVUS) sequences. The key frames are markers delimiting morphological changes along the vessel. The aim of defining key frames is two-fold: (1) they allow the content of the pullback to be summarized in a few representative frames; (2) they represent the basis for the automatic detection of clinical events in IVUS. The proposed approach achieved a compression ratio of 0.016 with respect to the original sequence and an average inter-frame distance of 61.76 frames, minimizing the number of missed clinical events.
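For intuition, the two reported figures are mutually consistent: a compression ratio r = K/N = 0.016 keeps roughly 16 key frames per 1000 frames, so the expected spacing between consecutive key frames is about 1/r = 1/0.016 = 62.5 frames, in line with the reported average inter-frame distance of 61.76 frames.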
|
|
|
Chen Zhang, Maria del Mar Vila Muñoz, Petia Radeva, Roberto Elosua, Maria Grau, Angels Betriu, et al. (2015). Carotid Artery Segmentation in Ultrasound Images. In Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting (CVII-STENT2015), Joint MICCAI Workshops.
|
|
|
B. Moghaddam, David Guillamet, & Jordi Vitria. (2003). Local Appearance-Based Models using High-Order Statistics of Image Features. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
|
|
|
Bogdan Raducanu, & Jordi Vitria. (2007). Online Learning for Human-Robot Interaction. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
|
|
|
Sergio Escalera, Petia Radeva, & Oriol Pujol. (2007). Complex Salient Regions for Computer Vision Problems. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
|
|
|
Sergio Escalera, Oriol Pujol, J. Mauri, & Petia Radeva. (2008). IVUS Tissue Characterization with Sub-class Error-correcting Output Codes. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops 2008) (pp. 1–8), 23–28 June 2008.
Abstract: Intravascular ultrasound (IVUS) represents a powerful imaging technique to explore coronary vessels and to study their morphology and histologic properties. In this paper, we characterize different tissues based on Radio Frequency, texture-based, slope-based, and combined features. To deal with the classification of multiple tissues, we require the use of robust multi-class learning techniques. In this context, we propose a strategy to model multi-class classification tasks using sub-classes information in the ECOC framework. The new strategy splits the classes into different subsets according to the applied base classifier. Complex IVUS data sets containing overlapping data are learnt by splitting the original set of classes into sub-classes, and embedding the binary problems in a problem-dependent ECOC design. The method automatically characterizes different tissues, showing performance improvements over the state-of-the-art ECOC techniques for different base classifiers and feature sets.
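For readers unfamiliar with the framework: ECOC reduces a multi-class problem to a set of binary problems through a coding matrix, and the sub-class strategy splits a difficult class into sub-classes before coding, mapping them back to the parent class at decoding time. A minimal sketch follows, using one-vs-all as the simplest valid coding and a 2-means clustering as a stand-in for the paper's problem-dependent splitting criterion:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    def split_class(X, y, cls, k=2):
        # Replace class `cls` by k sub-classes found by clustering
        # (a stand-in for the paper's problem-dependent splitting).
        y = y.copy()
        idx = np.where(y == cls)[0]
        sub = KMeans(n_clusters=k, n_init=10).fit_predict(X[idx])
        new_ids = [int(y.max()) + 1 + j for j in range(k)]
        y[idx] = np.take(new_ids, sub)
        parent = {i: cls for i in new_ids}
        return y, parent

    def ecoc_train(X, y):
        # One-vs-all is the simplest valid ECOC coding (identity matrix).
        classes = np.unique(y)
        clfs = [LogisticRegression(max_iter=1000).fit(X, (y == c).astype(int))
                for c in classes]
        return classes, clfs

    def ecoc_decode(X, classes, clfs, parent):
        # Pick the best-responding dichotomizer, then map any sub-class
        # label back to its parent class.
        scores = np.stack([clf.predict_proba(X)[:, 1] for clf in clfs], axis=1)
        pred = classes[scores.argmax(axis=1)]
        return np.array([parent.get(int(p), int(p)) for p in pred])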
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2008). On the Use of Independent Tasks for Face Recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 1–6).
|
|
|
Jose Manuel Alvarez, Theo Gevers, & Antonio Lopez. (2009). Learning Photometric Invariance from Diversified Color Model Ensembles. In 22nd IEEE Conference on Computer Vision and Pattern Recognition (pp. 565–572).
Abstract: Color is a powerful visual cue for many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions, negatively affecting the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, those reflection models might be too restricted to model real-world scenes in which different reflectance mechanisms may hold simultaneously. Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set, composed of both color variants and invariants, is taken as input. Then, the proposed method combines and weights these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, the fusion method uses a multi-view approach to minimize the estimation error. In this way, the method is robust to data uncertainty and produces properly diversified color invariant ensembles. Experiments are conducted on three different image datasets to validate the method. From the theoretical and experimental results, it is concluded that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning. Further, the method outperforms state-of-the-art detection techniques in the field of object, skin and road recognition.
Keywords: road detection
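A rough sketch of the ingredients, assuming a small set of classic variant/invariant channels; the channel set and the fixed weighting below are illustrative, and the paper's multi-view learning of the weights is omitted:

    import numpy as np

    def color_channels(img):
        # A few classic variant/invariant channels (illustrative set only).
        img = img.astype(float) + 1e-6
        R, G, B = img[..., 0], img[..., 1], img[..., 2]
        s = R + G + B
        intensity = s / 3.0              # variant: changes with shading
        r, g = R / s, G / s              # chromaticity: intensity-invariant
        o1 = (R - G) / np.sqrt(2.0)      # opponent channel: variant
        return np.stack([intensity, r, g, o1], axis=-1)

    def ensemble_score(img, w):
        # Weighted combination of the channels; in the paper the weights
        # are learned by multi-view fusion to balance invariance against
        # distinctiveness (that learning step is omitted here).
        F = color_channels(img)
        w = np.asarray(w, float)
        return F @ (w / w.sum())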
|
|
|
Sergio Escalera, Eloi Puertas, Petia Radeva, & Oriol Pujol. (2009). Multimodal laughter recognition in video conversations. In 2nd IEEE Workshop on CVPR for Human Communicative Behavior Analysis (pp. 110–115).
Abstract: Laughter detection is an important area of interest in the Affective Computing and Human-Computer Interaction fields. In this paper, we propose a multi-modal methodology based on the fusion of audio and visual cues to deal with the laughter recognition problem in face-to-face conversations. The audio features are extracted from the spectrogram, and the video features are obtained by estimating the degree of mouth movement and using a smile and laughter classifier. Finally, the multi-modal cues are combined in a sequential classifier. Results on videos from the public discussion blog of the New York Times show that both types of features perform better when considered together by the classifier. Moreover, the sequential methodology is shown to significantly outperform the results obtained by an AdaBoost classifier.
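A minimal sketch of the fusion step, assuming per-frame audio features (e.g., spectrogram statistics) and visual cues (mouth-movement degree, smile score) are already extracted; stacking temporal windows into a static classifier is a simple stand-in for the paper's sequential classifier:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def stack_windows(feats, k=5):
        # Stack k consecutive frames so a static classifier sees
        # short-term dynamics (stand-in for a sequential model).
        pad = np.repeat(feats[:1], k - 1, axis=0)
        ext = np.vstack([pad, feats])
        return np.hstack([ext[i:i + len(feats)] for i in range(k)])

    def train_laughter(audio_feats, video_feats, labels, k=5):
        # Early fusion: concatenate per-frame audio and visual cues,
        # then classify over temporal windows.
        fused = np.hstack([audio_feats, video_feats])
        X = stack_windows(fused, k)
        return LogisticRegression(max_iter=1000).fit(X, labels)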
|
|