|
Carles Fernandez, Jordi Gonzalez, & Xavier Roca. (2010). Automatic Learning of Background Semantics in Generic Surveilled Scenes. In 11th European Conference on Computer Vision (Vol. 6313, pp. 678–692). LNCS. Springer Berlin Heidelberg.
Abstract: Advanced surveillance systems for behavior recognition in outdoor traffic scenes depend strongly on the particular configuration of the scenario. Scene-independent trajectory analysis techniques statistically infer semantics in locations where motion occurs, and such inferences are typically limited to abnormality. Thus, it is of interest to design methods that automatically categorize more specific semantic regions. State-of-the-art approaches for unsupervised scene labeling exploit trajectory data to segment areas like sources, sinks, or waiting zones. Our method, in addition, incorporates scene-independent knowledge to assign more meaningful labels like crosswalks, sidewalks, or parking spaces. First, a spatiotemporal scene model is obtained from trajectory analysis. Subsequently, a so-called GI-MRF inference process reinforces spatial coherence, and incorporates taxonomy-guided smoothness constraints. Our method achieves automatic and effective labeling of conceptual regions in urban scenarios, and is robust to tracking errors. Experimental validation on 5 surveillance databases has been conducted to assess the generality and accuracy of the segmentations. The resulting scene models are used for model-based behavior analysis.
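The spatial-coherence step that the abstract attributes to the GI-MRF inference can be illustrated with a generic grid-MRF smoother. The sketch below is not the authors' algorithm: it is plain iterated conditional modes (ICM) with a Potts pairwise term, shown only to make the "reinforces spatial coherence" idea concrete; `unary`, `beta`, and the 4-neighbourhood are illustrative assumptions.

```python
import numpy as np

def icm_smooth(unary, n_iters=5, beta=1.0):
    """Iterated conditional modes on a grid MRF: each pixel takes the
    label minimising its unary cost plus a Potts penalty (beta) for
    disagreeing with its 4-neighbours. unary has shape (H, W, L)."""
    h, w, n_labels = unary.shape
    labels = unary.argmin(axis=2)          # initial per-pixel labels
    for _ in range(n_iters):
        for y in range(h):
            for x in range(w):
                costs = unary[y, x].copy()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Potts term: pay beta for every disagreeing neighbour
                        costs += beta * (np.arange(n_labels) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels
```

A pixel whose own evidence weakly favours an odd label gets overruled by its neighbours, which is the smoothing behaviour the abstract describes (the real method additionally weights label transitions by a semantic taxonomy).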
|
|
|
Bhalaji Nagarajan, Ricardo Marques, Marcos Mejia, & Petia Radeva. (2022). Class-conditional Importance Weighting for Deep Learning with Noisy Labels. In 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 5, pp. 679–686).
Abstract: Large-scale, accurate labels are essential for training Deep Neural Networks and assuring high performance. However, creating a clean dataset is very expensive, since it usually relies on human annotation. For this reason, the labelling process is made cheap at the cost of obtaining noisy labels. Learning with Noisy Labels is therefore an active and very challenging area of research. Recent advances in self-supervised learning and robust loss functions have helped to advance noisy-label research. In this paper, we propose a loss correction method that relies on dynamic weights computed during model training. We extend the existing Contrast to Divide algorithm coupled with DivideMix using a new class-conditional weighting scheme. We validate the method using standard noise experiments and achieve encouraging results.
Keywords: Noisy Labeling; Loss Correction; Class-conditional Importance Weighting; Learning with Noisy Labels
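The abstract does not specify how the class-conditional weights enter the loss, so the following is only a generic sketch of the underlying idea: scale each sample's cross-entropy term by a weight attached to its (possibly noisy) class, so that classes judged unreliable contribute less. The weight values themselves are assumed inputs here, not the paper's dynamic scheme.

```python
import numpy as np

def class_weighted_cross_entropy(probs, labels, class_weights):
    """Cross-entropy where each sample is scaled by a weight attached to
    its class; down-weighting suspicious classes reduces the influence
    of mislabelled samples. probs: (N, C) softmax outputs."""
    n = len(labels)
    eps = 1e-12                                        # numerical safety
    sample_w = class_weights[labels]                   # per-sample weight from its class
    nll = -np.log(probs[np.arange(n), labels] + eps)   # standard NLL terms
    return np.sum(sample_w * nll) / np.sum(sample_w)   # weighted average
```

With all weights equal this reduces to ordinary cross-entropy; in the paper the weights are recomputed dynamically as training progresses.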
|
|
|
Oriol Pujol, Debora Gil, & Petia Radeva. (2005). Fundamentals of Stop and Go active models. Image and Vision Computing, 23(8), 681–691.
Abstract: An efficient snake formulation should conform to the idea of picking the smoothest curve among all the shapes approximating an object of interest. In current geodesic snakes, the regularizing curvature also affects the convergence stage, hindering the latter at concave regions. In the present work, we make use of characteristic functions to define a novel geodesic formulation that decouples regularity and convergence. This term decoupling endows the snake with higher adaptability to non-convex shapes. Convergence is ensured by splitting the definition of the external force into an attractive vector field and a repulsive one. In our paper, we propose to use likelihood maps as approximation of characteristic functions of object appearance. The better efficiency and accuracy of our decoupled scheme are illustrated in the particular case of feature space-based segmentation.
Keywords: Deformable models; Geodesic snakes; Region-based segmentation
|
|
|
J. Pladellorens, M.J. Yzuel, J. Castell, & Joan Serrat. (1993). Cálculo automático del volumen del ventrículo izquierdo. Comparación con expertos [Automatic calculation of left-ventricle volume: comparison with experts]. Óptica Pura y Aplicada, 685–691.
|
|
|
Aura Hernandez-Sabate, Debora Gil, Josefina Mauri, & Petia Radeva. (2006). Reducing cardiac motion in IVUS sequences. In Proceedings of Computers in Cardiology (Vol. 33, pp. 685–688).
Abstract: Cardiac vessel displacement is a main artifact in IVUS sequences. It hinders visualization of the main structures in an appropriate orientation and alignment, and it affects the extraction of vessel measurements. In this paper, we present a novel approach to image sequence alignment based on spectral analysis, which removes rigid dynamics while preserving the vessel geometry. First, we suppress the translation by taking, for each frame, the center of mass of the image as the origin of coordinates. In polar coordinates with this point as origin, the rotation appears as a horizontal displacement. This displacement induces a phase shift in the Fourier coefficients of two consecutive polar images. We estimate the phase by fitting a regression plane to the phases of the principal frequencies. Experiments show that the presented strategy suppresses cardiac motion regardless of the acquisition device.
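The core of the alignment step is the Fourier shift theorem: a spatial displacement between two frames shows up as a phase ramp in their cross-power spectrum, and a regression of phase against frequency recovers the shift. A minimal 1D numpy sketch (the paper works on 2D polar images and fits a regression plane; function name and the magnitude-weighted fit are illustrative choices):

```python
import numpy as np

def estimate_shift(sig_a, sig_b, n_freqs=8):
    """Estimate the displacement between two 1D signals from the phase
    of their cross-power spectrum (shift theorem), via a magnitude-
    weighted regression of phase against angular frequency."""
    fa, fb = np.fft.rfft(sig_a), np.fft.rfft(sig_b)
    cross = fa * np.conj(fb)                  # phase(k) = 2*pi*k*shift/N
    # Keep the strongest non-DC frequency bins
    idx = np.argsort(np.abs(cross[1:]))[::-1][:n_freqs] + 1
    phases = np.angle(cross[idx])
    freqs = 2 * np.pi * idx / len(sig_a)
    w = np.abs(cross[idx])                    # trust strong frequencies more
    # Least-squares line through the origin: phase ≈ freq * shift
    return np.sum(w * freqs * phases) / np.sum(w * freqs ** 2)
```

Note this simple form assumes the phases of the selected bins do not wrap (small shifts at the principal frequencies), which matches the small inter-frame rotations of an IVUS sequence.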
|
|
|
Eloi Puertas, Miguel Angel Bautista, Daniel Sanchez, Sergio Escalera, & Oriol Pujol. (2014). Learning to Segment Humans by Stacking their Body Parts. In ECCV Workshop on ChaLearn Looking at People (Vol. 8925, pp. 685–697). LNCS.
Abstract: Human segmentation in still images is a complex task due to the wide range of body poses and drastic changes in environmental conditions. Usually, human body segmentation is treated in a two-stage fashion. First, a human body part detection step is performed, and then human part detections are used as prior knowledge to be optimized by segmentation strategies. In this paper, we present a two-stage scheme based on Multi-Scale Stacked Sequential Learning (MSSL). We define an extended feature set by stacking a multi-scale decomposition of body part likelihood maps. These likelihood maps are obtained in a first stage by means of an ECOC ensemble of soft body part detectors. In a second stage, contextual relations of part predictions are learnt by a binary classifier, obtaining an accurate body confidence map. The obtained confidence map is fed to a graph-cut optimization procedure to obtain the final segmentation. Results show improved segmentation when MSSL is included in the human segmentation pipeline.
Keywords: Human body segmentation; Stacked Sequential Learning
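The "stacking a multi-scale decomposition of likelihood maps" step can be sketched generically: blur each part's likelihood map at several scales and concatenate the results into a per-pixel feature vector for the second-stage classifier. This is an illustration of the MSSL feature-construction idea, not the authors' code; the box blur and the scale set are stand-in assumptions (any low-pass pyramid would do).

```python
import numpy as np

def box_blur(img, k):
    """Separable-free box blur of odd width k, with edge padding
    (a simple stand-in for a multi-scale decomposition)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def stack_multiscale(likelihood_maps, scales=(1, 3, 7)):
    """Stack each body-part likelihood map at several scales into one
    per-pixel feature vector, as in stacked sequential learning."""
    feats = [box_blur(m, s) if s > 1 else m.astype(float)
             for m in likelihood_maps for s in scales]
    return np.stack(feats, axis=-1)   # H x W x (n_parts * n_scales)
```

Each pixel then carries both local and contextual (coarser-scale) evidence, which is what lets the second-stage classifier learn spatial relations between part predictions.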
|
|
|
Arjan Gijsenij, & Theo Gevers. (2011). Color Constancy Using Natural Image Statistics and Scene Semantics. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(4), 687–698.
Abstract: Existing color constancy methods are all based on specific assumptions such as the spatial and spectral characteristics of images. As a consequence, no algorithm can be considered as universal. However, with the large variety of available methods, the question is how to select the method that performs best for a specific image. To achieve selection and combining of color constancy algorithms, in this paper natural image statistics are used to identify the most important characteristics of color images. Then, based on these image characteristics, the proper color constancy algorithm (or best combination of algorithms) is selected for a specific image. To capture the image characteristics, the Weibull parameterization (e.g., grain size and contrast) is used. It is shown that the Weibull parameterization is related to the image attributes to which the used color constancy methods are sensitive. An MoG-classifier is used to learn the correlation and weighting between the Weibull-parameters and the image attributes (number of edges, amount of texture, and SNR). The output of the classifier is the selection of the best performing color constancy method for a certain image. Experimental results show a large improvement over state-of-the-art single algorithms. On a data set consisting of more than 11,000 images, an increase in color constancy performance up to 20 percent (median angular error) can be obtained compared to the best-performing single algorithm. Further, it is shown that for certain scene categories, one specific color constancy algorithm can be used instead of the classifier considering several algorithms.
|
|
|
Jiaolong Xu, David Vazquez, Sebastian Ramos, Antonio Lopez, & Daniel Ponsa. (2013). Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers. In CVPR Workshop on Ground Truth – What is a good dataset? (pp. 688–693).
Abstract: Training vision-based pedestrian detectors using synthetic datasets (virtual world) is a useful technique for automatically collecting training examples together with their pixel-wise ground truth. However, as is often the case, these detectors must operate in real-world images, where they experience a significant drop in performance. In fact, this effect also occurs among different real-world datasets, i.e., detectors' accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, in order to avoid this problem, the detector trained with synthetic data must be adapted to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both the virtual and the real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from the virtual to the real world, avoiding drops in average precision of over 15%.
Keywords: Pedestrian Detection; Domain Adaptation
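An LDA exemplar classifier, the building block this paper boosts, has a closed form: the hyperplane is the negative-class covariance applied inversely to the difference between a single positive exemplar's feature vector and the mean of generic negatives. The sketch below shows only that closed form (the boosting-based domain adaptation is not reproduced); the bias choice and regularizer are illustrative assumptions.

```python
import numpy as np

def lda_exemplar(exemplar, neg_mean, neg_cov, reg=1e-3):
    """Closed-form LDA classifier for one positive exemplar:
    w = Sigma^{-1} (mu_pos - mu_neg), with Sigma and mu_neg estimated
    once from generic background windows and reused for every exemplar."""
    d = len(exemplar)
    w = np.linalg.solve(neg_cov + reg * np.eye(d), exemplar - neg_mean)
    b = -0.5 * w @ (exemplar + neg_mean)   # bias at the class midpoint
    return w, b
```

Because the covariance is shared, training one classifier per exemplar costs only a vector solve, which is what makes large pools of exemplar classifiers (and boosting over them) practical.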
|
|
|
Pau Torras, Arnau Baro, Lei Kang, & Alicia Fornes. (2021). On the Integration of Language Models into Sequence to Sequence Architectures for Handwritten Music Recognition. In International Society for Music Information Retrieval Conference (pp. 690–696).
Abstract: Despite the latest advances in Deep Learning, the recognition of handwritten music scores is still a challenging endeavour. Even though recent Sequence to Sequence (Seq2Seq) architectures have demonstrated their capacity to reliably recognise handwritten text, their performance is still far from satisfactory when applied to historical handwritten scores. Indeed, the ambiguous nature of handwriting, the non-standard musical notation employed by composers of the time and the decaying state of old paper make these scores remarkably difficult to read, sometimes even by trained humans. Thus, in this work we explore the incorporation of language models into a Seq2Seq-based architecture to try to improve transcriptions where the aforementioned unclear writing produces statistically unsound mistakes, which, as far as we know, has never been attempted for this field of research on this architecture. After studying various Language Model integration techniques, the experimental evaluation on historical handwritten music scores shows a significant improvement over the state of the art, indicating that this is a promising research direction for dealing with such difficult manuscripts.
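One common language-model integration technique of the kind this abstract studies is shallow fusion: at each decoding step, the recognizer's log-probabilities are combined with the language model's, so a symbol that is optically ambiguous but musically likely can win. A minimal sketch (the weight `lam` and the single-step form are illustrative; whether the paper uses exactly this scheme is not stated in the abstract):

```python
import numpy as np

def shallow_fusion_step(seq2seq_logprobs, lm_logprobs, lam=0.3):
    """One decoding step of shallow fusion: pick the next symbol by
    combining the optical (Seq2Seq) score with the language-model score,
    score = log p_seq2seq + lam * log p_lm."""
    combined = seq2seq_logprobs + lam * lm_logprobs
    return int(np.argmax(combined)), combined
```

With `lam = 0` this degenerates to pure optical decoding; increasing `lam` lets the language model veto statistically unsound transcriptions, which is exactly the failure mode the paper targets.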
|
|
|
Miguel Angel Bautista, Sergio Escalera, Xavier Baro, Petia Radeva, Jordi Vitria, & Oriol Pujol. (2011). Minimal Design of Error-Correcting Output Codes. PRL - Pattern Recognition Letters, 33(6), 693–702.
Abstract: The classification of a large number of object categories is a challenging trend in the pattern recognition field. In the literature, this is often addressed using an ensemble of classifiers. In this scope, the Error-Correcting Output Codes (ECOC) framework has proven to be a powerful tool for combining classifiers. However, most state-of-the-art ECOC approaches use a linear or exponential number of classifiers, making the discrimination of a large number of classes unfeasible. In this paper, we explore and propose a minimal design of ECOC in terms of the number of classifiers. Evolutionary computation is used for tuning the parameters of the classifiers and searching for the best minimal ECOC code configuration. The results over several public UCI datasets and different multi-class computer vision problems show that the proposed methodology obtains comparable (even better) results than state-of-the-art ECOC methodologies with a far smaller number of dichotomizers.
Keywords: Multi-class classification; Error-correcting output codes; Ensemble of classifiers
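The ECOC mechanics behind the paper are compact: each class is assigned a binary codeword over the dichotomizers, and a test sample is decoded to the class whose codeword is nearest to the vector of dichotomizer outputs. The minimal design means N classes can in principle be separated with only ceil(log2 N) dichotomizers. An illustrative numpy sketch (Hamming decoding; the paper's evolutionary search for the code matrix is not shown):

```python
import numpy as np

def ecoc_decode(dichotomizer_outputs, code_matrix):
    """Decode an ECOC prediction: each row of code_matrix is a class
    codeword over the dichotomizers; the predicted class is the row
    with minimum Hamming distance to the observed binary outputs."""
    dists = np.sum(code_matrix != dichotomizer_outputs, axis=1)
    return int(np.argmin(dists))
```

For example, four classes can be coded with just two dichotomizers using the matrix [[0,0],[0,1],[1,0],[1,1]]; such minimal codes trade away error-correcting redundancy for a drastically smaller ensemble, which is the design point the paper optimises.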
|
|
|
Antoni Gurgui, Debora Gil, & Enric Marti. (2015). Laplacian Unitary Domain for Texture Morphing. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISIGRAPP 2015) (Vol. 1, pp. 693–699). SciTePress.
Abstract: Deformation of expressive textures is the gateway to realistic computer synthesis of expressions. Owing to their good mathematical properties and flexible formulation on irregular meshes, most texture mappings rely on solutions to the Laplacian in Cartesian space. In the context of facial expression morphing, this approximation can be seen from the opposite point of view by neglecting the metric. In this paper, we use the properties of the Laplacian on manifolds to present a novel approach to warping expressive facial images in order to generate a morphing between them.
Keywords: Facial metamorphosis; Laplacian morphing
|
|
|
Ciprian Corneanu, Meysam Madadi, Sergio Escalera, & Aleix Martinez. (2020). Explainable Early Stopping for Action Unit Recognition. In Faces and Gestures in E-health and welfare workshop (pp. 693–699).
Abstract: A common technique to avoid overfitting when training deep neural networks (DNNs) is to monitor the performance on a dedicated validation data partition and to stop training as soon as it saturates. This only focuses on what the model does, while completely ignoring what happens inside it. In this work, we open the “black box” of the DNN in order to perform early stopping. We propose a novel theoretical framework that analyses meso-scale patterns in the topology of the functional graph of a network while it trains. Based on it, we decide when the network transitions from learning towards overfitting in a more explainable way. We exemplify the benefits of this approach on a state-of-the-art custom DNN that jointly learns local representations and label structure, employing an ensemble of dedicated subnetworks. We show that it is practically equivalent in performance to early stopping with patience, the standard early stopping algorithm in the literature. This proves beneficial for AU recognition performance and provides new insights into how the learning of AUs occurs in DNNs.
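The baseline this paper compares against, early stopping with patience, is simple to state precisely: stop once the validation loss has failed to improve for a fixed number of consecutive epochs. A sketch of that baseline (the function name and return convention are illustrative; the paper's topology-based criterion is not reproduced):

```python
def early_stop_with_patience(val_losses, patience=3):
    """Return (stop_epoch, best_epoch) for patience-based early stopping:
    training halts once the validation loss has not improved for
    `patience` consecutive epochs."""
    best, best_epoch, waited = float('inf'), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0   # new best: reset counter
        else:
            waited += 1
            if waited >= patience:
                return epoch, best_epoch                # stop; restore best weights
    return len(val_losses) - 1, best_epoch
```

The paper's contribution is to replace the opaque "validation loss stopped improving" signal with an internal, explainable one, while matching this baseline's performance.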
|
|
|
Arjan Gijsenij, R. Lu, Theo Gevers, & De Xu. (2012). Color Constancy for Multiple Light Sources. TIP - IEEE Transactions on Image Processing, 21(2), 697–707.
Abstract: Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across scenes. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with error reduction up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to that of global color constancy methods.
|
|
|
Cesar de Souza, Adrien Gaidon, Eleonora Vig, & Antonio Lopez. (2016). Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition. In 14th European Conference on Computer Vision (pp. 697–716). LNCS.
Abstract: Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty to acquire and learn on large quantities of video data. Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods using hand-crafted features, even when training on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular benchmarks for action recognition, our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos.
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2007). Semantic Annotation of Complex Human Scenes for Multimedia Surveillance. In AI*IA 2007: Artificial Intelligence and Human-Oriented Computing, 10th Congress of the Italian Association for Artificial Intelligence (Vol. 4733, pp. 698–709). LNCS.
|
|