|
Kai Wang, Joost Van de Weijer, & Luis Herranz. (2021). ACAE-REMIND for online continual learning with compressed feature replay. PRL - Pattern Recognition Letters, 150, 122–129.
Abstract: Online continual learning aims to learn from a non-IID stream of data from a number of different tasks, where the learner is only allowed to consider data once. Methods are typically allowed to use a limited buffer to store some of the images in the stream. Recently, it was found that feature replay, where an intermediate-layer representation of the image is stored (or generated), leads to superior results compared to image replay, while requiring less memory. Quantized exemplars can further reduce the memory usage. However, a drawback of these methods is that they use a fixed (or very intransigent) backbone network. This significantly limits the learning of representations that can discriminate between all tasks. To address this problem, we propose an auxiliary classifier auto-encoder (ACAE) module for feature replay at intermediate layers with high compression rates. The reduced memory footprint per image allows us to save more exemplars for replay. In our experiments, we conduct task-agnostic evaluation under the online continual learning setting and obtain state-of-the-art performance on the ImageNet-Subset, CIFAR100 and CIFAR10 datasets.
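The core idea of storing quantized feature exemplars can be sketched with a toy codebook quantizer: intermediate features are mapped to small integer indices for storage and reconstructed approximately at replay time. This is a minimal illustration of vector-quantized feature storage, not the paper's ACAE module; all function names are ours.

```python
import numpy as np

def build_codebook(features, k, iters=10, seed=0):
    """Toy k-means codebook over feature vectors (illustrative only)."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign every feature to its nearest code, then recenter the codes
        d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                codebook[j] = features[assign == j].mean(0)
    return codebook

def quantize(features, codebook):
    """Replace each feature by the index of its nearest code (cheap to store)."""
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def dequantize(indices, codebook):
    """Reconstruct approximate features for replay from stored indices."""
    return codebook[indices]
```

Storing one integer index per feature instead of the full float vector is what makes the memory footprint per exemplar small, so more exemplars fit in the same buffer.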
Keywords: online continual learning; autoencoders; vector quantization
|
|
|
Patricia Marquez. (2015). A Confidence Framework for the Assessment of Optical Flow Performance (Debora Gil & Aura Hernandez, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Optical Flow (OF) is the input of a wide range of decision support systems such as car driver assistance, UAV guiding or medical diagnosis. In these real situations, the absence of ground truth forces us to assess OF quality using quantities computed from either the sequences or the computed optical flow itself. These quantities are generally known as Confidence Measures (CM). Even if we have a proper confidence measure, we still need a way to evaluate its ability to discard pixels with an OF prone to have a large error. Current approaches only provide a descriptive evaluation of CM performance, but such approaches are not capable of fairly comparing different confidence measures and optical flow algorithms. Thus, it is of prime importance to define a framework and a general road map for the evaluation of optical flow performance.
This thesis provides a framework able to decide which pairs “optical flow – confidence measure” (OF-CM) are best suited for optical flow error bounding given a confidence level determined by a decision support system. To design this framework we cover the following points:
Descriptive scores. As a first step, we summarize and analyze the sources of inaccuracies in the output of optical flow algorithms. Second, we present several descriptive plots that visually assess CM capabilities for OF error bounding. In addition to the descriptive plots, given a plot representing OF-CM capabilities to bound the error, we provide a numeric score that categorizes the plot according to its decreasing profile, that is, a score assessing CM performance.
Statistical framework. We provide a comparison framework that assesses the best-suited OF-CM pair for error bounding using a two-stage cascade process. First, we assess the predictive value of the confidence measures by means of a descriptive plot. Then, for a sample of descriptive plots computed over training frames, we obtain a generic curve that will be used for sequences with no ground truth. As a second step, we evaluate whether the obtained general curve really reflects the predictive value of a confidence measure, using the variability across training frames by means of ANOVA.
The presented framework has shown its potential in its application to clinical decision support systems. In particular, we have analyzed the impact of image artifacts, such as noise and decay, on the output of optical flow in a cardiac diagnosis system, and we have improved navigation inside the bronchial tree in bronchoscopy.
|
|
|
Patricia Marquez, Debora Gil, R. Mester, & Aura Hernandez-Sabate. (2014). Local Analysis of Confidence Measures for Optical Flow Quality Evaluation. In 9th International Conference on Computer Vision Theory and Applications (Vol. 3, pp. 450–457).
Abstract: Optical Flow (OF) techniques able to face the complexity of real sequences have been developed in recent years. Even when using the most appropriate technique for a specific problem, at some points the output flow might fail to achieve the minimum error required for the system. Confidence measures computed from either input data or OF output should discard those points where OF is not accurate enough for its further use. It follows that evaluating the capabilities of a confidence measure for bounding OF error is as important as the definition itself. In this paper we analyze different confidence measures and point out their advantages and limitations for use in real-world settings. We also explore their agreement with current tools for evaluating confidence measure performance.
Keywords: Optical Flow; Confidence Measure; Performance Evaluation.
|
|
|
Naveen Onkarappa, Sujay M. Veerabhadrappa, & Angel Sappa. (2012). Optical Flow in Onboard Applications: A Study on the Relationship Between Accuracy and Scene Texture. In 4th International Conference on Signal and Image Processing (Vol. 221, pp. 257–267).
Abstract: Optical flow plays a major role in making advanced driver assistance systems (ADAS) a reality. ADAS applications are expected to perform efficiently in all kinds of environments, since a vehicle may be driven on different kinds of roads, at different times and in different seasons. In this work, we study the relationship between optical flow and road type by analyzing optical flow accuracy on different road textures. Several texture measures are evaluated for this purpose. Further, the relation of the regularization weight to flow accuracy in the presence of different textures is also analyzed. Additionally, we present a framework to generate synthetic sequences of different textures in ADAS scenarios with ground-truth optical flow.
|
|
|
Patricia Marquez, Debora Gil, & Aura Hernandez-Sabate. (2011). A Confidence Measure for Assessing Optical Flow Accuracy in the Absence of Ground Truth. In IEEE International Conference on Computer Vision – Workshops (pp. 2042–2049). Barcelona (Spain): IEEE.
Abstract: Optical flow is a valuable tool for motion analysis in autonomous navigation systems. A reliable application requires determining the accuracy of the computed optical flow. This is a main challenge given the absence of ground truth in real-world sequences. This paper introduces a measure of optical flow accuracy for Lucas-Kanade based flows in terms of the numerical stability of the data term. We call this measure the optical flow condition number. A statistical analysis over ground-truth data shows a good statistical correlation between the condition number and the optical flow error. Experiments on driving sequences illustrate its potential for autonomous navigation systems.
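The numerical-stability idea can be illustrated concretely: Lucas-Kanade solves a 2x2 linear system built from the structure tensor of the image gradients in a window, and the conditioning of that system bounds how reliable the solution is. The sketch below computes the eigenvalue ratio of the structure tensor as a confidence cue; it illustrates the general principle under our own simplifications, not the paper's exact formulation.

```python
import numpy as np

def lk_condition_number(Ix, Iy):
    """Condition number of the Lucas-Kanade structure tensor over one window.

    Ix, Iy: spatial image gradients inside the window (any shape).
    A large value signals a numerically unstable data term, i.e. low
    confidence in the flow estimate at this point.
    """
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    eigvals = np.linalg.eigvalsh(G)          # eigenvalues in ascending order
    return eigvals[1] / max(eigvals[0], 1e-12)
```

At a textured corner both eigenvalues are large and the ratio stays small; along a straight edge one eigenvalue vanishes (the aperture problem) and the ratio explodes, flagging the point as unreliable.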
|
|
|
Patricia Marquez, Debora Gil, & Aura Hernandez-Sabate. (2012). Error Analysis for Lucas-Kanade Based Schemes. In 9th International Conference on Image Analysis and Recognition (Vol. 7324, pp. 184–191). LNCS. Springer-Verlag Berlin Heidelberg.
Abstract: Optical flow is a valuable tool for motion analysis in medical imaging sequences. A reliable application requires determining the accuracy of the computed optical flow. This is a main challenge given the absence of ground truth in medical sequences. This paper presents an error analysis of Lucas-Kanade schemes in terms of intrinsic design errors and numerical stability of the algorithm. Our analysis provides a confidence measure that is naturally correlated to the accuracy of the flow field. Our experiments show the higher predictive value of our confidence measure compared to existing measures.
Keywords: Optical flow, Confidence measure, Lucas-Kanade, Cardiac Magnetic Resonance
|
|
|
Adria Rico, & Alicia Fornes. (2017). Camera-based Optical Music Recognition using a Convolutional Neural Network. In 12th IAPR International Workshop on Graphics Recognition (pp. 27–28).
Abstract: Optical Music Recognition (OMR) consists of recognizing images of music scores. Contrary to expectation, current OMR systems usually fail when recognizing images of scores captured by digital cameras and smartphones. In this work, we propose a camera-based OMR system based on Convolutional Neural Networks, showing promising preliminary results.
Keywords: optical music recognition; document analysis; convolutional neural network; deep learning
|
|
|
Arnau Baro, Pau Riba, Jorge Calvo-Zaragoza, & Alicia Fornes. (2019). From Optical Music Recognition to Handwritten Music Recognition: a Baseline. PRL - Pattern Recognition Letters, 123, 1–8.
Abstract: Optical Music Recognition (OMR) is the branch of document image analysis that aims to convert images of musical scores into a computer-readable format. Despite decades of research, the recognition of handwritten music scores, specifically in Western notation, is still an open problem, and the few existing works focus only on a specific stage of OMR. In this work, we propose a full Handwritten Music Recognition (HMR) system based on Convolutional Recurrent Neural Networks, data augmentation and transfer learning, that can serve as a baseline for the research community.
|
|
|
Arnau Baro, Pau Riba, Jorge Calvo-Zaragoza, & Alicia Fornes. (2017). Optical Music Recognition by Recurrent Neural Networks. In 14th IAPR International Workshop on Graphics Recognition (pp. 25–26).
Abstract: Optical Music Recognition is the task of transcribing a music score into a machine-readable format. Many music scores are written on a single staff and could therefore be treated as a sequence. This work thus explores the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks for reading the music score sequentially, where the LSTM helps in keeping the context. For training, we have used a synthetic dataset of more than 40,000 images, labeled at the primitive level.
Keywords: Optical Music Recognition; Recurrent Neural Network; Long Short-Term Memory
|
|
|
Arnau Baro, Pau Riba, Jorge Calvo-Zaragoza, & Alicia Fornes. (2018). Optical Music Recognition by Long Short-Term Memory Networks. In B. L. A. Fornes (Ed.), Graphics Recognition. Current Trends and Evolutions (Vol. 11009, pp. 81–95). LNCS. Springer.
Abstract: Optical Music Recognition refers to the task of transcribing the image of a music score into a machine-readable format. Many music scores are written on a single staff and could therefore be treated as a sequence. This work thus explores the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks for reading the music score sequentially, where the LSTM helps in keeping the context. For training, we have used a synthetic dataset of more than 40,000 images, labeled at the primitive level. The experimental results are promising, showing the benefits of our approach.
Keywords: Optical Music Recognition; Recurrent Neural Network; Long Short-Term Memory
|
|
|
Ciprian Corneanu, Sergio Escalera, & Aleix M. Martinez. (2020). Computing the Testing Error Without a Testing Set. In 33rd IEEE Conference on Computer Vision and Pattern Recognition.
Abstract: Oral. Paper award nominee.
Deep Neural Networks (DNNs) have revolutionized computer vision. We now have DNNs that achieve top (performance) results in many problems, including object recognition, facial expression analysis, and semantic segmentation, to name but a few. The design of the DNNs that achieve top results is, however, non-trivial and mostly done by trial-and-error. That is, typically, researchers will derive many DNN architectures (i.e., topologies) and then test them on multiple datasets. However, there are no guarantees that the selected DNN will perform well in the real world. One can use a testing set to estimate the performance gap between the training and testing sets, but avoiding overfitting to the testing data is almost impossible. Using a sequestered testing dataset may address this problem, but this requires a constant update of the dataset, a very expensive venture. Here, we derive an algorithm to estimate the performance gap between training and testing that does not require any testing dataset. Specifically, we derive a number of persistent topology measures that identify when a DNN is learning to generalize to unseen samples. This allows us to compute the DNN’s testing error on unseen samples, even when we do not have access to them. We provide extensive experimental validation on multiple networks and datasets to demonstrate the feasibility of the proposed approach.
|
|
|
Mariella Dimiccoli, Benoît Girard, Alain Berthoz, & Daniel Bennequin. (2013). Striola Magica: a functional explanation of otolith organs. JCN - Journal of Computational Neuroscience, 35(2), 125–154.
Abstract: Otolith end organs of vertebrates sense linear accelerations of the head and gravitation. The hair cells on their epithelia are responsible for transduction. In mammals, the striola, parallel to the line where hair cells reverse their polarization, is a narrow region centered on a curve with curvature and torsion. It has been shown that the striolar region is functionally different from the rest, being involved in a phasic vestibular pathway. We propose a mathematical and computational model that explains the necessity of this amazing geometry for the striola to be able to carry out its function. Our hypothesis, related to the biophysics of the hair cells and to the physiology of their afferent neurons, is that striolar afferents collect information from several type I hair cells to detect the jerk in a large domain of acceleration directions. This predicts a mean number of two calyces for afferent neurons, as measured in rodents. The domain of acceleration directions sensed by our striolar model is compatible with the experimental results obtained on monkeys considering all afferents. Therefore, the main result of our study is that phasic and tonic vestibular afferents cover the same geometrical fields, but at different dynamical and frequency domains.
Keywords: Otolith organs; Striola; Vestibular pathway
|
|
|
Gabriel Villalonga, Sebastian Ramos, German Ros, David Vazquez, & Antonio Lopez. (2014). 3d Pedestrian Detection via Random Forest.
Abstract: Our demo focuses on showing the extraordinary performance of our novel 3D pedestrian detector along with its simplicity and real-time capabilities. This detector has been designed for autonomous driving applications, but it can also be applied in other scenarios that cover both outdoor and indoor applications.
Our pedestrian detector is based on the combination of a random forest classifier with HOG-LBP features and the inclusion of a preprocessing stage based on 3D scene information in order to precisely determine the image regions where the detector should search for pedestrians. This approach results in a highly accurate system that runs in real time, as required by many computer vision and robotics applications.
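The 3D preprocessing stage rests on simple pinhole-camera geometry: once the depth of a scene point is known, only detection windows of a plausible pedestrian scale need to be scanned there, which is what prunes the search space. A minimal sketch of that geometric pruning, with parameter names and the 1.4–2.0 m height prior being our own assumptions rather than the paper's values:

```python
def pedestrian_window_height(focal_px, real_height_m, depth_m):
    """Pinhole projection: expected pedestrian height in pixels at a given depth."""
    return focal_px * real_height_m / depth_m

def plausible_scale_range(focal_px, depth_m, min_h_m=1.4, max_h_m=2.0):
    """Range of window heights (pixels) worth scanning at this depth,
    assuming real pedestrians are between min_h_m and max_h_m tall."""
    return (pedestrian_window_height(focal_px, min_h_m, depth_m),
            pedestrian_window_height(focal_px, max_h_m, depth_m))
```

For example, with an 800 px focal length, a 1.7 m pedestrian at 10 m depth projects to a 136 px window, so windows far outside the returned range can be skipped before the classifier ever runs.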
Keywords: Pedestrian Detection
|
|
|
Alejandro Cartas, Jordi Luque, Petia Radeva, Carlos Segura, & Mariella Dimiccoli. (2019). Seeing and Hearing Egocentric Actions: How Much Can We Learn? In IEEE International Conference on Computer Vision Workshops (pp. 4470–4480).
Abstract: Our interaction with the world is an inherently multimodal experience. However, the understanding of human-to-object interactions has historically been addressed focusing on a single modality. In particular, a limited number of works have considered integrating the visual and audio modalities for this purpose. In this work, we propose a multimodal approach for egocentric action recognition in a kitchen environment that relies on audio and visual information. Our model combines a sparse temporal sampling strategy with a late fusion of audio, spatial, and temporal streams. Experimental results on the EPIC-Kitchens dataset show that multimodal integration leads to better performance than unimodal approaches. In particular, we achieved a 5.18% improvement over the state of the art on verb classification.
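Late fusion of per-stream predictions can be sketched in a few lines: each stream (audio, spatial, temporal) produces class logits independently, and the class probabilities are combined only at the end. The equal weighting below is an assumption for illustration, not the paper's exact fusion recipe.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def late_fusion(stream_logits, weights=None):
    """Fuse per-stream class scores after each stream has predicted.

    stream_logits: list of (batch, classes) logit arrays, one per modality.
    Returns the fused class index per sample.
    """
    probs = [softmax(l) for l in stream_logits]
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)   # assumed equal weights
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(-1)
```

Because fusion happens on probabilities rather than features, a confident stream can outvote an uncertain one, which is the intuition behind the gains over unimodal baselines.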
|
|
|
Zhijie Fang, & Antonio Lopez. (2018). Is the Pedestrian going to Cross? Answering by 2D Pose Estimation. In IEEE Intelligent Vehicles Symposium (pp. 1271–1276).
Abstract: Our recent work suggests that, thanks to nowadays powerful CNNs, image-based 2D pose estimation is a promising cue for determining pedestrian intentions such as crossing the road in the path of the ego-vehicle, stopping before entering the road, and starting to walk or bending towards the road. This statement is based on the results obtained on non-naturalistic sequences (Daimler dataset), i.e. in sequences choreographed specifically for performing the study. Fortunately, a new publicly available dataset (JAAD) has appeared recently to allow developing methods for detecting pedestrian intentions in naturalistic driving conditions; more specifically, for addressing the relevant question: is the pedestrian going to cross? Accordingly, in this paper we use JAAD to assess the usefulness of 2D pose estimation for answering such a question. We combine CNN-based pedestrian detection, tracking and pose estimation to predict the crossing action from monocular images. Overall, the proposed pipeline provides new state-of-the-art results.
|
|