Patricia Marquez and 6 others. 2014. Factors Affecting Optical Flow Performance in Tagging Magnetic Resonance Imaging. 17th International Conference on Medical Image Computing and Computer Assisted Intervention. Springer International Publishing, 231–238. (LNCS.)
Abstract: Changes in cardiac deformation patterns are correlated with cardiac pathologies. Deformation can be extracted from tagging Magnetic Resonance Imaging (tMRI) using Optical Flow (OF) techniques. For applications of OF in a clinical setting, it is important to assess to what extent the performance of a particular OF method is stable across different clinical acquisition artifacts. This paper presents a statistical validation framework, based on ANOVA, to assess which motion and appearance factors have the largest influence on the drop in OF accuracy.
In order to validate this framework, we created a database of simulated tMRI data including the most common MRI artifacts and tested three different OF methods, including HARP.
Keywords: Optical flow; Performance Evaluation; Synthetic Database; ANOVA; Tagging Magnetic Resonance Imaging
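The framework above amounts to fitting a linear model of OF error on the acquisition factors and ranking the factors by their F statistics. As a minimal sketch of that idea in Python with statsmodels: the CSV schema, the factor names (noise, fading, motion) and the end-point-error column are hypothetical stand-ins for the paper's actual factors and error measure.

    # Rank acquisition factors by their effect on optical-flow accuracy
    # with a multi-way ANOVA. All column/factor names are illustrative.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # One row per simulated tMRI sequence: its acquisition settings and
    # the end-point error (epe) of an OF method on that sequence.
    df = pd.read_csv("of_errors.csv")  # columns: noise, fading, motion, epe

    model = ols("epe ~ C(noise) + C(fading) + C(motion)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)

    # Factors with large F statistics (small p-values) are the ones that
    # drive the accuracy drop.
    print(table.sort_values("F", ascending=False))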
Marcelo D. Pistarelli, Angel Sappa and Ricardo Toledo. 2013. Multispectral Stereo Image Correspondence. 15th International Conference on Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 217–224. (LNCS.)
Abstract: This paper presents a novel multispectral stereo image correspondence approach. It is evaluated using a stereo rig built from a visible spectrum camera and a long wave infrared spectrum camera. The novelty of the proposed approach lies in the use of Hough space as the correspondence search domain. In this way it avoids searching for correspondences in the original multispectral image domains, where information is poorly correlated, and uses a common domain instead. The proposed approach is intended for outdoor urban scenarios, where images contain a large number of edges. These edges are used as distinctive characteristics for matching in the Hough space. Experimental results are provided showing the validity of the proposed approach.
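A minimal sketch of the core idea, matching lines in Hough space rather than in the weakly correlated image domains, could look as follows in Python with OpenCV. The Canny/Hough thresholds, the file names, and the nearest-neighbour matching rule are assumptions for illustration, not the paper's exact procedure.

    # Correspond edges between a visible and a LWIR image in Hough space.
    import cv2
    import numpy as np

    def hough_lines(img_gray):
        edges = cv2.Canny(img_gray, 50, 150)
        lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
        return lines.reshape(-1, 2) if lines is not None else np.empty((0, 2))

    vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
    lwir = cv2.imread("lwir.png", cv2.IMREAD_GRAYSCALE)
    lv, li = hough_lines(vis), hough_lines(lwir)

    # Match each visible-spectrum line to the closest infrared line in
    # (rho, theta) space; theta is rescaled so both axes are comparable.
    scale = np.array([1.0, 180.0 / np.pi])
    for rho, theta in lv:
        d = np.linalg.norm((li - (rho, theta)) * scale, axis=1)
        if d.size and d.min() < 10.0:
            print((rho, theta), "->", tuple(li[d.argmin()]))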
Gioacchino Vino and Angel Sappa. 2013. Revisiting Harris Corner Detector Algorithm: a Gradual Thresholding Approach. 10th International Conference on Image Analysis and Recognition. Springer Berlin Heidelberg, 354–363. (LNCS.)
Abstract: This paper presents an adaptive thresholding approach intended to increase the number of detected corners while reducing the number of those corresponding to noisy data. The proposed approach builds on the classical Harris corner detector and overcomes the difficulty of finding a general threshold that works well for all the images in a given data set by proposing a novel adaptive thresholding scheme. Initially, two thresholds are used to discern between strong corners and flat regions. Then, a region-based criterion is used to discriminate between weak corners and noisy points in the midway interval. Experimental results show that the proposed approach has a better capability to reject false corners and, at the same time, to detect weak ones. Comparisons with the state of the art are provided showing the validity of the proposed approach.
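The two-threshold scheme can be sketched in a few lines of Python/OpenCV. The threshold fractions, and the local-maximum test standing in for the paper's region-based criterion, are assumptions for illustration.

    # Gradual thresholding on Harris responses: accept strong corners,
    # reject flat regions, and keep midway candidates only if their
    # neighbourhood supports them.
    import cv2
    import numpy as np

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    R = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)

    t_high = 0.10 * R.max()  # above: strong corners
    t_low = 0.01 * R.max()   # below: flat regions / noise

    strong = R > t_high
    weak = (R > t_low) & ~strong

    # Keep a weak candidate only if it is the maximum of its 5x5 window
    # (a stand-in for the paper's region-based criterion).
    local_max = R == cv2.dilate(R, np.ones((5, 5), np.float32))
    corners = strong | (weak & local_max)
    print("corners kept:", int(corners.sum()))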
Dennis G. Romero, Anselmo Frizera, Angel Sappa, Boris X. Vintimilla and Teodiano F. Bastos. 2015. A predictive model for human activity recognition by observing actions and context. 16th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS 2015). Springer International Publishing, 323–333. (LNCS.)
Abstract: This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on Recurrent Neural Networks (RNN) and Bayesian inference through continuous monitoring of human actions and their surrounding environment. In the current work, human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions plus contextual information or other kinds of information relevant to describing the activity. Experimental results with real data are provided showing the validity of the proposed approach.
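The fusion of observed actions with context can be pictured as a Bayes update over activities. The following toy sketch assumes an upstream model (e.g. the RNN) has already produced action observations; the activity names, action vocabulary, and probability tables are invented for illustration.

    # Fuse action observations with a context prior via Bayes' rule.
    import numpy as np

    activities = ["cooking", "cleaning"]
    # P(observed action | activity), factorised over observed actions.
    p_action = {
        "cooking":  {"grab_pan": 0.6, "open_fridge": 0.3},
        "cleaning": {"grab_pan": 0.1, "open_fridge": 0.2},
    }
    # Context prior, e.g. from knowing the person is in the kitchen.
    prior = {"cooking": 0.7, "cleaning": 0.3}

    observed = ["grab_pan", "open_fridge"]
    post = np.array([
        prior[a] * np.prod([p_action[a].get(o, 1e-3) for o in observed])
        for a in activities
    ])
    post /= post.sum()
    print(dict(zip(activities, post.round(3))))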
Cesar de Souza, Adrien Gaidon, Eleonora Vig and Antonio Lopez. 2016. Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition. 14th European Conference on Computer Vision. 697–716. (LNCS.)
Abstract: Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty of acquiring and learning from large quantities of video data. Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods using hand-crafted features, even when trained on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular action recognition benchmarks, our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10,000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos.
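The overall shape of such a hybrid pipeline, an unsupervised encoding of hand-crafted local descriptors pooled per video and fed to a supervised classifier, can be sketched as below. A GMM soft-assignment histogram stands in for the paper's Fisher-vector encoding and a small MLP for its deep network; the data are random stand-ins for dense-trajectory descriptors.

    # Hybrid pipeline sketch: unsupervised encoding + supervised classifier.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Toy descriptors: n_videos x n_descriptors_per_video x descriptor_dim
    descriptors = rng.normal(size=(60, 200, 32))
    labels = rng.integers(0, 5, size=60)

    # Unsupervised stage: fit a vocabulary on all local descriptors.
    gmm = GaussianMixture(n_components=16, random_state=0)
    gmm.fit(descriptors.reshape(-1, 32))

    # Pool each video's descriptors into one fixed-length vector.
    videos = np.stack([gmm.predict_proba(d).mean(axis=0) for d in descriptors])

    # Supervised stage: a small network on top of the pooled encodings.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(videos, labels)
    print("train accuracy:", clf.score(videos, labels))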
Aura Hernandez-Sabate, Lluis Albarracin, Daniel Calvo and Nuria Gorgorio. 2016. EyeMath: Identifying Mathematics Problem Solving Processes in a RTS Video Game. 5th International Conference on Games and Learning Alliance. 50–59. (LNCS.)
Saad Minhas and 6 others. 2016. LEE: A photorealistic Virtual Environment for Assessing Driver-Vehicle Interactions in Self-Driving Mode. 14th European Conference on Computer Vision Workshops. 894–900. (LNCS.)
Abstract: Photorealistic virtual environments are crucial for developing and testing automated driving systems in a safe way during trials. As commercially available simulators are expensive and bulky, this paper presents a low-cost, extendable, and easy-to-use (LEE) virtual environment with the aim to highlight its utility for level 3 driving automation. In particular, an experiment is performed using the presented simulator to explore the influence of different variables regarding control transfer of the car after the system was driving autonomously in a highway scenario. The results show that the speed of the car at the time when the system needs to transfer the control to the human driver is critical.
Keywords: Simulation environment; Automated Driving; Driver-Vehicle interaction
Juan A. Carvajal Ayala, Dennis Romero and Angel Sappa. 2016. Fine-tuning based deep convolutional networks for lepidopterous genus recognition. 21st Ibero American Congress on Pattern Recognition. 467–475. (LNCS.)
Abstract: This paper describes an image classification approach aimed at identifying specimens of lepidopterous insects at Ecuadorian ecological reserves. This work seeks to contribute to biological studies on butterfly genera and to facilitate the registration of unrecognized specimens. The proposed approach is based on fine-tuning three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented, reaching a recognition accuracy above 92%.
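Fine-tuning a pre-trained CNN for a new set of classes typically amounts to swapping the classifier head and training at a small learning rate. A minimal PyTorch sketch follows; the genus count and the choice of ResNet-50 are illustrative assumptions (the paper fine-tunes three pre-trained CNNs, not necessarily this one).

    # Fine-tuning sketch: replace the head, train gently.
    import torch
    import torch.nn as nn
    from torchvision import models

    num_genera = 15  # hypothetical number of lepidopterous genera

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_genera)

    # Small learning rate so the pre-trained features shift gently.
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, targets):
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
        return loss.item()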
Muhammad Anwer Rao, Fahad Shahbaz Khan, Joost Van de Weijer and Jorma Laaksonen. 2017. Top-Down Deep Appearance Attention for Action Recognition. 20th Scandinavian Conference on Image Analysis. 297–309. (LNCS.)
Abstract: Recognizing human actions in videos is a challenging problem in computer vision. Recently, convolutional neural network based deep features have shown promising results for action recognition. In this paper, we investigate the problem of fusing deep appearance and motion cues for action recognition. We propose a video representation which combines deep appearance and motion based local convolutional features within the bag-of-deep-features framework. First, dense deep appearance and motion based local convolutional features are extracted from spatial (RGB) and temporal (flow) networks, respectively. Both visual cues are processed in parallel by constructing separate visual vocabularies for appearance and motion. A category-specific appearance map is then learned to modulate the weights of the deep motion features. The proposed representation is discriminative and binds the deep local convolutional features to their spatial locations. Experiments are performed on two challenging datasets: the JHMDB dataset with 21 action classes and the ACT dataset with 43 categories. The results clearly demonstrate that our approach outperforms both the standard early and late feature fusion approaches. Further, although our approach employs only action labels and does not exploit body part information, it achieves competitive performance compared to state-of-the-art deep feature based approaches.
Keywords: Action recognition; CNNs; Feature fusion
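The modulation step, an appearance-derived attention map reweighting local motion features before aggregation, can be sketched as follows. The shapes, the softmax-style normalisation, and the random vocabulary are illustrative assumptions, not the paper's exact formulation.

    # Top-down modulation sketch: appearance attention weights motion features.
    import numpy as np

    H, W, D, K = 14, 14, 256, 64       # map size, feature dim, vocabulary size
    motion = np.random.rand(H, W, D)   # local motion features (flow network)
    appearance = np.random.rand(H, W)  # class-specific appearance evidence

    # Normalise appearance evidence into an attention map...
    att = np.exp(appearance)
    att /= att.sum()

    # ...and weight each location's motion feature by it.
    weighted = motion * att[..., None]

    # Aggregate against a (random stand-in) motion vocabulary.
    vocab = np.random.rand(K, D)
    similarities = weighted.reshape(-1, D) @ vocab.T  # (H*W, K)
    histogram = similarities.max(axis=0)              # simple pooling
    print(histogram.shape)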
Felipe Codevilla, Antonio Lopez, Vladlen Koltun and Alexey Dosovitskiy. 2018. On Offline Evaluation of Vision-based Driving Models. 15th European Conference on Computer Vision. 246–262. (LNCS.)
Abstract: Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics.
Keywords: Autonomous driving; deep learning
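The paper's correlation analysis can be pictured as follows: train several models, record an offline metric and an online driving score for each, and measure their rank correlation. The numbers below are invented for illustration, not results from the paper.

    # Sketch: how well does offline error predict online driving quality?
    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical results for six driving models.
    offline_mae = np.array([0.021, 0.025, 0.019, 0.034, 0.028, 0.022])
    online_success = np.array([0.82, 0.71, 0.64, 0.55, 0.69, 0.80])

    rho, p = spearmanr(offline_mae, online_success)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    # A weak |rho| would illustrate the finding that low prediction error
    # does not guarantee good driving.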