|
Gioacchino Vino, & Angel Sappa. (2013). Revisiting Harris Corner Detector Algorithm: a Gradual Thresholding Approach. In 10th International Conference on Image Analysis and Recognition (Vol. 7950, pp. 354–363). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents an adaptive thresholding approach intended to increase the number of detected corners while reducing the number of those corresponding to noisy data. The proposed approach builds on the classical Harris corner detector and overcomes the difficulty of finding a general threshold that works well for all the images in a given data set by proposing a novel adaptive thresholding scheme. Initially, two thresholds are used to discern between strong corners and flat regions. Then, a region-based criterion is used to discriminate between weak corners and noisy points in the midway interval. Experimental results show that the proposed approach has a better capability to reject false corners and, at the same time, to detect weak ones. Comparisons with the state of the art are provided, showing the validity of the proposed approach.
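The two-threshold classification described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the Harris parameter k, the box-filter window, and the percentile-based thresholds in the usage example are assumptions, and the region-based criterion for resolving the midway interval is only noted, not implemented.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Classical Harris corner response R = det(M) - k * trace(M)^2."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a, r=1):
        # simple box smoothing as a stand-in for a Gaussian window
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out / (2 * r + 1) ** 2

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

def classify_corners(R, t_low, t_high):
    """Dual thresholding: R >= t_high -> strong corner, R < t_low -> flat
    region; the midway interval [t_low, t_high) yields weak-corner
    candidates, to be resolved by the paper's region-based criterion."""
    strong = R >= t_high
    candidate = (R >= t_low) & (R < t_high)
    return strong, candidate
```

Usage on a synthetic white square, whose four corners produce strong responses:

```python
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
strong, candidate = classify_corners(R, np.percentile(R, 90), np.percentile(R, 99))
```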
|
|
|
Daniel Marczak, Sebastian Cygert, Tomasz Trzcinski, & Bartlomiej Twardowski. (2023). Revisiting Supervision for Continual Representation Learning.
Abstract: In the field of continual learning, models are designed to learn tasks one after the other. While most research has centered on supervised continual learning, recent studies have highlighted the strengths of self-supervised continual representation learning. The improved transferability of representations built with self-supervised methods is often attributed to the role played by the multi-layer perceptron projector. In this work, we depart from this observation and reexamine the role of supervision in continual representation learning. We argue that additional information, such as human annotations, should not deteriorate the quality of representations. Our findings show that supervised models, when enhanced with a multi-layer perceptron head, can outperform self-supervised models in continual representation learning.
|
|
|
Mark Philip Philipsen, Anders Jorgensen, Thomas B. Moeslund, & Sergio Escalera. (2016). RGB-D Segmentation of Poultry Entrails. In 9th Conference on Articulated Motion and Deformable Objects.
Note: Best commercial paper award.
|
|
|
Pichao Wang, Wanqing Li, Philip Ogunbona, Jun Wan, & Sergio Escalera. (2018). RGB-D-based Human Motion Recognition with Deep Learning: A Survey. CVIU - Computer Vision and Image Understanding, 171, 118–139.
Abstract: Human motion recognition is one of the most important branches of human-centered research activities. In recent years, motion recognition based on RGB-D data has attracted much attention. Along with the development in artificial intelligence, deep learning techniques have gained remarkable success in computer vision. In particular, convolutional neural networks (CNN) have achieved great success for image-based tasks, and recurrent neural networks (RNN) are renowned for sequence-based problems. Specifically, deep learning methods based on the CNN and RNN architectures have been adopted for motion recognition using RGB-D data. In this paper, a detailed overview of recent advances in RGB-D-based motion recognition is presented. The reviewed methods are broadly categorized into four groups, depending on the modality adopted for recognition: RGB-based, depth-based, skeleton-based and RGB+D-based. As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. Particularly, we highlight the methods for encoding the spatial-temporal-structural information inherent in video sequences, and discuss potential directions for future research.
Keywords: Human motion recognition; RGB-D data; Deep learning; Survey
|
|
|
Cristhian Aguilera, Xavier Soria, Angel Sappa, & Ricardo Toledo. (2017). RGBN Multispectral Images: a Novel Color Restoration Approach. In 15th International Conference on Practical Applications of Agents and Multi-Agent System.
Abstract: This paper describes a color restoration technique used to remove NIR information from single-sensor cameras where color and near-infrared images are simultaneously acquired, referred to in the literature as RGBN images. The proposed approach is based on a neural network architecture that learns the NIR information contained in the RGBN images. The proposed approach is evaluated on real images obtained by using a pair of RGBN cameras. Additionally, qualitative comparisons with a naïve color correction technique based on mean square error minimization are provided.
Keywords: Multispectral Imaging; Free Sensor Model; Neural Network
|
|
|
Antonio Lopez. (1997). Ridge/Valley-like structures: Creases, separatrices and drainage patterns.
|
|
|
Antonio Lopez, & Joan Serrat. (1997). Ridge/Valley-like structures: Creases, separatrices and drainage patterns.
|
|
|
Antonio Lopez, Joan Serrat, J. Saludes, Cristina Cañero, Felipe Lumbreras, & T. Graf. (2005). Ridgeness for Detecting Lane Markings.
|
|
|
Antonio Lopez, & Joan Serrat. (1998). Ridges and Valleys in Image Analysis.
|
|
|
A. Pujol, Antonio Lopez, Jose Luis Alba, & Juan J. Villanueva. (2001). Ridges, Valleys and Hausdorff Based Similarity Measures for Face Detection and Matching.
|
|
|
Fadi Dornaika, & Angel Sappa. (2007). Rigid and Non-rigid Face Motion Tracking by Aligning Texture Maps and Stereo 3D Models. PRL - Pattern Recognition Letters, 28(15), 2116–2126.
|
|
|
Fadi Dornaika, & Angel Sappa. (2006). Rigid and Non-Rigid Face Motion Tracking by Aligning Texture Maps and Stereo-Based 3D Models. In 8th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS'06), LNCS 4179: 675–684.
|
|
|
David Lloret, Antonio Lopez, & Joan Serrat. (1997). Rigid Registration of CT and MR volumes based on Rothes creases.
|
|
|
Laura Lopez-Fuentes, Claudio Rossi, & Harald Skinnemoen. (2017). River segmentation for flood monitoring. In Data Science for Emergency Management at Big Data 2017.
Abstract: Floods are major natural disasters which cause deaths and material damages every year. Monitoring these events is crucial in order to reduce both the number of affected people and the economic losses. In this work we train and test three different Deep Learning segmentation algorithms to estimate the water area from river images, and compare their performances. We discuss the implementation of a novel data chain aimed to monitor river water levels by automatically processing data collected from surveillance cameras, and to give alerts in case of large increases in the water level or flooding. We also create and openly publish the first image dataset for river water segmentation.
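Comparing segmentation models on binary water masks typically comes down to a per-image overlap metric. The sketch below computes intersection-over-union, a standard choice for this kind of evaluation; whether the paper uses IoU or another metric is an assumption here.

```python
import numpy as np

def water_iou(pred, gt):
    """Intersection-over-union between a predicted binary water mask
    and the ground-truth mask. Returns 1.0 when both masks are empty."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union else 1.0
```

Averaging this score over a test set gives a single number per model, which is enough to rank the three segmentation algorithms against each other.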
|
|
|
Angel Sappa, Rosa Herrero, Fadi Dornaika, David Geronimo, & Antonio Lopez. (2007). Road Approximation in Euclidean and v-Disparity Space: A Comparative Study. In Computer Aided Systems Theory (Vol. 4739, pp. 1105–1112). LNCS.
Abstract: This paper presents a comparative study between two road approximation techniques (planar surfaces) from stereo vision data. The first approach is carried out in the v-disparity space and is based on a voting scheme, the Hough transform. The second consists of computing the best-fitting plane for the whole set of 3D road data points, directly in Euclidean space, using least-squares fitting. The comparative study is initially performed over a set of different synthetic surfaces (e.g., plane, quadratic surface, cubic surface) digitized by a virtual stereo head; then real data obtained with a commercial stereo head are used. The comparative study is intended to serve as a criterion for finding the best technique according to the road geometry. Additionally, it highlights common problems derived from a wrong assumption about the scene's prior knowledge.
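The second technique in the abstract, fitting a plane to 3D road points by least squares, can be sketched in a few lines. This is a minimal illustration assuming camera coordinates where y is height and z is depth, with the road modeled as y = a*x + b*z + c; the exact parameterization in the paper may differ.

```python
import numpy as np

def fit_road_plane(points):
    """Least-squares fit of the plane y = a*x + b*z + c to an (N, 3)
    array of 3D road points in (x, y, z) camera coordinates.
    Returns the coefficients (a, b, c)."""
    pts = np.asarray(points, dtype=float)
    # design matrix [x, z, 1] against the height coordinate y
    A = np.c_[pts[:, 0], pts[:, 2], np.ones(len(pts))]
    y = pts[:, 1]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs
```

A Hough-based v-disparity fit would instead accumulate votes for line parameters in the (disparity, image row) plane; the least-squares version above is the one that operates directly in Euclidean space.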
|
|