|
Maria Oliver, Gloria Haro, Mariella Dimiccoli, Baptiste Mazin, & Coloma Ballester. (2016). A computational model of amodal completion. In SIAM Conference on Imaging Science.
Abstract: This paper presents a computational model to recover the most likely interpretation of the 3D scene structure from a planar image in which some objects may occlude others. The estimated scene interpretation is obtained by integrating global and local cues, and provides both the complete, disoccluded objects that form the scene and their ordering according to depth. Our method first computes several distal scenes that are compatible with the proximal planar image. To compute these hypothesized scenes, we propose a perceptually inspired object disocclusion method that minimizes Euler's elastica while incorporating the relatability of partially occluded contours and the convexity of the disoccluded objects. Then, to estimate the preferred scene, we rely on a Bayesian model and define probabilities that take into account the global complexity of the objects in the hypothesized scenes as well as the effort of bringing these objects into their relative positions in the planar image, which is also measured by an Euler's elastica-based quantity. The model is illustrated with numerical experiments on both synthetic and real images, showing the ability of our model to reconstruct the occluded objects and the preferred perceptual order among them. We also present results on images of the Berkeley dataset with provided figure-ground ground-truth labeling.
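The elastica energy minimized above penalizes both length and squared curvature of a completed contour. A minimal numpy sketch of the discrete energy on a closed polyline (the constants `a`, `b` and the finite-difference curvature estimate are illustrative, not the paper's implementation):

```python
import numpy as np

def elastica_energy(curve, a=1.0, b=1.0):
    """Discrete Euler's elastica energy of a closed polyline:
    E = sum over segments of (a + b * kappa^2) * ds,
    with curvature kappa estimated from the turning of unit tangents."""
    pts = np.asarray(curve, dtype=float)
    seg = np.roll(pts, -1, axis=0) - pts
    ds = np.linalg.norm(seg, axis=1)             # segment lengths
    tang = seg / ds[:, None]                     # unit tangents
    dtang = np.roll(tang, -1, axis=0) - tang     # tangent turning per segment
    kappa = np.linalg.norm(dtang, axis=1) / ds   # curvature estimate
    return float(np.sum((a + b * kappa**2) * ds))
```

For a circle of radius r the energy tends to 2*pi*r*(a + b/r^2) as the discretization is refined, which is a quick sanity check for the estimate.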
|
|
|
G. de Oliveira, A. Cartas, Marc Bolaños, Mariella Dimiccoli, Xavier Giro, & Petia Radeva. (2016). LEMoRe: A Lifelog Engine for Moments Retrieval at the NTCIR-Lifelog LSAT Task. In 12th NTCIR Conference on Evaluation of Information Access Technologies.
Abstract: Semantic image retrieval from large amounts of egocentric visual data requires leveraging powerful techniques to bridge the semantic gap. This paper introduces LEMoRe, a Lifelog Engine for Moments Retrieval, developed in the context of the Lifelog Semantic Access Task (LSAT) of the NTCIR-12 challenge, and discusses how its performance varies across different trials. LEMoRe integrates classical image descriptors with high-level semantic concepts extracted by Convolutional Neural Networks (CNNs), powered by a graphical user interface that uses natural language processing. Although this is only a first attempt at interactive image retrieval from large egocentric datasets, and there is much room for improvement in the system components and the user interface, the structure of the system itself and the way its components cooperate are very promising.
|
|
|
G. de Oliveira, Mariella Dimiccoli, & Petia Radeva. (2016). Egocentric Image Retrieval With Deep Convolutional Neural Networks. In 19th International Conference of the Catalan Association for Artificial Intelligence (pp. 71–76).
|
|
|
Maedeh Aghaei, Mariella Dimiccoli, & Petia Radeva. (2016). With whom do I interact? Social interaction detection in egocentric photo-streams. In 23rd International Conference on Pattern Recognition.
Abstract: Given a user wearing a low frame rate wearable camera during a day, this work aims to automatically detect the moments when the user becomes engaged in a social interaction, solely by analyzing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the individuals appearing in the scene (with respect to the user) from a bird's-eye perspective. As a result, the interaction pattern over a sequence can be understood as a two-dimensional time series corresponding to the temporal evolution of the distance and orientation features. A Long Short-Term Memory (LSTM) based recurrent neural network is then trained to classify each time series. Experimental evaluation on a dataset of 30,000 images has shown promising results for the proposed social interaction detection method in egocentric photo-streams.
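The classifier above consumes a (distance, orientation) time series per sequence. A minimal numpy sketch of the LSTM forward pass such a setup relies on (weights are random placeholders here; the paper's trained network, layer sizes and classifier head are not specified in this entry):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, W, U, b):
    """Single-layer LSTM over a series xs of shape (T, d).
    W: (4h, d) input weights, U: (4h, h) recurrent weights, b: (4h,) bias,
    gate order: input, forget, cell candidate, output."""
    hdim = U.shape[1]
    h, c = np.zeros(hdim), np.zeros(hdim)
    for x in xs:
        z = W @ x + U @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # cell state update
        h = o * np.tanh(c)           # hidden state
    return h                         # final state, fed to a classifier head

def interaction_score(xs, lstm_params, w_out, b_out):
    """Binary interaction probability from the last hidden state."""
    return sigmoid(w_out @ lstm_forward(xs, *lstm_params) + b_out)
```

Each sequence of (distance, orientation) pairs maps to one scalar in (0, 1); training the weights (omitted) would be done by backpropagation through time.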
|
|
|
Y. Patel, Lluis Gomez, Marçal Rusiñol, & Dimosthenis Karatzas. (2016). Dynamic Lexicon Generation for Natural Scene Images. In 14th European Conference on Computer Vision Workshops (pp. 395–410).
Abstract: Many scene text understanding methods approach the end-to-end recognition problem from a word-spotting perspective and benefit greatly from using small per-image lexicons. Such customized lexicons are normally assumed as given, and their source is rarely discussed. In this paper we propose a method that generates contextualized lexicons for scene images using only visual information. For this, we exploit the correlation between visual and textual information in a dataset consisting of images and textual content associated with them. Using the topic modeling framework to discover a set of latent topics in such a dataset allows us to re-rank a fixed dictionary in a way that prioritizes the words that are more likely to appear in a given image. Moreover, we train a CNN that is able to reproduce those word rankings using only the raw image pixels as input. We demonstrate that the quality of the automatically obtained custom lexicons is superior to a generic frequency-based baseline.
Keywords: scene text; photo OCR; scene understanding; lexicon generation; topic modeling; CNN
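The re-ranking step described in the abstract reduces to mixing per-topic word distributions with an image's topic proportions. A minimal sketch, assuming topics have already been learned (e.g. with LDA) and the CNN stage that predicts topic proportions from pixels is omitted; the toy words are hypothetical:

```python
import numpy as np

def rerank_lexicon(topic_word, image_topics, dictionary):
    """Re-rank a fixed dictionary for one image.
    topic_word: (K, V) rows are p(word | topic);
    image_topics: (K,) is p(topic | image).
    Returns words sorted by p(word | image) = sum_k p(word|k) p(k|image)."""
    word_scores = image_topics @ topic_word   # (V,) marginal word probabilities
    order = np.argsort(-word_scores)          # highest probability first
    return [dictionary[i] for i in order]
```

Truncating the returned ranking at the top-N words yields the small per-image lexicon a word-spotting recognizer would consume.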
|
|
|
Fernando Vilariño, Dan Norton, & Onur Ferhat. (2016). The Eye Doesn't Click – Eyetracking and Digital Content Interaction. In 4S/EASST Conference.
|
|
|
Fernando Vilariño. (2016). Giving Value to digital collections in the Public Library. In Librarian 2020.
|
|
|
Fernando Vilariño, & Dimosthenis Karatzas. (2016). A Living Lab approach for Citizen Science in Libraries. In 1st International ECSA Conference.
|
|
|
Fernando Vilariño. (2016). Dissemination, creation and education from archives: Case study of the collection of Digitized Visual Poems from Joan Brossa Foundation. In International Workshop on Poetry: Archives, Poetries and Receptions.
|
|
|
Angel Sappa, P. Carvajal, Cristhian A. Aguilera-Carrasco, Miguel Oliveira, Dennis Romero, & Boris X. Vintimilla. (2016). Wavelet based visible and infrared image fusion: a comparative study. SENS - Sensors, 16(6), 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near-InfraRed (NIR) and Long-Wave InfraRed (LWIR).
Keywords: Image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
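One common setup among those such comparisons cover is a Haar decomposition with approximation averaging and max-absolute detail selection. A minimal sketch under those assumptions (a normalized Haar pair, one decomposition level; the paper's actual wavelets, levels and fusion rules vary):

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar decomposition into (LL, LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / 2.0    # row low-pass
    d = (img[0::2] - img[1::2]) / 2.0    # row high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(vis, ir):
    """Average the approximations; keep the larger-magnitude detail
    coefficient from either image (preserves edges from both bands)."""
    cv, ci = haar2(vis), haar2(ir)
    LL = (cv[0] + ci[0]) / 2.0
    details = [np.where(np.abs(v) >= np.abs(i), v, i)
               for v, i in zip(cv[1:], ci[1:])]
    return ihaar2(LL, *details)
```

Libraries such as PyWavelets provide deeper multi-level transforms; this hand-rolled version just makes the decompose/merge/reconstruct pipeline explicit.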
|
|
|
Cristhian A. Aguilera-Carrasco, F. Aguilera, Angel Sappa, C. Aguilera, & Ricardo Toledo. (2016). Learning cross-spectral similarity measures with deep convolutional neural networks. In 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene that cannot be obtained with images from one spectral band alone. In this work we tackle the cross-spectral image similarity problem using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result on two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
|
|
|
Victor Ponce, Baiyu Chen, Marc Oliu, Ciprian Corneanu, Albert Clapes, Isabelle Guyon, et al. (2016). ChaLearn LAP 2016: First Round Challenge on First Impressions – Dataset and Results. In 14th European Conference on Computer Vision Workshops.
Abstract: This paper summarizes the ChaLearn Looking at People 2016 First Impressions challenge data and the results obtained by the teams in the first round of the competition. The goal of the competition was to automatically evaluate five "apparent" personality traits (the so-called "Big Five") from videos of subjects speaking in front of a camera, by using human judgment. In this edition of the ChaLearn challenge, a novel dataset consisting of 10,000 short clips from YouTube videos has been made publicly available. The ground truth for personality traits was obtained from workers of Amazon Mechanical Turk (AMT). To alleviate calibration problems between workers, we used pairwise comparisons between videos, and variable levels were reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. The CodaLab open source platform was used for submission of predictions and scoring. Over a period of two months, the competition attracted 84 participants grouped in several teams. Nine teams entered the final phase. Despite the difficulty of the task, the teams made great advances in this round of the challenge.
Keywords: Behavior Analysis; Personality Traits; First Impressions
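The Bradley-Terry-Luce step above turns noisy pairwise "video A seems more extroverted than video B" judgments into per-video scores. A minimal maximum-likelihood sketch by gradient ascent (a simplification; the organizers' actual fitting code, regularization and parameterization are not described in this entry):

```python
import numpy as np

def fit_btl(pairs, n_items, lr=0.1, iters=500):
    """Fit Bradley-Terry-Luce scores s by maximum likelihood.
    pairs: list of (winner, loser) item indices.
    Model: P(i beats j) = sigmoid(s_i - s_j)."""
    s = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for w, l in pairs:
            p = 1.0 / (1.0 + np.exp(-(s[w] - s[l])))  # predicted win prob
            grad[w] += 1.0 - p   # d log-likelihood / d s_winner
            grad[l] -= 1.0 - p   # d log-likelihood / d s_loser
        s += lr * grad / len(pairs)
        s -= s.mean()            # fix the gauge: only score differences matter
    return s
```

The recovered scores are only defined up to a shift, hence the mean-centering; worker calibration drops out because every judgment is a relative comparison.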
|
|
|
Jose A. Garcia, David Masip, Valerio Sbragaglia, & Jacopo Aguzzi. (2016). Automated Identification and Tracking of Nephrops norvegicus (L.) Using Infrared and Monochromatic Blue Light. In 19th International Conference of the Catalan Association for Artificial Intelligence.
Abstract: Automated video and image analysis can be a very efficient tool to analyze animal behavior based on sociality, especially in environments that are hard for researchers to access. Understanding this social behavior can play a key role in the sustainable design of capture policies for many species. This paper proposes the use of computer vision algorithms to identify and track a specific species, the Norway lobster, Nephrops norvegicus, a burrowing decapod with relevant commercial value that is captured by trawling. These animals can only be captured when they are engaged in seabed excursions, which are strongly related to their social behavior. This emergent behavior is modulated by the day-night cycle, but their social interactions remain unknown to the scientific community. The paper introduces an identification scheme made of four distinguishable black and white tags (geometric shapes). The project has recorded 15-day experiments in laboratory pools, under monochromatic blue light (472 nm) and darkness conditions (recorded using infrared light). Using this massive image set, we propose a comparison of state-of-the-art computer vision algorithms to distinguish and track the different animals' movements. We evaluate the robustness to the high noise levels in the infrared video signals and to free out-of-plane rotations due to animal movement. The experiments show promising accuracies under a cross-validation protocol and are adaptable to the automation and analysis of large-scale data. As a second contribution, we created an extensive dataset of shapes (46,027 different shapes) from four daily experimental video recordings, which will be made available to the community.
Keywords: computer vision; video analysis; object recognition; tracking; behaviour; social; decapod; Nephrops norvegicus
|
|
|
Jose A. Garcia, David Masip, Valerio Sbragaglia, & Jacopo Aguzzi. (2016). Using ORB, BoW and SVM to identificate and track tagged Norway lobster Nephrops Norvegicus (L.). In 3rd International Conference on Maritime Technology and Engineering.
Abstract: Sustainable capture policies for many species strongly depend on understanding their social behaviour. Nevertheless, the analysis of emergent behaviour in marine species poses several challenges. Usually animals are captured and observed in tanks, and their behaviour is inferred from their dynamics and interactions; researchers must therefore deal with thousands of hours of video data. Without loss of generality, this paper proposes a computer vision approach to identify and track a specific species, the Norway lobster, Nephrops norvegicus. We propose an identification scheme where animals are marked using black and white tags with a geometric shape in the center (holed triangle, filled triangle, holed circle and filled circle). Using a massive labelled dataset, we extract local features based on the ORB descriptor. These features are a posteriori clustered, and we construct a Bag of Visual Words feature vector per animal. This approximation yields invariance to rotation and translation. An SVM classifier achieves generalization results above 99%. As a second contribution, we will make the code and training data publicly available.
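The Bag of Visual Words encoding in that pipeline maps a variable number of local descriptors per tag image to a fixed-length vector an SVM can consume. A minimal sketch of just the encoding step, with toy 2-D descriptors standing in for ORB outputs and a given codebook standing in for the clustering stage:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Hard-assign local descriptors (n, d) to their nearest visual word
    in a codebook (k, d), and return an L1-normalized k-bin histogram.
    Rotation/translation invariance comes from the local descriptors
    themselves; the histogram additionally discards their spatial layout."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

In the full pipeline the codebook would come from clustering (e.g. k-means over ORB descriptors) and the histograms would feed the SVM; ORB itself is binary, so real code would use Hamming rather than Euclidean distance for the assignment.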
|
|
|
Vassileios Balntas, Edgar Riba, Daniel Ponsa, & Krystian Mikolajczyk. (2016). Learning local feature descriptors with triplets and shallow convolutional neural networks. In 27th British Machine Vision Conference.
Abstract: It has recently been demonstrated that local feature descriptors based on convolutional neural networks (CNN) can significantly improve the matching performance. Previous work on learning such descriptors has focused on exploiting pairs of positive and negative patches to learn discriminative CNN representations. In this work, we propose to utilize triplets of training samples, together with in-triplet mining of hard negatives.
We show that our method achieves state-of-the-art results, without the computational overhead typically associated with mining of negatives and with lower complexity of the network architecture. We compare our approach to recently introduced convolutional local feature descriptors, and demonstrate the advantages of the proposed methods in terms of performance and speed. We also examine different loss functions associated with triplets.
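In-triplet hard-negative mining means each triplet supplies two candidate negative distances, d(a, n) and d(p, n), and the loss uses the harder (smaller) one, so anchor and positive can swap roles against the negative. A minimal sketch of a margin variant on raw vectors (the paper applies this to CNN embeddings of patches, and also studies other loss forms):

```python
import numpy as np

def triplet_loss_swap(a, p, n, margin=1.0):
    """Margin triplet loss with in-triplet hard-negative mining:
    penalize the positive distance against the smaller of the two
    available negative distances."""
    d_ap = np.linalg.norm(a - p)
    d_an = np.linalg.norm(a - n)
    d_pn = np.linalg.norm(p - n)
    d_neg = min(d_an, d_pn)  # hardest negative within the triplet
    return max(0.0, margin + d_ap - d_neg)
```

Because the harder negative is found inside the triplet already sampled, no extra forward passes over a candidate pool are needed, which is the "without the computational overhead" point above.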
|
|