|
Cristhian A. Aguilera-Carrasco, Angel Sappa, & Ricardo Toledo. (2015). LGHD: A Feature Descriptor for Matching Across Non-Linear Intensity Variations. In 22nd IEEE International Conference on Image Processing (pp. 178–181).
|
|
|
Xavier Otazu, Olivier Penacchio, & Xim Cerda-Company. (2015). Brightness and colour induction through contextual influences in V1. In Scottish Vision Group 2015 SVG2015 (Vol. 12, pp. 1208–2012).
|
|
|
Olivier Penacchio, Xavier Otazu, A. Wilkins, & J. Harris. (2015). Uncomfortable images prevent lateral interactions in the cortex from providing a sparse code. In European Conference on Visual Perception ECVP2015.
|
|
|
Xavier Otazu, Olivier Penacchio, & Xim Cerda-Company. (2015). An excitatory-inhibitory firing rate model accounts for brightness induction, colour induction and visual discomfort. In Barcelona Computational, Cognitive and Systems Neuroscience.
|
|
|
Santiago Segui, Oriol Pujol, & Jordi Vitria. (2015). Learning to count with deep object features. In Deep Vision: Deep Learning in Computer Vision, CVPR 2015 Workshop (pp. 90–96).
Abstract: Learning to count is a learning strategy that has been recently proposed in the literature for dealing with problems where estimating the number of object instances in a scene is the final objective. In this framework, the task of learning to detect and localize individual object instances is seen as a harder task that can be evaded by casting the problem as that of computing a regression value from hand-crafted image features. In this paper we explore the features that are learned when training a counting convolutional neural network in order to understand their underlying representation. To this end we define a counting problem for MNIST data and show that the internal representation of the network is able to classify digits in spite of the fact that no direct supervision was provided for them during training. We also present preliminary results about a deep network that is able to count the number of pedestrians in a scene.
|
|
|
Marc Bolaños, R. Mestre, Estefania Talavera, Xavier Giro, & Petia Radeva. (2015). Visual Summary of Egocentric Photostreams by Representative Keyframes. In IEEE International Conference on Multimedia and Expo ICMEW2015 (pp. 1–6).
Abstract: Building a visual summary from an egocentric photostream captured by a lifelogging wearable camera is of high interest for different applications (e.g. memory reinforcement). In this paper, we propose a new summarization method based on keyframe selection that uses visual features extracted by means of a convolutional neural network. Our method applies unsupervised clustering to divide the photostreams into events, and finally extracts the most relevant keyframe for each event. We assess the results by applying a blind taste test on a group of 20 people who assessed the quality of the summaries.
Keywords: egocentric; lifelogging; summarization; keyframes
|
|
|
Nuria Cirera, Alicia Fornes, & Josep Llados. (2015). Hidden Markov model topology optimization for handwriting recognition. In 13th International Conference on Document Analysis and Recognition ICDAR2015 (pp. 626–630).
Abstract: In this paper we present a method to optimize the topology of linear left-to-right hidden Markov models. These models are very popular for modeling sequential signals in tasks such as handwriting recognition. Many topology definition methods select the number of states for a character model based on character length. This can be a drawback when characters are shorter than the minimum allowed by the model, since they cannot be properly trained nor recognized. The proposed method optimizes the number of states per model by automatically including convenient skip-state transitions and therefore avoids the aforementioned problem. We discuss and compare our method with other character length-based methods such as the Fixed, Bakis and Quantile methods. Our proposal performs well on the off-line handwriting recognition task.
|
|
|
Juan Ignacio Toledo, Jordi Cucurull, Jordi Puiggali, Alicia Fornes, & Josep Llados. (2015). Document Analysis Techniques for Automatic Electoral Document Processing: A Survey. In E-Voting and Identity, Proceedings of the 5th International Conference, VoteID 2015 (pp. 139–141). LNCS.
Abstract: In this paper, we will discuss the most common challenges in electoral document processing and study the different solutions from the document analysis community that can be applied in each case. We will cover Optical Mark Recognition techniques to detect voter selections in the Australian Ballot, handwritten number recognition for preferential elections and handwriting recognition for write-in areas. We will also propose some particular adjustments that can be made to those general techniques in the specific context of electoral documents.
Keywords: Document image analysis; Computer vision; Paper ballots; Paper based elections; Optical scan; Tally
|
|
|
Pau Riba, Josep Llados, & Alicia Fornes. (2015). Handwritten Word Spotting by Inexact Matching of Grapheme Graphs. In 13th International Conference on Document Analysis and Recognition ICDAR2015 (pp. 781–785).
Abstract: This paper presents a graph-based word spotting approach for handwritten documents. Contrary to most word spotting techniques, which use statistical representations, we propose a structural representation that is robust to the inherent deformations of handwriting. Attributed graphs are constructed using a part-based approach. Graphemes extracted from shape convexities are used as stable units of handwriting, and are associated to graph nodes. Then, spatial relations between them determine graph edges. Spotting is defined in terms of an error-tolerant graph matching using a bipartite graph matching algorithm. To make the method usable in large datasets, a graph indexing approach that makes use of binary embeddings is used as preprocessing. Historical documents are used as the experimental framework. The approach is comparable to statistical ones in terms of time and memory requirements, especially when dealing with large document collections.
|
|
|
Onur Ferhat, Arcadi Llanza, & Fernando Vilariño. (2015). A Feature-Based Gaze Estimation Algorithm for Natural Light Scenarios. In Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 (Vol. 9117, pp. 569–576). LNCS. Springer International Publishing.
Abstract: We present an eye tracking system that works with regular webcams. We base our work on the open source CVC Eye Tracker [7] and we propose a number of improvements and a novel gaze estimation method. The new method uses features extracted from iris segmentation and does not fall into the traditional categorization of appearance-based/model-based methods. Our experiments show that our approach reduces the gaze estimation errors by 34% in the horizontal direction and by 12% in the vertical direction compared to the baseline system.
Keywords: Eye tracking; Gaze estimation; Natural light; Webcam
|
|
|
Kamal Nasrollahi, Sergio Escalera, P. Rasti, Gholamreza Anbarjafari, Xavier Baro, Hugo Jair Escalante, et al. (2015). Deep Learning based Super-Resolution for Improved Action Recognition. In 5th International Conference on Image Processing Theory, Tools and Applications IPTA2015 (pp. 67–72).
Abstract: Action recognition systems mostly work with videos of proper quality and resolution. Even the most challenging benchmark databases for action recognition hardly include low-resolution videos from, e.g., surveillance cameras. In videos recorded by such cameras, due to the distance between people and cameras, people appear very small and hence challenge action recognition algorithms. Simple upsampling methods, like bicubic interpolation, cannot retrieve all the detailed information that can help the recognition. To deal with this problem, in this paper we combine the results of bicubic interpolation with the results of a state-of-the-art deep learning-based super-resolution algorithm, through an alpha-blending approach. The experimental results obtained on a down-sampled version of a large subset of the Hollywood2 benchmark database show the importance of the proposed system in increasing the recognition rate of a state-of-the-art action recognition system for handling low-resolution videos.
|
|
|
Isabelle Guyon, Kristin Bennett, Gavin Cawley, Hugo Jair Escalante, & Sergio Escalera. (2015). The AutoML challenge on codalab. In IEEE International Joint Conference on Neural Networks IJCNN2015.
|
|
|
Gerard Canal, Cecilio Angulo, & Sergio Escalera. (2015). Gesture based Human Multi-Robot interaction. In IEEE International Joint Conference on Neural Networks IJCNN2015.
Abstract: The emergence of robot applications for non-technical users implies designing new ways of interaction between robotic platforms and users. The main goal of this work is the development of a gestural interface to interact with robots in a similar way as humans do, allowing the user to provide information about the task through non-verbal communication. The gesture recognition application has been implemented using Microsoft's Kinect v2 sensor. A real-time algorithm based on skeletal features is described that deals with both static gestures and dynamic ones, the latter being recognized using a weighted Dynamic Time Warping method. The gesture recognition application has been implemented in a multi-robot case. A NAO humanoid robot is in charge of interacting with the users and responding to the visual signals they produce. Moreover, a wheeled Wifibot robot carries both the sensor and the NAO robot, easing navigation when necessary. A broad set of user tests has been carried out, demonstrating that the system is, indeed, a natural approach to human-robot interaction, with a fast response and ease of use, showing high gesture recognition rates.
|
|
|
Xavier Baro, Jordi Gonzalez, Junior Fabian, Miguel Angel Bautista, Marc Oliu, Hugo Jair Escalante, et al. (2015). ChaLearn Looking at People 2015 challenges: action spotting and cultural event recognition. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 1–9).
Abstract: Following the previous series of Looking at People (LAP) challenges [6, 5, 4], ChaLearn ran two competitions to be presented at CVPR 2015: action/interaction spotting and cultural event recognition in RGB data. We ran a second round on human activity recognition on RGB data sequences. For cultural event recognition, tens of categories have to be recognized, involving both scene understanding and human analysis. This paper summarizes the two challenges and the results obtained. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/.
|
|
|
Andres Traumann, Sergio Escalera, & Gholamreza Anbarjafari. (2015). A New Retexturing Method for Virtual Fitting Room Using Kinect 2 Camera. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 75–79).
|
|