|
E. Barakova, Maya Dimitrova, T. Lorents, & Petia Radeva. (2004). The Web as an “Autobiographical Agent”.
|
|
|
German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, & Antonio Lopez. (2016). The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. In 29th IEEE Conference on Computer Vision and Pattern Recognition (pp. 3234–3243).
Abstract: Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. The advent of deep convolutional neural networks (DCNNs) makes it realistic to obtain reliable classifiers for this visual task. However, DCNNs must learn many parameters from raw images, so a sufficiently large and diverse set of images with per-class annotations is needed. These annotations are obtained through cumbersome human labour, especially challenging for semantic segmentation, where pixel-level annotations are required. In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. We then address the question of how useful such data can be for the task of semantic segmentation, in particular under a DCNN paradigm. To answer this question we have generated a diverse synthetic collection of urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations, and we conduct experiments in a DCNN setting which show that including SYNTHIA in the training stage significantly improves the performance of the semantic segmentation task.
Keywords: Domain Adaptation; Autonomous Driving; Virtual Data; Semantic Segmentation
|
|
|
Alicia Fornes, Asma Bensalah, Cristina Carmona-Duarte, Jialuo Chen, Miguel A. Ferrer, Andreas Fischer, et al. (2022). The RPM3D Project: 3D Kinematics for Remote Patient Monitoring. In Intertwining Graphonomics with Human Movements. 20th International Conference of the International Graphonomics Society, IGS 2022 (Vol. 13424, pp. 217–226). LNCS.
Abstract: This project explores the feasibility of remote patient monitoring based on the analysis of 3D movements captured with smartwatches. We base our analysis on the Kinematic Theory of Rapid Human Movement. We have validated our research in a real-case scenario for stroke rehabilitation at the Guttmann Institute (https://www.guttmann.com/en/), a neurorehabilitation hospital, with promising results. Our work could have a great impact on remote healthcare applications, improving medical efficiency and reducing healthcare costs. Future steps include further clinical validation, developing multi-modal analysis architectures (analysing data from sensors, images, audio, etc.), and exploring the application of our technology to monitoring other neurodegenerative diseases.
Keywords: Healthcare applications; Kinematic; Theory of Rapid Human Movements; Human activity recognition; Stroke rehabilitation; 3D kinematics
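The Kinematic Theory cited in this entry models each rapid stroke's speed profile as a lognormal pulse (the Sigma-Lognormal model). A minimal NumPy sketch of one such pulse follows; the parameter values (D, t0, mu, sigma) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def lognormal_velocity(t, D=1.0, t0=0.0, mu=-1.5, sigma=0.3):
    """One Sigma-Lognormal stroke: speed is D times a lognormal
    density in (t - t0); D is the stroke amplitude (distance)."""
    v = np.zeros_like(t, dtype=float)
    m = t > t0
    dt = t[m] - t0
    v[m] = D / (sigma * np.sqrt(2.0 * np.pi) * dt) * np.exp(
        -(np.log(dt) - mu) ** 2 / (2.0 * sigma ** 2))
    return v

t = np.linspace(0.0, 2.0, 2000)
v = lognormal_velocity(t)
# Riemann-sum area under the speed profile approximates D
area = np.sum(v) * (t[1] - t[0])
```

In this framework a full handwriting or rehabilitation movement is analysed as a sum of such pulses, one per stroke.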
|
|
|
Marçal Rusiñol, & Josep Llados. (2012). The Role of the Users in Handwritten Word Spotting Applications: Query Fusion and Relevance Feedback. In 13th International Conference on Frontiers in Handwriting Recognition (pp. 55–60).
Abstract: In this paper we present the importance of including the user in the loop in a handwritten word spotting framework. Several off-the-shelf query fusion and relevance feedback strategies have been tested in the handwritten word spotting context. The increase in precision when the user is included in the loop is assessed on two datasets of historical handwritten documents, using a baseline word spotting approach based on a bag-of-visual-words model.
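As one classic example of the off-the-shelf relevance-feedback strategies this kind of framework can use (not necessarily one the paper tests), Rocchio feedback shifts the query's bag-of-visual-words vector toward results the user marks relevant; the alpha/beta/gamma weights below are conventional defaults, assumed here:

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: shift the query vector toward the
    centroid of user-marked relevant results and away from the
    centroid of nonrelevant ones."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

# a query that ignored dimension 1 learns to weight it after feedback
refined = rocchio([1.0, 0.0], [[0.0, 1.0], [0.0, 1.0]], [])
```

The refined query is then re-submitted to the ranking engine, and the loop can repeat.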
|
|
|
David Masip, Alexander Todorov, & Jordi Vitria. (2012). The Role of Facial Regions in Evaluating Social Dimensions. In Andrea Fusiello, Vittorio Murino, & Rita Cucchiara (Eds.), 12th European Conference on Computer Vision – Workshops and Demonstrations (Vol. 7584, pp. 210–219). LNCS. Springer Berlin Heidelberg.
Abstract: Facial trait judgments are an important information cue for people. Recent work in Psychology has laid the basis of face evaluation, defining a set of traits that we evaluate from faces (e.g. dominance, trustworthiness, aggressiveness, attractiveness, threat or intelligence, among others). We rapidly infer information from others' faces: usually within a short period of time (< 1000 ms) we perceive a certain degree of dominance or trustworthiness of another person from the face. Although these perceptions are not necessarily accurate, they influence many important social outcomes (such as election results or court decisions). This topic has also attracted the attention of Computer Vision scientists, and recently a computational model to automatically predict trait evaluations from faces has been proposed. Such systems try to mimic human perception by applying machine learning classifiers to a set of labeled data. In this paper we perform an experimental study of the specific facial features that trigger these social inferences. Building on previous results from the literature, we propose to use simple similarity maps to evaluate which regions of the face most influence the trait inferences. The correlation analysis is performed using appearance only, and the results suggest that each trait is correlated with specific facial characteristics.
|
|
|
Dimosthenis Karatzas, Lluis Gomez, & Marçal Rusiñol. (2017). The Robust Reading Competition Annotation and Evaluation Platform. In 1st International Workshop on Open Services and Tools for Document Analysis.
Abstract: The ICDAR Robust Reading Competition (RRC), initiated in 2003 and re-established in 2011, has become the de facto evaluation standard for the international community. Concurrent with its second incarnation in 2011, a continuous effort started to develop an online framework to facilitate the hosting and management of competitions. This short paper briefly outlines the Robust Reading Competition Annotation and Evaluation Platform, the backbone of the Robust Reading Competition, comprising a collection of tools and processes that aim to simplify the management and annotation of data, and to provide online and offline performance evaluation and analysis services.
|
|
|
Dimosthenis Karatzas, Lluis Gomez, Marçal Rusiñol, & Anguelos Nicolaou. (2018). The Robust Reading Competition Annotation and Evaluation Platform. In 13th IAPR International Workshop on Document Analysis Systems (pp. 61–66).
Abstract: The ICDAR Robust Reading Competition (RRC), initiated in 2003 and re-established in 2011, has become the de facto evaluation standard for the international community. Concurrent with its second incarnation in 2011, a continuous effort started to develop an online framework to facilitate the hosting and management of competitions. This short paper briefly outlines the Robust Reading Competition Annotation and Evaluation Platform, the backbone of the Robust Reading Competition, comprising a collection of tools and processes that aim to simplify the management and annotation of data, and to provide online and offline performance evaluation and analysis services.
|
|
|
Mohammad Rouhani, & Angel Sappa. (2013). The Richer Representation the Better Registration. TIP - IEEE Transactions on Image Processing, 22(12), 5036–5049.
Abstract: In this paper, the registration problem is formulated as a point-to-model distance minimization. Unlike most existing works, which are based on minimizing a point-wise correspondence term, this formulation avoids the time-consuming correspondence search. In the first stage, the target set is described by an implicit function obtained through linear least squares fitting. This function can be either an implicit polynomial or an implicit B-spline, in a coarse-to-fine representation. In the second stage, we show how the obtained implicit representation is used as an interface to convert the point-to-point registration into a point-to-implicit problem. Furthermore, we show that this registration distance is smooth and can be minimized with the Levenberg-Marquardt algorithm. All the formulations presented for both stages are compact and easy to implement. In addition, we show that our registration method can work with any implicit representation, though some are coarse and others provide finer representations; hence, a tradeoff between speed and accuracy can be set by employing the right implicit function. Experimental results and comparisons in 2D and 3D show the robustness and the speed of convergence of the proposed approach.
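A toy NumPy sketch of the two-stage idea summarised above: a circle fitted by linear least squares (the algebraic Kåsa fit) stands in for the paper's implicit polynomials/B-splines, and a small hand-rolled Levenberg-Marquardt loop minimizes the point-to-implicit residuals with no correspondence search. All shapes and parameter values here are illustrative assumptions:

```python
import numpy as np

# Stage 1: describe the target set by an implicit function fitted with
# linear least squares. A circle (Kasa fit) stands in for the paper's
# implicit polynomials / B-splines.
def fit_implicit_circle(pts):
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    r2 = c + a**2 + b**2
    return lambda p: (p[:, 0] - a)**2 + (p[:, 1] - b)**2 - r2

def transform(pts, p):
    th, tx, ty = p
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return pts @ R.T + np.array([tx, ty])

# Stage 2: point-to-implicit registration; the smooth residuals f(T(p))
# are minimized by a small Levenberg-Marquardt loop (numeric Jacobian).
def register(source, f, iters=100):
    p, lam, eps = np.zeros(3), 1e-3, 1e-6
    r = f(transform(source, p))
    for _ in range(iters):
        J = np.empty((len(r), 3))
        for j in range(3):
            dp = np.zeros(3)
            dp[j] = eps
            J[:, j] = (f(transform(source, p + dp)) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        r_try = f(transform(source, p + step))
        if r_try @ r_try < r @ r:
            p, r, lam = p + step, r_try, lam * 0.5
        else:
            lam *= 10.0
    return p

angles = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
target = np.column_stack([1.0 + np.cos(angles), 2.0 + np.sin(angles)])
f = fit_implicit_circle(target)
source = target + np.array([0.3, -0.2])   # displaced copy to register
params = register(source, f)
aligned = transform(source, params)
```

The residual of the aligned points under the implicit function measures how well the rigid transform was recovered.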
|
|
|
Adrien Gaidon, Antonio Lopez, & Florent Perronnin. (2018). The Reasonable Effectiveness of Synthetic Visual Data. IJCV - International Journal of Computer Vision, 126(9), 899–901.
|
|
|
Marc Serra, Olivier Penacchio, Robert Benavente, Maria Vanrell, & Dimitris Samaras. (2014). The Photometry of Intrinsic Images. In 27th IEEE Conference on Computer Vision and Pattern Recognition (pp. 1494–1501).
Abstract: Intrinsic characterization of scenes is often the best way to overcome the illumination variability artifacts that complicate most computer vision problems, from 3D reconstruction to object or material recognition. This paper examines the deficiency of existing intrinsic image models in accurately accounting for the effects of illuminant color and sensor characteristics in the estimation of intrinsic images, and presents a generic framework which brings insights from color constancy research to the intrinsic image decomposition problem. The proposed mathematical formulation includes information about the color of the illuminant and the effects of the camera sensors, both of which modify the observed color of the reflectance of the objects in the scene during the acquisition process. By modeling these effects, we get a “truly intrinsic” reflectance image, which we call absolute reflectance, which is invariant to changes of illuminant or camera sensors. This model allows us to represent a wide range of intrinsic image decompositions depending on the specific assumptions on the geometric properties of the scene configuration and the spectral properties of the light source and the acquisition system, thus unifying previous models in a single general framework. We demonstrate that even partial information about sensors significantly improves the estimated reflectance images, thus making our method applicable for a wide range of sensors. We validate our general intrinsic image framework experimentally with both synthetic data and natural images.
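A toy sketch of the diagonal (von Kries-style) illuminant model underlying this kind of formulation: the observed image is shading times illuminant colour times reflectance, per RGB channel, so dividing out a known illuminant and a shading estimate recovers an illuminant-invariant reflectance. The gray-world shading proxy below is an illustrative assumption, not the paper's estimator:

```python
import numpy as np

def decompose(image, illuminant):
    """Toy intrinsic decomposition under a diagonal (von Kries) model:
    image = shading * illuminant * reflectance, per RGB channel.
    Shading is approximated by the channel mean of the illuminant-
    corrected image (a gray-world-style proxy)."""
    corrected = image / illuminant                     # divide out illuminant
    shading = corrected.mean(axis=-1, keepdims=True)   # per-pixel proxy
    reflectance = corrected / np.maximum(shading, 1e-8)
    return shading, reflectance

# synthetic check: one pixel with known shading and illuminant
illum = np.array([0.9, 1.0, 0.8])
refl = np.array([[[1.2, 0.9, 0.9]]])   # channel mean 1.0, so the proxy is exact
img = 0.5 * illum * refl
shading, reflectance = decompose(img, illum)
```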
|
|
|
Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, & Yoshua Bengio. (2017). The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. In IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions.
Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train.
In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module or pretraining. Moreover, due to the smart construction of the model, our approach has far fewer parameters than the currently published best entries for these datasets.
Keywords: Semantic Segmentation
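A minimal NumPy sketch of the dense-connectivity idea this abstract describes: each layer's output is concatenated to everything before it, so the channel count grows linearly with depth. The 1x1-conv stand-in, growth rate, and shapes are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def dense_block(x, n_layers=4, growth=12, seed=0):
    """Dense connectivity: each layer sees the concatenation of all
    previous feature maps (NHWC) and appends `growth` new channels.
    A random 1x1 "conv" + ReLU stands in for BN-ReLU-conv layers."""
    rng = np.random.default_rng(seed)
    feats = x
    for _ in range(n_layers):
        w = rng.standard_normal((feats.shape[-1], growth)) * 0.1
        new = np.maximum(feats @ w, 0.0)               # 1x1 conv + ReLU
        feats = np.concatenate([feats, new], axis=-1)  # dense connection
    return feats

x = np.ones((1, 8, 8, 3))
y = dense_block(x)
# output channels grow linearly: 3 + 4 * 12 = 51
```

In the Tiramisu architecture such blocks are stacked along a downsampling and an upsampling path, with skip connections between the two.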
|
|
|
Sergio Escalera, & Ralf Herbrich. (2020). The NeurIPS’18 Competition: From Machine Learning to Intelligent Conversations (Sergio Escalera, & Ralf Herbrich, Eds.).
Abstract: This volume presents the results of the Neural Information Processing Systems Competition track at the 2018 NeurIPS conference. The competition track followed the same format as the 2017 NIPS competition track. Out of 21 submitted proposals, eight competitions were selected, spanning the areas of robotics, health, computer vision, natural language processing, systems, and physics. Competitions have become an integral part of advancing the state of the art in artificial intelligence (AI). They exhibit one important difference from benchmarks: competitions test a system end-to-end rather than evaluating only a single component, and they assess the practicability of an algorithmic solution in addition to its feasibility.
|
|
|
Fernando Vilariño, Dimosthenis Karatzas, & Alberto Valcarce. (2018). The Library Living Lab Barcelona: A participative approach to technology as an enabling factor for innovation in cultural spaces. Technology Innovation Management Review.
|
|
|
Fernando Vilariño, & Dimosthenis Karatzas. (2015). The Library Living Lab. In Open Living Lab Days.
|
|
|
David Augusto Rojas, Fahad Shahbaz Khan, & Joost Van de Weijer. (2010). The Impact of Color on Bag-of-Words based Object Recognition. In 20th International Conference on Pattern Recognition (pp. 1549–1553).
Abstract: In recent years several works have aimed at exploiting color information in order to improve the bag-of-words based image representation. There are two stages at which color information can be applied in the bag-of-words framework. First, feature detection can be improved by choosing highly informative color-based regions. Second, feature description, which typically focuses on shape, can be improved with a color description of the local patches. Although both approaches have been shown to improve results, their combined merits have not yet been analyzed. Therefore, in this paper we investigate the combined contribution of color to both the feature detection and extraction stages. Experiments performed on two challenging datasets, namely Flower and Pascal VOC 2009, clearly demonstrate that incorporating color in both feature detection and extraction significantly improves the overall performance.
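A minimal sketch of the core bag-of-visual-words encoding this abstract builds on: local descriptors (shape, color, or both) are assigned to their nearest codeword and pooled into a histogram. The codebook would normally come from k-means over training descriptors; the tiny arrays here are illustrative:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Bag-of-visual-words encoding: assign every local descriptor to
    its nearest codeword (squared Euclidean distance) and return the
    L1-normalised histogram of word counts."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])   # toy 2-word vocabulary
patches = np.array([[0.1, 0.0], [0.9, 1.0], [1.1, 1.0], [0.0, 0.1]])
hist = bow_histogram(patches, codebook)
```

Color enters this pipeline either before (selecting color-informative regions to describe) or inside the descriptor itself, which is precisely the combination the paper studies.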
|
|