|
Mirko Arnold, Anarta Ghosh, Stephan Ameling, & Gerard Lacey. (2010). Automatic segmentation and inpainting of specular highlights for endoscopic imaging. EURASIP Journal on Image and Video Processing, 2010(9).
|
|
|
Sergio Alloza, Flavio Escribano, Sergi Delgado, Ciprian Corneanu, & Sergio Escalera. (2017). XBadges: Identifying and training soft skills with commercial video games. Improving persistence, risk taking & spatial reasoning with commercial video games and facial and emotional recognition system. In 4th Congreso de la Sociedad Española para las Ciencias del Videojuego (Vol. 1957, pp. 13–28).
Abstract: XBadges is a research project based on the hypothesis that commercial video games (non-serious games) can train soft skills. We measure persistence, spatial reasoning and risk taking before and after subjects participate in controlled game-playing sessions.
In addition, we have developed an automatic facial expression recognition system capable of inferring their emotions while playing, allowing us to study the role of emotions in soft skills acquisition. We have used Flappy Bird, Pacman and Tetris for assessing changes in persistence, risk taking and spatial reasoning respectively.
Results show how playing Tetris significantly improves spatial reasoning and how playing Pacman significantly improves prudence in certain areas of behavior. As for emotions, the results reveal that concentration helps to improve performance and skills acquisition. Frustration is also shown to be a key element. With the results obtained we can glimpse multiple applications in areas that need soft skills development.
Keywords: Video Games; Soft Skills; Training; Skills Development; Emotions; Cognitive Abilities; Flappy Bird; Pacman; Tetris
|
|
|
Ernest Valveny, & Enric Marti. (2000). Deformable Template Matching within a Bayesian Framework for Hand-Written Graphic Symbol Recognition. In Graphics Recognition: Recent Advances (Vol. 1941, pp. 193–208). LNCS.
Abstract: We describe a method for hand-drawn symbol recognition based on deformable template matching, able to handle the uncertainty and imprecision inherent to hand-drawing. Symbols are represented as a set of straight lines and their deformations as geometric transformations of these lines. Matching, however, is done over the original binary image to avoid loss of information during line detection. It is defined as an energy minimization problem, using a Bayesian framework which allows us to combine fidelity to the ideal shape of the symbol and flexibility to modify the symbol in order to get the best fit to the binary input image. Prior to matching, we find the best global transformation of the symbol to start the recognition process, based on the distance between symbol lines and image lines. We have applied this method to the recognition of dimensions and symbols in architectural floor plans and we show its flexibility in recognizing distorted symbols.
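The Bayesian energy-minimization formulation described in this abstract can be illustrated in miniature: a posterior energy combines a data term (fit to image evidence) with a prior term (cost of deforming the ideal template). The toy 1-D line template, the parameter names, and the crude coordinate search below are all illustrative assumptions, not the authors' implementation.

```python
# Sketch of deformable template matching as energy minimization.
# The "template" is a line y = a*x + b; deformation parameters (a, b)
# are penalized for straying from the ideal (undeformed) shape.

def deformation_energy(params):
    """Prior term: penalizes departure from the ideal symbol shape."""
    return sum(p * p for p in params)

def fit_energy(params, observed):
    """Data term: squared distance between the deformed template and
    the observed evidence (here, sample points from a binary image)."""
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in observed)

def total_energy(params, observed, lam=0.1):
    """Posterior energy = fidelity to data + lam * fidelity to prior."""
    return fit_energy(params, observed) + lam * deformation_energy(params)

def minimize(observed, lam=0.1, steps=2000, lr=0.01):
    """Crude coordinate search over (a, b); a stand-in for a real
    optimizer, enough to show the trade-off between the two terms."""
    best = (0.0, 0.0)
    best_e = total_energy(best, observed, lam)
    for _ in range(steps):
        for i in range(2):
            for delta in (lr, -lr):
                cand = list(best)
                cand[i] += delta
                e = total_energy(tuple(cand), observed, lam)
                if e < best_e:
                    best, best_e = tuple(cand), e
    return best, best_e
```

With points drawn from y = x, the search settles near a = 1, pulled slightly toward the undeformed shape by the prior term.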
|
|
|
Pedro Herruzo, Marc Bolaños, & Petia Radeva. (2016). Can a CNN Recognize Catalan Diet? In AIP Conference Proceedings (Vol. 1773). Also available as CoRR abs/1607.08811.
Abstract: Nowadays, we can find several diseases related to the unhealthy diet habits of the population, such as diabetes, obesity, anemia, bulimia and anorexia. In many cases, these diseases are related to the food consumption of people. The Mediterranean diet is scientifically known as a healthy diet that helps to prevent many metabolic diseases. In particular, our work focuses on the recognition of Mediterranean food and dishes. The development of this methodology would allow the analysis of the daily habits of users with wearable cameras, within the topic of lifelogging. By using automatic mechanisms we could build an objective tool for the analysis of the patient’s behavior, allowing specialists to discover unhealthy food patterns and understand the user’s lifestyle.
With the aim of automatically recognizing a complete diet, we introduce a challenging multi-labeled dataset related to the Mediterranean diet called FoodCAT. The first type of label provided consists of 115 food classes with an average of 400 images per dish, and the second one consists of 12 food categories with an average of 3800 pictures per class. This dataset will serve as a basis for the development of automatic diet recognition. In this context, deep learning and, more specifically, Convolutional Neural Networks (CNNs) are currently the state-of-the-art methods for automatic food recognition. In our work, we compare several architectures for image classification, with the purpose of diet recognition. Applying the best model for recognizing food categories, we achieve a top-1 accuracy of 72.29%, and top-5 of 97.07%. In a complete diet recognition of dishes from the Mediterranean diet, enlarged with the Food-101 dataset for international dish recognition, we achieve a top-1 accuracy of 68.07%, and top-5 of 89.53%, for a total of 115+101 food classes.
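The top-1 and top-5 accuracies this entry reports are a standard metric and can be computed from raw class scores as sketched below (toy data; not the paper's code).

```python
# Top-k accuracy: the fraction of samples whose true class appears
# among the k highest-scoring classes.

def top_k_accuracy(scores, labels, k):
    """scores: list of per-class score lists, one row per sample.
    labels: list of true class indices, aligned with scores."""
    hits = 0
    for row, label in zip(scores, labels):
        # rank class indices by descending score, keep the first k
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        if label in top_k:
            hits += 1
    return hits / len(labels)
```

Top-5 is always at least as high as top-1, which is why the paper's 97.07% top-5 figure sits well above its 72.29% top-1 figure.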
|
|
|
Rafael E. Rivadeneira, Angel Sappa, & Boris X. Vintimilla. (2022). Thermal Image Super-Resolution: A Novel Unsupervised Approach. In International Joint Conference on Computer Vision, Imaging and Computer Graphics (Vol. 1474, pp. 495–506).
Abstract: This paper proposes the use of a CycleGAN architecture for thermal image super-resolution under a transfer domain strategy, where middle-resolution images from one camera are transferred to the higher-resolution domain of another camera. The proposed approach is trained with a large dataset acquired using three thermal cameras at different resolutions. An unsupervised learning process is followed to train the architecture. An additional loss function is proposed to improve on results from state-of-the-art approaches. Evaluations are performed following the first thermal image super-resolution challenge (PBVS-CVPR2020). A comparison with previous works is presented, showing that the proposed approach reaches the best results.
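The unsupervised training this entry describes rests on CycleGAN's cycle-consistency loss: an image mapped to the other resolution domain and back should reconstruct the original, which removes the need for paired low/high-resolution thermal images. A minimal sketch follows; the "generators" here are toy stand-ins, purely for illustration.

```python
# Cycle-consistency loss, || G_BA(G_AB(x)) - x ||_1, on flat pixel lists.

def l1(a, b):
    """Mean absolute error between two flat pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, g_ab, g_ba):
    """Map x to domain B and back to A, then measure reconstruction error."""
    return l1(g_ba(g_ab(x)), x)
```

When g_ba perfectly inverts g_ab the loss is zero; any residual mapping error shows up directly in the reconstruction.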
|
|
|
Josep Llados, Gemma Sanchez, & Enric Marti. (1998). A string based method to recognize symbols and structural textures in architectural plans. In Graphics Recognition: Algorithms and Systems. Second International Workshop, GREC'97, Nancy, France, August 22–23, 1997, Selected Papers (Vol. 1389, pp. 91–103). LNCS. Springer.
Abstract: This paper deals with the recognition of symbols and structural textures in architectural plans using string matching techniques. A plan is represented by an attributed graph whose nodes represent characteristic points and whose edges represent segments. Symbols and textures can be seen as a set of regions, i.e. closed loops in the graph, with a particular arrangement. The search for a symbol involves a graph matching between the regions of a model graph and the regions of the graph representing the document. Discriminating a texture means clustering neighbouring regions of this graph. Both procedures involve a similarity measure between graph regions. A string codification is used to represent the sequence of outlining edges of a region. Thus, the similarity between two regions is defined in terms of the string edit distance between their boundary strings. The use of string matching allows the recognition method to work even in the presence of distortion.
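The region similarity this entry builds on reduces to an edit distance between boundary strings. A standard Levenshtein distance is sketched below; note the paper codifies region boundaries as strings of outline edges, and real boundaries are cyclic, which this sketch ignores.

```python
# Classic dynamic-programming Levenshtein (string edit) distance:
# the minimum number of insertions, deletions, and substitutions
# needed to turn s into t.

def edit_distance(s, t):
    m, n = len(s), len(t)
    # prev[j] holds the distance between s[:i-1] and t[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]
```

Because the distance tolerates a few mismatched, missing, or extra edges, two slightly distorted drawings of the same region still score as similar, which is what gives the method its robustness to distortion.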
|
|
|
Ole Vilhelm Larsen, Petia Radeva, & Enric Marti. (1995). Guidelines for choosing optimal parameters of elasticity for snakes. In Computer Analysis of Images and Patterns (Vol. 970, pp. 106–113). LNCS.
Abstract: This paper proposes guidance in choosing and using the elasticity parameters of a snake in order to obtain a precise segmentation. A new two-step procedure is defined based on upper and lower bounds on the parameters. Formulas are presented by which these bounds can be calculated for real images where parts of the contour may be missing. Experiments on segmentation of bone structures in X-ray images have verified the usefulness of the new procedure.
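The elasticity parameters this entry bounds weight a snake's internal energy. A discrete version of that energy is sketched here in the usual Kass–Witkin–Terzopoulos form; the alpha/beta names are the conventional ones, not values from the paper.

```python
# Discrete internal energy of a closed snake contour:
# alpha weights stretching (first differences between points),
# beta weights bending (second differences).

def internal_energy(contour, alpha, beta):
    """contour: list of (x, y) points, treated as a closed curve."""
    n = len(contour)
    stretch = 0.0
    bend = 0.0
    for i in range(n):
        x0, y0 = contour[i - 1]          # previous point (wraps around)
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]    # next point (wraps around)
        stretch += (x1 - x0) ** 2 + (y1 - y0) ** 2
        bend += (x2 - 2 * x1 + x0) ** 2 + (y2 - 2 * y1 + y0) ** 2
    return alpha * stretch + beta * bend
```

Raising alpha shrinks the contour toward shorter arcs, raising beta smooths out corners; the paper's contribution is bounding these weights so the snake stays faithful to partially missing image contours.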
|
|
|
David Roche, Debora Gil, & Jesus Giraldo. (2014). Mathematical modeling of G protein-coupled receptor function: What can we learn from empirical and mechanistic models? In G Protein-Coupled Receptors – Modeling and Simulation Advances in Experimental Medicine and Biology (Vol. 796, pp. 159–181). Springer Netherlands.
Abstract: Empirical and mechanistic models differ in their approaches to the analysis of pharmacological effect. Whereas the parameters of the former are not physical constants, those of the latter embody the often complex nature of the biology. Empirical models are exclusively used for curve fitting, merely to characterize the shape of the E/[A] curves. Mechanistic models, on the contrary, enable the examination of mechanistic hypotheses by parameter simulation. Regrettably, the many parameters that mechanistic models may include can make curve fitting very difficult, thus representing a challenge for computational method development. In the present study, some empirical and mechanistic models are presented, and the connections that may appear between them in a number of cases are analyzed through the curves they yield. It may be concluded that systematic and careful curve shape analysis can be extremely useful for the understanding of receptor function, ligand classification and drug discovery, thus providing a common language for the communication between pharmacologists and medicinal chemists.
Keywords: β-arrestin; biased agonism; curve fitting; empirical modeling; evolutionary algorithm; functional selectivity; G protein; GPCR; Hill coefficient; intrinsic efficacy; inverse agonism; mathematical modeling; mechanistic modeling; operational model; parameter optimization; receptor dimer; receptor oligomerization; receptor constitutive activity; signal transduction; two-state model
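A concrete instance of the empirical models this entry discusses is the Hill equation, the standard empirical description of an E/[A] curve (the "Hill coefficient" keyword above refers to its steepness parameter). The parameter values in the sketch below are illustrative only.

```python
# Hill equation: effect E as a function of agonist concentration A.
# emax is the maximal effect, ec50 the concentration giving
# half-maximal effect, and n the Hill coefficient (curve steepness).

def hill(A, emax, ec50, n):
    return emax * A ** n / (ec50 ** n + A ** n)
```

Its three parameters describe the curve's shape but, as the abstract notes for empirical models generally, they are fitting constants rather than physical constants of the receptor system.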
|
|
|
Vacit Oguz Yazici, Joost Van de Weijer, & Arnau Ramisa. (2018). Color Naming for Multi-Color Fashion Items. In 6th World Conference on Information Systems and Technologies (Vol. 747, pp. 64–73).
Abstract: There exists a significant amount of research on color naming of single-colored objects. However, in reality, many fashion objects consist of multiple colors. Currently, searching fashion datasets for multi-colored objects can be a laborious task. Therefore, in this paper we focus on color naming for images with multi-color fashion items. We collect a dataset consisting of images that may contain from one up to four colors. We annotate the images with the 11 basic colors of the English language. We experiment with several designs for deep neural networks with different losses. We show that explicitly estimating the number of colors in the fashion item leads to improved results.
Keywords: Deep learning; Color; Multi-label
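The idea that explicitly estimating the number of colors helps can be sketched as a post-processing step: given per-color scores and a predicted color count, keep exactly that many names. The scoring and selection below are toy assumptions, not the paper's network; only the 11 basic English color terms come from the abstract.

```python
# Select color names from per-color scores using an explicit
# estimate of how many colors the fashion item contains.

BASIC_COLORS = ["black", "blue", "brown", "grey", "green", "orange",
                "pink", "purple", "red", "white", "yellow"]

def name_colors(scores, predicted_count):
    """scores: one score per entry of BASIC_COLORS; returns the
    predicted_count highest-scoring color names."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [BASIC_COLORS[i] for i in ranked[:predicted_count]]
```

Compared with thresholding each color independently, fixing the count avoids both missing a second color on a two-color item and hallucinating extra colors on a single-color one.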
|
|
|
Mirko Arnold, Stephan Ameling, Anarta Ghosh, & Gerard Lacey. (2011). Quality Improvement of Endoscopy Videos. In Proceedings of the 8th IASTED International Conference on Biomedical Engineering (Vol. 723).
|
|
|
Thanh Ha Do, Salvatore Tabbone, & Oriol Ramos Terrades. (2016). Spotting Symbol over Graphical Documents Via Sparsity in Visual Vocabulary. In Recent Trends in Image Processing and Pattern Recognition (Vol. 709).
|
|
|
Patricia Suarez, Dario Carpio, & Angel Sappa. (2024). Enhancement of guided thermal image super-resolution approaches. Neurocomputing, 573, Article 127197, 1–17.
Abstract: Guided image processing techniques are widely used to extract meaningful information from a guiding image and facilitate the enhancement of the guided one. This paper specifically addresses the challenge of guided thermal image super-resolution, where a low-resolution thermal image is enhanced using a high-resolution visible spectrum image. We propose a new strategy that enhances outcomes from current guided super-resolution methods. This is achieved by transforming the initial guiding data into a representation resembling a thermal-like image, which is more closely aligned with the intended output. Experimental results with upscale factors of 8 and 16 demonstrate the outstanding performance of our approach to guided thermal image super-resolution, obtained by mapping the original guiding information to a thermal-like image representation.
|
|
|
Fei Yang, Yaxing Wang, Luis Herranz, Yongmei Cheng, & Mikhail Mozerov. (2022). A Novel Framework for Image-to-image Translation and Image Compression. Neurocomputing, 508, 58–70.
Abstract: Data-driven paradigms using machine learning are becoming ubiquitous in image processing and communications. In particular, image-to-image (I2I) translation is a generic and widely used approach to image processing problems, such as image synthesis, style transfer, and image restoration. At the same time, neural image compression has emerged as a data-driven alternative to traditional coding approaches in visual communications. In this paper, we study the combination of these two paradigms into a joint I2I compression and translation framework, focusing on multi-domain image synthesis. We first propose distributed I2I translation by integrating quantization and entropy coding into an I2I translation framework (i.e. I2Icodec). In practice, the image compression functionality (i.e. autoencoding) is also desirable, which would require deploying a regular image codec alongside I2Icodec. Thus, we further propose a unified framework that allows both translation and autoencoding capabilities in a single codec. Adaptive residual blocks conditioned on the translation/compression mode provide flexible adaptation to the desired functionality. The experiments show promising results in both I2I translation and image compression using a single model.
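The two ingredients this entry integrates into the translation bottleneck, quantization and entropy coding, are sketched below in miniature. The toy latent values and the uniform quantizer are illustrative assumptions, not the paper's model; the entropy estimate is a lower bound on what an ideal entropy coder would spend.

```python
import math
from collections import Counter

def quantize(latent, step=1.0):
    """Uniform scalar quantization of a list of latent values:
    each value is snapped to the nearest multiple of step."""
    return [round(v / step) * step for v in latent]

def entropy_bits(symbols):
    """Empirical Shannon entropy in bits per symbol, the rate an
    ideal entropy coder would approach on this sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A coarser quantization step concentrates the latent values on fewer symbols, lowering the entropy (and hence the bitrate) at the cost of reconstruction fidelity, which is the rate-distortion trade-off a learned codec optimizes end to end.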
|
|
|
O. Fors, A. Richichi, Xavier Otazu, & J. Nuñez. (2008). A new wavelet-based approach for the automated treatment of large sets of lunar occultation data. Astronomy and Astrophysics, 297–304.
|
|
|
Miguel Oliveira, Victor Santos, Angel Sappa, & P. Dias. (2015). Scene Representations for Autonomous Driving: an approach based on polygonal primitives. In 2nd Iberian Robotics Conference ROBOT2015 (Vol. 417, pp. 503–515).
Abstract: In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro-scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
Keywords: Scene reconstruction; Point cloud; Autonomous vehicles
|
|