Enric Marti, Jaume Rocarias, Ricardo Toledo, & Aura Hernandez-Sabate. (2009). Caronte: plataforma Moodle con gestion flexible de grupos. Primeras experiencias en asignaturas de Ingenieria Informatica.
Abstract: This paper presents Caronte, an LMS (Learning Management System) environment based on Moodle. A key feature of the environment is the flexible management of groups within a course. We define a group as a set of students who carry out a piece of work, one of whom submits the proposed activity (assignment, survey, etc.) on behalf of the group. We have worked on the formation of these groups, implementing a password-based enrolment system.
Caronte offers a set of activities built on this notion of group: surveys, tasks (submission of assignments or practical work), self-assessment surveys and quizzes, among others.
Building on our survey activity, we have defined a Control activity that provides a degree of electronic feedback from the teacher on the students' work.
Finally, we summarize our experience using Caronte in Computer Engineering courses during the 2007-08 academic year.
|
Mikhail Mozerov, Ariel Amato, & Xavier Roca. (2009). Occlusion Handling in Trinocular Stereo using Composite Disparity Space Image. In 19th International Conference on Computer Graphics and Vision (69–73).
Abstract: In this paper we propose a method that improves occlusion handling in stereo matching using trinocular stereo. The main idea is based on the assumption that a region occluded in one matched stereo pair (middle-left images) is in general not occluded in the opposite matched pair (middle-right images). The two disparity space images (DSI) can therefore be merged into a single composite DSI. The proposed integration differs from the known approach that uses a cumulative cost. A dense disparity map is obtained with a global optimization algorithm using the proposed composite DSI. The experimental results are evaluated on the Middlebury data set, showing high performance of the proposed algorithm, especially in occluded regions. One of the top positions in the Middlebury website ranking confirms that the performance of our method is competitive with the best stereo matching methods.
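The merging idea can be illustrated with a toy sketch (not the authors' implementation: the per-cell occlusion masks and cost volumes here are hypothetical inputs, whereas the paper derives occlusion handling from the matching itself):

```python
import numpy as np

def composite_dsi(dsi_ml, dsi_mr, occ_ml, occ_mr):
    """Merge two (H, W, D) matching-cost volumes into one composite DSI.

    dsi_ml, dsi_mr : costs for the middle-left and middle-right pairs.
    occ_ml, occ_mr : boolean masks, True where a cell is occluded in
                     that pair (illustrative inputs).

    Instead of summing both costs (the cumulative approach), each cell
    takes the cost from whichever pair is not occluded, falling back to
    the cheaper of the two when both are visible.
    """
    comp = np.minimum(dsi_ml, dsi_mr)                 # both visible
    comp = np.where(occ_ml & ~occ_mr, dsi_mr, comp)   # occluded in ML -> use MR
    comp = np.where(occ_mr & ~occ_ml, dsi_ml, comp)   # occluded in MR -> use ML
    return comp
```

A global optimizer (e.g. graph cuts or dynamic programming) would then be run on the composite volume to extract the dense disparity map.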
|
Pau Baiget. (2009). Modeling Human Behavior for Image Sequence Understanding and Generation (Jordi Gonzalez, & Xavier Roca, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: The comprehension of animal behavior, and especially human behavior, is one of the oldest and most studied problems since the beginning of civilization. The wide range of factors that interact to determine a person's actions requires the collaboration of different disciplines, such as psychology, biology, or sociology. In recent years the analysis of human behavior has also received great attention from the computer vision community, given the latest advances in the acquisition of human motion data from image sequences.
Despite the increasing availability of such data, there still exists a gap towards obtaining a conceptual representation of the observations. Human behavior analysis is based on a qualitative interpretation of the results, and therefore the assignment of concepts to quantitative data carries a certain ambiguity.
This Thesis tackles the problem of obtaining a proper representation of human behavior in the contexts of computer vision and animation. On the one hand, a good behavior model should permit the recognition and explanation of the observed activity in image sequences. On the other hand, such a model must allow the generation of new synthetic instances, which model the behavior of virtual agents.
First, we propose methods to automatically learn the models from observations. Given a set of quantitative results output by a vision system, a normal behavior model is learnt. These results provide a tool to determine the normality or abnormality of future observations. However, machine learning methods are unable to provide a richer description of the observations. We confront this problem by means of a new method that incorporates prior knowledge about the environment and about the expected behaviors. This framework, formed by the reasoning engine FMTL and the modeling tool SGT, allows the generation of conceptual descriptions of activity in new image sequences. Finally, we demonstrate the suitability of the proposed framework to simulate the behavior of virtual agents, which are introduced into real image sequences and interact with observed real agents, thereby easing the generation of augmented reality sequences.
The set of approaches presented in this Thesis has a growing set of potential applications. The analysis and description of behavior in image sequences has its principal application in the domain of smart video surveillance, in order to detect suspicious or dangerous behaviors. Other applications include automatic sport commentaries, elderly monitoring, road traffic analysis, and the development of semantic video search engines. Alternatively, behavioral virtual agents make it possible to simulate realistic situations, such as fires or crowds. Moreover, the inclusion of virtual agents into real image sequences has been widely deployed in the games and cinema industries.
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2009). Exploiting Natural Language Generation in Scene Interpretation. In Human–Centric Interfaces for Ambient Intelligence (Vol. 4, 71–93). Elsevier Science and Tech.
|
Alicia Fornes, Josep Llados, Gemma Sanchez, & Horst Bunke. (2009). Symbol-independent writer identification in old handwritten music scores. In Proceedings of the 8th IAPR International Workshop on Graphics Recognition (186–197). Springer Berlin Heidelberg.
|
Agnes Borras, & Josep Llados. (2009). Corest: A measure of color and space stability to detect salient regions according to human criteria. In 5th International Conference on Computer Vision Theory and Applications (204–209).
|
Salim Jouili, Salvatore Tabbone, & Ernest Valveny. (2009). Evaluation of graph matching measures for documents retrieval. In Proceedings of the 8th IAPR International Workshop on Graphics Recognition (13–21).
Abstract: In this paper we evaluate four graph distance measures. The analysis is performed for document retrieval tasks. To this end, different kinds of documents are used, including line drawings (symbols), ancient documents (ornamental letters), shapes and trademark logos. The experimental results show that the performance of each graph distance measure depends on the kind of data and on the graph representation technique.
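For intuition, the simplest member of this family of measures, an unlabeled unit-cost graph edit distance, can be brute-forced on tiny graphs (a toy stand-in; the paper evaluates more sophisticated measures on labeled document graphs):

```python
from itertools import permutations

def edit_distance(g1, g2):
    """Brute-force unit-cost edit distance between two small undirected
    graphs given as (num_nodes, set_of_edges). Every injective mapping
    of the smaller node set into the larger is tried; the cost counts
    node insertions plus edge insertions/deletions."""
    (n1, e1), (n2, e2) = g1, g2
    if n1 > n2:                      # make g1 the smaller graph
        (n1, e1), (n2, e2) = (n2, e2), (n1, e1)
    target = {frozenset(e) for e in e2}
    best = float("inf")
    for mapping in permutations(range(n2), n1):
        mapped = {frozenset((mapping[a], mapping[b])) for a, b in e1}
        # node insertions + symmetric difference of edge sets
        cost = (n2 - n1) + len(mapped ^ target)
        best = min(best, cost)
    return best
```

For document retrieval, each database graph would be ranked by its distance to the query graph; the exhaustive search is exponential, which is why practical measures rely on approximations.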
Keywords: Graph Matching; Graph retrieval; structural representation; Performance Evaluation
|
Gemma Roig, Xavier Boix, & Fernando De la Torre. (2009). Optimal Feature Selection for Subspace Image Matching. In 2nd IEEE International Workshop on Subspace Methods.
Abstract: Image matching has been a central research topic in computer vision over the last decades. Typical approaches to correspondence involve matching feature points between images. In this paper, we present a novel problem: establishing correspondences between a sparse set of image features and a previously learned subspace model. We formulate the matching task as an energy minimization, and jointly optimize over all possible feature assignments and the parameters of the subspace model. This problem is in general NP-hard. We propose a convex relaxation approximation, and develop two optimization strategies: naïve gradient descent and quadratic programming. Alternatively, we reformulate the optimization criterion as a sparse eigenvalue problem, and solve it using a recently proposed backward greedy algorithm. Experimental results on facial feature detection show that the quadratic programming solution provides a better selection mechanism for relevant features.
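A brute-force version of the joint objective makes the formulation concrete (toy dimensions; the names `candidates`, `B` and `mu` are illustrative, and the paper's convex relaxation and QP solvers replace the exhaustive search, which is exponential in the number of landmarks):

```python
from itertools import product

import numpy as np

def match_to_subspace(candidates, B, mu):
    """Jointly pick one candidate feature per landmark and the subspace
    coefficients minimizing the reconstruction error.

    candidates : list over landmarks; each entry is a list of candidate
                 coordinate vectors for that landmark.
    B, mu      : learned subspace basis (d x k) and mean (d,).
    """
    best, best_err = None, np.inf
    for choice in product(*[range(len(c)) for c in candidates]):
        # stack the chosen candidates into one shape vector
        x = np.concatenate([candidates[i][j] for i, j in enumerate(choice)])
        # optimal subspace coefficients for this assignment (least squares)
        c, *_ = np.linalg.lstsq(B, x - mu, rcond=None)
        err = np.linalg.norm(x - mu - B @ c)
        if err < best_err:
            best, best_err = choice, err
    return best, best_err
```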
|
Angel Sappa, Niki Aifanti, Sotiris Malassiotis, & Michael G. Strintzis. (2009). Prior Knowledge Based Motion Model Representation. In Horst Bunke, JuanJose Villanueva, & Gemma Sanchez (Eds.), Progress in Computer Vision and Image Analysis (Vol. 16).
|
Partha Pratim Roy, Josep Llados, & Umapada Pal. (2009). A Complete System for Detection and Recognition of Text in Graphical Documents using Background Information. In 5th International Conference on Computer Vision Theory and Applications.
|
Dena Bazazian. (2018). Fully Convolutional Networks for Text Understanding in Scene Images (Dimosthenis Karatzas, & Andrew Bagdanov, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Text understanding in scene images has gained plenty of attention in the computer vision community and it is an important task in many applications as text carries semantically rich information about scene content and context. For instance, reading text in a scene can be applied to autonomous driving, scene understanding or assisting visually impaired people. The general aim of scene text understanding is to localize and recognize text in scene images. Text regions are first localized in the original image by a trained detector model and afterwards fed into a recognition module. The tasks of localization and recognition are highly correlated since an inaccurate localization can affect the recognition task.
The main purpose of this thesis is to devise efficient methods for scene text understanding. We investigate how the latest results in deep learning can advance text understanding pipelines. Recently, Fully Convolutional Networks (FCNs) and derived methods have achieved significant performance on semantic segmentation and pixel-level classification tasks. Therefore, we took advantage of the strengths of FCN approaches in order to detect text in natural scenes. In this thesis we have focused on two challenging tasks of scene text understanding: Text Detection and Word Spotting. For the task of text detection, we have proposed an efficient text proposal technique for scene images. We have taken as our baseline the Text Proposals method, an approach that reduces the search space of possible text regions in an image. In order to improve the Text Proposals method, we combined it with Fully Convolutional Networks to efficiently reduce the number of proposals while maintaining the same level of accuracy, thus gaining a significant speed-up. Our experiments demonstrate that this text proposal approach yields significantly higher recall rates than line-based text localization techniques, while also producing better-quality localization. We have also applied this technique to compressed images such as videos from wearable egocentric cameras. For the task of word spotting, we have introduced a novel mid-level word representation method. We have proposed a technique to create and exploit an intermediate representation of images based on text attributes which roughly correspond to character probability maps. Our representation extends the concept of the Pyramidal Histogram Of Characters (PHOC) by exploiting Fully Convolutional Networks to derive a pixel-wise mapping of the character distribution within candidate word regions. We call this representation the Soft-PHOC.
Furthermore, we show how to use Soft-PHOC descriptors for word spotting tasks through an efficient text line proposal algorithm. To evaluate the detected text, we propose a novel line-based evaluation alongside the classic bounding-box-based approach. We test our method on incidental scene text images, which comprise real-life scenarios such as urban scenes. Incidental scene text images are important because of their complex backgrounds, perspective distortion, variety of scripts and languages, short text and little linguistic context. Together, all of these factors make incidental scene text images challenging.
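The hard (binary) PHOC that Soft-PHOC relaxes can be sketched as follows; this is a minimal version with a lowercase-plus-digits alphabet and illustrative pyramid levels, not the thesis' pixel-wise network output:

```python
import string

# Assumption: lowercase letters plus digits, a common PHOC alphabet
ALPHABET = string.ascii_lowercase + string.digits

def phoc(word, levels=(2, 3, 4)):
    """Binary Pyramidal Histogram Of Characters: at each pyramid level L,
    the word is split into L equal regions and each region records which
    alphabet symbols occur in it."""
    word = word.lower()
    n = len(word)
    if n == 0:
        return [0] * (len(ALPHABET) * sum(levels))
    vec = []
    for L in levels:
        for r in range(L):
            lo, hi = r / L, (r + 1) / L          # region span in [0, 1]
            bits = [0] * len(ALPHABET)
            for i, ch in enumerate(word):
                if ch not in ALPHABET:
                    continue
                c_lo, c_hi = i / n, (i + 1) / n  # character span in [0, 1]
                # a character belongs to a region if at least half of
                # its span lies inside the region
                if min(hi, c_hi) - max(lo, c_lo) >= (c_hi - c_lo) / 2:
                    bits[ALPHABET.index(ch)] = 1
            vec.extend(bits)
    return vec
```

Soft-PHOC replaces these binary occupancy bits with per-pixel character probabilities predicted by an FCN over the word region.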
|
Joan Oliver, Ricardo Toledo, J. Pujol, J. Sorribes, & E. Valderrama. (2009). Un ABP basado en la robotica para las ingenierias informaticas.
|
Eduard Vazquez. (2007). Distribution Characterization using Topological Features. Application to Colour Image Processing. Master's thesis.
|
David Rotger. (2009). Analysis and Multi-Modal Fusion of coronary Images (Petia Radeva, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: The framework of this thesis is to study in detail different techniques and tools for medical image registration in order to ease the daily work of clinical experts in cardiology. The first aim of this thesis is to provide computer tools for fusing IVUS and angiogram data, which is of high clinical interest to help physicians orient themselves in the IVUS data and decide which lesion is observed, how long it is, how far from a bifurcation or other lesions it lies, etc. This thesis proves and validates that we can segment the catheter path in angiographies using geodesic snakes (based on the fast marching algorithm), a three-dimensional reconstruction of the catheter inspired by stereo vision, and a new technique to fuse IVUS and angiograms that establishes exact correspondences between them. We have developed a new workstation called iFusion with four strong advantages: it registers IVUS and angiographic images with sub-pixel precision, it works both on- and off-line, it is independent of the X-ray system, and it requires no daily calibration. The second aim of the thesis is devoted to developing computer-aided analysis of IVUS for image-guided intervention. We have designed, implemented and validated a robust algorithm for stent extraction and reconstruction from IVUS videos. We consider a very special and recent kind of stent, bioabsorbable stents, which represent a great clinical challenge: being absorbed over time, they avoid the "danger" of restenosis, one of the main problems of metallic stents. We present a new and very promising algorithm based on an optimized cascade of multiple classifiers to automatically detect individual stent struts of a very novel bioabsorbable drug-eluting coronary stent. This problem represents a very challenging target given the variability in contrast, shape and grey level of the regions to be detected, as reflected by the high inter-observer variability among specialists (0.14 ± 0.12). The obtained results of the automatic strut detection fall within the inter-observer variability.
|
Xavier Baro. (2009). Probabilistic Darwin Machines: A New Approach to Develop Evolutionary Object Detection (Jordi Vitria, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Ever since computers were invented, we have wondered whether they might perform some of the quotidian human tasks. One of the most studied and still least understood problems is the capacity to learn from our experiences and the way we generalize the knowledge we acquire. One of those tasks that people perform without awareness, and which has awakened growing interest across scientific areas, is what is known as pattern recognition. The creation of models that represent the world around us helps us to recognize objects in our environment, to predict situations and to identify behaviors. All this information allows us to adapt to and interact with our environment. The adaptive capacity of individuals has been related to the number of patterns they are capable of identifying.
This thesis faces the pattern recognition problem from a computer vision point of view, taking one of the most paradigmatic and widespread approaches to object detection as its starting point. After studying this approach, two weak points are identified: the first concerns the description of the objects, and the second is a limitation of the learning algorithm, which hampers the use of better descriptors.
In order to address the learning limitations, we introduce evolutionary computation techniques into the classical object detection approach.
After testing classical evolutionary approaches, such as genetic algorithms, we develop a new learning algorithm based on Probabilistic Darwin Machines, which adapts better to the learning problem. Once the learning limitation is overcome, we introduce a new feature set which maintains the benefits of the classical feature set while adding the ability to describe non-localities. This combination of evolutionary learning algorithm and features is tested on different public data sets, outperforming the results obtained by the classical approach.
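In that spirit, the simplest estimation-of-distribution algorithm, PBIL, shows how sampling from an explicit probability model can replace genetic crossover (a loose illustration of the Probabilistic Darwin Machine idea, not the thesis' algorithm):

```python
import random

def pbil(fitness, n_bits, pop=50, iters=100, lr=0.1, seed=0):
    """Population-Based Incremental Learning on binary strings: keep a
    probability vector over bits, sample a population from it, and shift
    the vector toward the fittest sample each generation."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        samples = [[1 if rng.random() < p else 0 for p in probs]
                   for _ in range(pop)]
        leader = max(samples, key=fitness)
        f = fitness(leader)
        if f > best_fit:
            best, best_fit = leader, f
        # move the probability model toward the best sample
        probs = [(1 - lr) * p + lr * b for p, b in zip(probs, leader)]
    return best
```

On the toy "onemax" problem (fitness = number of ones), the probability vector quickly concentrates on the all-ones string; in the thesis the fitness evaluates weak object detectors instead.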
|