|
|
Author |
Q. Xue; Laura Igual; A. Berenguel; M. Guerrieri; L. Garrido |
|
|
Title |
Active Contour Segmentation with Affine Coordinate-Based Parametrization |
Type |
Conference Article |
|
Year |
2014 |
Publication |
9th International Conference on Computer Vision Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
1 |
Issue |
|
Pages |
5-14 |
|
|
Keywords |
Active Contours; Affine Coordinates; Mean Value Coordinates |
|
|
Abstract |
In this paper, we present a new framework for image segmentation based on parametrized active contours. The contour and the points of the image space are parametrized using a reduced set of control points that form a closed polygon in two-dimensional problems and a closed surface in three-dimensional problems. The active contour evolves as the control points move. We use mean value coordinates as the parametrization tool for the interface, which makes it possible to parametrize any point of the space, inside or outside the closed polygon or surface. Region-based energies such as the one proposed by Chan and Vese can be easily implemented in both two- and three-dimensional segmentation problems. We show the usefulness of our approach with several experiments. |
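The mean value coordinates named in the abstract follow Floater's well-known construction. A minimal 2D sketch of that construction (in Python with NumPy; the function name and signature are mine, not the authors' code) for a point with respect to a closed polygon:

```python
import numpy as np

def mean_value_coordinates(p, verts):
    """Mean value coordinates of point p w.r.t. a closed 2D polygon (Floater):
    w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - p|, where a_i is the signed
    angle at p spanned by edge (v_i, v_{i+1}). p must not lie on the polygon."""
    d = verts - p                            # vectors from p to each vertex
    r = np.linalg.norm(d, axis=1)            # distances |v_i - p|
    n = len(verts)
    tans = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        ang = np.arctan2(cross, np.dot(d[i], d[j]))  # signed angle a_i
        tans[i] = np.tan(ang / 2.0)
    w = np.array([(tans[i - 1] + tans[i]) / r[i] for i in range(n)])
    return w / w.sum()                       # barycentric: weights sum to 1
```

Because the coordinates reproduce linear functions, `w @ verts` recovers `p` for interior points, which is what lets a reduced control polygon drive every point of the space.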
|
|
Address |
Lisboa; January 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISAPP |
|
|
Notes |
OR;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ XIB2014 |
Serial |
2452 |
|
|
|
|
|
Author |
Arnau Baro; Pau Riba; Alicia Fornes |
|
|
Title |
A Starting Point for Handwritten Music Recognition |
Type |
Conference Article |
|
Year |
2018 |
Publication |
1st International Workshop on Reading Music Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
5-6 |
|
|
Keywords |
Optical Music Recognition; Long Short-Term Memory; Convolutional Neural Networks; MUSCIMA++; CVCMUSCIMA |
|
|
Abstract |
In recent years, interest in Optical Music Recognition (OMR) has reawakened, especially since the appearance of deep learning. However, very few works address handwritten scores. In this work we describe a full OMR pipeline for handwritten music scores, based on Convolutional and Recurrent Neural Networks, that could serve as a baseline for the research community. |
|
|
Address |
Paris; France; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WORMS |
|
|
Notes |
DAG; 600.097; 601.302; 601.330; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ BRF2018 |
Serial |
3223 |
|
|
|
|
|
Author |
Helena Muñoz; Fernando Vilariño; Dimosthenis Karatzas |
|
|
Title |
Eye-Movements During Information Extraction from Administrative Documents |
Type |
Conference Article |
|
Year |
2019 |
Publication |
International Conference on Document Analysis and Recognition Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
6-9 |
|
|
Keywords |
|
|
|
Abstract |
A key aspect of digital mailroom processes is the extraction of relevant information from administrative documents. More often than not, the extraction process cannot be fully automated, and a significant amount of manual intervention remains. In this work we study the human process of information extraction from invoice document images. We explore whether the gaze of human annotators during a manual information extraction process could be exploited to reduce the manual effort and automate the process. To this end, we perform an eye-tracking experiment replicating real-life interfaces for information extraction. Through this pilot study we demonstrate that relevant areas in the document can be identified reliably through automatic fixation classification, and that the obtained models generalize well to new subjects. Our findings indicate that it is in principle possible to integrate the human in the document image analysis loop, making use of the scanpath to automate the extraction process or to verify extracted information. |
|
|
Address |
Sydney; Australia; September 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICDARW |
|
|
Notes |
DAG; 600.140; 600.121; 600.129;SIAI |
Approved |
no |
|
|
Call Number |
Admin @ si @ MVK2019 |
Serial |
3336 |
|
|
|
|
|
Author |
Pau Rodriguez; Jordi Gonzalez; Josep M. Gonfaus; Xavier Roca |
|
|
Title |
Integrating Vision and Language in Social Networks for Identifying Visual Patterns of Personality Traits |
Type |
Journal Article |
|
Year |
2019 |
Publication |
International Journal of Social Science and Humanity |
Abbreviated Journal |
IJSSH |
|
|
Volume |
9 |
Issue |
1 |
Pages |
6-12 |
|
|
Keywords |
|
|
|
Abstract |
Social media, as a major platform for communication and information exchange, is a rich repository of the opinions and sentiments of 2.3 billion users about a vast spectrum of topics. In this sense, user text interactions are widely used to sense the reasons behind users' demands and culturally driven interests. However, the knowledge embedded in the 1.8 billion pictures uploaded daily to public profiles has only started to be exploited. Following this trend in visual-based social analysis, we present a novel methodology based on neural networks to build a combined image-and-text personality trait model, trained with images posted together with words found to be highly correlated with specific personality traits. The key contribution of this work is to explore whether OCEAN personality trait modeling can be addressed based on images, here called MindPics, appearing with certain tags with psychological insights. We found that there is a correlation between posted images and the personality estimated from their accompanying texts. Thus, the experimental results are consistent with previous cyber-psychology results based on texts, suggesting that images could also be used for personality estimation: classification results on some personality traits show that specific and characteristic visual patterns emerge, in essence representing abstract concepts. These results open new avenues of research for further refining the proposed personality model under the supervision of psychology experts, and for eventually replacing current textual personality questionnaires with image-based ones. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; 600.119 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RGG2019 |
Serial |
3414 |
|
|
|
|
|
Author |
Jose Antonio Rodriguez; Florent Perronnin |
|
|
Title |
Local Gradient Histogram Features for Word Spotting in Unconstrained Handwritten Documents |
Type |
Conference Article |
|
Year |
2008 |
Publication |
International Conference on Frontiers in Handwriting Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
7–12 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Montreal (Canada) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICFHR |
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
Admin @ si @ RoP2008b |
Serial |
1066 |
|
|
|
|
|
Author |
Ariel Amato; Mikhail Mozerov; Xavier Roca; Jordi Gonzalez |
|
|
Title |
Robust Real-Time Background Subtraction Based on Local Neighborhood Patterns |
Type |
Journal Article |
|
Year |
2010 |
Publication |
EURASIP Journal on Advances in Signal Processing |
Abbreviated Journal |
EURASIPJ |
|
|
Volume |
|
Issue |
|
Pages |
7 |
|
|
Keywords |
|
|
|
Abstract |
Article ID 901205
This paper describes an efficient background subtraction technique for detecting moving objects. The proposed approach is able to overcome difficulties such as illumination changes and moving shadows. Our method introduces two discriminative features based on angular and modular patterns, which are formed by similarity measurement between two sets of RGB color vectors: one belonging to the background image and the other to the current image. We show how these patterns are used to improve foreground detection in the presence of moving shadows and in cases where there are strong color similarities between background and foreground pixels. Experimental results over a collection of public datasets and our own datasets of real image sequences demonstrate that the proposed technique achieves superior performance compared with state-of-the-art methods. Furthermore, its low computational and space complexity makes the presented algorithm feasible for real-time applications. |
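The abstract's angular/modular idea can be illustrated with a toy per-pixel rule (this is my own naive sketch, not the paper's discriminative patterns; function name, thresholds, and the shadow rule are assumptions): a large angle between the background and current RGB vectors signals a chromatic change (foreground), while a pure darkening along the same color direction is treated as a cast shadow.

```python
import numpy as np

def classify_pixel(bg_rgb, cur_rgb, ang_thr=0.15, dark_lo=0.4, dark_hi=0.95):
    """Toy angular/modular cue between two RGB vectors: chromatic change
    (large angle) -> foreground; same direction but darker -> shadow."""
    bg = np.asarray(bg_rgb, float)
    cur = np.asarray(cur_rgb, float)
    nb, nc = np.linalg.norm(bg), np.linalg.norm(cur)
    cosang = np.clip(np.dot(bg, cur) / (nb * nc + 1e-8), -1.0, 1.0)
    ang = np.arccos(cosang)               # angular cue
    ratio = nc / (nb + 1e-8)              # modular (magnitude) cue
    if ang > ang_thr:
        return "foreground"
    if dark_lo <= ratio <= dark_hi:
        return "shadow"
    return "background"
```

The shadow-tolerance intuition: a shadow darkens a pixel without rotating its color direction, so the angular cue stays small and the pixel is not flagged as foreground.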
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1110-8657 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ AMR2010 |
Serial |
1463 |
|
|
|
|
|
Author |
Michal Drozdzal; Laura Igual; Jordi Vitria; Petia Radeva; Carolina Malagelada; Fernando Azpiroz |
|
|
Title |
SIFT flow-based Sequences Alignment |
Type |
Conference Article |
|
Year |
2010 |
Publication |
Medical Image Computing in Catalunya: Graduate Student Workshop |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
7–8 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Girona, Spain |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MICCAT |
|
|
Notes |
OR;MILAB;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ DIV2010 |
Serial |
1475 |
|
|
|
|
|
Author |
Sergio Escalera; David M.J. Tax; Oriol Pujol; Petia Radeva; Robert P.W. Duin |
|
|
Title |
Multi-Class Classification in Image Analysis Via Error-Correcting Output Codes |
Type |
Book Chapter |
|
Year |
2011 |
Publication |
Innovations in Intelligent Image Analysis |
Abbreviated Journal |
|
|
|
Volume |
339 |
Issue |
|
Pages |
7-29 |
|
|
Keywords |
|
|
|
Abstract |
A common way to model multi-class classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multi-class problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest codeword. In this work, we review the state of the art in ECOC designs and test them in real applications. Results on different multi-class data sets show the benefits of using an ensemble of classifiers when categorizing objects in images. |
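The codeword-and-closest-code scheme the abstract describes can be sketched with the simplest ECOC design, one-vs-all (the chapter surveys many richer designs; this minimal illustration and its names are mine):

```python
import numpy as np

# One-vs-all ECOC coding matrix for 4 classes: row c is the codeword of
# class c, and column b defines binary problem b (class b vs. the rest).
M = 2 * np.eye(4, dtype=int) - 1        # rows like [+1, -1, -1, -1]

def ecoc_decode(binary_outputs, M):
    """Label of the class whose codeword is closest (Hamming distance)
    to the vector of binary classifier outputs."""
    dists = (M != np.asarray(binary_outputs)).sum(axis=1)
    return int(np.argmin(dists))
```

At test time each of the 4 binary classifiers emits +1 or -1, and the joint decision is whichever row of `M` the output vector most resembles.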
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
Berlin |
Editor |
H. Kwasnicka; L. Jain |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1860-949X |
ISBN |
978-3-642-17933-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB;HuPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ ETP2011 |
Serial |
1746 |
|
|
|
|
|
Author |
Anjan Dutta; Josep Llados; Horst Bunke; Umapada Pal |
|
|
Title |
A Product Graph Based Method for Dual Subgraph Matching Applied to Symbol Spotting |
Type |
Book Chapter |
|
Year |
2014 |
Publication |
Graphics Recognition. Current Trends and Challenges |
Abbreviated Journal |
|
|
|
Volume |
8746 |
Issue |
|
Pages |
7-11 |
|
|
Keywords |
Product graph; Dual edge graph; Subgraph matching; Random walks; Graph kernel |
|
|
Abstract |
The product graph has been shown to be an effective tool for subgraph matching. This paper extends the product graph methodology for subgraph matching applied to symbol spotting in graphical documents. Here we focus on the two major limitations of the previous version of the algorithm: (1) spurious nodes and edges in the graph representation and (2) inefficient node and edge attributes. To deal with the noisy information of vectorized graphical documents, we consider a dual edge graph representation built on the original graph representing the graphical information, and the product graph is computed between the dual edge graphs of the pattern graph and the target graph. The dual edge graph, with its redundant edges, allows an efficient and error-tolerant encoding of the structural information of the graphical documents. The adjacency matrix of the product graph locates pairs of similar edges of the two operand graphs, and exponentiating the adjacency matrix finds similar random walks of greater lengths. Nodes joining similar random walks between the two graphs are found by combining different weighted exponentials of adjacency matrices. An experimental investigation reveals that the recall obtained by this approach is quite encouraging. |
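The "weighted exponentials of adjacency matrices" step has a compact generic form, familiar from random-walk graph kernels (a sketch under my own simplifications: unattributed graphs, a plain Kronecker product, and a hypothetical function name; the paper operates on attributed dual edge graphs):

```python
import numpy as np

def walk_similarity(A1, A2, lam=0.1, max_len=4):
    """Weighted sum of adjacency-matrix powers of the (Kronecker) product
    graph. Entry ((i,j),(k,l)) accumulates pairs of equal-length walks
    i->k in G1 and j->l in G2, damped by lam**length."""
    Ax = np.kron(A1, A2)                    # product-graph adjacency
    S = np.zeros_like(Ax, dtype=float)
    P = np.eye(Ax.shape[0])
    for k in range(1, max_len + 1):
        P = P @ Ax                           # walks of length exactly k
        S += (lam ** k) * P                  # damped so the sum stays bounded
    return S
```

High entries of `S` mark node pairs joined by many similar walks in both operand graphs, which is the signal used to locate the pattern inside the target.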
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
Bart Lamiroy; Jean-Marc Ogier |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-662-44853-3 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.077 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DLB2014 |
Serial |
2698 |
|
|
|
|
|
Author |
Diego Cheda; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Pedestrian Candidates Generation using Monocular Cues |
Type |
Conference Article |
|
Year |
2012 |
Publication |
IEEE Intelligent Vehicles Symposium |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
7-12 |
|
|
Keywords |
pedestrian detection |
|
|
Abstract |
Common techniques for pedestrian candidate generation (e.g., sliding window approaches) are based on an exhaustive search over the image. This implies that the number of windows produced is huge, which translates into significant time consumption in the classification stage. In this paper, we propose a method that significantly reduces the number of windows to be considered by a classifier. Our method is a monocular one that exploits the geometric and depth information available in single images. Both representations of the world are fused together to generate pedestrian candidates based on an underlying model which focuses only on objects standing vertically on the ground plane and having a height consistent with their depth in the scene. We evaluate our algorithm on a challenging dataset and demonstrate its application to pedestrian detection, where a considerable reduction in the number of candidate windows is achieved. |
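The geometry that ties window size to scene depth can be sketched with a standard flat-world pinhole model (a generic illustration, not the paper's model; function names and the 1.7 m default are my assumptions):

```python
def ground_plane_depth(v_bottom, f_px, cam_height_m, horizon_v):
    """Flat-world pinhole model: depth of a ground-plane point imaged at
    row v_bottom (image rows grow downward; horizon_v is the horizon row)."""
    return f_px * cam_height_m / (v_bottom - horizon_v)

def candidate_height_px(v_bottom, f_px, cam_height_m, horizon_v, person_h_m=1.7):
    """Expected pixel height of a person standing at that ground point:
    h_px = f * H / Z, so only one window scale is plausible per image row."""
    depth = ground_plane_depth(v_bottom, f_px, cam_height_m, horizon_v)
    return f_px * person_h_m / depth
```

Because each image row below the horizon maps to one depth, only a narrow band of window scales is geometrically plausible there, which is what prunes the exhaustive sliding-window search.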
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE Xplore |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1931-0587 |
ISBN |
978-1-4673-2119-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IV |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ CPL2012c; ADAS @ adas @ cpl2012d |
Serial |
2013 |
|
|
|
|
|
Author |
Ivet Rafegas; Maria Vanrell |
|
|
Title |
Color encoding in biologically-inspired convolutional neural networks |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Vision Research |
Abbreviated Journal |
VR |
|
|
Volume |
151 |
Issue |
|
Pages |
7-17 |
|
|
Keywords |
Color coding; Computer vision; Deep learning; Convolutional neural networks |
|
|
Abstract |
Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. We do so by estimating a color selectivity index for each neuron, which describes the activity of the neuron in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a double color. We have determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5, color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representations are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations. |
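A selectivity index of this kind is typically a normalized activation contrast; the abstract does not give the paper's exact definition, so the formula below is only an illustrative stand-in (name, signature, and normalization are my assumptions): compare a neuron's activation to a stimulus against its activation to the desaturated version.

```python
def color_selectivity_index(act_color, act_gray, eps=1e-8):
    """Illustrative index (not necessarily the paper's definition): relative
    drop in a neuron's activation when its preferred stimulus is desaturated.
    Near 1 -> strongly color selective; near 0 -> color invariant."""
    return (act_color - act_gray) / (act_color + act_gray + eps)
```

Thresholding such an index is what lets each of the network's neurons be labeled color selective or not before the axes of opponency are analyzed.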
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC; 600.051; 600.087 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RaV2018 |
Serial |
3114 |
|
|
|
|
|
Author |
Mohamed Ali Souibgui; Pau Torras; Jialuo Chen; Alicia Fornes |
|
|
Title |
An Evaluation of Handwritten Text Recognition Methods for Historical Ciphered Manuscripts |
Type |
Conference Article |
|
Year |
2023 |
Publication |
7th International Workshop on Historical Document Imaging and Processing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
7-12 |
|
|
Keywords |
|
|
|
Abstract |
This paper investigates the effectiveness of different deep learning HTR families, including LSTM, Seq2Seq, and transformer-based approaches with self-supervised pretraining, in recognizing ciphered manuscripts from different historical periods and cultures. The goal is to identify the most suitable method or training techniques for recognizing ciphered manuscripts and to provide insights into the challenges and opportunities in this field of research. We evaluate the performance of these models on several datasets of ciphered manuscripts and discuss their results. This study contributes to the development of more accurate and efficient methods for recognizing historical manuscripts for the preservation and dissemination of our cultural heritage. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
HIP |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ STC2023 |
Serial |
3849 |
|
|
|
|
|
Author |
Victor Ponce; Mario Gorga; Xavier Baro; Petia Radeva; Sergio Escalera |
|
|
Title |
Analisis de la Expresion Oral y Gestual en Proyectos Fin de Carrera Via un Sistema de Vision Artificial |
Type |
Miscellaneous |
|
Year |
2011 |
Publication |
Revista electronica de la asociacion de enseñantes universitarios de la informatica AENUI |
Abbreviated Journal |
ReVision |
|
|
Volume |
4 |
Issue |
1 |
Pages |
8-18 |
|
|
Keywords |
|
|
|
Abstract |
Oral communication and expression is a competence of special relevance in the EHEA (European Higher Education Area). However, in many higher-education programs the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching-innovation project, we have developed a software tool that extracts objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyzes it with machine learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results obtained show the viability of the system for suggesting factors that contribute both to the success of the communication and to the evaluation criteria. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1989-1199 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB;HuPBA;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ PGB2011c |
Serial |
1783 |
|
|
|
|
|
Author |
Fernando Vilariño; Panagiota Spyridonos; Jordi Vitria; Fernando Azpiroz; Petia Radeva |
|
|
Title |
Cascade analysis for intestinal contraction detection |
Type |
Conference Article |
|
Year |
2006 |
Publication |
20th International Congress and exhibition Computer Assisted Radiology and Surgery |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
9-10 |
|
|
Keywords |
Intestine video analysis; Anisotropic features; Support Vector Machine; Cascade of classifiers |
|
|
Abstract |
In this work, we address the study of intestinal contractions with a novel approach based on a machine learning framework to process data from Wireless Capsule Video Endoscopy. Wireless endoscopy represents a unique way to visualize intestine motility by creating long videos of intestinal dynamics. In this paper we argue that, to analyze the huge amount of wireless endoscopy data and to define robust methods for contraction detection, we should base our approach on sophisticated machine learning techniques. In particular, we propose a cascade of classifiers in order to filter out different physiological phenomena and obtain the motility pattern of the small intestine. Our results show high specificity and sensitivity rates, which highlight the efficiency of the selected approach and support the feasibility of the proposed methodology for the automatic detection and analysis of intestinal contractions. |
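The cascade idea the abstract proposes can be sketched generically (a minimal sketch with hypothetical names; the paper's stages are SVM-based and the features are not reproduced here): stages run in sequence, and a frame is rejected as soon as any stage scores it below threshold, so cheap early stages discard most non-contraction frames before costlier ones run.

```python
def cascade_predict(x, stages):
    """A sample is accepted only if every stage accepts it; each stage is
    a (score_fn, threshold) pair applied in order, enabling early rejection."""
    for score_fn, thr in stages:
        if score_fn(x) < thr:
            return False                 # rejected at this stage
    return True                          # survived every stage
```

A toy usage: `stages = [(lambda f: f["motion"], 0.5), (lambda f: f["texture"], 0.3)]` accepts only frames that clear both the motion and the texture stage.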
|
|
Address |
Osaka (Japan) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
CARS |
|
|
Notes |
MV;OR;MILAB;SIAI |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ VSV2006a; IAM @ iam @ VSV2006h |
Serial |
726 |
|
|
|
|
|
Author |
Arnau Ramisa; Shrihari Vasudevan; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras |
|
|
Title |
Evaluation of the SIFT Object Recognition Method in Mobile Robots: Frontiers in Artificial Intelligence and Applications |
Type |
Conference Article |
|
Year |
2009 |
Publication |
12th International Conference of the Catalan Association for Artificial Intelligence |
Abbreviated Journal |
|
|
|
Volume |
202 |
Issue |
|
Pages |
9-18 |
|
|
Keywords |
|
|
|
Abstract |
General object recognition in mobile robots is of primary importance in order to enhance the representation of the environment that robots use for their reasoning processes. We contribute to reducing this gap by evaluating the SIFT Object Recognition method on a challenging dataset, focusing on issues relevant to mobile robotics. The method proved resistant to robotics working conditions, but mainly for well-textured objects. |
|
|
Address |
Cardona, Spain |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0922-6389 |
ISBN |
978-1-60750-061-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CCIA |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RVA2009 |
Serial |
1248 |
|