|
|
Author |
C. Mariño; M.G. Penas; M. Penedo; David Lloret; M.J. Carreira |
|
|
Title |
Integration of Mutual Information and Creaseness Based Methods for the Automatic Registration of SLO Sequences. |
Type |
Miscellaneous |
|
Year |
2001 |
Publication |
Proceedings of the SIARP'2001. |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Brazil. |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
Admin @ si @ MPP2001 |
Serial |
197 |
|
|
|
|
|
Author |
Jorge Bernal; F. Javier Sanchez; Fernando Vilariño |
|
|
Title |
Integration of Valley Orientation Distribution for Polyp Region Identification in Colonoscopy |
Type |
Conference Article |
|
Year |
2011 |
Publication |
MICCAI 2011 Workshop on Computational and Clinical Applications in Abdominal Imaging |
Abbreviated Journal |
|
|
|
Volume |
6668 |
Issue |
|
Pages |
76-83 |
|
|
Keywords |
|
|
|
Abstract |
This work presents a region descriptor based on integrating the information provided by the depth-of-valleys image, which captures the intensity valleys that appear around polyps due to the image acquisition process. Our method defines, for each point, a series of radial sectors around it and accumulates the maxima of the depth-of-valleys image only where the orientation of the intensity valley coincides with the orientation of the corresponding sector. We apply our descriptor to a prior segmentation of the images and present promising results on polyp detection, outperforming other approaches that also integrate depth-of-valleys information. |
|
|
Address |
Toronto, Canada |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Link |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
Lecture Notes in Computer Science |
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
ABI |
|
|
Notes |
MV;SIAI |
Approved |
no |
|
|
Call Number |
IAM @ iam @ BSV2011d |
Serial |
1698 |
|
|
|
|
|
Author |
Sergio Escalera; Ana Puig; Oscar Amoros; Maria Salamo |
|
|
Title |
Intelligent GPGPU Classification in Volume Visualization: a framework based on Error-Correcting Output Codes |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Computer Graphics Forum |
Abbreviated Journal |
CGF |
|
|
Volume |
30 |
Issue |
7 |
Pages |
2107-2115 |
|
|
Keywords |
|
|
|
Abstract |
IF JCR 1.455 (2010), 25/99
In volume visualization, defining the regions of interest is inherently an iterative trial-and-error process of finding the best parameters to classify and render the final image. Generally, the user requires considerable expertise to analyze and edit these parameters through multi-dimensional transfer functions. In this paper, we present a framework of intelligent methods to label multiple regions of interest on demand. These methods form a two-level GPU-based labelling algorithm that computes, at rendering time, a set of labelled structures using the machine-learning Error-Correcting Output Codes (ECOC) framework. In a pre-processing step, ECOC trains a set of AdaBoost binary classifiers from a reduced pre-labelled data set. Then, at the testing stage, each classifier is independently applied to the features of a set of unlabelled samples, and the outputs are combined to perform multi-class labelling. We also propose an alternative representation of these classifiers that allows the testing stage to be highly parallelized. To exploit that parallelism we implemented the testing stage in GPU-OpenCL. Empirical results on different data sets for several volume structures show high computational performance and classification accuracy. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; HuPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ EPA2011 |
Serial |
1881 |
|
|
|
|
|
Author |
S.Grau; Ana Puig; Sergio Escalera; Maria Salamo |
|
|
Title |
Intelligent Interactive Volume Classification |
Type |
Conference Article |
|
Year |
2013 |
Publication |
Pacific Graphics |
Abbreviated Journal |
|
|
|
Volume |
32 |
Issue |
7 |
Pages |
23-28 |
|
|
Keywords |
|
|
|
Abstract |
This paper defines an intelligent and interactive framework to classify multiple regions of interest from the original data on demand, without requiring any preprocessing or previous segmentation. The proposed approach is divided into three stages: visualization, training and testing. First, users visualize and label some samples directly on slices of the volume. Training and testing are based on a framework of Error-Correcting Output Codes and AdaBoost classifiers that learn to classify each region the user has painted. Later, at the testing stage, each classifier is applied directly to the rest of the samples, and the outputs are combined to perform multi-class labeling, which is used in the final rendering. We also parallelized the training stage using a GPU-based implementation to obtain rapid interaction and classification. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-905674-50-7 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
PG |
|
|
Notes |
HuPBA; 600.046;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ GPE2013b |
Serial |
2355 |
|
|
|
|
|
Author |
Zhijie Fang; Antonio Lopez |
|
|
Title |
Intention Recognition of Pedestrians and Cyclists by 2D Pose Estimation |
Type |
Journal Article |
|
Year |
2019 |
Publication |
IEEE Transactions on Intelligent Transportation Systems |
Abbreviated Journal |
TITS |
|
|
Volume |
21 |
Issue |
11 |
Pages |
4773-4783 |
|
|
Keywords |
|
|
|
Abstract |
Anticipating the intentions of vulnerable road users (VRUs) such as pedestrians and cyclists is critical for performing safe and comfortable driving maneuvers. This is the case for human driving and, thus, should be taken into account by systems providing any level of driving assistance, from advanced driver assistance systems (ADAS) to fully autonomous vehicles (AVs). In this paper, we show how the latest advances in monocular vision-based human pose estimation, i.e. those relying on deep Convolutional Neural Networks (CNNs), enable the recognition of such VRUs' intentions. In the case of cyclists, we assume that they follow traffic rules and indicate future maneuvers with arm signals. In the case of pedestrians, no such indications can be assumed. Instead, we hypothesize that a pedestrian's walking pattern allows us to determine whether he/she intends to cross the road in the path of the ego-vehicle, so that the ego-vehicle must maneuver accordingly (e.g. slowing down or stopping). We show how the same methodology can be used for recognizing both pedestrians' and cyclists' intentions. For pedestrians, we perform experiments on the JAAD dataset. For cyclists, we did not find an analogous dataset, so we created our own by acquiring and annotating videos, which we share with the research community. Overall, the proposed pipeline provides new state-of-the-art results on the intention recognition of VRUs. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ FaL2019 |
Serial |
3305 |
|
|
|
|
|
Author |
Jordi Roca; A.Owen; G.Jordan; Y.Ling; C. Alejandro Parraga; A.Hurlbert |
|
|
Title |
Inter-individual Variations in Color Naming and the Structure of 3D Color Space |
Type |
Abstract |
|
Year |
2011 |
Publication |
Journal of Vision |
Abbreviated Journal |
VSS |
|
|
Volume |
12 |
Issue |
2 |
Pages |
166 |
|
|
Keywords |
|
|
|
Abstract |
36.307
Many everyday behavioural uses of color vision depend on color naming ability, which is neither measured nor predicted by most standardized tests of color vision, for either normal or anomalous color vision. Here we demonstrate a new method to quantify color naming ability by deriving a compact computational description of individual 3D color spaces. Methods: Individual observers underwent standardized color vision diagnostic tests (including anomaloscope testing) and a series of custom-made color naming tasks using 500 distinct color samples, either CRT stimuli (“light”-based) or Munsell chips (“surface”-based), with both forced- and free-choice color naming paradigms. For each subject, we defined his/her color solid as the set of 3D convex hulls computed for each basic color category from the relevant collection of categorised points in perceptually uniform CIELAB space. From the parameters of the convex hulls, we derived several indices to characterise the 3D structure of the color solid and its inter-individual variations. Using a reference group of 25 normal trichromats (NT), we defined the degree of normality for the shape, location and overlap of each color region, and the extent of “light”-“surface” agreement. Results: Certain features of color perception emerge from analysis of the average NT color solid, e.g.: (1) the white category is slightly shifted towards blue; and (2) the variability in category border location across NT subjects is asymmetric across color space, with least variability in the blue/green region. Comparisons between individual and average NT indices reveal specific naming “deficits”, e.g.: (1) Category volumes for white, green, brown and grey are expanded for anomalous trichromats and dichromats; and (2) the focal structure of color space is disrupted more in protanopia than other forms of anomalous color vision. 
The indices both capture the structure of subjective color spaces and allow us to quantify inter-individual differences in color naming ability. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1534-7362 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ ROJ2011 |
Serial |
1758 |
|
|
|
|
|
Author |
Ernest Valveny; Oriol Ramos Terrades; Joan Mas; Marçal Rusiñol |
|
|
Title |
Interactive Document Retrieval and Classification. |
Type |
Book Chapter |
|
Year |
2013 |
Publication |
Multimodal Interaction in Image and Video Applications |
Abbreviated Journal |
|
|
|
Volume |
48 |
Issue |
|
Pages |
17-30 |
|
|
Keywords |
|
|
|
Abstract |
In this chapter we describe a system for document retrieval and classification following the interactive-predictive framework. In particular, the system addresses two different scenarios of document analysis: document classification based on visual appearance, and logo detection. These two classical problems of document analysis are formulated following the interactive-predictive model, taking user interaction into account to make the process of annotating and labelling documents easier. A system implementing this model in a real scenario is presented and analyzed. This system also takes advantage of active learning techniques to speed up the task of labelling documents. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
Angel Sappa; Jordi Vitria |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1868-4394 |
ISBN |
978-3-642-35931-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ VRM2013 |
Serial |
2341 |
|
|
|
|
|
Author |
Michal Drozdzal; Santiago Segui; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria; Petia Radeva |
|
|
Title |
Interactive Labeling of WCE Images |
Type |
Conference Article |
|
Year |
2011 |
Publication |
5th Iberian Conference on Pattern Recognition and Image Analysis |
Abbreviated Journal |
|
|
|
Volume |
6669 |
Issue |
|
Pages |
143-150 |
|
|
Keywords |
|
|
|
Abstract |
A high-quality labeled training set is necessary for any supervised machine learning algorithm. Labeling the data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example of such data is video from Wireless Capsule Endoscopy (WCE). Building a representative WCE data set requires many videos to be labeled by an expert. The problem that arises is the diversity, in feature space, of data from different WCE studies: when new data arrive, it is highly probable that they are not represented in the training set, giving a high probability of error when applying machine learning schemes. In this paper an interactive labeling scheme that reduces expert effort in the labeling process is presented. It is shown that the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of a WCE video with fewer than 100 clicks. |
|
|
Address |
Las Palmas de Gran Canaria. Spain |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
Jordi Vitria; João Miguel Raposo Sanches; Mario Hernández |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IbPRIA |
|
|
Notes |
MILAB;OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ DSM2011 |
Serial |
1734 |
|
|
|
|
|
Author |
Oriol Ramos Terrades; Alejandro Hector Toselli; Nicolas Serrano; Veronica Romero; Enrique Vidal; Alfons Juan |
|
|
Title |
Interactive layout analysis and transcription systems for historic handwritten documents |
Type |
Conference Article |
|
Year |
2010 |
Publication |
10th ACM Symposium on Document Engineering |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
219–222 |
|
|
Keywords |
Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis |
|
|
Abstract |
The amount of digitized legacy documents has been rising dramatically over recent years, mainly due to the increasing number of on-line digital libraries publishing this kind of document, which waits to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct their results. In contrast, multimodal interactive-predictive approaches allow users to participate in the process, helping the system to improve overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection, and two multimodal interactive handwritten text transcription systems which use active learning and interactive-predictive technologies in the recognition process. |
|
|
Address |
Manchester, United Kingdom |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACM |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ RTS2010 |
Serial |
1857 |
|
|
|
|
|
Author |
Marçal Rusiñol; David Aldavert; Dimosthenis Karatzas; Ricardo Toledo; Josep Llados |
|
|
Title |
Interactive Trademark Image Retrieval by Fusing Semantic and Visual Content |
Type |
Conference Article |
|
Year |
2011 |
Publication |
33rd European Conference on Information Retrieval |
Abbreviated Journal |
|
|
|
Volume |
6611 |
Issue |
|
Pages |
314-325 |
|
|
Keywords |
|
|
|
Abstract |
In this paper we propose an efficient query-by-example retrieval system which is able to retrieve trademark images by similarity from the digital libraries of patent and trademark offices. Logo images are described by both their semantic content, by means of the Vienna codes, and their visual content, using shape and color as visual cues. The trademark descriptors are then indexed by a locality-sensitive hashing data structure, aiming to perform approximate k-NN search in high-dimensional spaces in sub-linear time. The resulting ranked lists are combined using the Condorcet method, and a relevance-feedback step helps to iteratively revise the query and refine the obtained results. The experiments demonstrate the effectiveness and efficiency of this system on a realistic and large dataset. |
|
|
Address |
Dublin, Ireland |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
Berlin |
Editor |
P. Clough; C. Foley; C. Gurrin; G.J.F. Jones; W. Kraaij; H. Lee; V. Murdoch |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-642-20160-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECIR |
|
|
Notes |
DAG; RV;ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RAK2011 |
Serial |
1737 |
|
|
|
|
|
Author |
David Vazquez; Antonio Lopez; Daniel Ponsa; David Geronimo |
|
|
Title |
Interactive Training of Human Detectors |
Type |
Book Chapter |
|
Year |
2013 |
Publication |
Multimodal Interaction in Image and Video Applications |
Abbreviated Journal |
|
|
|
Volume |
48 |
Issue |
|
Pages |
169-182 |
|
|
Keywords |
Pedestrian Detection; Virtual World; AdaBoost; Domain Adaptation |
|
|
Abstract |
Image-based human detection remains a challenging problem. The most promising detectors rely on classifiers trained with labelled samples. However, labelling is a labor-intensive manual step. To overcome this problem we propose to collect images of pedestrians from a virtual city, i.e., with automatic labels, and train a pedestrian detector with them, which works well when the virtual-world data are similar to the testing data, i.e., real-world pedestrians in urban areas. When testing data are acquired under different conditions than the training data, e.g., human detection in personal photo albums, dataset shift appears. In previous work, we cast this problem as one of domain adaptation and solved it with an active learning procedure. In this work, we focus on the same problem but evaluate a different set of faster-to-compute features, i.e., Haar, EOH and their combination. In particular, we train a classifier with virtual-world data, using such features and Real AdaBoost as the learning machine. This classifier is applied to real-world training images. Then, a human oracle interactively corrects the wrong detections, i.e., a few missed detections are manually annotated and some false positives are pointed out too. A low amount of manual annotation is fixed as a restriction. Difficult real- and virtual-world samples are combined within what we call the cool world, and we retrain the classifier with these data. Our experiments show that this adapted classifier is equivalent to one trained with only real-world data but requires 90% fewer manual annotations. |
|
|
Address |
Springer Heidelberg New York Dordrecht London |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1868-4394 |
ISBN |
978-3-642-35931-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.057; 600.054; 605.203 |
Approved |
no |
|
|
Call Number |
VLP2013; ADAS @ adas @ vlp2013 |
Serial |
2193 |
|
|
|
|
|
Author |
Joost Van de Weijer; Fahad Shahbaz Khan; Marc Masana |
|
|
Title |
Interactive Visual and Semantic Image Retrieval |
Type |
Book Chapter |
|
Year |
2013 |
Publication |
Multimodal Interaction in Image and Video Applications |
Abbreviated Journal |
|
|
|
Volume |
48 |
Issue |
|
Pages |
31-35 |
|
|
Keywords |
|
|
|
Abstract |
One direct consequence of recent advances in digital visual data generation, and the direct availability of this information through the World-Wide Web, is an urgent demand for efficient image retrieval systems. The objective of image retrieval is to allow users to efficiently browse through this abundance of images. Due to the non-expert nature of the majority of internet users, such systems should be user friendly and therefore avoid complex user interfaces. In this chapter we investigate how high-level information provided by recently developed object recognition techniques can improve interactive image retrieval. We apply a bag-of-words based image representation method to automatically classify images into a number of categories. These additional labels are then applied to improve the image retrieval system. In addition to these high-level semantic labels, we also apply a low-level image description to describe the composition and color scheme of the scene. Both descriptions are incorporated in a user-feedback image retrieval setting. The main objective is to show that automatic labeling of images with semantic labels can improve image retrieval results. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
Angel Sappa; Jordi Vitria |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1868-4394 |
ISBN |
978-3-642-35931-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC; 605.203; 600.048 |
Approved |
no |
|
|
Call Number |
Admin @ si @ WKC2013 |
Serial |
2284 |
|
|
|
|
|
Author |
Oriol Ramos Terrades; N. Serrano; Albert Gordo; Ernest Valveny; Alfons Juan-Ciscar |
|
|
Title |
Interactive-predictive detection of handwritten text blocks |
Type |
Conference Article |
|
Year |
2010 |
Publication |
17th Document Recognition and Retrieval Conference, part of the IS&T-SPIE Electronic Imaging Symposium |
Abbreviated Journal |
|
|
|
Volume |
7534 |
Issue |
|
Pages |
75340Q–75340Q–10 |
|
|
Keywords |
|
|
|
Abstract |
A method for text block detection is introduced for old handwritten documents. The proposed method takes advantage of sequential book structure, taking into account layout information from pages previously transcribed. This glance at the past is used to predict the position of text blocks in the current page with the help of conventional layout analysis methods. The method is integrated into the GIDOC prototype: a first attempt to provide integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. Results are given in a transcription task on a 764-page Spanish manuscript from 1891. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
DRR |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
DAG @ dag @ TSG2010 |
Serial |
1479 |
|
|
|
|
|
Author |
J.A.Perez; Enric Marti; Juan J.Villanueva |
|
|
Title |
Interfase de Usuario de Entrada de Datos 3D en un CAD de Cartografía Urbana a partir de Pares Estereoscópicos |
Type |
Conference Article |
|
Year |
1992 |
Publication |
II Congreso Español de Informática Gráfica |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
47-60 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CEIG |
|
|
Notes |
IAM;ISE |
Approved |
no |
|
|
Call Number |
IAM @ iam @ PVM1992 |
Serial |
1624 |
|
|
|
|
|
Author |
David Rotger; Petia Radeva; J. Mauri; E Fernandez-Nofrerias |
|
|
Title |
Internal and External Coronary Vessel Images Registration. |
Type |
Miscellaneous |
|
Year |
2002 |
Publication |
|
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RRM2002b |
Serial |
318 |
|