Records
Author: N. Serrano; L. Tarazon; D. Perez; Oriol Ramos Terrades; S. Juan
Title: The GIDOC Prototype
Type: Conference Article
Year: 2010
Publication: 10th International Workshop on Pattern Recognition in Information Systems
Pages: 82-89
Abstract: Transcription of handwritten text in (old) documents is an important, time-consuming task for digital libraries. It might be carried out by first processing all document images off-line and then manually supervising the system transcriptions to edit incorrect parts. However, current techniques for automatic page layout analysis, text line detection and handwriting recognition are still far from perfect, and thus post-editing the system output is not clearly better than simply ignoring it. A more effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the user and the user is assisted by the system to complete the transcription task as efficiently as possible. Following this approach, a system prototype called GIDOC (Gimp-based Interactive transcription of old text DOCuments) has been developed to provide user-friendly, integrated support for interactive-predictive layout analysis, line detection and handwriting transcription. GIDOC is designed to work with (large) collections of homogeneous documents, that is, documents of similar structure and writing styles. They are annotated sequentially, by (partially) supervising hypotheses drawn from statistical models that are constantly updated with the increasing number of available annotated documents, and this is done at different annotation levels. For instance, at the level of page layout analysis, GIDOC uses a novel text block detection method in which conventional, memoryless techniques are improved with a “history” model of text block positions. Similarly, at the level of text line image transcription, GIDOC includes a handwriting recognizer which is steadily improved with a growing number of (partially) supervised transcriptions.
Address: Funchal, Portugal
ISBN: 978-989-8425-14-0
Conference: PRIS
Notes: DAG
Approved: no
Call Number: Admin @ si @ STP2010
Serial: 1868

Author: Carlo Gatta; Simone Balocco; Francesco Ciompi; R. Hemetsberger; Oriol Rodriguez-Leor; Petia Radeva
Title: Real-time gating of IVUS sequences based on motion blur analysis: Method and quantitative validation
Type: Conference Article
Year: 2010
Publication: 13th International Conference on Medical Image Computing and Computer-Assisted Intervention
Volume: II
Pages: 59-67
Abstract: Intravascular Ultrasound (IVUS) is an image-guiding technique for cardiovascular diagnosis, providing cross-sectional images of vessels. During the acquisition, the catheter is pulled back (pullback) at a constant speed in order to acquire spatially subsequent images of the artery. However, during this procedure, the heart twist produces a swinging fluctuation of the probe position along the vessel axis. In this paper we propose a real-time gating algorithm based on the analysis of motion blur variations during the IVUS sequence. Quantitative tests performed on an in-vitro ground-truth database show that our method is superior to state-of-the-art algorithms in both computational speed and accuracy.
Publisher: Springer-Verlag Berlin
Conference: MICCAI
Notes: MILAB
Approved: no
Call Number: BCNPCL @ bcnpcl @ GBC2010
Serial: 1447

Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: 3D Scene Priors for Road Detection
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 57-64
Keywords: road detection
Abstract: Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features, and they assume structured roads, road homogeneity and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. The contextual cues used include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered weak cues; therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types and scenarios (tunnels, urban and highway). Further, using the combined cues outperforms all individual cues. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.
Address: San Francisco, CA, USA; June 2010
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS; ISE
Approved: no
Call Number: ADAS @ adas @ AGL2010a
Serial: 1302

Author: C. Alejandro Parraga; Ramon Baldrich; Maria Vanrell
Title: Accurate Mapping of Natural Scenes Radiance to Cone Activation Space: A New Image Dataset
Type: Conference Article
Year: 2010
Publication: 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science
Pages: 50-57
Abstract: The characterization of trichromatic cameras is usually done in terms of a device-independent color space, such as the CIE 1931 XYZ space. This is indeed convenient since it allows the testing of results against colorimetric measures. We have characterized our camera to represent human cone activation by mapping the camera sensor's (RGB) responses to human (LMS) responses through a polynomial transformation, which can be “customized” according to the types of scenes we want to represent. Here we present a method to test the accuracy of the camera measures and a study of how the choice of training reflectances for the polynomial may alter the results.
Address: Joensuu, Finland
ISBN: 9781617388897
Conference: CGIV/MCS
Notes: CIC
Approved: no
Call Number: CAT @ cat @ PBV2010a
Serial: 1322

Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: Learning photometric invariance for object detection
Type: Journal Article
Year: 2010
Publication: International Journal of Computer Vision
Abbreviated Journal: IJCV
Volume: 90
Issue: 1
Pages: 45-61
Keywords: road detection
Abstract: [Impact factor: 3.508 (the last available from JCR2009SCI); position 4/103 in the category Computer Science, Artificial Intelligence.] Color is a powerful visual cue in many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restricted to model real-world scenes, in which different reflectance mechanisms can hold simultaneously. Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant set of color models is computed, composed of both color variants and invariants. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time. Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and outperforms state-of-the-art detection techniques in the field of object, skin and road recognition. Considering sequential data, the proposed method (extended to deal with future observations) outperforms the other methods.
Publisher: Springer US
ISSN: 0920-5691
Notes: ADAS; ISE
Approved: no
Call Number: ADAS @ adas @ AGL2010c
Serial: 1451

Author: Santiago Segui; Michal Drozdzal; Petia Radeva; Jordi Vitria
Title: Severe Motility Diagnosis using WCE
Type: Conference Article
Year: 2010
Publication: Medical Image Computing in Catalunya: Graduate Student Workshop
Pages: 45-46
Address: Girona, Spain
Conference: MICCAT
Notes: OR; MILAB; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ SDR2010
Serial: 1478

Author: Pierluigi Casale; Oriol Pujol; Petia Radeva
Title: Embedding Random Projections in Regularized Gradient Boosting Machines
Type: Conference Article
Year: 2010
Publication: Supervised and Unsupervised Ensemble Methods and their Applications, at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
Pages: 44-53
Address: Barcelona, Spain
Conference: SUEMA
Notes: MILAB; HUPBA
Approved: no
Call Number: BCNPCL @ bcnpcl @ CPR2010c
Serial: 1466

Author: Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera
Title: Deteccion automatica de la dominancia en conversaciones diadicas [Automatic detection of dominance in dyadic conversations]
Type: Journal Article
Year: 2010
Publication: Escritos de Psicologia
Abbreviated Journal: EP
Volume: 3
Issue: 2
Pages: 41-45
Keywords: Dominance detection; Non-verbal communication; Visual features
Abstract: Dominance refers to the level of influence a person has in a conversation. Dominance is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers by categorizing the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, these indicators are automatically extracted from video sequences and learnt using binary classifiers. Results from the three analyses show a high correlation and allow the categorization of dominant people in public discussion video sequences.
ISSN: 1989-3809
Notes: HUPBA; OR; MILAB; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ EMV2010
Serial: 1315

Author: Salim Jouili; Salvatore Tabbone; Ernest Valveny
Title: Comparing Graph Similarity Measures for Graphical Recognition
Type: Book Chapter
Year: 2010
Publication: Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers
Volume: 6020
Pages: 37-48
Abstract: In this paper we evaluate four graph distance measures. The analysis is performed for document retrieval tasks. To this end, different kinds of documents are used, including line drawings (symbols), ancient documents (ornamental letters), shapes and trademark logos. The experimental results show that the performance of each graph distance measure depends on the kind of data and the graph representation technique.
Publisher: Springer Berlin Heidelberg
Abbreviated Series Title: LNCS
ISSN: 0302-9743
ISBN: 978-3-642-13727-3
Conference: GREC
Notes: DAG
Approved: no
Call Number: Admin @ si @ JTV2010
Serial: 2404

Author: Miguel Reyes; Jordi Vitria; Petia Radeva; Sergio Escalera
Title: Real-time Activity Monitoring of Inpatients
Type: Conference Article
Year: 2010
Publication: Medical Image Computing in Catalunya: Graduate Student Workshop
Pages: 35-36
Abstract: In this paper, we present the development of an application capable of monitoring a set of patient vital signs in real time. The application has been designed to support the medical staff of a hospital. Preliminary results show the suitability of the system for preventing injuries caused by patient agitation.
Address: Girona
Conference: MICCAT
Notes: OR; MILAB; HUPBA; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RVR2010
Serial: 1477

Author: Javier Vazquez; G. D. Finlayson; Maria Vanrell
Title: A compact singularity function to predict WCS data and unique hues
Type: Conference Article
Year: 2010
Publication: 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science
Pages: 33-38
Abstract: Understanding how colour is used by the human visual system is a widely studied research field. The field, though quite advanced, still faces important unanswered questions. One of them is the explanation of unique hues and the assignment of colour names, a problem that concerns the different perceptual status of different colours. Recently, Philipona and O'Regan proposed a biological model that allows the reflection properties of any surface to be extracted independently of the lighting conditions. These invariant properties are the basis for computing a singularity index that predicts the asymmetries present in psychophysical data on unique hues and basic colour categories, thereby taking a further step towards their explanation. In this paper we build on their formulation and propose a new singularity index. This new formulation equally accounts for the location of the four peaks of the World Colour Survey and has two main advantages. First, it is a simple, elegant numerical measure (the Philipona measure is a rather cumbersome formula). Second, we develop a colour-based explanation for the measure.
Address: Joensuu, Finland
ISBN: 9781617388897
Conference: CGIV/MCS
Notes: CIC
Approved: no
Call Number: CAT @ cat @ VFV2010
Serial: 1324

Author: Antonio Hernandez; Miguel Reyes; Sergio Escalera; Petia Radeva
Title: Spatio-Temporal GrabCut human segmentation for face and pose recovery
Type: Conference Article
Year: 2010
Publication: IEEE International Workshop on Analysis and Modeling of Faces and Gestures
Pages: 33-40
Abstract: In this paper, we present a fully automatic Spatio-Temporal GrabCut human segmentation methodology. GrabCut initialization is performed by HOG-based subject detection, face detection, and a skin color model for seed initialization. Spatial information is included by means of Mean Shift clustering, whereas temporal coherence is considered through the history of Gaussian Mixture Models. Moreover, human segmentation is combined with Shape and Active Appearance Models to perform full face and pose recovery. Results on public datasets as well as our own human action database show robust segmentation and recovery of both face and pose using the presented methodology.
Address: San Francisco, CA, USA; June 2010
ISSN: 2160-7508
ISBN: 978-1-4244-7029-7
Conference: AMFG
Notes: MILAB; HUPBA
Approved: no
Call Number: BCNPCL @ bcnpcl @ HRE2010
Serial: 1362

Author: Fadi Dornaika; Bogdan Raducanu
Title: Single Snapshot 3D Head Pose Initialization for Tracking in Human Robot Interaction Scenario
Type: Conference Article
Year: 2010
Publication: 1st International Workshop on Computer Vision for Human-Robot Interaction
Pages: 32-39
Keywords: 1st International Workshop on Computer Vision for Human-Robot Interaction, in conjunction with IEEE CVPR 2010
Abstract: This paper presents an automatic 3D head pose initialization scheme for a real-time face tracker, with application to human-robot interaction. It has two main contributions. First, we propose automatic 3D head pose and person-specific face shape estimation based on a 3D deformable model. The proposed approach serves to initialize our real-time 3D face tracker. What makes this contribution very attractive is that the initialization step can cope with faces under arbitrary pose, so it is not limited to near-frontal views. Second, the previous framework is used to develop an application in which the orientation of an AIBO's camera can be controlled through the imitation of the user's head pose. In our scenario, this application is used to build panoramic images from overlapping snapshots. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Address: San Francisco, CA, USA; June 2010
ISSN: 2160-7508
ISBN: 978-1-4244-7029-7
Conference: CVPRW
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2010a
Serial: 1309

Author: Cesar Isaza; Joaquin Salas; Bogdan Raducanu
Title: Toward the Detection of Urban Infrastructures Edge Shadows
Type: Conference Article
Year: 2010
Publication: 12th International Conference on Advanced Concepts for Intelligent Vision Systems
Volume: 6474
Issue: I
Pages: 30-37
Abstract: In this paper, we propose a novel technique to detect the shadows cast by urban infrastructure, such as buildings, billboards, and traffic signs, using a sequence of images taken from a fixed camera. In our approach, we compute two different background models in parallel: one for the edges and one for the reflected light intensity. An algorithm is proposed to train the system to distinguish between moving edges in general and edges that belong to static objects, creating an edge background model. Then, during operation, a background intensity model allows us to separate moving from static objects. The edges included in the moving objects and those that belong to the edge background model are subtracted from the current image edges; the remaining edges are the ones cast by urban infrastructure. Our method is tested on a typical crossroad scene, and the results show that the approach is sound and promising.
Address: Sydney, Australia
Publisher: Springer Berlin Heidelberg
Editor: Blanc-Talon et al. (eds.)
Abbreviated Series Title: LNCS
ISSN: 0302-9743
ISBN: 978-3-642-17687-6
Conference: ACIVS
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ ISR2010
Serial: 1458

Author: Jaume Gibert; Ernest Valveny; Horst Bunke
Title: Graph of Words Embedding for Molecular Structure-Activity Relationship Analysis
Type: Conference Article
Year: 2010
Publication: 15th Iberoamerican Congress on Pattern Recognition
Volume: 6419
Pages: 30-37
Abstract: Structure-activity relationship analysis aims at discovering the chemical activity of molecular compounds based on their structure. In this article we make use of a particular graph representation of molecules and propose a new graph embedding procedure to solve the problem of structure-activity relationship analysis. The embedding is essentially an arrangement of a molecule in the form of a vector built from the frequencies of the atoms that appear and the frequencies of the covalent bonds between them. Results on two benchmark databases show the effectiveness of the proposed technique in terms of recognition accuracy while avoiding high operational costs in the transformation.
Address: Sao Paulo, Brazil
Abbreviated Series Title: LNCS
ISSN: 0302-9743
ISBN: 978-3-642-16686-0
Conference: CIARP
Notes: DAG
Approved: no
Call Number: DAG @ dag @ GVB2010
Serial: 1462