Records
Author | Simone Balocco; Carlo Gatta; Francesco Ciompi; A. Wahle; Petia Radeva; S. Carlier; G. Unal; E. Sanidas; F. Mauri; X. Carillo; T. Kovarnik; C. Wang; H. Chen; T. P. Exarchos; D. I. Fotiadis; F. Destrempes; G. Cloutier; Oriol Pujol; Marina Alberti; E. G. Mendizabal-Ruiz; M. Rivera; T. Aksoy; R. W. Downe; I. A. Kakadiaris | ||||
Title | Standardized evaluation methodology and reference database for evaluating IVUS image segmentation | Type | Journal Article | ||
Year | 2014 | Publication | Computerized Medical Imaging and Graphics | Abbreviated Journal | CMIG |
Volume | 38 | Issue | 2 | Pages | 70-90 |
Keywords | IVUS (intravascular ultrasound); Evaluation framework; Algorithm comparison; Image segmentation | ||||
Abstract | This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. The framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight participating teams.
We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard, and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that semi-automatic methods can segment the vessel lumen and media with an accuracy comparable to manual annotation, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; LAMP; HuPBA; 600.046; 600.063; 600.079 | Approved | no | ||
Call Number | Admin @ si @ BGC2013 | Serial | 2314 | ||
Permanent link to this record | |||||
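The framework above scores algorithm output against manually traced reference contours. The paper's three specific measures are not reproduced here, but a standard region-overlap score such as the Jaccard index illustrates the kind of quantitative comparison involved (illustrative sketch only):

```python
import numpy as np

def jaccard_index(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Overlap between a predicted and a reference binary segmentation mask."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, ref).sum() / union

# Toy example: two overlapping square "lumen" masks
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(jaccard_index(a, b))  # 9 / 23, roughly 0.391
```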
Author | Francisco Javier Orozco; Ognjen Rudovic; Jordi Gonzalez; Maja Pantic | ||||
Title | Hierarchical On-line Appearance-Based Tracking for 3D Head Pose, Eyebrows, Lips, Eyelids and Irises | Type | Journal Article | ||
Year | 2013 | Publication | Image and Vision Computing | Abbreviated Journal | IMAVIS |
Volume | 31 | Issue | 4 | Pages | 322-340 |
Keywords | On-line appearance models; Levenberg–Marquardt algorithm; Line-search optimization; 3D face tracking; Facial action tracking; Eyelid tracking; Iris tracking | ||||
Abstract | In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can also be used for eyelid and iris tracking, as well as for tracking the 3D head pose and the lip and eyebrow facial actions. Furthermore, our approach applies on-line learning of changes in the appearance of the tracked target. Hence, the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, which are optimized using a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the proposed method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial action tracking in real time. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 605.203; 302.012; 302.018; 600.049 | Approved | no | ||
Call Number | ORG2013 | Serial | 2221 | ||
Permanent link to this record | |||||
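The tracker above optimizes appearance parameters with a Levenberg-Marquardt algorithm enhanced with line-search procedures. A minimal sketch of that idea on a generic least-squares problem (an illustration of the technique, not the paper's implementation; `residual`, `jacobian`, and all constants are placeholders):

```python
import numpy as np

def lm_step_with_linesearch(residual, jacobian, params,
                            lam=1e-3, shrink=0.5, max_backtracks=10):
    """One Levenberg-Marquardt update with a backtracking line search
    along the LM direction (illustrative, not the paper's exact scheme)."""
    r = residual(params)
    J = jacobian(params)
    A = J.T @ J + lam * np.eye(len(params))   # damped normal equations
    delta = np.linalg.solve(A, -J.T @ r)
    cost = 0.5 * r @ r
    step = 1.0
    for _ in range(max_backtracks):           # shrink the step until the cost drops
        trial = params + step * delta
        r_trial = residual(trial)
        if 0.5 * r_trial @ r_trial < cost:
            return trial
        step *= shrink
    return params                              # no improving step found

# Fit y = exp(a*x) to synthetic data with a_true = 0.5
x = np.linspace(0, 1, 20)
y = np.exp(0.5 * x)
res = lambda p: np.exp(p[0] * x) - y
jac = lambda p: (x * np.exp(p[0] * x)).reshape(-1, 1)
p = np.array([0.0])
for _ in range(20):
    p = lm_step_with_linesearch(res, jac, p)
print(p)  # converges near [0.5]
```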
Author | Carles Sanchez; Debora Gil; Antoni Rosell; Albert Andaluz; F. Javier Sanchez | ||||
Title | Segmentation of Tracheal Rings in Videobronchoscopy combining Geometry and Appearance | Type | Conference Article | ||
Year | 2013 | Publication | Proceedings of the International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 1 | Issue | Pages | 153-161 |
Keywords | Video-bronchoscopy, tracheal ring segmentation, trachea geometric and appearance model | ||||
Abstract | Videobronchoscopy is a medical imaging technique that allows interactive navigation inside the respiratory pathways and minimally invasive interventions. Tracheal procedures are ordinary interventions that require measuring the percentage of obstructed pathway for injury (stenosis) assessment. Visual assessment of stenosis in videobronchoscopic sequences requires high expertise in tracheal anatomy and is prone to human error. Accurate detection of tracheal rings is the basis for automated estimation of the size of a stenosed trachea. Processing videobronchoscopic images acquired in the operating room is a challenging task due to the wide range of artifacts and acquisition conditions. We present a geometric and appearance model of tracheal rings for their detection in videobronchoscopic videos. Experiments on sequences acquired in the operating room show a performance close to inter-observer variability. | ||||
Address | Barcelona; February 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | SciTePress | Place of Publication | Portugal | Editor | Sebastiano Battiato and José Braz |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | 978-989-8565-47-1 | Medium | ||
Area | 800 | Expedition | Conference | VISAPP | |
Notes | IAM;MV; 600.044; 600.047; 600.060; 605.203 | Approved | no | ||
Call Number | IAM @ iam @ SGR2013 | Serial | 2123 | ||
Permanent link to this record | |||||
Author | Josep Llados; Marçal Rusiñol; Alicia Fornes; David Fernandez; Anjan Dutta | ||||
Title | On the Influence of Word Representations for Handwritten Word Spotting in Historical Documents | Type | Journal Article | ||
Year | 2012 | Publication | International Journal of Pattern Recognition and Artificial Intelligence | Abbreviated Journal | IJPRAI |
Volume | 26 | Issue | 5 | Pages | 1263002-126027 |
Keywords | Handwriting recognition; word spotting; historical documents; feature representation; shape descriptors | ||||
Abstract | Word spotting is the process of retrieving all instances of a queried keyword from a digital library of document images. In this paper we evaluate the performance of different word descriptors to assess the advantages and disadvantages of statistical and structural models in a framework of query-by-example word spotting in historical documents. We compare four word representation models: sequence alignment using DTW as a baseline reference, a bag-of-visual-words approach as a statistical model, a pseudo-structural model based on a Loci feature representation, and a structural approach where words are represented by graphs. The four approaches have been tested on two collections of historical data: the George Washington database and the marriage records from Barcelona Cathedral. We experimentally demonstrate that statistical representations generally give better performance; however, large descriptors are difficult to implement in a retrieval scenario where word spotting requires indexing millions of word images. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ LRF2012 | Serial | 2128 | ||
Permanent link to this record | |||||
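The baseline representation in the comparison above is sequence alignment with DTW, where each word image becomes a sequence of per-column features and two words are matched by warping one sequence onto the other. A minimal sketch (toy one-dimensional column profiles, not the descriptors used in the paper):

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Dynamic time warping distance between two feature sequences
    (rows = positions along the word, columns = per-column features)."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative-cost table
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)              # length-normalised alignment cost

# Toy column profiles of two "word images" of different widths
a = np.array([[0.], [1.], [1.], [0.]])
b = np.array([[0.], [1.], [0.]])
print(dtw_distance(a, b))  # 0.0: the shorter profile warps onto the longer one
```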
Author | Javier Vazquez; Robert Benavente; Maria Vanrell | ||||
Title | Naming constraints constancy | Type | Conference Article | ||
Year | 2012 | Publication | 2nd Joint AVA / BMVA Meeting on Biological and Machine Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Different studies have shown that languages from industrialized cultures
share a set of 11 basic colour terms: red, green, blue, yellow, pink, purple, brown, orange, black, white, and grey (Berlin & Kay, 1969, Basic Color Terms, University of California Press; Kay & Regier, 2003, PNAS, 100, 9085-9089). Some of these studies have also reported the best representatives or focal values of each colour (Boynton & Olson, 1990, Vision Res., 30, 1311-1317; Sturges & Whitfield, 1995, CRA, 20:6, 364-376). Further studies have provided fuzzy datasets for colour naming by asking human observers to rate colours in terms of membership values (Benavente et al., 2006, CRA, 31:1, 48-56). Recently, a computational model based on these human ratings has been developed (Benavente et al., 2008, JOSA-A, 25:10, 2582-2593). This computational model follows a fuzzy approach to assign a colour name to a particular RGB value. For example, a pixel with a value of (255,0,0) will be named 'red' with membership 1, while a cyan pixel with an RGB value of (0,200,200) will be considered 0.5 green and 0.5 blue. In this work, we show how this colour naming paradigm can be applied to different computer vision tasks. In particular, we report results in colour constancy (Vazquez-Corral et al., 2012, IEEE TIP, in press) showing that the classical constraints on either illumination or surface reflectance can be substituted by the statistical properties encoded in the colour names. [Supported by projects TIN2010-21771-C02-1, CSD2007-00018.] |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | AVA | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ VBV2012 | Serial | 2131 | ||
Permanent link to this record | |||||
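The abstract's example, where (255,0,0) is named 'red' with membership 1 and cyan (0,200,200) splits evenly between green and blue, can be mimicked with a toy distance-based fuzzy naming scheme. The published model fits fuzzy sets to human ratings; the focal colours and Gaussian falloff below are assumptions made only for illustration:

```python
import numpy as np

# Hypothetical focal RGB values for six of the eleven basic terms; the
# published model instead fits fuzzy sets to human colour-naming ratings.
FOCALS = {
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "black": (0, 0, 0), "white": (255, 255, 255),
}

def fuzzy_colour_names(rgb, sigma=80.0):
    """Toy fuzzy colour naming: membership decays with distance to each
    focal colour, and memberships are normalised to sum to one."""
    rgb = np.asarray(rgb, dtype=float)
    weights = {name: np.exp(-np.sum((rgb - np.asarray(f, dtype=float)) ** 2)
                            / (2 * sigma ** 2))
               for name, f in FOCALS.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

m = fuzzy_colour_names((0, 200, 200))  # cyan: green and blue share the mass equally
print(round(m["green"], 2), round(m["blue"], 2))
```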
Author | Xavier Otazu; Olivier Penacchio; Laura Dempere-Marco | ||||
Title | An investigation into plausible neural mechanisms related to the CIWaM computational model for brightness induction | Type | Conference Article | ||
Year | 2012 | Publication | 2nd Joint AVA / BMVA Meeting on Biological and Machine Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. From a purely computational perspective, we built a low-level computational model (CIWaM) of early sensory processing based on multi-resolution wavelets, with the aim of replicating brightness and colour induction effects (Otazu et al., 2010, Journal of Vision, 10(12):5). Furthermore, we successfully used the CIWaM architecture to define a computational saliency model (Murray et al., 2011, CVPR, 433-440; Vanrell et al., submitted to AVA/BMVA'12). From a biological perspective, neurophysiological evidence suggests that perceived brightness information may be explicitly represented in V1. In this work we investigate possible neural mechanisms that offer a plausible explanation for such effects. To this end, we consider the model by Z. Li (Li, 1999, Network: Comput. Neural Syst., 10, 187-212), which is based on biological data and focuses on the part of V1 responsible for contextual influences, namely layer 2-3 pyramidal cells, interneurons, and horizontal intracortical connections. This model has been shown to account for phenomena such as visual saliency, which shares with brightness induction the relevant effect of contextual influences (the ones modelled by CIWaM). In the proposed model, the input to the network is derived from a complete multiscale and multiorientation wavelet decomposition taken from the computational model (CIWaM).
This model successfully accounts for well-known psychophysical effects (among them the White and modified White effects; the Todorovic, Chevreul, and achromatic ring patterns; and grating induction effects) for static contexts, and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. From a methodological point of view, we conclude that the results obtained by the computational model (CIWaM) are compatible with those obtained by the neurodynamical model proposed here. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | AVA | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ OPD2012a | Serial | 2132 | ||
Permanent link to this record | |||||
Author | Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades | ||||
Title | Text/graphic separation using a sparse representation with multi-learned dictionaries | Type | Conference Article | ||
Year | 2012 | Publication | 21st International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Graphics Recognition; Layout Analysis; Document Understanding | ||||
Abstract | In this paper, we propose a new approach to extract text regions from graphical documents. In our method, we first empirically construct two sequences of learned dictionaries for the text and graphical parts, respectively. Then, we compute the sparse representations of all non-overlapping document patches at different sizes in these learned dictionaries. Based on these representations, each patch can be classified into the text or graphic category by comparing its reconstruction errors. Same-sized patches in one category are then merged together to define the corresponding text or graphic layers, which are combined to create the final text/graphic layer. Finally, in a post-processing step, text regions are further filtered using some learned thresholds. | ||||
Address | Tsukuba | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ DTR2012a | Serial | 2135 | ||
Permanent link to this record | |||||
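The classification rule described above, comparing a patch's reconstruction errors under a text dictionary and a graphics dictionary, can be sketched as follows. Least-squares coding stands in for true sparse coding, and the random dictionaries are placeholders for the learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_error(patch, dictionary):
    """Code the patch in the dictionary (plain least squares here, as a
    simple stand-in for sparse coding) and measure the residual norm."""
    coeffs, *_ = np.linalg.lstsq(dictionary, patch, rcond=None)
    return np.linalg.norm(dictionary @ coeffs - patch)

def classify_patch(patch, dict_text, dict_graphic):
    """Assign the patch to whichever dictionary reconstructs it better."""
    err_t = reconstruction_error(patch, dict_text)
    err_g = reconstruction_error(patch, dict_graphic)
    return "text" if err_t < err_g else "graphic"

# Toy dictionaries spanning two different 8-dimensional subspaces of R^64
dict_text = rng.standard_normal((64, 8))
dict_graphic = rng.standard_normal((64, 8))
patch = dict_text @ rng.standard_normal(8)  # lies in the "text" subspace
print(classify_patch(patch, dict_text, dict_graphic))  # → text
```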
Author | Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades | ||||
Title | Noise suppression over bi-level graphical documents using a sparse representation | Type | Conference Article | ||
Year | 2012 | Publication | Colloque International Francophone sur l'Écrit et le Document | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Bordeaux | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | CIFED | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ DTR2012b | Serial | 2136 | ||
Permanent link to this record | |||||
Author | Adriana Romero; Simeon Petkov; Carlo Gatta; M.Sabate; Petia Radeva | ||||
Title | Efficient automatic segmentation of vessels | Type | Conference Article | ||
Year | 2012 | Publication | 16th Conference on Medical Image Understanding and Analysis | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Swansea, United Kingdom | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | MIUA | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ | Serial | 2137 | ||
Permanent link to this record | |||||
Author | Pedro Martins; Carlo Gatta; Paulo Carvalho | ||||
Title | Feature-driven Maximally Stable Extremal Regions | Type | Conference Article | ||
Year | 2012 | Publication | 7th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 490-497 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ MGC2012 | Serial | 2139 | ||
Permanent link to this record | |||||
Author | Pedro Martins; Paulo Carvalho; Carlo Gatta | ||||
Title | Context Aware Keypoint Extraction for Robust Image Representation | Type | Conference Article | ||
Year | 2012 | Publication | 23rd British Machine Vision Conference | Abbreviated Journal | |
Volume | Issue | Pages | 100.1 - 100.12 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | BMVC | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ MCG2012a | Serial | 2140 | ||
Permanent link to this record | |||||
Author | Laura Igual; Joan Carles Soliva; Sergio Escalera; Roger Gimeno; Oscar Vilarroya; Petia Radeva | ||||
Title | Automatic Brain Caudate Nuclei Segmentation and Classification in Diagnostic of Attention-Deficit/Hyperactivity Disorder | Type | Journal Article | ||
Year | 2012 | Publication | Computerized Medical Imaging and Graphics | Abbreviated Journal | CMIG |
Volume | 36 | Issue | 8 | Pages | 591-600 |
Keywords | Automatic caudate segmentation; Attention-Deficit/Hyperactivity Disorder; Diagnostic test; Machine learning; Decision stumps; Dissociated dipoles | ||||
Abstract | We present a fully automatic diagnostic imaging test for Attention-Deficit/Hyperactivity Disorder diagnosis assistance, based on previously reported evidence of caudate nucleus volumetric abnormalities. The proposed method consists of two steps: a new automatic method for external and internal segmentation of the caudate based on machine learning methodologies, and the definition of a set of new volume relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We separately validate these contributions using real data from a pediatric population, showing precise internal caudate segmentation and the discriminative power of the diagnostic test, with significant performance improvements over other state-of-the-art methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | OR; HuPBA; MILAB | Approved | no | ||
Call Number | Admin @ si @ ISE2012 | Serial | 2143 | ||
Permanent link to this record | |||||
Author | Laura Igual; Agata Lapedriza; Ricard Borras | ||||
Title | Robust Gait-Based Gender Classification using Depth Cameras | Type | Journal Article | ||
Year | 2013 | Publication | EURASIP Journal on Advances in Signal Processing | Abbreviated Journal | EURASIPJ |
Volume | 37 | Issue | 1 | Pages | 72-80 |
Keywords | |||||
Abstract | This article presents a new approach for gait-based gender recognition using depth cameras that can run in real time. The main contribution of this study is a new fast feature extraction strategy that uses the 3D point cloud obtained from the frames in a gait cycle. For each frame, these points are aligned according to their centroid and grouped. After that, they are projected onto their PCA plane, obtaining a representation of the cycle that is particularly robust against view changes. Final discriminative features are then computed by first making a histogram of the projected points and then applying linear discriminant analysis. To test the method we used the DGait database, currently the only publicly available database for gait analysis that includes depth information. We performed experiments on manually labeled cycles and on whole video sequences, and the results show that our method significantly improves accuracy compared with state-of-the-art systems that do not use depth information. Furthermore, our approach is insensitive to illumination changes, given that it discards the RGB information. This makes the method especially suitable for real applications, as illustrated in the last part of the experiments section. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; OR;MV | Approved | no | ||
Call Number | Admin @ si @ ILB2013 | Serial | 2144 | ||
Permanent link to this record | |||||
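The feature pipeline summarized above, centroid alignment, projection onto the PCA plane, and a histogram of the projected points, can be sketched in a few lines (a simplified illustration; the paper additionally applies linear discriminant analysis to these histograms):

```python
import numpy as np

def pca_plane_projection(points: np.ndarray) -> np.ndarray:
    """Centre a 3D point cloud on its centroid and project it onto the
    plane spanned by the two principal (largest-variance) directions."""
    centred = points - points.mean(axis=0)
    cov = np.cov(centred.T)                  # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = eigvecs[:, -2:]                  # two largest-variance directions
    return centred @ basis                   # (N, 2) coordinates in the PCA plane

def histogram_feature(points2d: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalised 2D occupancy histogram of the projected points."""
    h, *_ = np.histogram2d(points2d[:, 0], points2d[:, 1], bins=bins)
    return (h / h.sum()).ravel()

# Toy stand-in for a gait-cycle point cloud: a flat-ish anisotropic cloud
rng = np.random.default_rng(1)
cloud = rng.standard_normal((500, 3)) * [3.0, 1.0, 0.1]
feat = histogram_feature(pca_plane_projection(cloud))
print(feat.shape)  # (64,)
```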
Author | Francesco Ciompi | ||||
Title | Multi-Class Learning for Vessel Characterization in Intravascular Ultrasound | Type | Book Whole | ||
Year | 2012 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this thesis we tackle the problem of automatic characterization of human coronary vessels in the Intravascular Ultrasound (IVUS) image modality. The basis for the whole characterization process is machine learning applied to multi-class problems. In all the presented approaches, the Error-Correcting Output Codes (ECOC) framework is used as the central element for the design of multi-class classifiers.
Two main topics are tackled in this thesis. First, the automatic detection of the vessel borders is presented. For this purpose, a novel context-aware classifier for multi-class classification of the vessel morphology, ECOC-DRF, is presented. Based on ECOC-DRF, the lumen border and the media-adventitia border in IVUS are robustly detected by means of a novel holistic approach, achieving an error comparable with inter-observer variability and with state-of-the-art methods. The two vessel borders define the atheroma area of the vessel, where tissue characterization is required. For this purpose, we present a framework for automatic plaque characterization that processes both texture in IVUS images and spectral information in raw radio frequency data. Furthermore, a novel method for fusing in-vivo and in-vitro IVUS data for plaque characterization, pSFFS, is presented. The method effectively fuses the data, generating a classifier that improves tissue characterization on both in-vitro and in-vivo datasets. A novel method for automatic video summarization in IVUS sequences is also presented. The method detects the key frames of the sequence, i.e., the frames representative of morphological changes. It forms the basis for video summarization in IVUS and provides the markers for partitioning the vessel into morphologically and clinically interesting events. Finally, multi-class learning based on ECOC is applied to lung tissue characterization in Computed Tomography. The proposed approach, based on supervised and unsupervised learning, achieves accurate tissue classification on a large and heterogeneous dataset. |
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Petia Radeva; Oriol Pujol |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ Cio2012 | Serial | 2146 | ||
Permanent link to this record | |||||
Author | Antonio Hernandez; Miguel Reyes; Victor Ponce; Sergio Escalera | ||||
Title | GrabCut-Based Human Segmentation in Video Sequences | Type | Journal Article | ||
Year | 2012 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 12 | Issue | 11 | Pages | 15376-15393 |
Keywords | segmentation; human pose recovery; GrabCut; GraphCut; Active Appearance Models; Conditional Random Field | ||||
Abstract | In this paper, we present a fully-automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed by HOG-based subject detection, face detection, and a skin color model. Spatial information is included by Mean Shift clustering, whereas temporal coherence is enforced through a history of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results on public datasets and on a new Human Limb dataset show robust segmentation and recovery of both face and pose using the presented methodology. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN |
ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ HRP2012 | Serial | 2147 | ||
Permanent link to this record |
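GrabCut needs an initial trimap, and the method above builds one from a HOG-based person detection plus face detection and a skin-colour model. A minimal sketch of that initialization step (label values follow the OpenCV convention; the boxes are hypothetical detections, and the resulting mask would then be handed to an iterative GrabCut solver such as OpenCV's `cv2.grabCut`):

```python
import numpy as np

# GrabCut label convention (as in OpenCV): 0 = sure background,
# 1 = sure foreground, 2 = probable background, 3 = probable foreground.
BGD, FGD, PR_BGD, PR_FGD = 0, 1, 2, 3

def init_grabcut_mask(shape, person_box, face_box=None):
    """Build an initial trimap from a person detection box (probable
    foreground) and, optionally, a face box (sure foreground)."""
    mask = np.full(shape, BGD, dtype=np.uint8)
    x0, y0, x1, y1 = person_box
    mask[y0:y1, x0:x1] = PR_FGD          # detected person: probably foreground
    if face_box is not None:
        fx0, fy0, fx1, fy1 = face_box
        mask[fy0:fy1, fx0:fx1] = FGD     # detected face: surely foreground
    return mask

# Hypothetical detections on a 320x240 frame
mask = init_grabcut_mask((240, 320), person_box=(100, 40, 220, 230),
                         face_box=(140, 50, 180, 100))
print(np.bincount(mask.ravel(), minlength=4))
```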