Records |
Author |
Eric Amiel |
Title |
Visualisation de vaisseaux sanguins |
Type |
Report |
Year |
2005 |
Publication |
Rapport de Stage |
Abbreviated Journal |
|
Volume (records sorted by this field, descending) |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
Université Paul Sabatier Toulouse III |
Thesis |
Bachelor's thesis |
Publisher |
Université Paul Sabatier Toulouse III |
Place of Publication |
Toulouse |
Editor |
Enric Marti |
Language |
French |
Summary Language |
French |
Original Title |
|
Series Editor |
IUP Systèmes Intelligents |
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM |
Approved |
no |
Call Number |
IAM @ iam @ Ami2005 |
Serial |
1690 |
Permanent link to this record |
|
|
|
Author |
Debora Gil; Agnes Borras; Manuel Ballester; Francesc Carreras; Ruth Aris; Manuel Vazquez; Enric Marti; Ferran Poveda |
Title |
MIOCARDIA: Integrating cardiac function and muscular architecture for a better diagnosis |
Type |
Conference Article |
Year |
2011 |
Publication |
14th International Symposium on Applied Sciences in Biomedical and Communication Technologies |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
Deep understanding of the myocardial structure of the heart would unravel crucial knowledge for clinical and medical procedures. The MIOCARDIA project is a multidisciplinary project in cooperation with l'Hospital de la Santa Creu i de Sant Pau, Clinica la Creu Blanca and the Barcelona Supercomputing Center. The ultimate goal of this project is to define a computational model of the myocardium. The model takes into account the deep interrelation between the anatomy and the mechanics of the heart. The paper explains the workflow of the MIOCARDIA project. It also introduces a multiresolution reconstruction technique based on DT-MRI streamlining for simplified global myocardial model generation. Our reconstructions can restore the most complex myocardial structures and provide evidence of a global helical organization. |
Address |
Barcelona; Spain |
Corporate Author |
Association for Computing Machinery |
Thesis |
|
Publisher |
|
Place of Publication |
Barcelona, Spain |
Editor |
Association for Computing Machinery |
Language |
English |
Summary Language |
English |
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
978-1-4503-0913-4 |
Medium |
|
Area |
|
Expedition |
|
Conference |
ISABEL |
Notes |
IAM |
Approved |
no |
Call Number |
IAM @ iam @ GGB2011 |
Serial |
1691 |
Permanent link to this record |
|
|
|
Author |
Ivo Everts; Jan van Gemert; Theo Gevers |
Title |
Evaluation of Color STIPs for Human Action Recognition |
Type |
Conference Article |
Year |
2013 |
Publication |
IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
2850-2857 |
Keywords |
|
Abstract |
This paper is concerned with recognizing realistic human actions in videos based on spatio-temporal interest points (STIPs). Existing STIP-based action recognition approaches operate on intensity representations of the image data. Because of this, these approaches are sensitive to disturbing photometric phenomena such as highlights and shadows. Moreover, valuable information is neglected by discarding chromaticity from the photometric representation. These issues are addressed by Color STIPs. Color STIPs are multi-channel reformulations of existing intensity-based STIP detectors and descriptors, for which we consider a number of chromatic representations derived from the opponent color space. This enhanced modeling of appearance improves the quality of subsequent STIP detection and description. Color STIPs are shown to substantially outperform their intensity-based counterparts on the challenging UCF Sports, UCF11 and UCF50 action recognition benchmarks. Moreover, the results show that color STIPs are currently the single best low-level feature choice for STIP-based approaches to human action recognition. |
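The chromatic representations described in the abstract are derived from the opponent color space. A minimal sketch of the usual normalized opponent transform from the color-descriptor literature (the paper's exact channel definitions are an assumption here, not code from the paper):

```python
import numpy as np

def opponent_channels(rgb):
    """Map an RGB image (H, W, 3, float) to opponent color channels:
    O1 (red vs. green), O2 (yellow vs. blue), O3 (intensity)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    O1 = (R - G) / np.sqrt(2.0)             # chromatic: red vs. green
    O2 = (R + G - 2.0 * B) / np.sqrt(6.0)   # chromatic: yellow vs. blue
    O3 = (R + G + B) / np.sqrt(3.0)         # achromatic: intensity
    return np.stack([O1, O2, O3], axis=-1)
```

A multi-channel STIP detector would then run the space-time interest operator on each channel rather than on intensity alone.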
Address |
Portland; Oregon; June 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1063-6919 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPR |
Notes |
ALTRES;ISE |
Approved |
no |
Call Number |
Admin @ si @ EGG2013 |
Serial |
2364 |
Permanent link to this record |
|
|
|
Author |
Fares Alnajar; Theo Gevers; Roberto Valenti; Sennay Ghebreab |
Title |
Calibration-free Gaze Estimation using Human Gaze Patterns |
Type |
Conference Article |
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
137-144 |
Keywords |
|
Abstract |
We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look [12]. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering the fact that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators. |
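The step "these points are transformed so that they match the gaze patterns of other humans" can be illustrated with a least-squares similarity alignment. The abstract does not specify the transform family, so the orthogonal Procrustes fit below is only an illustrative assumption:

```python
import numpy as np

def align_gaze(initial, reference):
    """Fit a similarity transform (scale + rotation + translation) that
    maps the viewer's initial gaze points (N, 2) onto a reference gaze
    pattern (N, 2), and return the corrected gaze points."""
    mu_i, mu_r = initial.mean(axis=0), reference.mean(axis=0)
    X, Y = initial - mu_i, reference - mu_r      # center both point sets
    U, S, Vt = np.linalg.svd(X.T @ Y)            # orthogonal Procrustes
    R = U @ Vt                                   # optimal rotation
    s = S.sum() / (X ** 2).sum()                 # isotropic scale
    return s * X @ R + mu_r                      # aligned gaze points
```

With noise-free points related by a known similarity transform, the fit recovers the reference pattern exactly; on real gaze data it would only approximate it.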
Address |
Sydney |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
ALTRES;ISE |
Approved |
no |
Call Number |
Admin @ si @ AGV2013 |
Serial |
2365 |
Permanent link to this record |
|
|
|
Author |
Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers |
Title |
Like Father, Like Son: Facial Expression Dynamics for Kinship Verification |
Type |
Conference Article |
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1497-1504 |
Keywords |
|
Abstract |
Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics in this problem. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and verify that it is indeed possible to recognize kinship by resemblance of facial expressions. The proposed method is tested on different kin relationships. On average, 72.89% verification accuracy is achieved on spontaneous smiles. |
Address |
Sydney |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
ALTRES;ISE |
Approved |
no |
Call Number |
Admin @ si @ DSG2013 |
Serial |
2366 |
Permanent link to this record |
|
|
|
Author |
Jorge Bernal; F. Javier Sanchez; Fernando Vilariño |
Title |
Current Challenges on Polyp Detection in Colonoscopy Videos: From Region Segmentation to Region Classification. A Pattern Recognition-based Approach |
Type |
Conference Article |
Year |
2011 |
Publication |
2nd International Workshop on Medical Image Analysis and Description for Diagnosis Systems |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
62-71 |
Keywords |
Medical Imaging, Colonoscopy, Pattern Recognition, Segmentation, Polyp Detection, Region Description, Machine Learning, Real-time. |
Abstract |
In this paper we present our approach to real-time polyp detection in colonoscopy videos. Our method consists of three stages: Image Segmentation, Region Description and Image Classification. Taking into account the constraints of our project, we introduce our segmentation system, which is based on the model of appearance of the polyp that we have defined after observing real videos from colonoscopy procedures. The output of this stage will ideally be a small number of regions, one of which should cover the whole polyp region (if there is one in the image). These regions will then be described in terms of features and, as a result of a machine learning scheme, classified based on the values they take for the several features used in their description. Although we are still in the early stages of the project, we present some preliminary segmentation results that indicate that we are heading in the right direction. |
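The three-stage pipeline (segmentation → region description → classification) can be sketched as a data-flow skeleton. The segmentation and classifier below are trivial stand-ins for illustration only, not the polyp appearance model or the learned classifier from the paper:

```python
import numpy as np

def segment(image):
    """Stage 1 (stand-in): threshold into candidate region masks.
    The paper uses a polyp appearance model instead."""
    return [image > image.mean()]

def describe(image, region):
    """Stage 2: describe a region with a small feature vector
    (area fraction, mean and std of intensity inside the region)."""
    vals = image[region]
    return np.array([region.mean(), vals.mean(), vals.std()])

def classify(features, w, b):
    """Stage 3 (stand-in): linear score in place of the learned model."""
    return float(features @ w + b) > 0

def detect_polyp(image, w, b):
    """Flag a frame if any candidate region is classified as polyp."""
    return any(classify(describe(image, r), w, b) for r in segment(image))
```

In the real system each stage would be replaced by the project's components; only the data flow between stages is meant to carry over.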
Address |
Rome, Italy |
Corporate Author |
|
Thesis |
|
Publisher |
SciTePress |
Place of Publication |
|
Editor |
Djemal, Khalifa |
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
800 |
Expedition |
|
Conference |
MIAD |
Notes |
MV;SIAI |
Approved |
no |
Call Number |
IAM @ iam @ BSV2011a |
Serial |
1695 |
Permanent link to this record |
|
|
|
Author |
Petia Radeva; Jordi Vitria; Fernando Vilariño; Panagiota Spyridonos; Fernando Azpiroz; Juan Malagelada; Fosca de Iorio; Anna Accarino |
Title |
Cascade analysis for intestinal contraction detection |
Type |
Patent |
Year |
2009 |
Publication |
US 2009/0284589 A1 |
Abbreviated Journal |
USPO |
Volume |
|
Issue |
|
Pages |
1-25 |
Keywords |
|
Abstract |
A method and system for cascade analysis for intestinal contraction detection are provided, based on features extracted from image frames captured in-vivo. The method and system also relate to the detection of turbid liquids in intestinal tracts, to automatic detection of video image frames taken in the gastrointestinal tract including a field of view obstructed by turbid media, and more particularly, to extraction of image data obstructed by turbid media. |
Address |
|
Corporate Author |
US Patent Office |
Thesis |
|
Publisher |
US Patent Office |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB; OR; MV;SIAI |
Approved |
no |
Call Number |
IAM @ iam @ RVV2009 |
Serial |
1700 |
Permanent link to this record |
|
|
|
Author |
Panagiota Spyridonos; Fernando Vilariño; Jordi Vitria; Petia Radeva; Fernando Azpiroz; Juan Malagelada |
Title |
Device, system and method for automatic detection of contractile activity in an image frame |
Type |
Patent |
Year |
2011 |
Publication |
US 2011/0044515 A1 |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
A device, system and method for automatic detection of contractile activity of a body lumen in an image frame is provided, wherein image frames during contractile activity are captured and/or image frames including contractile activity are automatically detected, such as through pattern recognition and/or feature extraction to trace image frames including contractions, e.g., with wrinkle patterns. A manual procedure of annotation of contractions, e.g. tonic contractions in capsule endoscopy, may consist of the visualization of the whole video by a specialist, and the labeling of the contraction frames. Embodiments of the present invention may be suitable for implementation in an in vivo imaging system. |
Address |
Pearl Cohen Zedek Latzer, LLP, 1500 Broadway 12th Floor, New York (NY) 10036 (US) |
Corporate Author |
US Patent Office |
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MV;OR;MILAB;SIAI |
Approved |
no |
Call Number |
IAM @ iam @ SVV2011 |
Serial |
1701 |
Permanent link to this record |
|
|
|
Author |
Fernando Vilariño; Panagiota Spyridonos; Petia Radeva; Jordi Vitria; Fernando Azpiroz; Juan Malagelada |
Title |
Method for automatic classification of in vivo images |
Type |
Patent |
Year |
2010 |
Publication |
US 2010/0046816 |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
A method for automatically detecting a post-duodenal boundary in an image stream of the gastrointestinal (GI) tract. The image stream is sampled to obtain a reduced set of images for processing. The reduced set of images is filtered to remove non-valid frames or non-valid portions of frames, thereby generating a filtered set of valid images. A polar representation of the valid images is generated. Textural features of the polar representation are processed to detect the post-duodenal boundary of the GI tract. |
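The polar-representation step described in the abstract can be sketched as a nearest-neighbour resampling of a frame onto a (radius, angle) grid; the grid sizes and sampling scheme below are illustrative assumptions, not the patent's specification:

```python
import numpy as np

def polar_representation(image, n_r=32, n_theta=64):
    """Resample a grayscale frame onto an (n_r, n_theta) polar grid
    centered on the image; texture features can then be computed per
    radial ring. Nearest-neighbour sampling keeps the sketch short."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0.0, min(cy, cx), n_r)[:, None]        # (n_r, 1)
    angles = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    y = np.clip(np.round(cy + radii * np.sin(angles)).astype(int), 0, h - 1)
    x = np.clip(np.round(cx + radii * np.cos(angles)).astype(int), 0, w - 1)
    return image[y, x]                                         # (n_r, n_theta)
```

Rows of the result correspond to rings at increasing radius, which makes ring-wise texture statistics a simple per-row reduction.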
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
800 |
Expedition |
|
Conference |
|
Notes |
MV;OR;MILAB;SIAI |
Approved |
no |
Call Number |
IAM @ iam @ VSR2010 |
Serial |
1702 |
Permanent link to this record |
|
|
|
Author |
Gerard Lacey; Fernando Vilariño |
Title |
Endoscopy system with motion sensors |
Type |
Patent |
Year |
2011 |
Publication |
US 2011/0032347 A1 |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
An endoscopy system (1) comprises an endoscope (2) with a camera (3) at its tip. The endoscope extends through an endoscope guide (4) for guiding movement of the endoscope and for measurement of its movement as it enters the body. The guide (4) comprises a generally conical body (5) having a through passage (105) through which the endoscope (2) extends. A motion sensor comprises an optical transmitter (7) and a detector (8) mounted alongside the passage (105) to measure the insertion-withdrawal linear motion and also rotation of the endoscope by the endoscopist's hand. The system (1) also comprises a flexure controller (10) having wheels operated by the endoscopist. The camera (3), the motion sensor (7/8), and the flexure controller (10) are all connected to a processor (11) which feeds a display. |
Address |
Jacobson Holman PLLC; 400 Seventh Street, N.W., Suite 600; Washington, DC 20004 |
Corporate Author |
USPTO |
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
800 |
Expedition |
|
Conference |
|
Notes |
MV;SIAI |
Approved |
no |
Call Number |
IAM @ iam @ LaV2011 |
Serial |
1703 |
Permanent link to this record |
|
|
|
Author |
Fernando Vilariño; Panagiota Spyridonos; Petia Radeva; Jordi Vitria; Fernando Azpiroz; Juan Malagelada |
Title |
Device, system and method for measurement and analysis of contractile activity |
Type |
Patent |
Year |
2009 |
Publication |
US 2009/0202117 A1 |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
A method and system for determining intestinal dysfunction condition are provided by classifying and analyzing image frames captured in-vivo. The method and system also relate to the detection of contractile activity in intestinal tracts, to automatic detection of video image frames taken in the gastrointestinal tract including contractile activity, and more particularly to measurement and analysis of contractile activity of the GI tract based on image intensity of in vivo image data. |
Address |
Pearl Cohen Zedek Latzer |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
800 |
Expedition |
|
Conference |
|
Notes |
MV;OR;MILAB;SIAI |
Approved |
no |
Call Number |
IAM @ iam @ VSR2009 |
Serial |
1704 |
Permanent link to this record |
|
|
|
Author |
Victor Ponce; Sergio Escalera; Xavier Baro |
Title |
Multi-modal Social Signal Analysis for Predicting Agreement in Conversation Settings |
Type |
Conference Article |
Year |
2013 |
Publication |
15th ACM International Conference on Multimodal Interaction |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
495-502 |
Keywords |
|
Abstract |
In this paper we present a non-invasive ambient intelligence framework for the analysis of non-verbal communication applied to conversational settings. In particular, we apply feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues coming from the fields of psychology and observational methodology. We test our methodology over data captured in victim-offender mediation scenarios. Using different state-of-the-art classification approaches, our system achieves over 75% recognition in predicting agreement among the parties involved in the conversations, using the experts' opinions as ground truth. |
Address |
Sydney; Australia; December 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
978-1-4503-2129-7 |
Medium |
|
Area |
|
Expedition |
|
Conference |
ICMI |
Notes |
HuPBA;MV |
Approved |
no |
Call Number |
Admin @ si @ PEB2013 |
Serial |
2488 |
Permanent link to this record |
|
|
|
Author |
Salvatore Tabbone; Oriol Ramos Terrades |
Title |
An Overview of Symbol Recognition |
Type |
Book Chapter |
Year |
2014 |
Publication |
Handbook of Document Image Processing and Recognition |
Abbreviated Journal |
|
Volume |
D |
Issue |
|
Pages |
523-551 |
Keywords |
Pattern recognition; Shape descriptors; Structural descriptors; Symbol recognition; Symbol spotting |
Abstract |
According to the Cambridge Dictionaries Online, a symbol is a sign, shape, or object that is used to represent something else. Symbol recognition is a subfield of general pattern recognition problems that focuses on identifying, detecting, and recognizing symbols in technical drawings, maps, or miscellaneous documents such as logos and musical scores. This chapter aims at providing the reader with an overview of the different existing ways of describing and recognizing symbols and how the field has evolved to attain a certain degree of maturity. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Springer London |
Place of Publication |
|
Editor |
D. Doermann; K. Tombre |
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
978-0-85729-858-4 |
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG; 600.077 |
Approved |
no |
Call Number |
Admin @ si @ TaT2014 |
Serial |
2489 |
Permanent link to this record |
|
|
|
Author |
Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga |
Title |
Saliency Estimation Using a Non-Parametric Low-Level Vision Model |
Type |
Conference Article |
Year |
2011 |
Publication |
IEEE conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
433-440 |
Keywords |
Gaussian mixture model; ad hoc parameter selection; center-surround inhibition windows; center-surround mechanism; color appearance model; convolution; eye-fixation data; human vision; innate spatial pooling mechanism; inverse wavelet transform; low-level visual front-end; nonparametric low-level vision model; saliency estimation; saliency map; scale integration; scale-weighted center-surround response; scale-weighting function; visual task; Gaussian processes; biology; biology computing; colour vision; computer vision; visual perception; wavelet transforms |
Abstract |
Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks. |
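The center-surround mechanism named in the abstract can be illustrated with a separable difference-of-Gaussians response on one channel. This is a simplified stand-in: the actual model uses an optimized scale-weighting function (ECSF) and an inverse wavelet transform for scale integration, neither of which is reproduced here:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def center_surround(channel, sigma_c=1.0, sigma_s=4.0):
    """Difference-of-Gaussians center-surround response on a 2-D channel.
    The frame must be larger than the surround kernel (2*3*sigma_s + 1)."""
    def blur(img, sigma):
        k = gaussian_kernel(sigma, int(3 * sigma))
        # Separable blur: convolve rows, then columns.
        tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)
    return blur(channel, sigma_c) - blur(channel, sigma_s)
```

In the full model, such responses are computed per scale and per channel, weighted by the ECSF, and recombined into the saliency map via the inverse wavelet transform.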
Address |
Colorado Springs |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1063-6919 |
ISBN |
978-1-4577-0394-2 |
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPR |
Notes |
CIC |
Approved |
no |
Call Number |
Admin @ si @ MVO2011 |
Serial |
1757 |
Permanent link to this record |
|
|
|
Author |
Fadi Dornaika; Bogdan Raducanu |
Title |
Subtle Facial Expression Recognition in Still Images and Videos |
Type |
Book Chapter |
Year |
2011 |
Publication |
Advances in Face Image Analysis: Techniques and Technologies |
Abbreviated Journal |
|
Volume |
|
Issue |
14 |
Pages |
259-277 |
Keywords |
|
Abstract |
This chapter addresses the recognition of basic facial expressions. It has three main contributions. First, the authors introduce view- and texture-independent schemes that exploit facial action parameters estimated by an appearance-based 3D face tracker. They represent the learned facial actions associated with different facial expressions by time series. Two dynamic recognition schemes are proposed: (1) the first is based on conditional predictive models and on an analysis-synthesis scheme, and (2) the second is based on examples, allowing straightforward use of machine learning approaches. Second, the authors propose an efficient recognition scheme based on the detection of keyframes in videos. Third, the authors compare the dynamic scheme with a static one based on analyzing individual snapshots and show that in general the former performs better than the latter. The authors then provide evaluations of performance using Linear Discriminant Analysis (LDA), Non-parametric Discriminant Analysis (NDA), and Support Vector Machines (SVM). |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
IGI-Global |
Place of Publication |
New York, USA |
Editor |
Yu-Jin Zhang |
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
978-1-6152-0991-0 |
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
OR;MV |
Approved |
no |
Call Number |
Admin @ si @ DoR2011 |
Serial |
1751 |
Permanent link to this record |