|
Records |
Links |
|
Author |
Santiago Segui; Laura Igual; Jordi Vitria |
|
|
Title |
Bagged One Class Classifiers in the Presence of Outliers |
Type |
Journal Article |
|
Year |
2013 |
Publication |
International Journal of Pattern Recognition and Artificial Intelligence |
Abbreviated Journal |
IJPRAI |
|
|
Volume |
27 |
Issue |
5 |
Pages |
1350014-1350035 |
|
|
Keywords |
One-class Classifier; Ensemble Methods; Bagging; Outliers |
|
|
Abstract |
The problem of training classifiers only with target data arises in many applications where non-target data are too costly, difficult to obtain, or not available at all. Several one-class classification methods have been presented to solve this problem, but most of them are highly sensitive to the presence of outliers in the target class. Ensemble methods have therefore been proposed as a powerful way to improve the classification performance of binary/multi-class learning algorithms by introducing diversity into classifiers. However, their application to one-class classification has been rather limited. In this paper, we present a new ensemble method based on a non-parametric weighted bagging strategy for one-class classification, to improve accuracy in the presence of outliers. While the standard bagging strategy assumes a uniform data distribution, the method we propose here estimates a probability density based on a forest structure of the data. This assumption allows the data distribution to be estimated from the computation of simple univariate and bivariate kernel densities. Experiments using original and noisy versions of 20 different datasets show that bagging ensemble methods applied to different one-class classifiers outperform the base one-class classification methods. Moreover, we show that, on noisy versions of the datasets, the non-parametric weighted bagging strategy we propose outperforms the classical bagging strategy in a statistically significant way. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR; 600.046;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ SIV2013 |
Serial |
2256 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Reyes; Albert Clapes; Jose Ramirez; Juan R Revilla; Sergio Escalera |
|
|
Title |
Automatic Digital Biometry Analysis based on Depth Maps |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Computers in Industry |
Abbreviated Journal |
COMPUTIND |
|
|
Volume |
64 |
Issue |
9 |
Pages |
1316-1325 |
|
|
Keywords |
Multi-modal data fusion; Depth maps; Posture analysis; Anthropometric data; Musculo-skeletal disorders; Gesture analysis |
|
|
Abstract |
The World Health Organization estimates that 80% of the world population is affected by back-related disorders during their lives. Current practices to analyze musculo-skeletal disorders (MSDs) are expensive, subjective, and invasive. In this work, we propose a tool for static body posture analysis and dynamic range-of-movement estimation of the skeleton joints based on 3D anthropometric information from multi-modal data. Given a set of keypoints, RGB and depth data are aligned, the depth surface is reconstructed, keypoints are matched, and accurate measurements of posture and spinal curvature are computed. Given a set of joints, range-of-movement measurements are also obtained. Moreover, gesture recognition based on joint movements is performed to check the correctness of physical exercises as they are performed. The system shows high precision and reliable measurements, being useful for posture re-education to prevent MSDs, as well as for tracking the posture evolution of patients in rehabilitation treatments. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ RCR2013 |
Serial |
2252 |
|
Permanent link to this record |
|
|
|
|
Author |
Albert Clapes; Miguel Reyes; Sergio Escalera |
|
|
Title |
Multi-modal User Identification and Object Recognition Surveillance System |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
34 |
Issue |
7 |
Pages |
799-808 |
|
|
Keywords |
Multi-modal RGB-Depth data analysis; User identification; Object recognition; Intelligent surveillance; Visual features; Statistical learning |
|
|
Abstract |
We propose an automatic surveillance system for user identification and object recognition based on multi-modal RGB-Depth data analysis. We model an RGB-Depth environment by learning a pixel-based background Gaussian distribution. Then, user and object candidate regions are detected and recognized using robust statistical approaches. The system robustly recognizes users and updates itself online, identifying and detecting new actors in the scene. Moreover, segmented objects are described, matched, recognized, and updated online using view-point 3D descriptions, making them robust to partial occlusions and local 3D viewpoint rotations. Finally, the system saves the history of user–object assignments, which is especially useful in surveillance scenarios. The system has been evaluated on a novel data set containing different indoor/outdoor scenarios, objects, and users, showing accurate recognition and better performance than standard state-of-the-art approaches. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; 600.046; 605.203;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ CRE2013 |
Serial |
2248 |
|
Permanent link to this record |
|
|
|
|
Author |
Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera |
|
|
Title |
A Genetic-based Subspace Analysis Method for Improving Error-Correcting Output Coding |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
46 |
Issue |
10 |
Pages |
2830-2839 |
|
|
Keywords |
Error Correcting Output Codes; Evolutionary computation; Multiclass classification; Feature subspace; Ensemble classification |
|
|
Abstract |
Two key factors affecting the performance of Error Correcting Output Codes (ECOC) in multiclass classification problems are the independence of binary classifiers and the problem-dependent coding design. In this paper, we propose an evolutionary algorithm-based approach to the design of an application-dependent codematrix in the ECOC framework. The central idea of this work is to design a three-dimensional codematrix, where the third dimension is the feature space of the problem domain. In order to do that, we consider the feature space in the design process of the codematrix with the aim of improving the independence and accuracy of binary classifiers. The proposed method takes advantage of some basic concepts of ensemble classification, such as diversity of classifiers, and also benefits from the evolutionary approach for optimizing the three-dimensional codematrix, taking into account the problem domain. We provide a set of experimental results using a set of benchmark datasets from the UCI Machine Learning Repository, as well as two real multiclass Computer Vision problems. Both sets of experiments are conducted using two different base learners: Neural Networks and Decision Trees. The results show that the proposed method increases the classification accuracy in comparison with the state-of-the-art ECOC coding techniques. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0031-3203 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ BGE2013a |
Serial |
2247 |
|
Permanent link to this record |
|
|
|
|
Author |
Fernando Barrera; Felipe Lumbreras; Angel Sappa |
|
|
Title |
Multispectral Piecewise Planar Stereo using Manhattan-World Assumption |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
34 |
Issue |
1 |
Pages |
52-61 |
|
|
Keywords |
Multispectral stereo rig; Dense disparity maps from multispectral stereo; Color and infrared images |
|
|
Abstract |
This paper proposes a new framework for extracting dense disparity maps from a multispectral stereo rig. The system is constructed with an infrared and a color camera. It is intended to explore novel multispectral stereo matching approaches that will allow further extraction of semantic information. The proposed framework consists of three stages. Firstly, an initial sparse disparity map is generated by using a cost function based on feature matching in a multiresolution scheme. Then, by looking at the color image, a set of planar hypotheses is defined to describe the surfaces on the scene. Finally, the previous stages are combined by reformulating the disparity computation as a global minimization problem. The paper has two main contributions. The first contribution combines mutual information with a shape descriptor based on gradient in a multiresolution scheme. The second contribution, which is based on the Manhattan-world assumption, extracts a dense disparity representation using the graph cut algorithm. Experimental results in outdoor scenarios are provided showing the validity of the proposed framework. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.054; 600.055; 605.203 |
Approved |
no |
|
|
Call Number |
Admin @ si @ BLS2013 |
Serial |
2245 |
|
Permanent link to this record |
|
|
|
|
Author |
Naveen Onkarappa; Angel Sappa |
|
|
Title |
A Novel Space Variant Image Representation |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Journal of Mathematical Imaging and Vision |
Abbreviated Journal |
JMIV |
|
|
Volume |
47 |
Issue |
1-2 |
Pages |
48-59 |
|
|
Keywords |
Space-variant representation; Log-polar mapping; Onboard vision applications |
|
|
Abstract |
Traditionally, in machine vision, images are represented using Cartesian coordinates with uniform sampling along the axes. In contrast, biological vision systems represent images using polar coordinates with non-uniform sampling. Because of the various advantages provided by space-variant representations, many researchers are interested in space-variant computer vision. In this direction, the current work proposes a novel and simple space-variant representation of images. The proposed representation is compared with the classical log-polar mapping. The log-polar representation is motivated by biological vision, which is characterized by higher resolution at the fovea and reduced resolution at the periphery. Contrary to the log-polar mapping, the proposed representation has higher resolution at the periphery and lower resolution at the fovea. Our proposal proves to be a better representation in navigational scenarios such as driver assistance systems and robotics. The experimental results involve analysis of optical flow fields computed on both the proposed and the log-polar representations. Additionally, an egomotion estimation application is shown as an illustrative example. The experimental analysis comprises results from synthetic as well as real sequences. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer US |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0924-9907 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.055; 605.203; 601.215 |
Approved |
no |
|
|
Call Number |
Admin @ si @ OnS2013a |
Serial |
2243 |
|
Permanent link to this record |
|
|
|
|
Author |
Olivier Penacchio; Xavier Otazu; Laura Dempere-Marco |
|
|
Title |
A Neurodynamical Model of Brightness Induction in V1 |
Type |
Journal Article |
|
Year |
2013 |
Publication |
PloS ONE |
Abbreviated Journal |
Plos |
|
|
Volume |
8 |
Issue |
5 |
Pages |
e64086 |
|
|
Keywords |
|
|
|
Abstract |
Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Recent neurophysiological evidence suggests that brightness information might be explicitly represented in V1, in contrast to the more common assumption that the striate cortex is an area mostly responsive to sensory information. Here we investigate possible neural mechanisms that offer a plausible explanation for such a phenomenon. To this end, a neurodynamical model which is based on neurophysiological evidence and focuses on the part of V1 responsible for contextual influences is presented. The proposed computational model successfully accounts for well-known psychophysical effects for static contexts and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. This work suggests that intra-cortical interactions in V1 could, at least partially, explain brightness induction effects and reveals how a common general architecture may account for several different fundamental processes, such as visual saliency and brightness induction, which emerge early in the visual processing pathway. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ POD2013 |
Serial |
2242 |
|
Permanent link to this record |
|
|
|
|
Author |
Michal Drozdzal; Santiago Segui; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria |
|
|
Title |
An Application for Efficient Error-Free Labeling of Medical Images |
Type |
Book Chapter |
|
Year |
2013 |
Publication |
Multimodal Interaction in Image and Video Applications |
Abbreviated Journal |
|
|
|
Volume |
48 |
Issue |
|
Pages |
1-16 |
|
|
Keywords |
|
|
|
Abstract |
In this chapter we describe an application for efficient error-free labeling of medical images. In this scenario, the compilation of a complete training set for building a realistic model of a given class of samples is not an easy task, making the process tedious and time consuming. For this reason, there is a need for interactive labeling applications that minimize the effort of the user while providing error-free labeling. We propose a new algorithm that is based on data similarity in feature space. This method actively explores the data in order to find the best label-aligned clustering and exploits it to reduce the labeler effort, which is measured by the number of “clicks”. Moreover, error-free labeling is guaranteed by the fact that all data and their label proposals are visually revised by an expert. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1868-4394 |
ISBN |
978-3-642-35931-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ DSR2013 |
Serial |
2235 |
|
Permanent link to this record |
|
|
|
|
Author |
Marina Alberti; Simone Balocco; Xavier Carrillo; J. Mauri; Petia Radeva |
|
|
Title |
Automatic non-rigid temporal alignment of IVUS sequences: method and quantitative validation |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Ultrasound in Medicine and Biology |
Abbreviated Journal |
UMB |
|
|
Volume |
39 |
Issue |
9 |
Pages |
1698-1712 |
|
|
Keywords |
Intravascular ultrasound; Dynamic time warping; Non-rigid alignment; Sequence matching; Partial overlapping strategy |
|
|
Abstract |
Clinical studies on atherosclerosis regression/progression performed by intravascular ultrasound analysis would benefit from accurate alignment of sequences of the same patient before and after clinical interventions and at follow-up. In this article, a methodology for automatic alignment of intravascular ultrasound sequences based on the dynamic time warping technique is proposed. The non-rigid alignment is adapted to the specific task by applying it to multidimensional signals describing the morphologic content of the vessel. Moreover, dynamic time warping is embedded into a framework comprising a strategy to address partial overlapping between acquisitions and a term that regularizes non-physiologic temporal compression/expansion of the sequences. Extensive validation is performed on both synthetic and in vivo data. The proposed method reaches alignment errors of approximately 0.43 mm for pairs of sequences acquired during the same intervention phase and 0.77 mm for pairs of sequences acquired at successive intervention stages. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ABC2013 |
Serial |
2313 |
|
Permanent link to this record |
|
|
|
|
Author |
Bogdan Raducanu; Fadi Dornaika |
|
|
Title |
Texture-independent recognition of facial expressions in image snapshots and videos |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Machine Vision and Applications |
Abbreviated Journal |
MVA |
|
|
Volume |
24 |
Issue |
4 |
Pages |
811-820 |
|
|
Keywords |
|
|
|
Abstract |
This paper addresses the static and dynamic recognition of basic facial expressions. It has two main contributions. First, we introduce a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker. We represent the learned facial actions associated with different facial expressions by time series. Second, we compare this dynamic scheme with a static one based on analyzing individual snapshots and show that the former performs better than the latter. We provide evaluations of performance using three subspace learning techniques: linear discriminant analysis, non-parametric discriminant analysis and support vector machines. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer-Verlag |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0932-8092 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR; 600.046; 605.203;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ RaD2013 |
Serial |
2230 |
|
Permanent link to this record |
|
|
|
|
Author |
Ferran Diego; Joan Serrat; Antonio Lopez |
|
|
Title |
Joint spatio-temporal alignment of sequences |
Type |
Journal Article |
|
Year |
2013 |
Publication |
IEEE Transactions on Multimedia |
Abbreviated Journal |
TMM |
|
|
Volume |
15 |
Issue |
6 |
Pages |
1377-1387 |
|
|
Keywords |
video alignment |
|
|
Abstract |
Video alignment is important in different areas of computer vision, such as wide-baseline matching, action recognition, change detection, video copy detection and frame-dropping prevention. Current video alignment methods usually deal with the relatively simple case of fixed or rigidly attached cameras, or of simultaneous acquisition. Therefore, in this paper we propose a joint video alignment for bringing two video sequences into spatio-temporal alignment. Specifically, the novelty of the paper is to formulate video alignment so as to fold the spatial and the temporal alignment into a single framework that simultaneously satisfies a frame-correspondence and a frame-alignment similarity, exploiting the knowledge among neighboring frames through a standard pairwise Markov random field (MRF). This new formulation is able to handle the alignment of sequences recorded at different times by independently moving cameras that follow similar trajectories, and it also generalizes the particular cases of a fixed geometric transformation and/or a linear temporal mapping. We conduct experiments on different scenarios, such as sequences recorded simultaneously or by moving cameras, to validate the robustness of the proposed approach. The proposed method provides the highest video alignment accuracy compared to state-of-the-art methods on sequences recorded from vehicles driving along the same track at different times. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1520-9210 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ DSL2013; ADAS @ adas @ |
Serial |
2228 |
|
Permanent link to this record |
|
|
|
|
Author |
Marc Castello; Jordi Gonzalez; Ariel Amato; Pau Baiget; Carles Fernandez; Josep M. Gonfaus; Ramon Mollineda; Marco Pedersoli; Nicolas Perez de la Blanca; Xavier Roca |
|
|
Title |
Exploiting Multimodal Interaction Techniques for Video-Surveillance |
Type |
Book Chapter |
|
Year |
2013 |
Publication |
Multimodal Interaction in Image and Video Applications, Intelligent Systems Reference Library |
Abbreviated Journal |
|
|
|
Volume |
48 |
Issue |
8 |
Pages |
135-151 |
|
|
Keywords |
|
|
|
Abstract |
In this chapter we present an example of a video surveillance application that exploits Multimodal Interactive (MI) technologies. The main objective of the so-called VID-Hum prototype was to develop a cognitive artificial system for both the detection and the description of a particular set of human behaviours arising from real-world events. The main procedure of the prototype described in this chapter entails: (i) adaptation, since the system adapts itself to the most common behaviours (qualitative data) inferred from tracking (quantitative data), thus being able to recognize abnormal behaviours; (ii) feedback, since an advanced interface based on Natural Language understanding allows end-users to communicate with the prototype by means of conceptual sentences; and (iii) multimodality, since a virtual avatar has been designed to describe what is happening in the scene, based on the textual interpretations generated by the prototype. Thus, the MI methodology has provided an adequate framework for all these cooperating processes. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1868-4394 |
ISBN |
978-3-642-35931-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; 605.203; 600.049 |
Approved |
no |
|
|
Call Number |
CGA2013 |
Serial |
2222 |
|
Permanent link to this record |
|
|
|
|
Author |
Francisco Javier Orozco; Ognjen Rudovic; Jordi Gonzalez; Maja Pantic |
|
|
Title |
Hierarchical On-line Appearance-Based Tracking for 3D Head Pose, Eyebrows, Lips, Eyelids and Irises |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Image and Vision Computing |
Abbreviated Journal |
IMAVIS |
|
|
Volume |
31 |
Issue |
4 |
Pages |
322-340 |
|
|
Keywords |
On-line appearance models; Levenberg–Marquardt algorithm; Line-search optimization; 3D face tracking; Facial action tracking; Eyelid tracking; Iris tracking |
|
|
Abstract |
In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can also be used for eyelid and iris tracking, as well as for tracking the 3D head pose and the facial actions of the lips and eyebrows. Furthermore, our approach applies on-line learning of changes in the appearance of the tracked target. Hence, the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, which are optimized using a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the proposed method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial action tracking in real-time. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; 605.203; 302.012; 302.018; 600.049 |
Approved |
no |
|
|
Call Number |
ORG2013 |
Serial |
2221 |
|
Permanent link to this record |
|
|
|
|
Author |
Muhammad Muzzamil Luqman; Jean-Yves Ramel; Josep Llados; Thierry Brouard |
|
|
Title |
Fuzzy Multilevel Graph Embedding |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
46 |
Issue |
2 |
Pages |
551-565 |
|
|
Keywords |
Pattern recognition; Graphics recognition; Graph clustering; Graph classification; Explicit graph embedding; Fuzzy logic |
|
|
Abstract |
Structural pattern recognition approaches offer the most expressive, convenient and powerful, but computationally expensive, representations of underlying relational information. To benefit from the mature, less expensive and efficient state-of-the-art machine learning models of statistical pattern recognition, these representations must be mapped to a low-dimensional vector space. Our method of explicit graph embedding bridges the gap between structural and statistical pattern recognition. We extract the topological, structural and attribute information from a graph and encode numeric details by fuzzy histograms and symbolic details by crisp histograms. The histograms are concatenated to achieve a simple and straightforward embedding of the graph into a low-dimensional numeric feature vector. Experimentation on standard public graph datasets shows that our method outperforms state-of-the-art methods of graph embedding for richly attributed graphs. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0031-3203 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.042; 600.045; 605.203 |
Approved |
no |
|
|
Call Number |
Admin @ si @ LRL2013a |
Serial |
2270 |
|
Permanent link to this record |
|
|
|
|
Author |
Anjan Dutta; Josep Llados; Umapada Pal |
|
|
Title |
A symbol spotting approach in graphical documents by hashing serialized graphs |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
46 |
Issue |
3 |
Pages |
752-768 |
|
|
Keywords |
Symbol spotting; Graphics recognition; Graph matching; Graph serialization; Graph factorization; Graph paths; Hashing |
|
|
Abstract |
In this paper we propose a symbol spotting technique in graphical documents. Graphs are used to represent the documents and a (sub)graph matching technique is used to detect the symbols in them. We propose a graph serialization to reduce the usual computational complexity of graph matching. Serialization of graphs is performed by computing acyclic graph paths between each pair of connected nodes. Graph paths are one-dimensional structures of graphs which are less expensive in terms of computation. At the same time they enable robust localization even in the presence of noise and distortion. Indexing in large graph databases involves a computational burden as well. We propose a graph factorization approach to tackle this problem. Factorization is intended to create a unified indexed structure over the database of graphical documents. Once graph paths are extracted, the entire database of graphical documents is indexed in hash tables by locality sensitive hashing (LSH) of shape descriptors of the paths. The hashing data structure aims to execute an approximate k-NN search in a sub-linear time. We have performed detailed experiments with various datasets of line drawings and compared our method with the state-of-the-art works. The results demonstrate the effectiveness and efficiency of our technique. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0031-3203 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.042; 600.045; 605.203; 601.152 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DLP2012 |
Serial |
2127 |
|
Permanent link to this record |