Author: Bogdan Raducanu; Fadi Dornaika
Title: Embedding new observations via sparse-coding for non-linear manifold learning
Type: Journal Article
Year: 2014  Publication: Pattern Recognition  Abbreviated Journal: PR
Volume: 47  Issue: 1  Pages: 480-492
Abstract: Non-linear dimensionality reduction techniques are affected by two critical aspects: (i) the design of the adjacency graph, and (ii) the embedding of new test data (the out-of-sample problem). For the first aspect, the solutions proposed so far have generally been heuristically driven. For the second, the difficulty lies in finding an accurate mapping that transfers unseen data samples into an existing manifold. Past works addressing these two aspects have been heavily parametric, in the sense that optimal performance is achieved only for a suitable parameter choice that must be known in advance. In this paper, we demonstrate that sparse representation theory not only serves for automatic graph construction, as shown in recent works, but also provides an accurate alternative for out-of-sample embedding. Taking Laplacian Eigenmaps as a case study, we applied our method to the face recognition problem. To evaluate the effectiveness of the proposed out-of-sample embedding, experiments were conducted using K-nearest neighbor (KNN) and kernel support vector machine (KSVM) classifiers on six public face datasets. The experimental results show that the proposed model achieves high categorization effectiveness as well as high consistency with non-linear embeddings/manifolds obtained in batch mode.
Notes: OR;MV  Approved: no
Call Number: Admin @ si @ RaD2013b  Serial: 2316
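The out-of-sample idea described in this abstract (code a new sample over the training set, then reuse the coefficients in the embedding space) can be illustrated with a minimal sketch. This is not the authors' implementation: the generic orthogonal matching pursuit below stands in for their sparse coder, and the function names and sparsity level `k` are illustrative assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: sparse coefficients w with x ~ D @ w.
    D: (d, n) dictionary whose columns are training samples; k: sparsity."""
    residual = x.astype(float).copy()
    idx, coef = [], np.array([])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j in idx:
            break
        idx.append(j)
        A = D[:, idx]
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)  # refit on support
        residual = x - A @ coef
    w = np.zeros(D.shape[1])
    w[idx] = coef
    return w

def embed_new_sample(X_train, Y_train, x_new, k=2):
    """Map a new sample into an existing embedding.
    X_train: (d, n) training samples as columns; Y_train: (m, n) their
    embedding coordinates as columns."""
    w = omp(X_train, x_new, k)
    return Y_train @ w  # same sparse combination in embedding space

# toy check: a new point midway between two training samples should land
# midway between their embeddings
X = np.eye(2)                       # two training samples e1, e2
Y = np.array([[1.0, 2.0]])          # their 1-D embedding coordinates
y_new = embed_new_sample(X, Y, np.array([0.5, 0.5]))
```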
Author: Cesar Isaza; Joaquin Salas; Bogdan Raducanu
Title: Rendering ground truth data sets to detect shadows cast by static objects in outdoors
Type: Journal Article
Year: 2014  Publication: Multimedia Tools and Applications  Abbreviated Journal: MTAP
Volume: 70  Issue: 1  Pages: 557-571
Keywords: Synthetic ground truth data set; Sun position; Shadow detection; Static objects shadow detection
Abstract: In our work, we are particularly interested in studying the shadows cast by static objects in outdoor environments during daytime. To assess the accuracy of a shadow detection algorithm, ground truth information is needed, and collecting it is a very tedious task because it requires manual annotation. To overcome this severe limitation, we propose in this paper a methodology to automatically render ground truth using a virtual environment. To increase the degree of realism and usefulness of the simulated environment, we incorporate into the scenario the precise longitude, latitude, and elevation of the actual location of the object, as well as the sun's position for a given time and day. To evaluate our method, we consider both a qualitative and a quantitative comparison. In the quantitative one, we analyze the shadow cast by a real object at a particular geographical location and by its corresponding rendered model. To evaluate the methodology qualitatively, we use ground truth images obtained both manually and automatically.
Publisher: Springer US
ISSN: 1380-7501
Notes: OR;MV  Approved: no
Call Number: Admin @ si @ ISR2014  Serial: 2229
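The sun-position input that this abstract feeds into the rendered scene can be approximated in a few lines. The sketch below uses a common textbook approximation (Cooper-style declination plus the hour angle); it is an illustration of the kind of computation involved, not the solar model the paper actually uses, and the function name and local-solar-time convention are assumptions.

```python
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees.

    lat_deg: latitude in degrees; day_of_year: 1..365;
    solar_hour: local solar time (12.0 = solar noon)."""
    # approximate solar declination (degrees) for the given day
    decl = -23.45 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # hour angle: 15 degrees per hour away from solar noon
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(v) for v in (lat_deg, decl, hour_angle))
    sin_el = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    return math.degrees(math.asin(sin_el))

# sanity check: at the equator near the March equinox, the noon sun is
# close to the zenith, and it is below the horizon at midnight
noon_el = solar_elevation(0.0, 80, 12.0)
midnight_el = solar_elevation(0.0, 80, 0.0)
```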
Author: Laura Igual; Agata Lapedriza; Ricard Borras
Title: Robust Gait-Based Gender Classification using Depth Cameras
Type: Journal Article
Year: 2013  Publication: EURASIP Journal on Advances in Signal Processing  Abbreviated Journal: EURASIPJ
Volume: 37  Issue: 1  Pages: 72-80
Abstract: This article presents a new approach for gait-based gender recognition using depth cameras that can run in real time. The main contribution of this study is a new fast feature extraction strategy that uses the 3D point cloud obtained from the frames in a gait cycle. For each frame, these points are aligned according to their centroid and grouped. After that, they are projected onto their PCA plane, obtaining a representation of the cycle that is particularly robust against view changes. Final discriminative features are then computed by first building a histogram of the projected points and then applying linear discriminant analysis. To test the method we used the DGait database, which is currently the only publicly available database for gait analysis that includes depth information. We performed experiments on manually labeled cycles and on whole video sequences, and the results show that our method improves accuracy significantly compared with state-of-the-art systems that do not use depth information. Furthermore, our approach is insensitive to illumination changes, since it discards the RGB information. This makes the method especially suitable for real applications, as illustrated in the last part of the experiments section.
Notes: MILAB; OR;MV  Approved: no
Call Number: Admin @ si @ ILB2013  Serial: 2144
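The per-frame pipeline this abstract describes (centroid alignment, projection onto the PCA plane, histogram of the projected points) can be sketched generically; the LDA stage that follows in the paper is omitted here. This is an illustration of the technique, not the authors' code, and the function name and bin count are assumptions.

```python
import numpy as np

def gait_frame_feature(points, bins=8):
    """Histogram feature for one depth frame.

    points: (N, 3) array, the 3D point cloud of the person in the frame.
    Returns a normalized histogram of the points projected onto their
    PCA plane (the two directions of largest variance)."""
    centered = points - points.mean(axis=0)          # align by centroid
    cov = centered.T @ centered / len(points)
    _, vecs = np.linalg.eigh(cov)                    # eigenvalues ascending
    plane = vecs[:, -2:]                             # top-2 principal axes
    proj = centered @ plane                          # (N, 2) projection
    hist, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=bins)
    return hist.ravel() / len(points)                # normalized feature

# usage: one feature vector per frame; a cycle descriptor would stack or
# average these before the discriminant-analysis step
rng = np.random.default_rng(0)
feat = gait_frame_feature(rng.normal(size=(500, 3)))
```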
Author: Fadi Dornaika; Abdelmalik Moujahid; Bogdan Raducanu
Title: Facial expression recognition using tracked facial actions: Classifier performance analysis
Type: Journal Article
Year: 2013  Publication: Engineering Applications of Artificial Intelligence  Abbreviated Journal: EAAI
Volume: 26  Issue: 1  Pages: 467-477
Keywords: Visual face tracking; 3D deformable models; Facial actions; Dynamic facial expression recognition; Human–computer interaction
Abstract: In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously estimates the 3D head pose and facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first adopts a dynamic time warping technique for recognizing expressions, where training data are given by temporal signatures associated with different universal facial expressions. The second models the temporal signatures associated with facial actions as fixed-length feature vectors (observations) and uses machine learning algorithms to recognize the displayed expression. Experiments quantifying the performance of the different schemes were carried out on CMU video sequences and home-made video sequences. The results show that applying dimension reduction techniques to the extracted time series can improve classification performance, and that the best recognition rate can exceed 90%.
Publisher: Elsevier
Notes: OR; 600.046;MV  Approved: no
Call Number: Admin @ si @ DMR2013  Serial: 2185
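The first scheme in this abstract compares a test sequence against stored temporal signatures via dynamic time warping. A minimal, generic DTW distance for 1-D signatures is sketched below; the paper's signatures are multi-dimensional facial-action trajectories, so this is only an illustration of the matching step, with an assumed absolute-difference local cost.

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    using |a[i] - b[j]| as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# usage: classify a test signature by the training signature with the
# smallest DTW distance (nearest-neighbor over templates)
d_same = dtw([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # identical sequences
d_diff = dtw([0.0, 0.0], [1.0, 1.0])              # constant offset
```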
Author: Rozenn Dhayot; Fernando Vilariño; Gerard Lacey
Title: Improving the Quality of Color Colonoscopy Videos
Type: Journal Article
Year: 2008  Publication: EURASIP Journal on Image and Video Processing  Abbreviated Journal: EURASIP JIVP
Volume: 139429  Issue: 1  Pages: 1-9
Area: 800
Notes: MV;SIAI  Approved: no
Call Number: fernando @ fernando @  Serial: 2422