Records
Author Mariella Dimiccoli; Benoît Girard; Alain Berthoz; Daniel Bennequin
Title Striola Magica: a functional explanation of otolith organs Type Journal Article
Year 2013 Publication Journal of Computational Neuroscience Abbreviated Journal JCN
Volume 35 Issue 2 Pages 125-154
Keywords Otolith organs; Striola; Vestibular pathway
Abstract Otolith end organs of vertebrates sense linear accelerations of the head and gravitation. The hair cells on their epithelia are responsible for transduction. In mammals, the striola, parallel to the line where hair cells reverse their polarization, is a narrow region centered on a curve with curvature and torsion. It has been shown that the striolar region is functionally different from the rest, being involved in a phasic vestibular pathway. We propose a mathematical and computational model that explains the necessity of this amazing geometry for the striola to be able to carry out its function. Our hypothesis, related to the biophysics of the hair cells and to the physiology of their afferent neurons, is that striolar afferents collect information from several type I hair cells to detect the jerk in a large domain of acceleration directions. This predicts a mean number of two calyces for afferent neurons, as measured in rodents. The domain of acceleration directions sensed by our striolar model is compatible with the experimental results obtained on monkeys considering all afferents. Therefore, the main result of our study is that phasic and tonic vestibular afferents cover the same geometrical fields, but at different dynamical and frequency domains.
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1573-6873 ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ DBG2013 Serial 2787
Permanent link to this record
 

 
Author Adriana Romero; Carlo Gatta
Title Do We Really Need All These Neurons? Type Conference Article
Year 2013 Publication 6th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 7887 Issue Pages 460-467
Keywords Restricted Boltzmann Machine; hidden units; unsupervised learning; classification
Abstract Restricted Boltzmann Machines (RBMs) are generative neural networks that have received much attention recently. In particular, choosing the appropriate number of hidden units is important, since an unsuitable choice might hinder their representative power. According to the literature, RBMs require numerous hidden units to approximate any distribution properly. In this paper, we present an experiment to determine whether such an amount of hidden units is required in a classification context. We then propose an incremental algorithm that trains an RBM by reusing the previously trained parameters, using a trade-off measure to determine the appropriate number of hidden units. Results on the MNIST and OCR letters databases show that a number of hidden units one order of magnitude smaller than the literature estimate suffices to achieve similar performance. Moreover, the proposed algorithm allows the required number of hidden units to be estimated without training many RBMs from scratch.
Address Madeira; Portugal; June 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-38627-5 Medium
Area Expedition Conference IbPRIA
Notes MILAB; 600.046 Approved no
Call Number Admin @ si @ RoG2013 Serial 2311
Permanent link to this record
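A minimal sketch of the incremental-growth idea summarized in the record above: a small RBM is trained, the hidden layer is then enlarged while reusing the already-trained weights, and growth stops when a trade-off measure no longer pays off. The CD-1 updates, the growth step size, and the stopping criterion (relative drop in reconstruction error) are assumptions of this sketch, not the paper's exact algorithm.

```python
# Hedged sketch: grow an RBM's hidden layer incrementally, reusing trained weights.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_epoch(V, W, b_h, b_v, lr=0.05):
    """One epoch of contrastive divergence (CD-1) over binary data V."""
    H_prob = sigmoid(V @ W + b_h)
    H = (rng.random(H_prob.shape) < H_prob).astype(float)
    V_rec = sigmoid(H @ W.T + b_v)
    H_rec = sigmoid(V_rec @ W + b_h)
    W += lr * (V.T @ H_prob - V_rec.T @ H_rec) / len(V)
    b_h += lr * (H_prob - H_rec).mean(axis=0)
    b_v += lr * (V - V_rec).mean(axis=0)
    return np.mean((V - V_rec) ** 2)          # reconstruction error

def grow(W, b_h, extra):
    """Add `extra` hidden units, keeping the previously trained weights."""
    W_new = np.hstack([W, 0.01 * rng.standard_normal((W.shape[0], extra))])
    return W_new, np.concatenate([b_h, np.zeros(extra)])

V = (rng.random((500, 64)) < 0.3).astype(float)   # toy binary data
n_vis, n_hid, extra = V.shape[1], 8, 8
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_h, b_v = np.zeros(n_hid), np.zeros(n_vis)

prev_err = np.inf
while True:
    for _ in range(20):
        err = cd1_epoch(V, W, b_h, b_v)
    if prev_err - err < 0.01 * prev_err or W.shape[1] >= 256:
        break                                      # trade-off no longer worth it
    prev_err = err
    W, b_h = grow(W, b_h, extra)

print(f"kept {W.shape[1]} hidden units, reconstruction error {err:.4f}")
```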
 

 
Author Francesco Ciompi; Rui Hua; Simone Balocco; Marina Alberti; Oriol Pujol; Carles Caus; J. Mauri; Petia Radeva
Title Learning to Detect Stent Struts in Intravascular Ultrasound Type Conference Article
Year 2013 Publication 6th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 7887 Issue Pages 575-583
Keywords
Abstract In this paper we tackle the automatic detection of strut elements (the metallic braces of a stent device) in Intravascular Ultrasound (IVUS) sequences. The proposed method is based on context-aware classification of IVUS images using Multi-Class Multi-Scale Stacked Sequential Learning (M2SSL). Additionally, we introduce a novel technique to reduce the amount of required contextual features. A comparison with binary and multi-class learning is also performed, using a dataset of IVUS images with struts manually annotated by an expert. The best performing configuration reaches an F-measure of F = 63.97%.
Address Madeira; Portugal; June 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-38627-5 Medium
Area Expedition Conference IbPRIA
Notes MILAB; HuPBA; 605.203; 600.046 Approved no
Call Number Admin @ si @ CHB2013 Serial 2349
Permanent link to this record
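The record above relies on Multi-Class Multi-Scale Stacked Sequential Learning. The sketch below illustrates the general stacked sequential idea only: a first classifier produces per-pixel class-probability maps, these maps are smoothed at several scales to form contextual features, and a second classifier is trained on appearance plus context. The features, scales, and classifiers are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of multi-scale stacked sequential learning on a toy image grid.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
H, W, n_classes = 64, 64, 3

# Toy per-pixel features and labels standing in for IVUS data.
X_img = rng.standard_normal((H, W, 5))
y_img = rng.integers(0, n_classes, size=(H, W))
X, y = X_img.reshape(-1, X_img.shape[-1]), y_img.ravel()

# Stage 1: base classifier on appearance features only.
# (Ideally the stage-1 probabilities would come from cross-validated predictions.)
base = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
prob_maps = base.predict_proba(X).reshape(H, W, n_classes)

# Contextual features: class-probability maps blurred at multiple scales.
scales = (1, 2, 4)                     # assumed scales
context = [gaussian_filter(prob_maps[..., c], sigma=s)
           for s in scales for c in range(n_classes)]
X_ctx = np.concatenate([X, np.stack(context, axis=-1).reshape(len(X), -1)], axis=1)

# Stage 2: classifier on appearance + multi-scale context.
stacked = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_ctx, y)
print("stage-2 feature dimension:", X_ctx.shape[1])
```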
 

 
Author Michal Drozdzal; Santiago Segui; Carolina Malagelada; Fernando Azpiroz; Petia Radeva
Title Adaptable image cuts for motility inspection using WCE Type Journal Article
Year 2013 Publication Computerized Medical Imaging and Graphics Abbreviated Journal CMIG
Volume 37 Issue 1 Pages 72-80
Keywords
Abstract The Wireless Capsule Endoscopy (WCE) technology allows the visualization of the whole small intestine tract. Since the capsule moves freely, mainly by means of peristalsis, the data acquired during the study provide a great deal of information about intestinal motility. However, owing to (1) the huge number of frames, (2) the complex appearance of the intestinal scene and (3) intestinal dynamics that make it difficult to visualize the physiological phenomena of the small intestine, the analysis of WCE data requires computer-aided systems to speed up the analysis. In this paper, we propose an efficient algorithm for building a novel representation of the WCE video data that is optimal for motility analysis and inspection. The algorithm transforms the 3D video data into a 2D longitudinal view by choosing the most informative part of each frame from the intestinal motility point of view. This step maximizes the lumen visibility in its longitudinal extension. The task of finding “the best longitudinal view” is defined as a cost function optimization problem whose global minimum is obtained using Dynamic Programming. Validation on both synthetic data and WCE data shows that the adaptive longitudinal view is a good alternative to traditional motility analysis based on video analysis. The proposed novel data representation gives a new, holistic insight into small intestine motility, making it easy to define and analyze motility events that are difficult to spot by analyzing the WCE video. Moreover, visual inspection of small intestine motility is 4 times faster than video skimming of the WCE.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR; 600.046; 605.203 Approved no
Call Number Admin @ si @ DSM2012 Serial 2151
Permanent link to this record
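A minimal sketch of the dynamic-programming formulation described in the record above: one cut position is chosen per frame so that a per-frame cost plus a smoothness penalty between consecutive frames is globally minimal. The cost values and the absolute-difference penalty are placeholders; the paper's cost function, which measures lumen visibility, is more elaborate.

```python
# Hedged sketch: globally optimal per-frame cut selection via dynamic programming.
import numpy as np

def best_longitudinal_cut(cost, smooth=1.0):
    """cost[t, j]: cost of choosing cut position j at frame t (lower = better)."""
    T, J = cost.shape
    acc = cost.copy()                    # accumulated cost table
    back = np.zeros((T, J), dtype=int)   # backpointers
    positions = np.arange(J)
    for t in range(1, T):
        # transition cost: previous accumulated cost + smoothness penalty |j - j_prev|
        trans = acc[t - 1][None, :] + smooth * np.abs(positions[:, None] - positions[None, :])
        back[t] = trans.argmin(axis=1)
        acc[t] += trans.min(axis=1)
    path = np.empty(T, dtype=int)
    path[-1] = acc[-1].argmin()
    for t in range(T - 2, -1, -1):       # backtrack the globally optimal path
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy example: 100 frames, 50 candidate cut positions per frame.
rng = np.random.default_rng(0)
toy_cost = rng.random((100, 50))
cut = best_longitudinal_cut(toy_cost, smooth=0.2)
print("first ten cut positions:", cut[:10])
```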
 

 
Author Michal Drozdzal; Santiago Segui; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria
Title An Application for Efficient Error-Free Labeling of Medical Images Type Book Chapter
Year 2013 Publication Multimodal Interaction in Image and Video Applications Abbreviated Journal
Volume 48 Issue Pages 1-16
Keywords
Abstract In this chapter we describe an application for efficient error-free labeling of medical images. In this scenario, the compilation of a complete training set for building a realistic model of a given class of samples is not an easy task, making the process tedious and time consuming. For this reason, there is a need for interactive labeling applications that minimize the effort of the user while providing error-free labeling. We propose a new algorithm that is based on data similarity in feature space. This method actively explores the data in order to find the best label-aligned clustering and exploits it to reduce the labeler effort, which is measured by the number of “clicks”. Moreover, error-free labeling is guaranteed by the fact that all data and their label proposals are visually reviewed by an expert.
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1868-4394 ISBN 978-3-642-35931-6 Medium
Area Expedition Conference
Notes MILAB; OR; MV Approved no
Call Number Admin @ si @ DSR2013 Serial 2235
Permanent link to this record
 

 
Author Laura Igual; Agata Lapedriza; Ricard Borras
Title Robust Gait-Based Gender Classification using Depth Cameras Type Journal Article
Year 2013 Publication EURASIP Journal on Advances in Signal Processing Abbreviated Journal EURASIPJ
Volume 37 Issue 1 Pages 72-80
Keywords
Abstract This article presents a new approach for gait-based gender recognition using depth cameras that can run in real time. The main contribution of this study is a new fast feature extraction strategy that uses the 3D point cloud obtained from the frames in a gait cycle. For each frame, these points are aligned according to their centroid and grouped. After that, they are projected onto their PCA plane, obtaining a representation of the cycle that is particularly robust against view changes. Then, final discriminative features are computed by first making a histogram of the projected points and then using linear discriminant analysis. To test the method we have used the DGait database, which is currently the only publicly available database for gait analysis that includes depth information. We have performed experiments on manually labeled cycles and over whole video sequences, and the results show that our method improves the accuracy significantly compared with state-of-the-art systems that do not use depth information. Furthermore, our approach is insensitive to illumination changes, given that it discards the RGB information. This makes the method especially suitable for real applications, as illustrated in the last part of the experiments section.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR; MV Approved no
Call Number Admin @ si @ ILB2013 Serial 2144
Permanent link to this record
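An illustrative sketch (on synthetic data) of the feature pipeline summarized in the record above: each frame's 3D point cloud is centred on its centroid, projected onto its PCA plane, accumulated into a 2D histogram over the gait cycle, and the flattened histograms are fed to linear discriminant analysis. Bin counts, point-cloud sizes, and the histogram range are assumptions.

```python
# Hedged sketch: centroid alignment + PCA-plane projection + histogram + LDA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def cycle_descriptor(frames, bins=16, hist_range=((-1, 1), (-1, 1))):
    """frames: list of (N_i, 3) point clouds from one gait cycle."""
    hist = np.zeros((bins, bins))
    for pts in frames:
        centred = pts - pts.mean(axis=0)                    # align by centroid
        proj = PCA(n_components=2).fit_transform(centred)   # project onto PCA plane
        h, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=bins, range=hist_range)
        hist += h
    return (hist / hist.sum()).ravel()                      # normalised 2D histogram

# Toy dataset: 40 gait cycles, 20 frames each, ~500 depth points per frame.
X = np.array([cycle_descriptor([rng.standard_normal((500, 3))
                                for _ in range(20)]) for _ in range(40)])
y = rng.integers(0, 2, size=40)                             # toy gender labels

lda = LinearDiscriminantAnalysis().fit(X, y)                # final discriminative step
print("training accuracy on toy data:", lda.score(X, y))
```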
 

 
Author Mirko Arnold; Anarta Ghosh; Glen Doherty; Hugh Mulcahy; Stephen Patchett; Gerard Lacey
Title Towards Automatic Direct Observation of Procedure and Skill (DOPS) in Colonoscopy Type Conference Article
Year 2013 Publication Proceedings of the International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume Issue Pages 48-53
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area 800 Expedition Conference VISIGRAPP
Notes MV Approved no
Call Number fernando @ fernando @ Serial 2427
Permanent link to this record
 

 
Author Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
Title Impact of Image Preprocessing Methods on Polyp Localization in Colonoscopy Frames Type Conference Article
Year 2013 Publication 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society Abbreviated Journal
Volume Issue Pages 7350-7354
Keywords
Abstract In this paper we present our image preprocessing methods as a key part of our automatic polyp localization scheme. These methods are used to assess the impact of different endoluminal scene elements when characterizing polyps. More precisely we tackle the influence of specular highlights, blood vessels and black mask surrounding the scene. Experimental results prove that the appropriate handling of these elements leads to a great improvement in polyp localization results.
Address Osaka; Japan; July 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1557-170X ISBN Medium
Area 800 Expedition Conference EMBC
Notes MV; 600.047; 600.060; SIAI Approved no
Call Number Admin @ si @ BSV2013 Serial 2286
Permanent link to this record
 

 
Author Joan M. Nuñez; Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
Title Blood Vessel Characterization in Colonoscopy Images to Improve Polyp Localization Type Conference Article
Year 2013 Publication Proceedings of the International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume 1 Issue Pages 162-171
Keywords Colonoscopy; Blood vessel; Linear features; Valley detection
Abstract This paper presents an approach to mitigate the contribution of blood vessels to the energy image used in different tasks of automatic colonoscopy image analysis. This goal is achieved by introducing a characterization of endoluminal scene objects that allows us to differentiate between the trace of 2-dimensional visual objects, such as vessels, and shades from 3-dimensional visual objects, such as folds. The proposed characterization is based on the influence that the object shape has on the resulting visual feature, and it leads to the development of a blood vessel attenuation algorithm. A database consisting of manually labelled masks was built in order to test the performance of our method, which shows an encouraging success in blood vessel mitigation while keeping other structures intact. Moreover, by extending our method to the only available polyp localization algorithm tested on a public database, blood vessel mitigation proved to have a positive influence on the overall performance.
Address Barcelona; February 2013
Corporate Author Thesis
Publisher SciTePress Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area 800 Expedition Conference VISIGRAPP
Notes MV; 600.054; 600.057; SIAI Approved no
Call Number IAM @ iam @ NBS2013 Serial 2198
Permanent link to this record
 

 
Author Carles Sanchez; Jorge Bernal; Debora Gil; F. Javier Sanchez
Title On-line lumen centre detection in gastrointestinal and respiratory endoscopy Type Conference Article
Year 2013 Publication Second International Workshop Clinical Image-Based Procedures Abbreviated Journal
Volume 8361 Issue Pages 31-38
Keywords Lumen centre detection; Bronchoscopy; Colonoscopy
Abstract We present in this paper a novel lumen centre detection method for gastrointestinal and respiratory endoscopic images. The proposed method is based on the appearance and geometry of the lumen, which we define as the darkest image region whose centre is a hub of image gradients. Experimental results validated on the first public annotated gastro-respiratory database prove the reliability of the method for a wide range of images (with precision over 95%).
Address Nagoya; Japan; September 2013
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor Erdt, Marius; Linguraru, Marius George; Oyarzun Laura, Cristina; Shekhar, Raj; Wesarg, Stefan; González Ballester, Miguel Angel; Drechsler, Klaus
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-319-05665-4 Medium
Area 800 Expedition Conference CLIP
Notes MV; IAM; 600.047; 600.044; 600.060 Approved no
Call Number Admin @ si @ SBG2013 Serial 2302
Permanent link to this record
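A hedged sketch of the appearance-plus-geometry cue described in the record above: the lumen centre is approximated as a point that is both dark and a hub towards which (negative) image gradients converge. The voting radii, the smoothing, and the way the two cues are combined are assumptions of this sketch, not the authors' exact formulation.

```python
# Hedged sketch: combine a darkness map with a gradient-convergence accumulator.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def lumen_centre(gray, radii=range(5, 40, 5), sigma=3.0, mag_thr=0.05):
    img = gaussian_filter(gray.astype(float), sigma)
    gy, gx = sobel(img, axis=0), sobel(img, axis=1)
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(img)
    ys, xs = np.nonzero(mag > mag_thr * mag.max())       # strong-gradient pixels
    ux, uy = gx[ys, xs] / mag[ys, xs], gy[ys, xs] / mag[ys, xs]
    for r in radii:                                       # vote towards the darker side
        vy = np.clip((ys - r * uy).astype(int), 0, img.shape[0] - 1)
        vx = np.clip((xs - r * ux).astype(int), 0, img.shape[1] - 1)
        np.add.at(acc, (vy, vx), 1.0)
    darkness = (img.max() - img) / (img.max() - img.min() + 1e-9)
    score = gaussian_filter(acc, sigma) * darkness        # hub of gradients AND dark
    return np.unravel_index(score.argmax(), score.shape)

# Toy usage: a synthetic dark blob on a bright background.
yy, xx = np.mgrid[0:200, 0:200]
frame = 200 - 150 * np.exp(-(((yy - 120) ** 2 + (xx - 80) ** 2) / (2 * 25.0 ** 2)))
print("estimated lumen centre (row, col):", lumen_centre(frame))
```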
 

 
Author Onur Ferhat; Fernando Vilariño
Title A Cheap Portable Eye-Tracker Solution for Common Setups Type Conference Article
Year 2013 Publication 17th European Conference on Eye Movements Abbreviated Journal
Volume Issue Pages
Keywords Low cost; eye-tracker; software; webcam; Raspberry Pi
Abstract We analyze the feasibility of a cheap eye-tracker where the hardware consists of a single webcam and a Raspberry Pi device. Our aim is to discover the limits of such a system and to see whether it provides an acceptable performance. We base our work on the open source Opengazer (Zielinski, 2013) and we propose several improvements to create a robust, real-time system. After assessing the accuracy of our eye-tracker in elaborate experiments involving 18 subjects under 4 different system setups, we developed a simple game to see how it performs in practice, and we also installed it on a Raspberry Pi to create a portable stand-alone eye-tracker that achieves 1.62° horizontal accuracy at a 3 fps refresh rate for a build cost of 70 Euros.
Address Lund; Sweden; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECEM
Notes MV; SIAI Approved no
Call Number Admin @ si @ FeV2013 Serial 2374
Permanent link to this record
 

 
Author Bogdan Raducanu; Fadi Dornaika
Title Texture-independent recognition of facial expressions in image snapshots and videos Type Journal Article
Year 2013 Publication Machine Vision and Applications Abbreviated Journal MVA
Volume 24 Issue 4 Pages 811-820
Keywords
Abstract This paper addresses the static and dynamic recognition of basic facial expressions. It has two main contributions. First, we introduce a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker. We represent the learned facial actions associated with different facial expressions by time series. Second, we compare this dynamic scheme with a static one based on analyzing individual snapshots and show that the former performs better than the latter. We provide evaluations of performance using three subspace learning techniques: linear discriminant analysis, non-parametric discriminant analysis and support vector machines.
Address
Corporate Author Thesis
Publisher Springer-Verlag Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0932-8092 ISBN Medium
Area Expedition Conference
Notes OR; 600.046; 605.203; MV Approved no
Call Number Admin @ si @ RaD2013 Serial 2230
Permanent link to this record
 

 
Author Santiago Segui; Laura Igual; Jordi Vitria
Title Bagged One Class Classifiers in the Presence of Outliers Type Journal Article
Year 2013 Publication International Journal of Pattern Recognition and Artificial Intelligence Abbreviated Journal IJPRAI
Volume 27 Issue 5 Pages 1350014-1350035
Keywords One-class Classifier; Ensemble Methods; Bagging and Outliers
Abstract The problem of training classifiers only with target data arises in many applications where non-target data are too costly, difficult to obtain, or not available at all. Several one-class classification methods have been presented to solve this problem, but most of the methods are highly sensitive to the presence of outliers in the target class. Ensemble methods have therefore been proposed as a powerful way to improve the classification performance of binary/multi-class learning algorithms by introducing diversity into classifiers.
However, their application to one-class classification has been rather limited. In this paper, we present a new ensemble method based on a non-parametric weighted bagging strategy for one-class classification, to improve accuracy in the presence of outliers. While the standard bagging strategy assumes a uniform data distribution, the method we propose here estimates a probability density based on a forest structure of the data. This assumption allows the data distribution to be estimated from the computation of simple univariate and bivariate kernel densities. Experiments using original and noisy versions of 20 different datasets show that bagging ensemble methods applied to different one-class classifiers outperform base one-class classification methods. Moreover, we show that, in noisy versions of the datasets, the non-parametric weighted bagging strategy we propose outperforms the classical bagging strategy in a statistically significant way.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR; 600.046; MV Approved no
Call Number Admin @ si @ SIV2013 Serial 2256
Permanent link to this record
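A hedged sketch of weighted bagging for one-class classification in the spirit of the record above: bootstrap replicates are drawn with probabilities proportional to an estimated target-class density, so low-density points (likely outliers) are sampled less often; each replicate trains a one-class learner and their decision scores are averaged. The paper estimates the density with a forest-based combination of univariate and bivariate kernels; a single KernelDensity estimate and a OneClassSVM base learner are used here purely as stand-ins.

```python
# Hedged sketch: density-weighted bagging of one-class classifiers.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Toy target class with a few injected outliers.
X = np.vstack([rng.standard_normal((200, 2)),
               rng.uniform(-6, 6, size=(10, 2))])

# Sampling weights from an estimated density of the (noisy) target class.
logp = KernelDensity(bandwidth=0.5).fit(X).score_samples(X)
w = np.exp(logp)
w /= w.sum()

ensemble = []
for _ in range(25):                                   # bagged one-class learners
    idx = rng.choice(len(X), size=len(X), replace=True, p=w)
    ensemble.append(OneClassSVM(nu=0.1, gamma="scale").fit(X[idx]))

def score(Xtest):
    """Average decision score over the ensemble; higher = more target-like."""
    return np.mean([m.decision_function(Xtest) for m in ensemble], axis=0)

print("score of an inlier vs. a far outlier:",
      score(np.array([[0.0, 0.0], [8.0, 8.0]])))
```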
 

 
Author Fadi Dornaika; Bogdan Raducanu
Title Out-of-Sample Embedding for Manifold Learning Applied to Face Recognition Type Conference Article
Year 2013 Publication IEEE International Workshop on Analysis and Modeling of Faces and Gestures Abbreviated Journal
Volume Issue Pages 862-868
Keywords
Abstract Manifold learning techniques are affected by two critical aspects: (i) the design of the adjacency graphs, and (ii) the embedding of new test data---the out-of-sample problem. For the first aspect, the proposed schemes were heuristically driven. For the second aspect, the difficulty resides in finding an accurate mapping that transfers unseen data samples into an existing manifold. Past works addressing these two aspects were heavily parametric in the sense that the optimal performance is only reached for a suitable parameter choice that should be known in advance. In this paper, we demonstrate that sparse coding theory not only serves for automatic graph reconstruction as shown in recent works, but also represents an accurate alternative for out-of-sample embedding. Considering for a case study the Laplacian Eigenmaps, we applied our method to the face recognition problem. To evaluate the effectiveness of the proposed out-of-sample embedding, experiments are conducted using the k-nearest neighbor (KNN) and Kernel Support Vector Machines (KSVM) classifiers on four public face databases. The experimental results show that the proposed model is able to achieve high categorization effectiveness as well as high consistency with non-linear embeddings/manifolds obtained in batch modes.
Address Portland; USA; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes OR; 600.046; MV Approved no
Call Number Admin @ si @ DoR2013 Serial 2236
Permanent link to this record
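An illustrative sketch of the sparse-coding out-of-sample idea discussed in the record above: a new sample is written as a sparse combination of training samples, and the same coefficients are applied to the training samples' Laplacian Eigenmaps coordinates. The Lasso solver, its penalty, and the normalization are assumptions; the paper's exact coding scheme may differ.

```python
# Hedged sketch: out-of-sample embedding by sparse coding over the training set.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import Lasso
from sklearn.manifold import SpectralEmbedding

X = load_digits().data
X_train, x_new = X[:500], X[500]

# Batch (in-sample) embedding: Laplacian Eigenmaps on the training set.
Y_train = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X_train)

# Sparse code of the new sample over the training samples (dictionary = X_train.T).
coder = Lasso(alpha=0.1, positive=True, max_iter=5000)
coder.fit(X_train.T, x_new)          # solves x_new ≈ X_train.T @ w with sparse w
w = coder.coef_

# Out-of-sample embedding: same sparse weights applied to the training embedding.
y_new = w @ Y_train / (w.sum() + 1e-12)
print("non-zero coefficients:", np.count_nonzero(w), " embedded point:", y_new)
```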
 

 
Author Fadi Dornaika; Abdelmalik Moujahid; Bogdan Raducanu
Title Facial expression recognition using tracked facial actions: Classifier performance analysis Type Journal Article
Year 2013 Publication Engineering Applications of Artificial Intelligence Abbreviated Journal EAAI
Volume 26 Issue 1 Pages 467-477
Keywords Visual face tracking; 3D deformable models; Facial actions; Dynamic facial expression recognition; Human–computer interaction
Abstract In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously provides the 3D head pose and the facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first scheme adopts a dynamic time warping technique for recognizing expressions, where training data are given by temporal signatures associated with different universal facial expressions. The second scheme models the temporal signatures associated with facial actions as fixed-length feature vectors (observations), and uses machine learning algorithms to recognize the displayed expression. Experiments carried out on CMU video sequences and home-made video sequences quantified the performance of the different schemes. The results show that the use of dimension reduction techniques on the extracted time series can improve the classification performance. Moreover, these experiments show that the best recognition rate can be above 90%.
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR; 600.046; MV Approved no
Call Number Admin @ si @ DMR2013 Serial 2185
Permanent link to this record
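A minimal sketch of the first scheme described in the record above: expressions are recognized by nearest-template matching of facial-action time series under a dynamic time warping (DTW) distance. The number of facial-action channels and the toy signatures are assumptions used only for illustration.

```python
# Hedged sketch: nearest-template expression recognition with DTW.
import numpy as np

def dtw(a, b):
    """DTW distance between two (T, d) multivariate time series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """templates: list of (label, signature) pairs; returns the nearest label."""
    return min(templates, key=lambda t: dtw(query, t[1]))[0]

# Toy templates: one signature per expression, 6 facial-action channels.
rng = np.random.default_rng(0)
templates = [(label, rng.standard_normal((30, 6)))
             for label in ("happy", "surprise", "disgust")]
query = templates[1][1][::2] + 0.05 * rng.standard_normal((15, 6))  # warped copy
print("predicted expression:", classify(query, templates))
```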