Author |
Laura Igual; Agata Lapedriza; Ricard Borras |
|
|
Title |
Robust Gait-Based Gender Classification using Depth Cameras |
Type |
Journal Article |
|
Year |
2013 |
Publication |
EURASIP Journal on Advances in Signal Processing |
Abbreviated Journal |
EURASIPJ |
|
|
Volume |
37 |
Issue |
1 |
Pages |
72-80 |
|
|
Keywords |
|
|
|
Abstract |
This article presents a new approach for gait-based gender recognition using depth cameras that runs in real time. The main contribution of this study is a new fast feature extraction strategy that uses the 3D point cloud obtained from the frames in a gait cycle. For each frame, these points are aligned according to their centroid and grouped. After that, they are projected onto their PCA plane, yielding a representation of the cycle that is particularly robust to view changes. Final discriminative features are then computed by first building a histogram of the projected points and then applying linear discriminant analysis. To test the method, we used the DGait database, which is currently the only publicly available database for gait analysis that includes depth information. We performed experiments on manually labeled cycles and on whole video sequences, and the results show that our method significantly improves accuracy compared with state-of-the-art systems that do not use depth information. Furthermore, our approach is insensitive to illumination changes, given that it discards the RGB information. This makes the method especially suitable for real applications, as illustrated in the last part of the experiments section. |
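The pipeline in the abstract (centroid alignment, PCA-plane projection, histogram, linear discriminant analysis) can be sketched as follows. This is only an illustrative reconstruction under assumed details (bin count, synthetic data, a ridge-regularised two-class Fisher direction), not the paper's implementation:

```python
import numpy as np

def cycle_features(frames, bins=16):
    """Pool centroid-aligned 3D points from the frames of one gait cycle,
    project them onto their top-2 PCA plane, and summarise the projection
    as a normalised 2-D histogram."""
    pts = np.vstack([f - f.mean(axis=0) for f in frames])  # centroid alignment
    _, _, vt = np.linalg.svd(pts - pts.mean(axis=0), full_matrices=False)
    proj = pts @ vt[:2].T                                  # PCA-plane projection
    hist, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=bins)
    return hist.ravel() / hist.sum()

def fisher_direction(X, y, eps=1e-3):
    """Two-class Fisher LDA direction, with a small ridge term because the
    within-class scatter is singular when features outnumber samples."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Xc = np.vstack([X[y == 0] - m0, X[y == 1] - m1])
    Sw = Xc.T @ Xc + eps * np.eye(X.shape[1])
    return np.linalg.solve(Sw, m1 - m0)

# Synthetic stand-in data: 40 cycles of 10 random point-cloud frames each,
# with binary labels (purely illustrative, not gait data).
rng = np.random.default_rng(0)
X = np.array([cycle_features([rng.normal(size=(200, 3)) for _ in range(10)])
              for _ in range(40)])
y = np.repeat([0, 1], 20)
w = fisher_direction(X, y)
z = X @ w  # final 1-D discriminative feature per cycle
```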
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; OR; MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ ILB2013 |
Serial |
2144 |
|
|
|
|
|
Author |
Simone Balocco; Carlo Gatta; Francesco Ciompi; A. Wahle; Petia Radeva; S. Carlier; G. Unal; E. Sanidas; F. Mauri; X. Carillo; T. Kovarnik; C. Wang; H. Chen; T. P. Exarchos; D. I. Fotiadis; F. Destrempes; G. Cloutier; Oriol Pujol; Marina Alberti; E. G. Mendizabal-Ruiz; M. Rivera; T. Aksoy; R. W. Downe; I. A. Kakadiaris |
|
|
Title |
Standardized evaluation methodology and reference database for evaluating IVUS image segmentation |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Computerized Medical Imaging and Graphics |
Abbreviated Journal |
CMIG |
|
|
Volume |
38 |
Issue |
2 |
Pages |
70-90 |
|
|
Keywords |
IVUS (intravascular ultrasound); Evaluation framework; Algorithm comparison; Image segmentation |
|
|
Abstract |
This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. The framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, where the results of the eight participating teams were compared. We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard, and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures are proposed for quantitative analysis. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved. |
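The abstract refers to three performance measures without naming them in this record. Two region-based measures commonly used for segmentation evaluation (an assumption here, not necessarily the paper's exact choices) can be sketched on binary masks:

```python
import numpy as np

def jaccard(seg, ref):
    """Region overlap between a segmented mask and a reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return (seg & ref).sum() / (seg | ref).sum()

def area_difference(seg, ref):
    """Relative area difference between segmentation and reference."""
    return abs(int(seg.sum()) - int(ref.sum())) / ref.sum()

# Toy masks standing in for a lumen segmentation and its reference standard.
ref = np.zeros((64, 64), bool); ref[16:48, 16:48] = True
seg = np.zeros((64, 64), bool); seg[20:48, 16:48] = True
# jaccard(seg, ref) = 0.875, area_difference(seg, ref) = 0.125
```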
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; LAMP; HuPBA; 600.046; 600.063; 600.079 |
Approved |
no |
|
|
Call Number |
Admin @ si @ BGC2013 |
Serial |
2314 |
|
|
|
|
|
Author |
Ole Larsen; Petia Radeva; Enric Marti |
|
|
Title |
Bounds on the optimal elasticity parameters for a snake |
Type |
Journal Article |
|
Year |
1995 |
Publication |
Image Analysis and Processing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
37-42 |
|
|
Keywords |
|
|
|
Abstract |
This paper develops a formalism for estimating upper and lower bounds on the elasticity parameters of a snake. Objects that differ in size and shape give rise to different bounds, which can be obtained from an analysis of the shape of the object of interest. Experiments on synthetic images show a good correlation between the estimated behaviour of the snake and the one actually observed. Experiments on real X-ray images show that the parameters for optimal segmentation lie within the estimated bounds. |
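The elasticity parameter in question is the first-derivative weight in the standard snake internal energy, E_int = ∫ α|v'(s)|² + β|v''(s)|² ds. A minimal discrete sketch (assuming the classic Kass–Witkin–Terzopoulos formulation, not the paper's exact discretisation) shows how the energy scales with α:

```python
import numpy as np

def internal_energy(v, alpha, beta):
    """Discrete snake internal energy on a closed contour v (N x 2):
    alpha weights first-difference (elasticity) terms,
    beta weights second-difference (rigidity) terms."""
    d1 = np.roll(v, -1, axis=0) - v                          # first differences
    d2 = np.roll(v, -1, axis=0) - 2 * v + np.roll(v, 1, axis=0)  # second diffs
    return alpha * (d1 ** 2).sum() + beta * (d2 ** 2).sum()

# Circular contour: the elasticity term penalises contour length, so too
# large an alpha collapses the snake onto itself -- hence the need for bounds.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
e_small = internal_energy(circle, alpha=0.1, beta=0.0)
e_large = internal_energy(circle, alpha=10.0, beta=0.0)
```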
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; IAM |
Approved |
no |
|
|
Call Number |
IAM @ iam @ LRM1995a |
Serial |
1559 |
|
|
|
|
|
Author |
Egils Avots; M. Daneshmand; Andres Traumann; Sergio Escalera; G. Anbarjafari |
|
|
Title |
Automatic garment retexturing based on infrared information |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Computers & Graphics |
Abbreviated Journal |
CG |
|
|
Volume |
59 |
Issue |
|
Pages |
28-38 |
|
|
Keywords |
Garment Retexturing; Texture Mapping; Infrared Images; RGB-D Acquisition Devices; Shading |
|
|
Abstract |
This paper introduces a new automatic technique for garment retexturing using a single static image along with the depth and infrared information obtained using the Microsoft Kinect II as the RGB-D acquisition device. First, the garment is segmented out from the image using either the Breadth-First Search algorithm or the semi-automatic procedure provided by the GrabCut method. Then, texture-domain coordinates are computed for each pixel belonging to the garment using normalised 3D information. Afterwards, shading is applied to the new colours from the texture image. As the main contribution of the proposed method, the shading information is obtained by extracting a linear map transforming the values of the infrared image to those of the RGB colour channels. One of the most important consequences of this strategy is that the resulting retexturing algorithm is colour-, pattern- and lighting-invariant. The experimental results show that it can be used to produce realistic representations, which is substantiated by implementing it under various experimental scenarios involving varying lighting intensities and directions. Successful results are also achieved on video sequences and on images of subjects in different poses. A Mean Opinion Score analysis conducted with randomly chosen users shows that the method produces more realistic-looking results than existing state-of-the-art methods in the literature. From a wider perspective, the proposed method can be used for retexturing all sorts of segmented surfaces, although the focus of this study is on garment retexturing, and the investigation of the configurations is steered accordingly, since the experiments target an application in the context of virtual fitting rooms. |
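The core idea of extracting a linear map from infrared values to RGB shading can be illustrated with a least-squares fit. Everything below (the per-channel gain model, the synthetic data) is an illustrative assumption, not the paper's procedure:

```python
import numpy as np

# Synthetic stand-in: infrared intensities and the RGB shading they induce
# under an unknown per-channel linear map, plus a little sensor noise.
rng = np.random.default_rng(1)
ir = rng.uniform(0.2, 1.0, size=500)          # infrared intensities per pixel
true_map = np.array([0.9, 0.8, 0.7])          # unknown IR -> RGB gains
rgb = np.outer(ir, true_map) + rng.normal(0, 0.01, (500, 3))

# Least-squares fit of rgb ~ ir * gains (one gain per colour channel).
gains, *_ = np.linalg.lstsq(ir[:, None], rgb, rcond=None)

# Retexture: modulate a new garment texture by the recovered shading.
texture = rng.uniform(size=(500, 3))
shaded = texture * (ir[:, None] * gains)
```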
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ADT2016 |
Serial |
2759 |
|
|
|
|
|
Author |
Adriana Romero; Carlo Gatta; Gustavo Camps-Valls |
|
|
Title |
Unsupervised Deep Feature Extraction for Remote Sensing Image Classification |
Type |
Journal Article |
|
Year |
2016 |
Publication |
IEEE Transactions on Geoscience and Remote Sensing |
Abbreviated Journal |
TGRS |
|
|
Volume |
54 |
Issue |
3 |
Pages |
1349-1362 |
|
|
Keywords |
|
|
|
Abstract |
This paper introduces the use of single-layer and deep convolutional networks for remote sensing data analysis. Direct application of supervised (shallow or deep) convolutional networks to multi- and hyperspectral imagery is very challenging given the high input data dimensionality and the relatively small amount of available labeled data. Therefore, we propose the use of greedy layerwise unsupervised pretraining coupled with a highly efficient algorithm for unsupervised learning of sparse features. The algorithm is rooted in sparse representations and simultaneously enforces both population and lifetime sparsity of the extracted features. We successfully illustrate the expressive power of the extracted representations in several scenarios: classification of aerial scenes, land-use classification in very high resolution imagery, and land-cover classification from multi- and hyperspectral images. The proposed algorithm clearly outperforms standard principal component analysis (PCA) and its kernel counterpart (kPCA), as well as current state-of-the-art algorithms for aerial classification, while being extremely computationally efficient at learning representations of data. Results show that single-layer convolutional networks can extract powerful discriminative features only when the receptive field accounts for neighboring pixels, and are preferred when the classification requires high-resolution, detailed results. However, deep architectures significantly outperform single-layer variants, capturing increasing levels of abstraction and complexity throughout the feature hierarchy. |
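A toy sketch of the two sparsity notions in the abstract: population sparsity (few active units per sample) enforced here by keeping the top-k activations, and lifetime sparsity (each unit active for few samples) measured per unit over the batch. This is only illustrative; the paper's actual algorithm jointly enforces both properties during training:

```python
import numpy as np

def sparse_codes(X, W, k=2):
    """Linear activations with population sparsity: keep only the k
    strongest units per sample, then measure each unit's lifetime
    sparsity as the fraction of samples for which it fires."""
    A = X @ W                                    # (samples, units) activations
    thresh = np.sort(A, axis=1)[:, -k][:, None]  # k-th largest per sample
    A_sparse = np.where(A >= thresh, A, 0.0)     # population sparsity
    lifetime = (A_sparse > 0).mean(axis=0)       # lifetime sparsity per unit
    return A_sparse, lifetime

# Random stand-in data and weights (illustrative, not learned features).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 10))
W = rng.normal(size=(10, 8))
codes, lifetime = sparse_codes(X, W, k=2)
```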
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0196-2892 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.079; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ RGC2016 |
Serial |
2723 |
|