Author | Fadi Dornaika; Franck Davoine
Title | Facial expression recognition in continuous videos using dynamic programming
Type | Miscellaneous
Year | 2005
Publication | IEEE Int. Conference on Image Processing
Address | Genova (Italy)
Approved | no
Call Number | Admin @ si @ DoD2005a
Serial | 597

Author | Fadi Dornaika; Franck Davoine
Title | Facial expression recognition using auto-regressive models
Type | Miscellaneous
Year | 2006
Publication | 18th International Conference on Pattern Recognition (ICPR'06), ISBN: 0-7695-2521-0, 4: 520-523
Address | Hong Kong
Approved | no
Call Number | Admin @ si @ DoD2006a
Serial | 734

Author | Fadi Dornaika; Abdelmalik Moujahid; Bogdan Raducanu
Title | Facial expression recognition using tracked facial actions: Classifier performance analysis
Type | Journal Article
Year | 2013
Publication | Engineering Applications of Artificial Intelligence
Abbreviated Journal | EAAI
Volume | 26
Issue | 1
Pages | 467-477
Keywords | Visual face tracking; 3D deformable models; Facial actions; Dynamic facial expression recognition; Human-computer interaction
Abstract | In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously estimates the 3D head pose and the facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first adopts a dynamic time warping technique for recognizing expressions, where the training data are temporal signatures associated with different universal facial expressions. The second models the temporal signatures associated with facial actions as fixed-length feature vectors (observations) and uses machine learning algorithms to recognize the displayed expression. Experiments carried out on CMU video sequences and home-made video sequences quantified the performance of the different schemes. The results show that applying dimension reduction techniques to the extracted time series can improve classification performance, and that the best recognition rate can exceed 90%.
Publisher | Elsevier
Notes | OR; 600.046; MV
Approved | no
Call Number | Admin @ si @ DMR2013
Serial | 2185

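The first scheme in the abstract above compares temporal signatures with dynamic time warping. As an illustration only (a generic textbook DTW distance for 1-D signatures, not the authors' implementation), the core recurrence can be sketched as:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D temporal signatures."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

In a nearest-neighbour setup like the one described, an unseen signature would be assigned the expression of the training signature with the smallest DTW distance; note that DTW tolerates differences in timing, so a time-warped copy of a signature has distance zero.
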
Author | Petia Radeva; Enric Marti
Title | Facial Features Segmentation by Model-Based Snakes
Type | Conference Article
Year | 1995
Publication | International Conference on Computing Analysis and Image Processing
Abstract | Deformable models have recently been accepted as a standard technique for segmenting different features in facial images. Although they give a good approximation of the salient features in a facial image, the shapes resulting from the segmentation process seem somewhat artificial compared with the natural feature shapes. In this paper we show that active contour models (in particular, rubber snakes) give a closer and more natural representation of the detected feature shape. In addition, using snakes for facial segmentation frees us from having to determine the numerous weights of deformable models. Another advantage of rubber snakes is their reduced computational cost. Our experiments using rubber snakes to segment facial snapshots have shown a significant improvement over deformable models.
Place of Publication | Bellaterra (Barcelona), Spain
Notes | MILAB; IAM
Approved | no
Call Number | IAM @ iam @ RAM1995a
Serial | 1633

Author | Petia Radeva; Enric Marti
Title | Facial Features Segmentation by Model-Based Snakes
Type | Miscellaneous
Year | 1995
Publication | Trobada de Joves Investigadors, IIIA
Address | Bellaterra (Barcelona), Spain
Approved | no
Call Number | BCNPCL @ bcnpcl @ RaM1995a
Serial | 130

Author | Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat; Antonio Lopez
Title | Factorization with Missing and Noisy Data
Type | Conference Article
Year | 2006
Publication | 6th International Conference on Computational Science
Abbreviated Journal | ICCS'06
Volume | LNCS 3991
Pages | 555-562
Address | Reading (United Kingdom)
Notes | ADAS
Approved | no
Call Number | ADAS @ adas @ JSL2006b
Serial | 653

Author | Josep M. Gonfaus; Marco Pedersoli; Jordi Gonzalez; Andrea Vedaldi; Xavier Roca
Title | Factorized appearances for object detection
Type | Journal Article
Year | 2015
Publication | Computer Vision and Image Understanding
Abbreviated Journal | CVIU
Volume | 138
Pages | 92-101
Keywords | Object recognition; Deformable part models; Learning and sharing parts; Discovering discriminative parts
Abstract | Deformable object models capture variations in an object's appearance that can be represented as image deformations. Other effects such as out-of-plane rotations, three-dimensional articulations, and self-occlusions are often captured by considering a mixture of deformable models, one per object aspect. A more scalable approach is to represent the variations at the level of the object parts instead, applying the concept of a mixture locally. Combining a few part variations can in fact cheaply generate a large number of global appearances. A limited version of this idea was proposed by Yang and Ramanan [1] for human pose detection. In this paper we apply it to the task of generic object category detection and extend it in several ways. First, we propose a model for the relationship between part appearances that is more general than the tree of Yang and Ramanan [1] and more suitable for generic categories. Second, we treat part locations as well as their appearance as latent variables, so that training needs only object bounding boxes rather than part annotations. Third, we modify the weakly supervised learning of Felzenszwalb et al. and Girshick et al. [2], [3] to handle a significantly more complex latent structure. Our model is evaluated on standard object detection benchmarks and is found to improve over existing approaches, yielding state-of-the-art results for several object categories.
Notes | ISE; 600.063; 600.078
Approved | no
Call Number | Admin @ si @ GPG2015
Serial | 2705

Author | Patricia Marquez; H. Kause; A. Fuster; Aura Hernandez-Sabate; L. Florack; Debora Gil; Hans van Assen
Title | Factors Affecting Optical Flow Performance in Tagging Magnetic Resonance Imaging
Type | Conference Article
Year | 2014
Publication | 17th International Conference on Medical Image Computing and Computer Assisted Intervention
Volume | 8896
Pages | 231-238
Keywords | Optical flow; Performance Evaluation; Synthetic Database; ANOVA; Tagging Magnetic Resonance Imaging
Abstract | Changes in cardiac deformation patterns are correlated with cardiac pathologies. Deformation can be extracted from tagging Magnetic Resonance Imaging (tMRI) using Optical Flow (OF) techniques. For applications of OF in a clinical setting, it is important to assess to what extent the performance of a particular OF method is stable across different clinical acquisition artifacts. This paper presents a statistical validation framework, based on ANOVA, to assess the motion and appearance factors that have the largest influence on OF accuracy drop. In order to validate this framework, we created a database of simulated tMRI data including the most common MRI artifacts and tested three different OF methods, including HARP.
Address | Boston; USA; September 2014
Publisher | Springer International Publishing
Abbreviated Series Title | LNCS
ISSN | 0302-9743
ISBN | 978-3-319-14677-5
Conference | STACOM
Notes | IAM; ADAS; 600.060; 601.145; 600.076; 600.075
Approved | no
Call Number | Admin @ si @ MKF2014
Serial | 2495

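The validation framework above rests on one-way ANOVA: grouping optical-flow errors by acquisition factor and testing whether the between-group variance dominates the within-group variance. A minimal, self-contained sketch (with hypothetical error values, not the paper's data) computes the F statistic directly:

```python
import numpy as np

def one_way_anova(*groups):
    """One-way ANOVA F statistic over several groups of error measurements."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    # Variance explained by the factor (between groups)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # Residual variance (within groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F value, judged against the F distribution with (df_between, df_within) degrees of freedom, indicates that the factor (e.g. a particular acquisition artifact level) has a significant influence on OF accuracy.
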
Author | Marta Nuñez-Garcia; Sonja Simpraga; M. Angeles Jurado; Maite Garolera; Roser Pueyo; Laura Igual
Title | FADR: Functional-Anatomical Discriminative Regions for rest fMRI Characterization
Type | Conference Article
Year | 2015
Publication | Machine Learning in Medical Imaging, Proceedings of the 6th International Workshop, MLMI 2015, Held in Conjunction with MICCAI 2015
Pages | 61-68
Address | Munich; Germany; October 2015
Conference | MLMI
Notes | MILAB
Approved | no
Call Number | Admin @ si @ NSJ2015
Serial | 2674

Author | J. Filipe; Juan Andrade; J.L. Ferrier
Title | FAF 2005
Type | Miscellaneous
Year | 2005
Publication | Proceedings of the 2nd International Conference on Informatics in Control, Automation and Robotics, INSTICC Press
Address | Barcelona (Spain)
Approved | no
Call Number | Admin @ si @ FAF2005
Serial | 609

Author | Tomas Sixta; Julio C. S. Jacques Junior; Pau Buch Cardona; Eduard Vazquez; Sergio Escalera
Title | FairFace Challenge at ECCV 2020: Analyzing Bias in Face Recognition
Type | Conference Article
Year | 2020
Publication | ECCV Workshops
Volume | 12540
Pages | 463-481
Abstract | This work summarizes the 2020 ChaLearn Looking at People Fair Face Recognition and Analysis Challenge and provides a description of the top-winning solutions and an analysis of the results. The aim of the challenge was to evaluate the accuracy and the gender and skin-colour bias of submitted algorithms on the task of 1:1 face verification in the presence of other confounding attributes. Participants were evaluated using an in-the-wild dataset based on a reannotated IJB-C, further enriched with 12.5K new images and additional labels. The dataset is not balanced, which simulates a real-world scenario in which AI-based models that are supposed to produce fair outcomes are trained and evaluated on imbalanced data. The challenge attracted 151 participants, who made more than 1.8K submissions in total. The final phase of the challenge attracted 36 active teams, 10 of which exceeded 0.999 AUC-ROC while achieving very low scores in the proposed bias metrics. Common strategies among the participants were face pre-processing, homogenization of data distributions, the use of bias-aware loss functions, and ensemble models. The analysis of the top-10 teams shows higher false positive rates (and lower false negative rates) for females with dark skin tone, as well as the potential of eyeglasses and young age to increase the false positive rates.
Address | Virtual; August 2020
Abbreviated Series Title | LNCS
Conference | ECCVW
Notes | HUPBA
Approved | no
Call Number | Admin @ si @ SJB2020
Serial | 3499

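The per-subgroup error analysis described above (false positive rates broken down by demographic attribute) can be illustrated with a small sketch. The labels, scores, and subgroup tags below are hypothetical, and the metric shown is a plain per-group FPR, not the challenge's exact bias metric:

```python
import numpy as np

def fpr_by_group(labels, scores, groups, threshold):
    """False positive rate of a 1:1 verification system per subgroup.

    labels: 1 = genuine pair, 0 = impostor pair
    scores: similarity scores; a pair is accepted when score >= threshold
    groups: subgroup tag of each pair (hypothetical attribute labels)
    """
    labels, scores, groups = map(np.asarray, (labels, scores, groups))
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (labels == 0)          # impostor pairs in group
        accepted = (scores[mask] >= threshold).sum()  # false accepts
        rates[g] = accepted / mask.sum()
    return rates
```

Comparing these rates across subgroups (and doing the same for false negative rates on genuine pairs) is the basic mechanism behind findings such as higher FPR for a particular skin tone or for pairs wearing eyeglasses.
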
Author | German Ros; J. Guerrero; Angel Sappa; Daniel Ponsa; Antonio Lopez
Title | Fast and Robust l1-averaging-based Pose Estimation for Driving Scenarios
Type | Conference Article
Year | 2013
Publication | 24th British Machine Vision Conference
Keywords | SLAM
Abstract | Robust visual pose estimation is at the core of many computer vision applications, being fundamental for Visual SLAM and Visual Odometry problems. During the last decades, many approaches have been proposed to solve these problems, with RANSAC being one of the most accepted and used. However, with the arrival of new challenges, such as large driving scenarios for autonomous vehicles, along with improvements in data gathering frameworks, new issues must be considered. One of these issues is the ability of a technique to deal with very large amounts of data while meeting the real-time constraint. With this purpose in mind, we present a novel technique for robust camera-pose estimation that is better suited to dealing with large amounts of data and, additionally, helps improve the results. The method is based on the combination of a very fast coarse evaluation function and a robust ℓ1-averaging procedure. This scheme leads to high-quality results while taking considerably less time than RANSAC. Experimental results on the challenging KITTI Vision Benchmark Suite are provided, showing the validity of the proposed approach.
Address | Bristol; UK; September 2013
Conference | BMVC
Notes | ADAS
Approved | no
Call Number | Admin @ si @ RGS2013b; ADAS @ adas @
Serial | 2274

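The robustness of ℓ1 averaging mentioned in the abstract comes from the geometric median being far less sensitive to outliers than the ordinary (ℓ2) mean. For translation vectors this can be sketched with the classical Weiszfeld iteration; this is a generic illustration, not the paper's procedure, which also involves rotation averaging and a fast coarse-evaluation step:

```python
import numpy as np

def l1_average(points, iters=100, eps=1e-9):
    """Geometric median (L1 average) of a set of vectors via Weiszfeld iteration."""
    pts = np.asarray(points, dtype=float)
    est = pts.mean(axis=0)                      # start from the L2 average
    for _ in range(iters):
        d = np.linalg.norm(pts - est, axis=1)
        w = 1.0 / np.maximum(d, eps)            # guard against zero distance
        est = (w[:, None] * pts).sum(axis=0) / w.sum()
    return est
```

With three estimates near the origin and one gross outlier at (10, 10), the ℓ2 mean is dragged to roughly (2.5, 2.5) while the ℓ1 average stays near the origin, which is why ℓ1 averaging can replace explicit outlier rejection schemes such as RANSAC in some settings.
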
Author | David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo
Title | Fast and Robust Object Segmentation with the Integral Linear Classifier
Type | Conference Article
Year | 2010
Publication | 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages | 1046-1053
Abstract | We propose an efficient method, built on the popular Bag of Features approach, that obtains robust multiclass pixel-level object segmentation of an image in less than 500 ms, with results comparable to or better than most state-of-the-art methods. We introduce the Integral Linear Classifier (ILC), which can readily obtain the classification score for any image sub-window with only 6 additions and 1 product by fusing the accumulation and classification steps into a single operation. In order to design a method as efficient as possible, our building blocks are carefully selected from the quickest in the state of the art. More precisely, we evaluate the performance of three popular local descriptors, which can be computed very efficiently using integral images, and two fast quantization methods: Hierarchical K-Means and the Extremely Randomized Forest. Finally, we explore the utility of adding spatial bins to the Bag of Features histograms and of cascade classifiers to improve the obtained segmentation. Our method is compared to the state of the art on the difficult Graz-02 and PASCAL 2007 Segmentation Challenge datasets.
Address | San Francisco; CA; USA; June 2010
ISSN | 1063-6919
ISBN | 978-1-4244-6984-0
Conference | CVPR
Notes | ADAS
Approved | no
Call Number | Admin @ si @ ARL2010a
Serial | 1311

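The fusing of accumulation and classification described above relies on a standard integral-image trick: if each pixel's contribution to the linear classifier score is precomputed, the score of any sub-window reduces to a handful of table lookups. A minimal sketch of that mechanism (a generic illustration, not the authors' ILC implementation or their exact 6-additions-1-product formulation):

```python
import numpy as np

def integral_score_map(contrib):
    """Integral image over per-pixel linear-classifier contributions."""
    return contrib.cumsum(axis=0).cumsum(axis=1)

def window_score(ii, y0, x0, y1, x1, bias=0.0):
    """Classification score of sub-window [y0:y1, x0:x1) from 4 lookups."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total + bias
```

Here `contrib[y, x]` would hold the dot product between the classifier weights and the local feature assigned to pixel (y, x); once the integral map is built, scoring a window costs the same regardless of its size.
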
Author | Katerine Diaz; Francesc J. Ferri; W. Diaz
Title | Fast Approximated Discriminative Common Vectors using rank-one SVD updates
Type | Conference Article
Year | 2013
Publication | 20th International Conference on Neural Information Processing
Volume | 8228
Issue | III
Pages | 368-375
Abstract | An efficient incremental approach to the discriminative common vector (DCV) method for dimensionality reduction and classification is presented. The proposal consists of a rank-one update along with an adaptive restriction on the rank of the null space, which leads to an approximate but convenient solution. The algorithm can be implemented very efficiently in terms of matrix operations and space complexity, which enables its use in large-scale dynamic application domains. Extensive comparative experimentation using publicly available high-dimensional image datasets has been carried out in order to properly assess the proposed algorithm against several recent incremental formulations.
Address | Daegu; Korea; November 2013
Publisher | Springer Berlin Heidelberg
Abbreviated Series Title | LNCS
ISSN | 0302-9743
ISBN | 978-3-642-42050-4
Conference | ICONIP
Notes | ADAS
Approved | no
Call Number | Admin @ si @ DFD2013
Serial | 2439

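The rank-one SVD update at the heart of the approach above can be sketched as an append-a-column update of a thin SVD: project the new sample onto the current subspace, form a small core matrix, and take its SVD. This is a generic textbook-style construction, not the authors' exact algorithm, which additionally restricts the null-space rank adaptively:

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Update the thin SVD A = U @ diag(s) @ Vt when a new column c is appended."""
    proj = U.T @ c                 # component of c inside the current subspace
    resid = c - U @ proj           # component orthogonal to it
    rho = np.linalg.norm(resid)
    q = resid / rho if rho > 1e-12 else np.zeros_like(c)
    # Small (k+1)x(k+1) core matrix whose SVD yields the update
    k = len(s)
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(s)
    K[:k, k] = proj
    K[k, k] = rho
    Uk, sk, Vtk = np.linalg.svd(K)
    U_new = np.hstack([U, q[:, None]]) @ Uk
    Vt_new = Vtk @ np.block([[Vt, np.zeros((Vt.shape[0], 1))],
                             [np.zeros((1, Vt.shape[1])), np.ones((1, 1))]])
    return U_new, sk, Vt_new
```

Because only the small core matrix is re-decomposed, the cost per new sample stays far below a full SVD of the growing data matrix, which is what makes incremental formulations like the one in this record attractive for large-scale dynamic domains.
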
Author | Cristhian A. Aguilera-Carrasco; Cristhian Aguilera; Cristobal A. Navarro; Angel Sappa
Title | Fast CNN Stereo Depth Estimation through Embedded GPU Devices
Type | Journal Article
Year | 2020
Publication | Sensors
Abbreviated Journal | SENS
Volume | 20
Issue | 11
Pages | 3249
Keywords | stereo matching; deep learning; embedded GPU
Abstract | Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphics processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5-32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
Notes | MSIAU; 600.122
Approved | no
Call Number | Admin @ si @ AAN2020
Serial | 3428