Records | |||||
---|---|---|---|---|---|
Author | Frederic Sampedro; Anna Domenech; Sergio Escalera; Ignasi Carrio | ||||
Title | Computing quantitative indicators of structural renal damage in pediatric DMSA scans | Type | Journal Article | ||
Year | 2017 | Publication | Revista Española de Medicina Nuclear e Imagen Molecular | Abbreviated Journal | REMNIM |
Volume | 36 | Issue | 2 | Pages | 72-77 |
Keywords | |||||
Abstract | OBJECTIVES: To propose, implement, and validate a computational framework for the quantification of structural renal damage from 99mTc-dimercaptosuccinic acid (DMSA) scans in an observer-independent manner. MATERIALS AND METHODS: From a set of 16 pediatric DMSA-positive scans and 16 matched controls, and using both expert-guided and automatic approaches, a set of image-derived quantitative indicators was computed based on the relative size, intensity and histogram distribution of the lesion. A correlation analysis was conducted in order to investigate the association of these indicators with other clinical data of interest in this scenario, including C-reactive protein (CRP), white cell count, vesicoureteral reflux, fever, relative perfusion, and the presence of renal sequelae in a 6-month follow-up DMSA scan. RESULTS: A fully automatic lesion detection and segmentation system was able to successfully classify DMSA-positive from negative scans (AUC=0.92, sensitivity=81% and specificity=94%). The image-computed relative size of the lesion correlated with the presence of fever and CRP levels (p<0.05), and a measurement derived from the distribution histogram of the lesion obtained significant performance results in the detection of permanent renal damage (AUC=0.86, sensitivity=100% and specificity=75%). CONCLUSIONS: The proposed computational framework for the quantification of structural renal damage from DMSA scans showed promising potential to complement visual diagnosis and non-imaging indicators. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA; MILAB; does not mention | Approved | no | |
Call Number | Admin @ si @ SDE2017 | Serial | 2842 | ||
Permanent link to this record | |||||
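The indicators described in the abstract above (relative lesion size, relative intensity, a histogram-based measure, and ROC analysis) can be illustrated with a short sketch. This is a minimal, hypothetical example assuming a NumPy scan array with binary kidney and lesion masks and scikit-learn for the AUC; it is not the authors' implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def lesion_indicators(scan, kidney_mask, lesion_mask):
    """Illustrative indicators in the spirit of the abstract: relative lesion size,
    relative mean intensity, and a crude histogram-spread measure of the lesion."""
    kidney = scan[kidney_mask > 0]
    lesion = scan[lesion_mask > 0]
    rel_size = lesion.size / max(kidney.size, 1)            # lesion area / kidney area
    rel_intensity = lesion.mean() / (kidney.mean() + 1e-8)  # mean uptake ratio
    hist, _ = np.histogram(lesion, bins=32, density=True)
    return rel_size, rel_intensity, float(np.std(hist))

# ROC analysis as reported in the abstract: one indicator value per scan,
# labels 1 = DMSA-positive, 0 = matched control (placeholder values).
labels = np.array([1, 1, 0, 0])
scores = np.array([0.30, 0.22, 0.05, 0.08])
print(roc_auc_score(labels, scores))
```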
Author | Patricia Suarez; Angel Sappa; Boris X. Vintimilla | ||||
Title | Cross-Spectral Image Patch Similarity using Convolutional Neural Network | Type | Conference Article | ||
Year | 2017 | Publication | IEEE International Workshop of Electronics, Control, Measurement, Signals and their application to Mechatronics | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The ability to compare image regions (patches) has been the basis of many approaches to core computer vision problems, including object, texture and scene categorization. Hence, developing representations for image patches has been of interest in several works. The current work focuses on learning similarity between cross-spectral image patches with a 2-channel convolutional neural network (CNN) model. The proposed approach is an adaptation of a previous work, aiming to obtain results similar to the state of the art but with low-cost hardware. Hence, the obtained results are compared with both classical approaches, showing improvements, and a state-of-the-art CNN-based approach. |
Address | San Sebastian; Spain; May 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECMSM | ||
Notes | ADAS; 600.086; 600.118 | Approved | no | ||
Call Number | Admin @ si @ SSV2017a | Serial | 2916 | ||
Permanent link to this record | |||||
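As a rough illustration of the 2-channel CNN idea mentioned in the abstract above, the sketch below stacks a visible and an infrared patch along the channel axis and predicts a similarity score. The patch size (64x64) and layer widths are assumptions for the example, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class TwoChannelNet(nn.Module):
    """Stacks the two spectral patches as channels and predicts a similarity logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 13 * 13, 256),
                                  nn.ReLU(), nn.Linear(256, 1))

    def forward(self, patch_a, patch_b):           # each: (N, 1, 64, 64)
        x = torch.cat([patch_a, patch_b], dim=1)   # (N, 2, 64, 64)
        return self.head(self.features(x))

# Example: score = TwoChannelNet()(vis_patch, nir_patch)
```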
Author | Cristhian A. Aguilera-Carrasco; Angel Sappa; Cristhian Aguilera; Ricardo Toledo | ||||
Title | Cross-Spectral Local Descriptors via Quadruplet Network | Type | Journal Article | ||
Year | 2017 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 17 | Issue | 4 | Pages | 873 |
Keywords | |||||
Abstract | This paper presents a novel CNN-based architecture, referred to as Q-Net, to learn local feature descriptors that are useful for matching image patches from two different spectral bands. Given correctly matched and non-matching cross-spectral image pairs, a quadruplet network is trained to map input image patches to a common Euclidean space, regardless of the input spectral band. Our approach is inspired by the recent success of triplet networks in the visible spectrum, but adapted for cross-spectral scenarios, where, for each matching pair, there are always two possible non-matching patches: one for each spectrum. Experimental evaluations on a public cross-spectral VIS-NIR dataset show that the proposed approach improves on the state of the art. Moreover, the proposed technique can also be used in mono-spectral settings, obtaining a similar performance to triplet network descriptors, but requiring less training data. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.086; 600.118 | Approved | no | ||
Call Number | Admin @ si @ ASA2017 | Serial | 2914 | ||
Permanent link to this record | |||||
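The quadruplet idea in the abstract above (each matching cross-spectral pair comes with one non-matching patch per spectrum) can be sketched as a hinge-style embedding loss. This is a hedged approximation: the margin value, distance, and exact combination of terms are assumptions, not the paper's Q-Net loss.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(a_vis, a_nir, neg_vis, neg_nir, margin=1.0):
    """a_vis/a_nir: embeddings of a matching cross-spectral pair.
    neg_vis/neg_nir: embeddings of non-matching patches, one per spectrum.
    Pulls the matching pair together and pushes both negatives away."""
    d_pos = F.pairwise_distance(a_vis, a_nir)
    d_neg_v = F.pairwise_distance(a_nir, neg_vis)   # negative from the visible spectrum
    d_neg_n = F.pairwise_distance(a_vis, neg_nir)   # negative from the NIR spectrum
    loss = F.relu(d_pos - d_neg_v + margin) + F.relu(d_pos - d_neg_n + margin)
    return loss.mean()
```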
Author | Debora Gil; Aura Hernandez-Sabate; David Castells; Jordi Carrabina | ||||
Title | CYBERH: Cyber-Physical Systems in Health for Personalized Assistance | Type | Conference Article | ||
Year | 2017 | Publication | International Symposium on Symbolic and Numeric Algorithms for Scientific Computing | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Assistance systems for e-Health applications have some specific requirements that demand new methods for data gathering, analysis and modeling able to deal with SmallData: 1) systems should dynamically collect data from both the environment and the user to issue personalized recommendations; 2) data analysis should be able to tackle a limited number of samples prone to include non-informative data and possibly evolving in time due to changes in patient condition; 3) algorithms should run in real time with possibly limited computational resources and fluctuant internet access. Electronic medical devices (and Cyber-Physical devices in general) can enhance the process of data gathering and analysis in several ways: (i) acquiring data from multiple sensors simultaneously instead of single magnitudes; (ii) filtering data; (iii) providing real-time implementations by isolating tasks in individual processors of multiprocessor Systems-on-Chip (MPSoC) platforms; and (iv) combining information through sensor fusion techniques. Our approach focuses on both aspects of the complementary role of Cyber-Physical devices and the analysis of SmallData in the process of building personalized models for e-Health applications. In particular, we will address the design of Cyber-Physical Systems in Health for Personalized Assistance (CyberHealth) in two specific application cases: 1) a Smart Assisted Driving System (SADs) for dynamical assessment of the driving capabilities of Mild Cognitive Impaired (MCI) people; 2) an Intelligent Operating Room (iOR) for improving the yield of bronchoscopic interventions for in-vivo lung cancer diagnosis. |
Address | Timisoara; Romania; September 2017 | |
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SYNASC | ||
Notes | IAM; 600.085; 600.096; 600.075; 600.145 | Approved | no | ||
Call Number | Admin @ si @ GHC2017 | Serial | 3045 | ||
Permanent link to this record | |||||
Author | Albert Clapes; Tinne Tuytelaars; Sergio Escalera | ||||
Title | Darwintrees for action recognition | Type | Conference Article | ||
Year | 2017 | Publication | ChaLearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake Expressed Emotions at ICCV | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCVW | ||
Notes | HUPBA; does not mention | Approved | no | |
Call Number | Admin @ si @ CTE2017 | Serial | 3069 | ||
Permanent link to this record | |||||
Author | Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate | ||||
Title | Decremental generalized discriminative common vectors applied to images classification | Type | Journal Article | ||
Year | 2017 | Publication | Knowledge-Based Systems | Abbreviated Journal | KBS |
Volume | 131 | Issue | Pages | 46-57 | |
Keywords | Decremental learning; Generalized Discriminative Common Vectors; Feature extraction; Linear subspace methods; Classification | ||||
Abstract | In this paper, a novel decremental subspace-based learning method called the Decremental Generalized Discriminative Common Vectors (DGDCV) method is presented. The method makes use of the concept of decremental learning, which we introduce to the field of supervised feature extraction and classification. By efficiently removing unnecessary data and/or classes from a knowledge base, our methodology is able to update the model without recalculating the full projection or accessing the previously processed training data, while retaining the previously acquired knowledge. The proposed method has been validated on 6 standard face recognition datasets, showing a considerable computational gain without compromising the accuracy of the model. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.118; 600.121 | Approved | no | ||
Call Number | Admin @ si @ DMH2017a | Serial | 3003 | ||
Permanent link to this record | |||||
Author | Maryam Asadi-Aghbolaghi; Albert Clapes; Marco Bellantonio; Hugo Jair Escalante; Victor Ponce; Xavier Baro; Isabelle Guyon; Shohreh Kasaei; Sergio Escalera | ||||
Title | Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey | Type | Book Chapter | ||
Year | 2017 | Publication | Gesture Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 539-578 | ||
Keywords | Action recognition; Gesture recognition; Deep learning architectures; Fusion strategies | ||||
Abstract | Interest in automatic action and gesture recognition has grown considerably in the last few years. This is due in part to the large number of application domains for this type of technology. As in many other computer vision areas, deep learning based methods have quickly become a reference methodology for obtaining state-of-the-art performance in both tasks. This chapter is a survey of current deep learning based methodologies for action and gesture recognition in sequences of images. The survey reviews both fundamental and cutting-edge methodologies reported in the last few years. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. Details of the proposed architectures, fusion strategies, main datasets, and competitions are reviewed. Also, we summarize and discuss the main works proposed so far with particular interest in how they treat the temporal dimension of the data, their key features, and opportunities and challenges for future research. To the best of our knowledge, this is the first survey on the topic. We foresee that this survey will become a reference in this ever-dynamic field of research. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ ACB2017a | Serial | 2981 | ||
Permanent link to this record | |||||
Author | Pau Rodriguez; Guillem Cucurull; Jordi Gonzalez; Josep M. Gonfaus; Kamal Nasrollahi; Thomas B. Moeslund; Xavier Roca | ||||
Title | Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification | Type | Journal Article | ||
Year | 2017 | Publication | IEEE Transactions on Cybernetics | Abbreviated Journal | Cyber |
Volume | Issue | Pages | 1-11 | ||
Keywords | |||||
Abstract | Pain is an unpleasant feeling that has been shown to be an important factor for the recovery of patients. Since measuring it is costly in human resources and difficult to do objectively, there is a need for automatic systems to measure it. In this paper, contrary to current state-of-the-art techniques in pain assessment, which are based on facial features only, we suggest that performance can be enhanced by feeding the raw frames to deep learning models, outperforming the latest state-of-the-art results while also directly facing the problem of imbalanced data. As a baseline, our approach first uses convolutional neural networks (CNNs) to learn facial features from VGG_Faces, which are then linked to a long short-term memory network to exploit the temporal relation between video frames. We further compare the performance of using the popular schema based on the canonically normalized appearance versus taking into account the whole image. As a result, we outperform the current state-of-the-art area under the curve performance on the UNBC-McMaster Shoulder Pain Expression Archive Database. In addition, to evaluate the generalization properties of our proposed methodology on facial motion recognition, we also report competitive results on the Cohn-Kanade+ facial expression database. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 600.119; 600.098 | Approved | no | ||
Call Number | Admin @ si @ RCG2017a | Serial | 2926 | ||
Permanent link to this record | |||||
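The baseline pipeline in the abstract above (per-frame CNN features fed to a long short-term memory network) can be sketched as follows. The small convolutional stem stands in for the pretrained VGG_Faces features; all dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class FramePainClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM, in the spirit of the abstract's baseline."""
    def __init__(self, feat_dim=512, hidden=256, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                  # stand-in for a pretrained face CNN
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, frames):                     # frames: (N, T, 3, H, W)
        n, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(n, t, -1)
        out, _ = self.lstm(feats)                  # temporal modeling over the sequence
        return self.fc(out[:, -1])                 # classify from the last time step
```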
Author | Xinhang Song; Luis Herranz; Shuqiang Jiang | ||||
Title | Depth CNNs for RGB-D Scene Recognition: Learning from Scratch Better than Transferring from RGB-CNNs | Type | Conference Article | ||
Year | 2017 | Publication | 31st AAAI Conference on Artificial Intelligence | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | RGB-D scene recognition; weakly supervised; fine tune; CNN | ||||
Abstract | Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so it often leverages large RGB datasets by transferring pretrained RGB CNN models and fine-tuning them on the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching the bottom layers, which are key to learning modality-specific features. In contrast, we focus on the bottom layers, and propose an alternative strategy to learn depth features combining local weakly supervised training on patches followed by global fine-tuning on images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth-only and combined RGB-D data. |
Address | San Francisco CA; February 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AAAI | ||
Notes | LAMP; 600.120 | Approved | no | ||
Call Number | Admin @ si @ SHJ2017 | Serial | 2967 | ||
Permanent link to this record | |||||
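The fusion step described in the abstract above, projecting depth and RGB features into a common space and jointly learning a multilayer classifier, might look roughly like the sketch below. Feature dimensions, the common-space size, and the number of scene classes are placeholders rather than the paper's values.

```python
import torch
import torch.nn as nn

class RGBDFusionHead(nn.Module):
    """Projects RGB and depth features into a common space and classifies jointly,
    loosely following the fusion described in the abstract."""
    def __init__(self, rgb_dim=4096, depth_dim=4096, common=512, n_classes=19):
        super().__init__()
        self.proj_rgb = nn.Linear(rgb_dim, common)
        self.proj_depth = nn.Linear(depth_dim, common)
        self.classifier = nn.Sequential(
            nn.ReLU(), nn.Linear(2 * common, 256), nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, f_rgb, f_depth):
        z = torch.cat([self.proj_rgb(f_rgb), self.proj_depth(f_depth)], dim=1)
        return self.classifier(z)
```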
Author | Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera; Julio C. S. Jacques Junior; Xavier Baro; Evelyne Viegas; Yagmur Gucluturk; Umut Guclu; Marcel A. J. van Gerven; Rob van Lier; Meysam Madadi; Stephane Ayache | ||||
Title | Design of an Explainable Machine Learning Challenge for Video Interviews | Type | Conference Article | ||
Year | 2017 | Publication | International Joint Conference on Neural Networks | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper reviews and discusses research advances on “explainable machine learning” in computer vision. We focus on a particular area of the “Looking at People” (LAP) thematic domain: first impressions and personality analysis. Our aim is to make the computational intelligence and computer vision communities aware of the importance of developing explanatory mechanisms for computer-assisted decision making applications, such as automating recruitment. Judgments based on personality traits are being made routinely by human resource departments to evaluate the candidates' capacity for social insertion and their potential for career growth. However, inferring personality traits and, in general, the process by which we humans form a first impression of people, is highly subjective and may be biased. Previous studies have demonstrated that learning machines can learn to mimic human decisions. In this paper, we go one step further and formulate the problem of explaining the decisions of the models as a means of identifying what visual aspects are important, understanding how they relate to the decisions suggested, and possibly gaining insight into undesirable negative biases. We design a new challenge on the explainability of learning machines for first impressions analysis. We describe the setting, scenario, evaluation metrics and preliminary outcomes of the competition. To the best of our knowledge, this is the first effort in terms of challenges for explainability in computer vision. In addition, our challenge design comprises several other quantitative and qualitative elements of novelty, including a “coopetition” setting, which combines competition and collaboration. |
Address | Anchorage; Alaska; USA; May 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IJCNN | ||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ EGE2017 | Serial | 2922 | ||
Permanent link to this record | |||||
Author | Marc Masana; Joost Van de Weijer; Luis Herranz; Andrew Bagdanov; Jose Manuel Alvarez | ||||
Title | Domain-adaptive deep network compression | Type | Conference Article | ||
Year | 2017 | Publication | 17th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Deep Neural Networks trained on large datasets can be easily transferred to new domains with far fewer labeled examples by a process called fine-tuning. This has the advantage that representations learned in the large source domain can be exploited on smaller target domains. However, networks designed to be optimal for the source task are often prohibitively large for the target task. In this work we address the compression of networks after domain transfer. We focus on compression algorithms based on low-rank matrix decomposition. Existing methods base compression solely on learned network weights and ignore the statistics of network activations. We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing. We demonstrate that considering activation statistics when compressing weights leads to a rank-constrained regression problem with a closed-form solution. Because our method takes the target domain into account, it can better remove the redundancy in the weights. Experiments show that our Domain Adaptive Low Rank (DALR) method significantly outperforms existing low-rank compression techniques. With our approach, the fc6 layer of VGG19 can be compressed more than 4x more than using truncated SVD alone, with only a minor or no loss in accuracy. When applied to domain-transferred networks it allows for compression down to only 5-20% of the original number of parameters with only a minor drop in performance. |
Address | Venice; Italy; October 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | LAMP; 601.305; 600.106; 600.120 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3034 | ||
Permanent link to this record | |||||
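The abstract above contrasts compressing the weights alone (truncated SVD) with a rank-constrained regression that takes activation statistics into account. The sketch below shows plain truncated SVD plus one possible activation-aware closed form in that spirit; the exact DALR formulation should be taken from the paper itself.

```python
import numpy as np

def svd_compress(W, rank):
    """Plain truncated SVD of the weights: W ≈ A @ B (the baseline in the abstract)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

def activation_aware_compress(W, X, rank):
    """Activation-aware variant in the spirit of the abstract: rank-constrain the
    responses Y = X @ W instead of W, then fit A by least squares so that
    X @ A @ B matches the best rank-r approximation of Y."""
    Y = X @ W                                             # layer responses on sample inputs X
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    B = Vt[:rank]                                         # (rank, out_dim)
    A = np.linalg.pinv(X) @ (U[:, :rank] * s[:rank])      # (in_dim, rank), least-squares fit
    return A, B

# Replacing W by the pair (A, B) stores in_dim*rank + rank*out_dim parameters
# instead of in_dim*out_dim.
```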
Author | Christer Loob; Pejman Rasti; Iiris Lusi; Julio C. S. Jacques Junior; Xavier Baro; Sergio Escalera; Tomasz Sapinski; Dorota Kaminska; Gholamreza Anbarjafari | ||||
Title | Dominant and Complementary Multi-Emotional Facial Expression Recognition Using C-Support Vector Classification | Type | Conference Article | ||
Year | 2017 | Publication | 12th IEEE International Conference on Automatic Face and Gesture Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | We propose a new facial expression recognition model which introduces 30+ detailed facial expressions recognisable by any artificial intelligence interacting with a human. Throughout this research, we introduce two categories of emotions, namely dominant emotions and complementary emotions. In this research paper the complementary emotion is recognised using the eye region if the dominant emotion is angry, fearful or sad, whereas if the dominant emotion is disgust or happiness the complementary emotion is mainly conveyed by the mouth. In order to verify the tagged dominant and complementary emotions, randomly chosen people voted on the recognised multi-emotional facial expressions. The average voting results show that 73.88% of the voters agree on the correctness of the recognised multi-emotional facial expressions. |
Address | Washington; DC; USA; May 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | FG | ||
Notes | HUPBA; does not mention | Approved | no | |
Call Number | Admin @ si @ LRL2017 | Serial | 2925 | ||
Permanent link to this record | |||||
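The classifier named in the title above, C-Support Vector Classification, corresponds to the standard C-SVC formulation (implemented, for example, by scikit-learn's SVC). The sketch below trains such a classifier on placeholder feature vectors standing in for eye/mouth-region descriptors; the features, labels, and hyperparameters are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: pre-extracted eye/mouth-region features, one row per face image (placeholder data);
# y: emotion labels. The feature extraction itself is not shown here.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 64)), rng.integers(0, 5, size=200)

clf = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf"))  # C-SVC, as in the title
clf.fit(X, y)
print(clf.predict(X[:5]))
```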
Author | Albert Berenguel; Oriol Ramos Terrades; Josep Llados; Cristina Cañero | ||||
Title | e-Counterfeit: a mobile-server platform for document counterfeit detection | Type | Conference Article | ||
Year | 2017 | Publication | 14th IAPR International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents a novel application to detect counterfeit identity documents forged by a scan-printing operation. Texture analysis approaches are proposed to extract validation features from the security background that is usually printed in documents such as IDs or banknotes. The main contribution of this work is the end-to-end mobile-server architecture, which provides a service for non-expert users and can therefore be used in several scenarios. The system also provides a crowdsourcing mode so that labeled images can be gathered, generating databases for incremental training of the algorithms. |
Address | Kyoto; Japan; November 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.061; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ BRL2018 | Serial | 3084 | ||
Permanent link to this record | |||||
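As a rough illustration of texture-based validation features like those mentioned in the abstract above, the sketch below computes a local binary pattern histogram from a grayscale patch of the document background using scikit-image; the specific descriptor and parameters are assumptions, not the paper's method.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def texture_descriptor(gray_patch, points=8, radius=1.0):
    """Uniform LBP histogram of a background patch, a common texture descriptor."""
    lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
    n_bins = points + 2                                  # uniform LBP yields P + 2 codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Example on a synthetic patch; a real system would crop patches from the scanned document.
patch = np.random.default_rng(0).random((64, 64))
print(texture_descriptor(patch).shape)   # (10,)
```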
Author | Jose Garcia-Rodriguez; Isabelle Guyon; Sergio Escalera; Alexandra Psarrou; Andrew Lewis; Miguel Cazorla | ||||
Title | Editorial: Special Issue on Computational Intelligence for Vision and Robotics | Type | Journal Article | ||
Year | 2017 | Publication | Neural Computing and Applications | Abbreviated Journal | Neural Computing and Applications |
Volume | 28 | Issue | 5 | Pages | 853–854 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA; MILAB; does not mention | Approved | no | |
Call Number | Admin @ si @ GGE2017 | Serial | 2845 | ||
Permanent link to this record | |||||
Author | Daniel Hernandez; Antonio Espinosa; David Vazquez; Antonio Lopez; Juan Carlos Moure | ||||
Title | Embedded Real-time Stixel Computation | Type | Conference Article | ||
Year | 2017 | Publication | GPU Technology Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | GPU; CUDA; Stixels; Autonomous Driving | ||||
Abstract | |||||
Address | Silicon Valley; USA; May 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | GTC | ||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | ADAS @ adas @ HEV2017a | Serial | 2879 | ||
Permanent link to this record |