Records | |||||
---|---|---|---|---|---|
Author | Xim Cerda-Company; C. Alejandro Parraga; Xavier Otazu | ||||
Title | Which tone-mapping is the best? A comparative study of tone-mapping perceived quality | Type | Abstract | ||
Year | 2014 | Publication | Perception | Abbreviated Journal | |
Volume | 43 | Issue | Pages | 106 | |
Keywords | |||||
Abstract | Perception 43 ECVP Abstract Supplement. High-dynamic-range (HDR) imaging refers to the methods designed to increase the brightness dynamic range present in standard digital imaging techniques. This increase is achieved by taking the same picture under different exposure values and mapping the intensity levels into a single image by way of a tone-mapping operator (TMO). Currently, there is no agreement on how to evaluate the quality of different TMOs. In this work we psychophysically evaluate 15 different TMOs, obtaining rankings based on the perceived properties of the resulting tone-mapped images. We performed two different experiments on a calibrated CRT display using 10 subjects: (1) a study of the internal relationships between grey-levels and (2) a pairwise comparison of the resulting 15 tone-mapped images. In (1) observers internally matched the grey-levels to a reference inside the tone-mapped images and in the real scene. In (2) observers performed a pairwise comparison of the tone-mapped images alongside the real scene. We obtained two rankings of the TMOs according to their performance. In (1) the best algorithm was iCAM by J. Kuang et al. (2007) and in (2) the best algorithm was the TMO by Krawczyk et al. (2005). Our results also show no correlation between these two rankings. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECVP | ||
Notes | NEUROBIT; 600.074 | Approved | no | ||
Call Number | Admin @ si @ CPO2014 | Serial | 2527 | ||
Permanent link to this record | |||||
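The abstract above does not say how the pairwise judgements were aggregated into a ranking; a minimal win-count (Copeland-style) aggregation, with hypothetical TMO names, might look like:

```python
from collections import defaultdict

def rank_from_pairwise(comparisons):
    """Rank items by their number of pairwise wins.

    `comparisons` is a list of (winner, loser) pairs, e.g. one entry per
    observer judgement in a paired-comparison experiment.  Returns items
    sorted from most to least preferred (ties break arbitrarily).
    """
    wins = defaultdict(int)
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        items.update((winner, loser))
    return sorted(items, key=lambda i: wins[i], reverse=True)

# Toy judgements over three hypothetical tone-mapping operators
judgements = [("iCAM", "TMO-A"), ("iCAM", "TMO-B"), ("TMO-A", "TMO-B")]
print(rank_from_pairwise(judgements))  # ['iCAM', 'TMO-A', 'TMO-B']
```

In practice a probabilistic model such as Bradley-Terry or Thurstone Case V is often fitted instead of raw win counts.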
Author | Sergio Escalera; Xavier Baro; Jordi Gonzalez; Miguel Angel Bautista; Meysam Madadi; Miguel Reyes; Victor Ponce; Hugo Jair Escalante; Jaime Shotton; Isabelle Guyon | ||||
Title | ChaLearn Looking at People Challenge 2014: Dataset and Results | Type | Conference Article | ||
Year | 2014 | Publication | ECCV Workshop on ChaLearn Looking at People | Abbreviated Journal | |
Volume | 8925 | Issue | Pages | 459-473 | |
Keywords | Human Pose Recovery; Behavior Analysis; Action and interactions; Multi-modal gestures; recognition | ||||
Abstract | This paper summarizes the ChaLearn Looking at People 2014 challenge data and the results obtained by the participants. The competition was split into three independent tracks: human pose recovery from RGB data, action and interaction recognition from RGB data sequences, and multi-modal gesture recognition from RGB-Depth sequences. For all the tracks, the goal was to perform user-independent recognition in sequences of continuous images using the overlapping Jaccard index as the evaluation measure. In this edition of the ChaLearn challenge, two large novel data sets were made publicly available and the Microsoft Codalab platform was used to manage the competition. Outstanding results were achieved in the three challenge tracks, with accuracy results of 0.20, 0.50, and 0.85 for pose recovery, action/interaction recognition, and multi-modal gesture recognition, respectively. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | HuPBA; ISE; 600.063;MV | Approved | no | ||
Call Number | Admin @ si @ EBG2014 | Serial | 2529 | ||
Permanent link to this record | |||||
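The overlapping Jaccard index used as the evaluation measure can be sketched for a single pair of frame intervals (the challenge protocol aggregates this over many events and classes):

```python
def jaccard_index(pred, gt):
    """Overlap between two frame intervals (start, end), inclusive.

    Returns |intersection| / |union| -- 1.0 for identical intervals,
    0.0 for disjoint ones.
    """
    inter = max(0, min(pred[1], gt[1]) - max(pred[0], gt[0]) + 1)
    union = (pred[1] - pred[0] + 1) + (gt[1] - gt[0] + 1) - inter
    return inter / union

# A prediction covering frames 10-19 vs. ground truth 15-24:
print(jaccard_index((10, 19), (15, 24)))  # 5/15, about 0.333
```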
Author | Xavier Perez Sala; Fernando De la Torre; Laura Igual; Sergio Escalera; Cecilio Angulo | ||||
Title | Subspace Procrustes Analysis | Type | Conference Article | ||
Year | 2014 | Publication | ECCV Workshop on ChaLearn Looking at People | Abbreviated Journal | |
Volume | 8925 | Issue | Pages | 654-668 | |
Keywords | |||||
Abstract | Procrustes Analysis (PA) has been a popular technique to align and build 2-D statistical models of shapes. Given a set of 2-D shapes, PA is applied to remove rigid transformations. Then, a non-rigid 2-D model is computed by modeling (e.g., PCA) the residual. Although PA has been widely used, it has several limitations for modeling 2-D shapes: occluded landmarks and missing data can result in local minima solutions, and there is no guarantee that the 2-D shapes provide a uniform sampling of the 3-D space of rotations for the object. To address previous issues, this paper proposes Subspace PA (SPA). Given several instances of a 3-D object, SPA computes the mean and a 2-D subspace that can simultaneously model all rigid and non-rigid deformations of the 3-D object. We propose a discrete (DSPA) and continuous (CSPA) formulation for SPA, assuming that 3-D samples of an object are provided. DSPA extends the traditional PA, and produces unbiased 2-D models by uniformly sampling different views of the 3-D object. CSPA provides a continuous approach to uniformly sample the space of 3-D rotations, being more efficient in space and time. Experiments using SPA to learn 2-D models of bodies from motion capture data illustrate the benefits of our approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | OR; HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ PTI2014 | Serial | 2539 | ||
Permanent link to this record | |||||
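The rigid-alignment step that SPA generalises is classical 2-D Procrustes analysis; a minimal NumPy sketch of that baseline (not the paper's SPA formulation) removes translation, scale, and rotation via an SVD:

```python
import numpy as np

def procrustes_align(X, Y):
    """Rigidly align shape Y (n x 2 landmarks) onto shape X.

    Removes translation (centering), scale (unit Frobenius norm), and
    rotation -- the classic Procrustes step that precedes non-rigid
    (e.g. PCA) modelling of the residual.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)   # optimal orthogonal map
    R = U @ Vt                            # may include a reflection
    return Yc @ R.T

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
theta = np.pi / 4                         # the same square, rotated 45 deg
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
aligned = procrustes_align(square, square @ rot.T)
```

After alignment, `aligned` matches the centered, normalised `square` up to numerical precision.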
Author | Eloi Puertas; Miguel Angel Bautista; Daniel Sanchez; Sergio Escalera; Oriol Pujol | ||||
Title | Learning to Segment Humans by Stacking their Body Parts | Type | Conference Article | ||
Year | 2014 | Publication | ECCV Workshop on ChaLearn Looking at People | Abbreviated Journal | |
Volume | 8925 | Issue | Pages | 685-697 | |
Keywords | Human body segmentation; Stacked Sequential Learning | ||||
Abstract | Human segmentation in still images is a complex task due to the wide range of body poses and drastic changes in environmental conditions. Usually, human body segmentation is treated in a two-stage fashion. First, a human body part detection step is performed, and then, human part detections are used as prior knowledge to be optimized by segmentation strategies. In this paper, we present a two-stage scheme based on Multi-Scale Stacked Sequential Learning (MSSL). We define an extended feature set by stacking a multi-scale decomposition of body part likelihood maps. These likelihood maps are obtained in a first stage by means of an ECOC ensemble of soft body part detectors. In a second stage, contextual relations of part predictions are learnt by a binary classifier, obtaining an accurate body confidence map. The obtained confidence map is fed to a graph cut optimization procedure to obtain the final segmentation. Results show improved segmentation when MSSL is included in the human segmentation pipeline. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ PBS2014 | Serial | 2553 | ||
Permanent link to this record | |||||
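The multi-scale stacking idea can be sketched as mean-pooling a part-likelihood map at several scales and stacking the results per pixel; this assumes map dimensions divisible by every scale, and the paper's exact decomposition may differ:

```python
import numpy as np

def multiscale_stack(likelihood, scales=(1, 2, 4)):
    """Stack a multi-scale decomposition of a part-likelihood map.

    For each scale s, the map is mean-pooled over s x s blocks and
    upsampled back to full resolution, so every pixel gets one feature
    per scale -- the extended feature set fed to the second-stage
    (stacked) classifier.  Assumes h and w are divisible by each scale.
    """
    h, w = likelihood.shape
    feats = []
    for s in scales:
        pooled = likelihood.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        feats.append(np.kron(pooled, np.ones((s, s))))  # nearest upsample
    return np.stack(feats, axis=-1)   # shape (h, w, len(scales))

lik = np.arange(16, dtype=float).reshape(4, 4) / 15
feats = multiscale_stack(lik)         # feats.shape == (4, 4, 3)
```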
Author | Gabriel Villalonga; Sebastian Ramos; German Ros; David Vazquez; Antonio Lopez | ||||
Title | 3D Pedestrian Detection via Random Forest | Type | Miscellaneous | ||
Year | 2014 | Publication | European Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 231-238 | ||
Keywords | Pedestrian Detection | ||||
Abstract | Our demo focuses on showing the extraordinary performance of our novel 3D pedestrian detector along with its simplicity and real-time capabilities. This detector has been designed for autonomous driving applications, but it can also be applied in other scenarios that cover both outdoor and indoor applications. Our pedestrian detector is based on the combination of a random forest classifier with HOG-LBP features and the inclusion of a preprocessing stage based on 3D scene information in order to precisely determine the image regions where the detector should search for pedestrians. This approach yields a highly accurate system that runs in real time, as required by many computer vision and robotics applications. | ||||
Address | Zurich; Switzerland; September 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCV-Demo | ||
Notes | ADAS; 600.076 | Approved | no | ||
Call Number | Admin @ si @ VRR2014 | Serial | 2570 | ||
Permanent link to this record | |||||
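The 3D preprocessing stage can be illustrated with pinhole geometry: a candidate window is kept only if its implied real-world height is human-sized. The focal length and height range below are illustrative values, not the paper's:

```python
def plausible_pedestrian_box(box_height_px, depth_m, focal_px=1000.0,
                             min_h=1.2, max_h=2.2):
    """Check whether a candidate window is geometrically plausible.

    Under a pinhole model, an object of real height H metres at depth
    Z metres projects to roughly focal * H / Z pixels.  Windows whose
    implied real height falls outside a plausible human range are
    discarded before the (more expensive) HOG-LBP + random-forest
    classifier runs.  All parameter values here are illustrative.
    """
    implied_height_m = box_height_px * depth_m / focal_px
    return min_h <= implied_height_m <= max_h

print(plausible_pedestrian_box(170, 10.0))   # 1.7 m at 10 m -> True
print(plausible_pedestrian_box(40, 10.0))    # 0.4 m -> too small, False
```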
Author | Alicia Fornes; Josep Llados; Joan Mas; Joana Maria Pujadas-Mora; Anna Cabre | ||||
Title | A Bimodal Crowdsourcing Platform for Demographic Historical Manuscripts | Type | Conference Article | ||
Year | 2014 | Publication | Digital Access to Textual Cultural Heritage Conference | Abbreviated Journal | |
Volume | Issue | Pages | 103-108 | ||
Keywords | |||||
Abstract | In this paper we present a crowdsourcing web-based application for extracting information from demographic handwritten document images. The proposed application integrates two points of view: the semantic information for demographic research, and the ground-truthing for document analysis research. Concretely, the application has the contents view, where the information is recorded into forms, and the labeling view, with the word labels for evaluating document analysis techniques. The crowdsourcing architecture makes it possible to accelerate the information extraction (many users can work simultaneously), validate the information, and easily provide feedback to the users. We finally show how the proposed application can be extended to other kinds of demographic historical manuscripts. | ||||
Address | Madrid; May 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-2588-2 | Medium | ||
Area | Expedition | Conference | DATeCH | ||
Notes | DAG; 600.061; 602.006; 600.077 | Approved | no | ||
Call Number | Admin @ si @ FLM2014 | Serial | 2516 | ||
Permanent link to this record | |||||
Author | David Fernandez; R.Manmatha; Josep Llados; Alicia Fornes | ||||
Title | Sequential Word Spotting in Historical Handwritten Documents | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis and Systems | Abbreviated Journal | |
Volume | Issue | Pages | 101 - 105 | ||
Keywords | |||||
Abstract | In this work we present a handwritten word spotting approach that takes advantage of the a priori known order of appearance of the query words. Given an ordered sequence of query word instances, the proposed approach performs a sequence alignment with the words in the target collection. Although the alignment is quite sparse, i.e., the number of words in the database is much higher than in the query set, the improvement in overall performance is noticeably higher than with isolated word spotting. As application dataset, we use a collection of handwritten marriage licenses, taking advantage of the ordered index pages of family names. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.061; 600.056; 602.006; 600.077 | Approved | no | ||
Call Number | Admin @ si @ FML2014 | Serial | 2462 | ||
Permanent link to this record | |||||
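One way to realise such an order-preserving alignment is a dynamic-programming pass over the target word sequence, sketched here with a user-supplied word-image distance. This is a generic monotone alignment, not necessarily the authors' exact formulation:

```python
def align_query_sequence(query, target, dist):
    """Monotone DP alignment of an ordered query word list to a longer
    target word sequence: each query word is matched to exactly one
    target position, preserving order and minimising summed distances.
    Returns (total_cost, matched target indices).
    """
    INF = float("inf")
    n, m = len(query), len(target)
    # dp[i][j]: best cost matching query[:i] within target[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    dp[0] = [0.0] * (m + 1)
    for i in range(1, n + 1):
        for j in range(i, m + 1):
            skip = dp[i][j - 1]                       # target[j-1] unmatched
            match = dp[i - 1][j - 1] + dist(query[i - 1], target[j - 1])
            dp[i][j], back[i][j] = min((skip, False), (match, True))
    # Backtrack the matched target positions.
    i, j, picks = n, m, []
    while i > 0:
        if back[i][j]:
            picks.append(j - 1)
            i -= 1
        j -= 1
    return dp[n][m], picks[::-1]
```

For example, with a zero/one distance, matching the ordered queries `["b", "d"]` inside the page `["a", "b", "c", "d"]` returns cost `0.0` and positions `[1, 3]`.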
Author | Christophe Rigaud; Dimosthenis Karatzas; Jean-Christophe Burie; Jean-Marc Ogier | ||||
Title | Color descriptor for content-based drawing retrieval | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis and Systems | Abbreviated Journal | |
Volume | Issue | Pages | 267 - 271 | ||
Keywords | |||||
Abstract | Human detection is an active field of research in computer vision. Extending it to human-like drawings, such as the main characters in comic book stories, is not trivial. Comics analysis is a very recent field of research at the intersection of graphics, text, object and people recognition. The detection of the main comic characters is an essential step towards fully automatic comic book understanding. This paper presents a color-based approach for comic character retrieval using content-based drawing retrieval and a color palette. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.056; 600.077 | Approved | no | ||
Call Number | Admin @ si @ RKB2014 | Serial | 2479 | ||
Permanent link to this record | |||||
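A colour-palette comparison can be sketched with histogram intersection over normalised colour histograms; this is an illustrative choice, and the paper's descriptor may differ:

```python
def histogram_intersection(h1, h2):
    """Similarity between two normalised colour histograms (palettes):
    the sum of bin-wise minima, equal to 1.0 for identical
    distributions and 0.0 for non-overlapping ones.
    """
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy 3-bin colour palettes of two hypothetical comic characters
yellow_char = [0.7, 0.2, 0.1]
blue_char = [0.1, 0.2, 0.7]
print(histogram_intersection(yellow_char, yellow_char))  # ~ 1.0
print(histogram_intersection(yellow_char, blue_char))    # ~ 0.4
```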
Author | Dimosthenis Karatzas; Sergi Robles; Lluis Gomez | ||||
Title | An on-line platform for ground truthing and performance evaluation of text extraction systems | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis and Systems | Abbreviated Journal | |
Volume | Issue | Pages | 242 - 246 | ||
Keywords | |||||
Abstract | This paper presents a set of on-line software tools for creating ground truth and calculating performance evaluation metrics for text extraction tasks such as localization, segmentation and recognition. The platform supports the definition of comprehensive ground truth information at different text representation levels while it offers centralised management and quality control of the ground truthing effort. It implements a range of state-of-the-art performance evaluation algorithms and offers functionality for the definition of evaluation scenarios, on-line calculation of various performance metrics and visualisation of the results. The presented platform, which comprises the backbone of the ICDAR 2011 (challenge 1) and 2013 (challenges 1 and 2) Robust Reading competitions, is now made available for public use. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.056; 600.077 | Approved | no | ||
Call Number | Admin @ si @ KRG2014 | Serial | 2491 | ||
Permanent link to this record | |||||
Author | P. Wang; V. Eglin; C. Garcia; C. Largeron; Josep Llados; Alicia Fornes | ||||
Title | A Novel Learning-free Word Spotting Approach Based on Graph Representation | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis and Systems | Abbreviated Journal | |
Volume | Issue | Pages | 207-211 | ||
Keywords | |||||
Abstract | Effective information retrieval on handwritten document images has always been a challenging task. In this paper, we propose a novel handwritten word spotting approach based on graph representation. The presented model comprises both topological and morphological signatures of the handwriting. Skeleton-based graphs with Shape Context-labelled vertices are established for connected components. Each word image is represented as a sequence of graphs. In order to be robust to handwriting variations, an exhaustive merging process based on the DTW alignment result is introduced in the similarity measure between word images. With respect to computational complexity, an approximate graph edit distance approach using bipartite matching is employed for graph matching. The experiments on the George Washington dataset and the marriage records from the Barcelona Cathedral dataset demonstrate that the proposed approach outperforms the state-of-the-art structural methods. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.061; 602.006; 600.077 | Approved | no | ||
Call Number | Admin @ si @ WEG2014b | Serial | 2517 | ||
Permanent link to this record | |||||
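The bipartite approximation of graph edit distance can be illustrated with a greedy matching over node substitution costs. The paper relies on an optimal assignment (e.g. the Hungarian algorithm), and true GED also accounts for edge structure; both are simplified away in this sketch:

```python
def approx_graph_distance(nodes_a, nodes_b, node_cost, del_cost=1.0):
    """Crude bipartite approximation of graph edit distance.

    Nodes of the two graphs are matched one-to-one by a greedy pass
    over the substitution-cost matrix (an optimal bipartite matching
    would use the Hungarian algorithm); every unmatched node pays a
    deletion/insertion cost.  Edge structure is ignored here.
    """
    pairs = sorted(
        (node_cost(a, b), i, j)
        for i, a in enumerate(nodes_a)
        for j, b in enumerate(nodes_b)
    )
    used_a, used_b, total = set(), set(), 0.0
    for c, i, j in pairs:
        if i not in used_a and j not in used_b:
            used_a.add(i)
            used_b.add(j)
            total += c
    unmatched = len(nodes_a) + len(nodes_b) - 2 * len(used_a)
    return total + del_cost * unmatched
```

With node labels 1, 2, 3 versus 1, 2 and an absolute-difference cost, the two shared labels match at zero cost and the leftover node pays one deletion, giving a distance of 1.0.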
Author | Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades | ||||
Title | Spotting Symbol Using Sparsity over Learned Dictionary of Local Descriptors | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis and Systems | Abbreviated Journal | |
Volume | Issue | Pages | 156-160 | ||
Keywords | |||||
Abstract | This paper proposes a new approach to spot symbols in graphical documents using sparse representations. More specifically, a dictionary is learned from a training database of local descriptors defined over the documents. Following their sparse representations, interest points sharing similar properties are used to define interest regions. Using an original adaptation of information retrieval techniques, a vector model for interest regions and for a query symbol is built based on its sparsity in a visual vocabulary whose visual words are the columns of the learned dictionary. The matching process compares the similarity between vector models. Evaluation on the SESYD datasets demonstrates that our method is promising. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.077 | Approved | no | ||
Call Number | Admin @ si @ DTR2014 | Serial | 2543 | ||
Permanent link to this record | |||||
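Sparse representation over a learned dictionary is typically computed with a pursuit algorithm; a minimal Orthogonal Matching Pursuit sketch follows (the paper does not state which solver it uses):

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: sparse-code x over dictionary D.

    D has unit-norm columns (the 'visual words'); returns a coefficient
    vector with at most n_nonzero non-zeros such that D @ coef
    approximates x.  Assumes n_nonzero >= 1.
    """
    residual, support = x.astype(float), []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual,
        # then re-fit all chosen atoms jointly by least squares.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ sol
    coef[support] = sol
    return coef
```

With an identity dictionary, a single-atom code recovers the input exactly, e.g. `omp(np.eye(3), np.array([0., 2., 0.]), 1)` gives `[0., 2., 0.]`.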
Author | Marçal Rusiñol; J. Chazalon; Jean-Marc Ogier | ||||
Title | Combining Focus Measure Operators to Predict OCR Accuracy in Mobile-Captured Document Images | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis and Systems | Abbreviated Journal | |
Volume | Issue | Pages | 181 - 185 | ||
Keywords | |||||
Abstract | Mobile document image acquisition is a new trend raising serious issues in business document processing workflows. Such digitization procedures are unreliable and introduce many distortions, which must be detected as soon as possible, on the mobile device, to avoid paying data transmission fees and losing information when a document that was only temporarily available can no longer be re-captured. In this context, out-of-focus blur is a major issue: users have no direct control over it, and it seriously degrades OCR recognition. In this paper, we concentrate on the estimation of focus quality, to ensure sufficient legibility of a document image for OCR processing. We propose two contributions to improve OCR accuracy prediction for mobile-captured document images. First, we present 24 focus measures, never tested on document images, which are fast to compute and require no training. Second, we show that a combination of those measures enables state-of-the-art performance regarding the correlation with OCR accuracy. The resulting approach is fast, robust, and easy to implement on a mobile device. Experiments are performed on a public dataset, and precise details about image processing are given. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 601.223; 600.077 | Approved | no | ||
Call Number | Admin @ si @ RCO2014a | Serial | 2545 | ||
Permanent link to this record | |||||
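A classic training-free focus measure of the kind the paper combines is the variance of the Laplacian (not necessarily among its 24): sharp images have strong second derivatives, so a low value signals out-of-focus blur.

```python
import numpy as np

def laplacian_variance(img):
    """Variance-of-Laplacian focus measure on a 2-D grayscale array.

    Applies the 4-neighbour discrete Laplacian to interior pixels and
    returns the variance of the response; higher means sharper.
    """
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

sharp = np.indices((8, 8)).sum(0) % 2 * 1.0   # checkerboard: high measure
flat = np.ones((8, 8))                        # uniform image: measure 0
```

On these toy inputs, the uniform image scores exactly zero while the checkerboard scores well above it.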
Author | Carlo Gatta; Adriana Romero; Joost Van de Weijer | ||||
Title | Unrolling loopy top-down semantic feedback in convolutional deep networks | Type | Conference Article | ||
Year | 2014 | Publication | Workshop on Deep Vision: Deep Learning for Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 498-505 | ||
Keywords | |||||
Abstract | In this paper, we propose a novel way to perform top-down semantic feedback in convolutional deep networks for efficient and accurate image parsing. We also show how to add global appearance/semantic features, which have been shown to improve image parsing performance in state-of-the-art methods and were not present in previous convolutional approaches. The proposed method is characterised by efficient training and sufficiently fast testing. We use the well-known SIFTflow dataset to numerically show the advantages provided by our contributions, and to compare with state-of-the-art convolutional image parsing approaches. | ||||
Address | Columbus; Ohio; June 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | LAMP; MILAB; 601.160; 600.079 | Approved | no | ||
Call Number | Admin @ si @ GRW2014 | Serial | 2490 | ||
Permanent link to this record | |||||
Author | Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell; Dimitris Samaras | ||||
Title | The Photometry of Intrinsic Images | Type | Conference Article | ||
Year | 2014 | Publication | 27th IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1494-1501 | ||
Keywords | |||||
Abstract | Intrinsic characterization of scenes is often the best way to overcome the illumination variability artifacts that complicate most computer vision problems, from 3D reconstruction to object or material recognition. This paper examines the deficiency of existing intrinsic image models to accurately account for the effects of illuminant color and sensor characteristics in the estimation of intrinsic images and presents a generic framework which incorporates insights from color constancy research to the intrinsic image decomposition problem. The proposed mathematical formulation includes information about the color of the illuminant and the effects of the camera sensors, both of which modify the observed color of the reflectance of the objects in the scene during the acquisition process. By modeling these effects, we get a “truly intrinsic” reflectance image, which we call absolute reflectance, which is invariant to changes of illuminant or camera sensors. This model allows us to represent a wide range of intrinsic image decompositions depending on the specific assumptions on the geometric properties of the scene configuration and the spectral properties of the light source and the acquisition system, thus unifying previous models in a single general framework. We demonstrate that even partial information about sensors improves significantly the estimated reflectance images, thus making our method applicable for a wide range of sensors. We validate our general intrinsic image framework experimentally with both synthetic data and natural images. | ||||
Address | Columbus; Ohio; USA; June 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | CIC; 600.052; 600.051; 600.074 | Approved | no | ||
Call Number | Admin @ si @ SPB2014 | Serial | 2506 | ||
Permanent link to this record | |||||
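The starting point of any intrinsic decomposition is the multiplicative model image = reflectance × shading, which becomes additive in log space. The toy below only illustrates that identity with made-up values; the paper's contribution is estimating these factors while also modelling illuminant colour and sensor effects:

```python
import numpy as np

# Classical intrinsic-image model: image = reflectance * shading, per pixel.
reflectance = np.array([[0.2, 0.8],
                        [0.5, 0.5]])
shading = np.array([[1.0, 0.5],
                    [0.25, 1.0]])
image = reflectance * shading

# With the shading known, reflectance is recovered by per-pixel division,
# i.e. subtraction in the log domain.
recovered = np.exp(np.log(image) - np.log(shading))
assert np.allclose(recovered, reflectance)
```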
Author | M. Danelljan; Fahad Shahbaz Khan; Michael Felsberg; Joost Van de Weijer | ||||
Title | Adaptive color attributes for real-time visual tracking | Type | Conference Article | ||
Year | 2014 | Publication | 27th IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1090 - 1097 | ||
Keywords | |||||
Abstract | Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. In contrast, for object recognition and detection, sophisticated color features combined with luminance have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient, and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second. | ||||
Address | Columbus; Ohio; USA; June 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | CIC; LAMP; 600.074; 600.079 | Approved | no | ||
Call Number | Admin @ si @ DKF2014 | Serial | 2509 | ||
Permanent link to this record |
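The distance-precision score quoted in the abstract is commonly computed as the fraction of frames whose predicted target centre lies within a pixel threshold of the ground truth (20 px is the customary threshold in tracking benchmarks; the exact protocol may differ):

```python
def distance_precision(pred_centers, gt_centers, threshold=20.0):
    """Fraction of frames with centre-location error <= threshold px.

    Each element of the two sequences is an (x, y) target centre for
    one frame; Euclidean error is compared against the threshold.
    """
    hits = sum(
        1 for (px, py), (gx, gy) in zip(pred_centers, gt_centers)
        if ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5 <= threshold
    )
    return hits / len(gt_centers)

print(distance_precision([(0, 0), (100, 0), (0, 15)],
                         [(5, 0), (0, 0), (0, 0)]))  # 2 of 3 frames hit
```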