Author Jaime Moreno; Xavier Otazu
  Title Image compression algorithm based on Hilbert scanning of embedded quadTrees: an introduction of the Hi-SET coder Type Conference Article
  Year 2011 Publication IEEE International Conference on Multimedia and Expo Abbreviated Journal  
  Volume Issue Pages 1-6  
  Keywords  
  Abstract In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It represents an image as an embedded bitstream along a fractal function. Embedding is an important feature of modern image compression algorithms; as Salomon notes [1, p. 614], another feature, and perhaps a unique one, is that the best quality is achieved for the number of bits input to the decoder at any point during decoding. Hi-SET also possesses this feature. Furthermore, the coder is based on a quadtree partition strategy which, applied to image transforms such as the discrete cosine or wavelet transform, yields energy clustering in both frequency and space. The coding algorithm consists of three general steps and uses just a list of significant pixels. The proposed coder is implemented for gray-scale and color image compression. Hi-SET compressed images are, on average, 6.20 dB better than those obtained by other compression techniques based on Hilbert scanning. Moreover, Hi-SET improves image quality by 1.39 dB for gray-scale and 1.00 dB for color compression compared with the JPEG2000 coder.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1945-7871 ISBN 978-1-61284-348-3 Medium  
  Area Expedition Conference ICME
  Notes CIC Approved no  
  Call Number Admin @ si @ MoO2011a Serial 2176  
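A minimal sketch of the Hilbert scanning step underlying the Hi-SET coder described above (the quadtree significance coding itself is not reproduced here). The function name d2xy and the grid side n are illustrative, not from the paper; this is the standard index-to-coordinate conversion for a Hilbert curve, in Python:

    def d2xy(n, d):
        # Map index d (0 .. n*n - 1) along a Hilbert curve to pixel (x, y)
        # in an n-by-n grid, where n is a power of two.
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                     # rotate the quadrant when needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    # Scanning an 8x8 coefficient block in Hilbert order:
    scan_order = [d2xy(8, d) for d in range(64)]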
 

 
Author Marc Bolaños; R. Mestre; Estefania Talavera; Xavier Giro; Petia Radeva
  Title Visual Summary of Egocentric Photostreams by Representative Keyframes Type Conference Article
  Year 2015 Publication IEEE International Conference on Multimedia and Expo ICMEW2015 Abbreviated Journal  
  Volume Issue Pages 1-6  
  Keywords egocentric; lifelogging; summarization; keyframes  
  Abstract Building a visual summary from an egocentric photostream captured by a lifelogging wearable camera is of high interest for different applications (e.g. memory reinforcement). In this paper, we propose a new summarization method based on keyframe selection that uses visual features extracted by means of a convolutional neural network. Our method applies unsupervised clustering to divide the photostream into events, and then extracts the most relevant keyframe for each event. We assess the results through a blind-taste test in which a group of 20 people rated the quality of the summaries.
 
  Address Torino; Italy; July 2015
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition
  ISSN ISBN 978-1-4799-7079-7 Medium  
  Area Expedition Conference ICME
  Notes MILAB Approved no  
  Call Number Admin @ si @ BMT2015 Serial 2638  
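The summarization pipeline in the abstract above (CNN features, unsupervised event clustering, one keyframe per event) can be sketched as follows. The paper's clustering algorithm and keyframe-relevance criterion are not specified in the abstract, so k-means and a nearest-to-centroid rule are stand-in assumptions:

    import numpy as np
    from sklearn.cluster import KMeans

    def select_keyframes(features, n_events):
        # features: (n_frames, dim) array of CNN descriptors, one per photo
        features = np.asarray(features, dtype=float)
        km = KMeans(n_clusters=n_events, n_init=10).fit(features)
        keyframes = []
        for c in range(n_events):
            members = np.flatnonzero(km.labels_ == c)         # frames of event c
            dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
            keyframes.append(int(members[np.argmin(dists)]))  # frame nearest the centroid
        return sorted(keyframes)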
 

 
Author Sergio Escalera; Jordi Gonzalez; Xavier Baro; Miguel Reyes; Oscar Lopes; Isabelle Guyon; V. Athitsos; Hugo Jair Escalante
  Title Multi-modal Gesture Recognition Challenge 2013: Dataset and Results Type Conference Article
  Year 2013 Publication 15th ACM International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages 445-452  
  Keywords  
  Abstract The recognition of continuous natural gestures is a complex and challenging problem due to the multi-modal nature of the visual cues involved (e.g. finger and lip movements, subtle facial expressions, body pose, etc.), as well as technical limitations such as spatial and temporal resolution and unreliable depth cues. In order to promote research advances in this field, we organized a challenge on multi-modal gesture recognition. We made available a large video database of 13,858 gestures from a lexicon of 20 Italian gesture categories recorded with a Kinect camera, providing the audio, skeletal model, user mask, RGB and depth images. The focus of the challenge was on user-independent multiple gesture learning. There are no resting positions, and the gestures are performed in continuous sequences lasting 1-2 minutes, containing between 8 and 20 gesture instances each. As a result, the dataset contains around 1,720,800 frames. In addition to the 20 main gesture categories, 'distracter' gestures are included, meaning that additional audio and gestures outside the vocabulary are present. The final evaluation of the challenge was defined in terms of the Levenshtein edit distance, where the goal was to indicate the real order of gestures within the sequence. 54 international teams participated in the challenge, and outstanding results were obtained by the first-ranked participants.
 
  Address Sydney; Australia; December 2013
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-2129-7 Medium  
  Area Expedition Conference ICMI
  Notes HUPBA; ISE; 600.063;MV Approved no  
  Call Number Admin @ si @ EGB2013 Serial 2373  
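The evaluation described above ranks participants by the Levenshtein edit distance between predicted and ground-truth gesture sequences. A one-row dynamic-programming version of that distance is sketched below; normalizing by the ground-truth length, as in the final comment, is an assumption about the exact scoring:

    def levenshtein(pred, truth):
        # Edit distance between two sequences of gesture labels.
        dp = list(range(len(truth) + 1))       # row for the empty prefix of pred
        for i, p in enumerate(pred, start=1):
            prev, dp[0] = dp[0], i
            for j, t in enumerate(truth, start=1):
                cur = dp[j]
                dp[j] = min(dp[j] + 1,         # delete p
                            dp[j - 1] + 1,     # insert t
                            prev + (p != t))   # substitute p with t
                prev = cur
        return dp[-1]

    # score = levenshtein(predicted, actual) / len(actual)   # assumed normalization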
 

 
Author David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin
  Title Virtual Worlds and Active Learning for Human Detection Type Conference Article
  Year 2011 Publication 13th International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages 393-400  
  Keywords Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning  
  Abstract Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a subject of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is a labour-intensive manual step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs as well, on real-world images, as one trained on manually labelled real-world samples. To do so, we cast the problem as one of domain adaptation, assuming that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Ultimately, our human model is learnt from the combination of virtual- and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.
  Address Alicante, Spain  
  Corporate Author Thesis  
  Publisher ACM DL Place of Publication New York, NY, USA Editor
  Language English Summary Language English Original Title Virtual Worlds and Active Learning for Human Detection  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-0641-6 Medium  
  Area Expedition Conference ICMI
  Notes ADAS Approved yes  
  Call Number ADAS @ adas @ VLP2011a Serial 1683  
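The abstract above combines virtual-world training with a non-standard active learning step whose query rule is not given there. The sketch below therefore uses plain uncertainty sampling (query the real-world samples closest to the current decision boundary) as an illustrative stand-in, not the paper's technique:

    import numpy as np
    from sklearn.svm import LinearSVC

    def query_real_samples(clf, real_feats, batch_size):
        # Pick the real-world samples the current detector is least sure about.
        margins = np.abs(clf.decision_function(real_feats))  # distance to hyperplane
        return np.argsort(margins)[:batch_size]              # most ambiguous first

    # clf = LinearSVC().fit(virtual_feats, virtual_labels)   # train on virtual world
    # ask an annotator for labels of real_feats[query_real_samples(clf, real_feats, 100)],
    # then retrain on the virtual + newly labelled real samples.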
 

 
Author Victor Ponce; Sergio Escalera; Xavier Baro
  Title Multi-modal Social Signal Analysis for Predicting Agreement in Conversation Settings Type Conference Article
  Year 2013 Publication 15th ACM International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages 495-502  
  Keywords  
  Abstract In this paper we present a non-invasive ambient intelligence framework for the analysis of non-verbal communication applied to conversational settings. In particular, we apply feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues drawn from the fields of psychology and observational methodology. We test our methodology on data captured in victim-offender mediation scenarios. Using different state-of-the-art classification approaches, our system achieves over 75% recognition accuracy in predicting agreement among the parties involved in the conversations, using the experts' opinions as ground truth.
  Address Sydney; Australia; December 2013
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-2129-7 Medium  
  Area Expedition Conference ICMI
  Notes HuPBA;MV Approved no  
  Call Number Admin @ si @ PEB2013 Serial 2488  
 

 
Author Ruth Aylett; Ginevra Castellano; Bogdan Raducanu; Ana Paiva; Marc Hanheide
  Title Long-term socially perceptive and interactive robot companions: challenges and future perspectives Type Conference Article
  Year 2011 Publication 13th International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages 323-326  
  Keywords human-robot interaction, multimodal interaction, social robotics  
  Abstract This paper gives a brief overview of the challenges for multi-modal perception and generation applied to robot companions located in human social environments. It reviews the current state of both perception and generation and their immediate technical challenges, then goes on to consider the extra issues raised by embodiment and social context. Finally, it briefly discusses the impact of systems that must function continually over months rather than just for a few hours.
  Address Alicante  
  Corporate Author Thesis  
  Publisher ACM Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-0641-6 Medium  
  Area Expedition Conference ICMI
  Notes OR;MV Approved no  
  Call Number Admin @ si @ ACR2011 Serial 1888  
 

 
Author Javier M. Olaso; Alain Vazquez; Leila Ben Letaifa; Mikel de Velasco; Aymen Mtibaa; Mohamed Amine Hmani; Dijana Petrovska-Delacretaz; Gerard Chollet; Cesar Montenegro; Asier Lopez-Zorrilla; Raquel Justo; Roberto Santana; Jofre Tenorio-Laranga; Eduardo Gonzalez-Fraile; Begoña Fernandez-Ruanova; Gennaro Cordasco; Anna Esposito; Kristin Beck Gjellesvik; Anna Torp Johansen; Maria Stylianou Kornes; Colin Pickard; Cornelius Glackin; Gary Cahalane; Pau Buch; Cristina Palmero; Sergio Escalera; Olga Gordeeva; Olivier Deroo; Anaïs Fernandez; Daria Kyslitska; Jose Antonio Lozano; Maria Ines Torres; Stephan Schlogl
  Title The EMPATHIC Virtual Coach: a demo Type Conference Article
  Year 2021 Publication 23rd ACM International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages 848-851  
  Keywords  
  Abstract The main objective of the EMPATHIC project has been the design and development of a virtual coach to engage the healthy-senior user and to enhance well-being through awareness of personal status. The EMPATHIC approach addresses this objective through multimodal interactions supported by the GROW coaching model. The paper summarizes the main components of the EMPATHIC Virtual Coach (EMPATHIC-VC) and introduces a demonstration of the coaching sessions in selected scenarios.  
  Address Virtual; October 2021  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICMI
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ OVB2021 Serial 3644  
 

 
Author Sergio Escalera; Petia Radeva; Jordi Vitria; Xavier Baro; Bogdan Raducanu
  Title Modelling and Analyzing Multimodal Dyadic Interactions Using Social Networks Type Conference Article
  Year 2010 Publication 12th International Conference on Multimodal Interfaces and 7th Workshop on Machine Learning for Multimodal Interaction. Abbreviated Journal  
  Volume Issue Pages  
  Keywords Social interaction; Multimodal fusion, Influence model; Social network analysis  
  Abstract Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. First, speech detection is performed through an audio/visual fusion scheme based on stacked sequential learning. In the audio domain, speech is detected through clusterization of audio features; clusters are modelled by means of a one-state Hidden Markov Model containing a diagonal-covariance Gaussian Mixture Model. In the visual domain, speech detection is performed through differential-based feature extraction from the segmented mouth region and a dynamic programming matching procedure. Second, in order to model the dyadic interactions, we employ the Influence Model, whose states encode the previously integrated audio/visual data. Third, the social network is extracted based on the estimated influences. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The results are reported both in terms of accuracy of the audio/visual data fusion and of the centrality measures used to characterize the social network.
 
  Address Beijing (China)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICMI-MLI
  Notes OR;MILAB;HUPBA;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ ERV2010 Serial 1427  
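Once the Influence Model has produced pairwise influence estimates, the network-extraction step above reduces to computing centrality measures over the influence matrix. The abstract does not name the centrality measures used, so the in- and out-influence degrees and eigenvector centrality below are representative assumptions:

    import numpy as np

    def influence_centralities(W):
        # W[i, j]: estimated influence of participant i on participant j.
        W = np.asarray(W, dtype=float).copy()
        np.fill_diagonal(W, 0.0)                 # ignore self-influence
        out_influence = W.sum(axis=1)            # how strongly i drives the others
        in_influence = W.sum(axis=0)             # how strongly i is driven
        vals, vecs = np.linalg.eig(W.T)          # eigenvector centrality
        v = np.abs(vecs[:, np.argmax(vals.real)].real)
        return out_influence, in_influence, v / v.sum()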
 

 
Author Isabelle Guyon; Kristin Bennett; Gavin Cawley; Hugo Jair Escalante; Sergio Escalera; Tin Kam Ho; Nuria Macia; Bisakha Ray; Mehreen Saeed; Alexander Statnikov; Evelyne Viegas
  Title AutoML Challenge 2015: Design and First Results Type Conference Article
  Year 2015 Publication 32nd International Conference on Machine Learning, ICML workshop, JMLR proceedings ICML15 Abbreviated Journal  
  Volume Issue Pages 1-8  
  Keywords AutoML Challenge; machine learning; model selection; meta-learning; representation learning; active learning
  Abstract ChaLearn is organizing the Automatic Machine Learning (AutoML) contest 2015, which challenges participants to solve classification and regression problems without any human intervention. Participants' code is automatically run on the contest servers to train and test learning machines. However, there is no obligation to submit code; half of the prizes can be won by submitting prediction results only. Datasets of progressively increasing difficulty are introduced throughout the six rounds of the challenge. (Participants can enter the competition in any round.) The rounds alternate between phases in which learners are tested on datasets participants have not seen (AutoML), and phases in which participants have limited time to tweak their algorithms on those datasets to improve performance (Tweakathon). This challenge will push the state of the art in fully automatic machine learning on a wide range of real-world problems. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML.
 
  Address Lille; France; July 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICML
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ GBC2015c Serial 2656  
 

 
Author Isabelle Guyon; Imad Chaabane; Hugo Jair Escalante; Sergio Escalera; Damir Jajetic; James Robert Lloyd; Nuria Macia; Bisakha Ray; Lukasz Romaszko; Michele Sebag; Alexander Statnikov; Sebastien Treguer; Evelyne Viegas
  Title A brief Review of the ChaLearn AutoML Challenge: Any-time Any-dataset Learning without Human Intervention Type Conference Article
  Year 2016 Publication AutoML Workshop Abbreviated Journal  
  Volume Issue 1 Pages 1-8  
  Keywords AutoML Challenge; machine learning; model selection; meta-learning; representation learning; active learning
  Abstract The ChaLearn AutoML Challenge team conducted a large scale evaluation of fully automatic, black-box learning machines for feature-based classification and regression problems. The test bed was composed of 30 data sets from a wide variety of application domains and ranged across different types of complexity. Over six rounds, participants succeeded in delivering AutoML software capable of being trained and tested without human intervention. Although improvements can still be made to close the gap between human-tweaked and AutoML models, this competition contributes to the development of fully automated environments by challenging practitioners to solve problems under specific constraints and sharing their approaches; the platform will remain available for post-challenge submissions at http://codalab.org/AutoML.  
  Address New York; USA; June 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICML
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ GCE2016 Serial 2769  
 

 
Author Zhengying Liu; Adrien Pavao; Zhen Xu; Sergio Escalera; Isabelle Guyon; Julio C. S. Jacques Junior; Meysam Madadi; Sebastien Treguer
  Title How far are we from true AutoML: reflection from winning solutions and results of AutoDL challenge Type Conference Article
  Year 2020 Publication 7th ICML Workshop on Automated Machine Learning Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Following the completion of the AutoDL challenge (the final challenge in the ChaLearn AutoDL challenge series 2019), we investigate winning solutions and challenge results to answer an important motivational question: how far are we from achieving true AutoML? On one hand, the winning solutions achieve good (accurate and fast) classification performance on unseen datasets. On the other hand, all winning solutions still contain a considerable amount of hard-coded knowledge of the domain (or modality), such as image, video, text, speech or tabular data. This form of ad-hoc meta-learning could be replaced by more automated forms of meta-learning in the future. Organizing a meta-learning challenge could help forge AutoML solutions that generalize to new unseen domains (e.g. new types of sensor data), as well as provide insights on the AutoML problem from a more fundamental point of view. The datasets of the AutoDL challenge are a resource that can be used for further benchmarks, and the code of the winners has been open-sourced, which is a big step towards “democratizing” Deep Learning.
 
  Address Virtual; July 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICML
  Notes HUPBA Approved no  
  Call Number Admin @ si @ LPX2020 Serial 3502  
 

 
Author Marc Masana; Bartlomiej Twardowski; Joost Van de Weijer
  Title On Class Orderings for Incremental Learning Type Conference Article
  Year 2020 Publication ICML Workshop on Continual Learning Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract The influence of class orderings in the evaluation of incremental learning has received very little attention. In this paper, we investigate the impact of class orderings for incrementally learned classifiers. We propose a method to compute various orderings for a dataset. The orderings are derived by simulated annealing optimization from the confusion matrix and reflect different incremental learning scenarios, including maximally and minimally confusing tasks. We evaluate a wide range of state-of-the-art incremental learning methods on the proposed orderings. Results show that orderings can have a significant impact on performance and the ranking of the methods.  
  Address Virtual; July 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICMLW
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ MTW2020 Serial 3505  
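The ordering search described above can be sketched as simulated annealing over permutations of the classes, scored from the confusion matrix. The objective below (total confusion between classes that end up adjacent in the incremental sequence) and the linear cooling schedule are assumptions; the paper's exact objective is not given in the abstract:

    import numpy as np

    def anneal_ordering(C, n_iter=20000, t0=1.0, maximize=True, seed=0):
        # Search for a maximally (or minimally) confusing class ordering.
        rng = np.random.default_rng(seed)
        n = len(C)

        def score(o):  # confusion mass between adjacent classes in the ordering
            return float(sum(C[o[k], o[k + 1]] + C[o[k + 1], o[k]]
                             for k in range(n - 1)))

        cur = rng.permutation(n)
        cur_s = score(cur)
        best, best_s = cur.copy(), cur_s
        for it in range(n_iter):
            temp = t0 * (1.0 - it / n_iter) + 1e-12          # linear cooling
            cand = cur.copy()
            i, j = rng.integers(n, size=2)
            cand[i], cand[j] = cand[j], cand[i]              # swap two classes
            gain = score(cand) - cur_s
            if not maximize:
                gain = -gain
            if gain > 0 or rng.random() < np.exp(gain / temp):
                cur, cur_s = cand, score(cand)
                better = cur_s > best_s if maximize else cur_s < best_s
                if better:
                    best, best_s = cur.copy(), cur_s
        return best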
 

 
Author David Berga; Marc Masana; Joost Van de Weijer
  Title Disentanglement of Color and Shape Representations for Continual Learning Type Conference Article
  Year 2020 Publication ICML Workshop on Continual Learning Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract We hypothesize that disentangled feature representations suffer less from catastrophic forgetting. As a case study we perform explicit disentanglement of color and shape, by adjusting the network architecture. We tested classification accuracy and forgetting in a task-incremental setting with Oxford-102 Flowers dataset. We combine our method with Elastic Weight Consolidation, Learning without Forgetting, Synaptic Intelligence and Memory Aware Synapses, and show that feature disentanglement positively impacts continual learning performance.  
  Address Virtual; July 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICMLW
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ BMW2020 Serial 3506  
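The abstract above adjusts the network architecture to disentangle color from shape but does not detail the adjustment. One plausible reading, offered purely as an assumption and not as the paper's architecture, is a two-branch network in which a shape branch sees a grayscale copy of the image and a color branch sees only coarse color statistics, with the embeddings concatenated before the classifier:

    import torch
    import torch.nn as nn

    class TwoBranchNet(nn.Module):
        # Illustrative color/shape split; not the architecture from the paper.
        def __init__(self, n_classes):
            super().__init__()
            self.shape = nn.Sequential(               # sees grayscale only
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten())
            self.color = nn.Sequential(               # coarse pooling discards edges
                nn.AvgPool2d(8),
                nn.Conv2d(3, 32, 1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.head = nn.Linear(32 * 16 + 32, n_classes)

        def forward(self, x):                         # x: (B, 3, H, W), H, W >= 8
            gray = x.mean(dim=1, keepdim=True)        # drop color for the shape branch
            return self.head(torch.cat([self.shape(gray), self.color(x)], dim=1))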
 

 
Author Albin Soutif; Marc Masana; Joost Van de Weijer; Bartlomiej Twardowski
  Title On the importance of cross-task features for class-incremental learning Type Conference Article
  Year 2021 Publication Theory and Foundation of continual learning workshop of ICML Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In class-incremental learning, an agent with limited resources needs to learn a sequence of classification tasks, forming an ever-growing classification problem, under the constraint of not being able to access data from previous tasks. The main difference from task-incremental learning, where a task-ID is available at inference time, is that the learner also needs to perform cross-task discrimination, i.e. distinguish between classes that have not been seen together. Approaches to tackle this problem are numerous and mostly make use of an external memory (buffer) of non-negligible size. In this paper, we ablate the learning of cross-task features and study its influence on the performance of basic replay strategies used for class-IL. We also define a new forgetting measure for class-incremental learning, and see that forgetting is not the principal cause of low performance. Our experimental results show that future algorithms for class-incremental learning should not only prevent forgetting, but also aim to improve the quality of the cross-task features. This is especially important when the number of classes per task is small.
  Address Virtual; July 2021  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICMLW
  Notes LAMP Approved no  
  Call Number Admin @ si @ SMW2021 Serial 3588  
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
  Title Combining Holistic and Part-based Deep Representations for Computational Painting Categorization Type Conference Article
  Year 2016 Publication 6th International Conference on Multimedia Retrieval Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Automatic analysis of visual art, such as paintings, is a challenging inter-disciplinary research problem. Conventional approaches rely only on global scene characteristics, encoding holistic information for computational painting categorization. We argue that such approaches are sub-optimal and that discriminative common visual structures provide complementary information for painting classification. We present an approach that encodes both the global scene layout and discriminative latent common structures for computational painting categorization. The regions of interest are automatically extracted, without any manual part labeling, by training class-specific deformable part-based models. Both the holistic image and the regions of interest are then described using multi-scale dense convolutional features. These features are pooled separately using Fisher vector encoding and concatenated afterwards into a single image representation. Experiments are performed on a challenging dataset with 91 different painters and 13 diverse painting styles. Our approach outperforms the standard method, which employs only the global scene characteristics. Furthermore, our method achieves state-of-the-art results, outperforming a recent multi-scale deep features based approach [11] by 6.4% and 3.8% on artist and style classification, respectively.
  Address New York; USA; June 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICMR
  Notes LAMP; 600.068; 600.079;ADAS Approved no  
  Call Number Admin @ si @ RKW2016 Serial 2763  