Author (up) Carme Julia; Felipe Lumbreras; Angel Sappa
Title A Factorization-based Approach to Photometric Stereo Type Journal Article
Year 2011 Publication International Journal of Imaging Systems and Technology Abbreviated Journal IJIST
Volume 21 Issue 1 Pages 115-119
Keywords
Abstract This article presents an adaptation of a factorization technique to tackle the photometric stereo problem, that is, to recover the surface normals and reflectance of an object from a set of images obtained under different lighting conditions. The main contribution of the proposed approach is to treat pixels in shadowed and saturated regions as missing data, in order to reduce their influence on the result. Concretely, an adapted Alternation technique is used to deal with the missing data. Experimental results on both synthetic and real images show the viability of the proposed factorization-based strategy. © 2011 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 21, 115–119, 2011.
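For intuition, the following is a minimal NumPy sketch of an alternation-style factorization with missing data, in the spirit of the abstract: the intensity matrix is factored into a rank-3 product of scaled surface normals and lighting directions, with shadowed and saturated pixels masked out of each least-squares update. The update scheme and all names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def alternation_factorize(M, W, rank=3, n_iters=100, seed=0):
    """Factor M (n_pixels x n_images) as S @ L over observed entries only.

    W is a boolean mask of the same shape as M; True marks pixels that are
    neither in shadow nor saturated. Rank 3 follows the Lambertian model.
    """
    rng = np.random.default_rng(seed)
    p, f = M.shape
    S = rng.standard_normal((p, rank))   # albedo-scaled normals, one row per pixel
    L = rng.standard_normal((rank, f))   # one lighting direction per image
    for _ in range(n_iters):
        for i in range(p):               # re-fit each pixel from its observed images
            obs = W[i]
            if obs.any():
                S[i], *_ = np.linalg.lstsq(L[:, obs].T, M[i, obs], rcond=None)
        for j in range(f):               # re-fit each lighting from its observed pixels
            obs = W[:, j]
            if obs.any():
                L[:, j], *_ = np.linalg.lstsq(S[obs], M[obs, j], rcond=None)
    return S, L

# Reflectance (albedo) and unit normals then follow from S:
# albedo = np.linalg.norm(S, axis=1); normals = S / albedo[:, None]
```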
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ JLS2011; ADAS @ adas @ Serial 1711
Permanent link to this record
 

 
Author (up) Carme Julia; Joan Serrat; Antonio Lopez; Felipe Lumbreras; Daniel Ponsa
Title Motion segmentation through factorization. Application to night driving assistance Type Miscellaneous
Year 2006 Publication International Conference on Computer Vision Theory and Applications, (2) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number ADAS @ adas @ JSL2006a Serial 638
Permanent link to this record
 

 
Author (up) Carola Figueroa Flores
Title Visual Saliency for Object Recognition, and Object Recognition for Visual Saliency Type Book Whole
Year 2021 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords computer vision; visual saliency; fine-grained object recognition; convolutional neural networks; images classification
Abstract For humans, the recognition of objects is an almost instantaneous, precise, and extremely adaptable process. Furthermore, we have the innate capability to learn new object classes from only a few examples. The human brain lowers the complexity of the incoming data by filtering out part of the information and only processing those things that capture our attention. This, combined with our biological predisposition to respond to certain shapes or colors, allows us to recognize at a single glance the most important or salient regions of an image. This mechanism can be observed by analyzing on which parts of images subjects place their attention, i.e., where they fix their eyes when an image is shown to them. The most accurate way to record this behavior is to track eye movements while displaying images.
Computational saliency estimation aims to identify to what extent regions or objects stand out with respect to their surroundings to human observers. Saliency maps can be used in a wide range of applications, including object detection, image and video compression, and visual tracking. The majority of research in the field has focused on automatically estimating saliency maps given an input image. Instead, in this thesis we set out to incorporate saliency maps into an object recognition pipeline: we want to investigate whether saliency maps can improve object recognition results.
In this thesis, we identify several problems related to visual saliency estimation. The first is to what extent the estimation of saliency can be exploited to improve the training of an object recognition model when scarce training data is available. To solve this problem, we design an image classification network that incorporates saliency information as input. This network processes the saliency map through a dedicated network branch and uses the resulting characteristics to modulate the standard bottom-up visual characteristics of the original image input. We refer to this technique as saliency-modulated image classification (SMIC). In extensive experiments on standard benchmark datasets for fine-grained object recognition, we show that our proposed architecture can significantly improve performance, especially on datasets with scarce training data.
Next, we address the main drawback of the above pipeline: SMIC requires an explicit saliency algorithm that must be trained on a saliency dataset. To solve this, we implement a hallucination mechanism that allows us to incorporate the saliency estimation branch in an end-to-end trained neural network architecture that only needs the RGB image as input. A side effect of this architecture is the estimation of saliency maps. In experiments, we show that this architecture can obtain results on object recognition similar to SMIC, but without requiring ground-truth saliency maps to train the system.
Finally, we evaluate the accuracy of the saliency maps that occur as a side effect of object recognition. For this purpose, we use a set of benchmark datasets for saliency evaluation based on eye-tracking experiments. Surprisingly, the estimated saliency maps are very similar to the maps computed from human eye-tracking experiments. Our results show that these maps obtain competitive results on saliency benchmarks. On one synthetic saliency dataset, this method even obtains state-of-the-art results without ever having seen an actual saliency image during training.
Address March 2021
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Joost Van de Weijer; Bogdan Raducanu
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-122714-4-7 Medium
Area Expedition Conference
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ Fig2021 Serial 3600
Permanent link to this record
 

 
Author (up) Carola Figueroa Flores; Abel Gonzalez-Garcia; Joost Van de Weijer; Bogdan Raducanu
Title Saliency for fine-grained object recognition in domains with scarce training data Type Journal Article
Year 2019 Publication Pattern Recognition Abbreviated Journal PR
Volume 94 Issue Pages 62-73
Keywords
Abstract This paper investigates the role of saliency in improving the classification accuracy of a Convolutional Neural Network (CNN) when scarce training data is available. Our approach consists of adding a saliency branch to an existing CNN architecture, which is used to modulate the standard bottom-up visual features from the original image input, acting as an attentional mechanism that guides the feature extraction process. The main aim of the proposed approach is to enable the effective training of a fine-grained recognition model with limited training samples and to improve the performance on the task, thereby alleviating the need to annotate a large dataset. The vast majority of saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline. Our proposed pipeline makes it possible to evaluate saliency methods on the high-level task of object recognition. We perform extensive experiments on various fine-grained datasets (Flowers, Birds, Cars, and Dogs) under different conditions and show that saliency can considerably improve the network's performance, especially for the case of scarce training data. Furthermore, our experiments show that saliency methods that obtain improved saliency maps (as measured by traditional saliency benchmarks) also yield improved performance gains when applied in an object recognition pipeline.
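As a rough illustration of the kind of architecture the abstract describes, here is a hedged PyTorch sketch in which a small branch processes the saliency map and its output multiplicatively gates the backbone's feature maps; the gating scheme, layer sizes, and the assumption that the backbone returns a (B, C, H, W) feature map are ours, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyModulatedClassifier(nn.Module):
    def __init__(self, backbone, feat_channels, n_classes):
        super().__init__()
        self.backbone = backbone                 # e.g. a ResNet truncated before pooling
        self.saliency_branch = nn.Sequential(    # processes the 1-channel saliency map
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_channels, 3, stride=2, padding=1), nn.Sigmoid(),
        )
        self.head = nn.Linear(feat_channels, n_classes)

    def forward(self, image, saliency):
        feats = self.backbone(image)             # bottom-up features (B, C, H, W)
        gate = self.saliency_branch(saliency)    # attention weights in (0, 1)
        gate = F.interpolate(gate, size=feats.shape[-2:])
        pooled = (feats * gate).mean(dim=(2, 3)) # modulated global pooling
        return self.head(pooled)
```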
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; OR; 600.109; 600.141; 600.120 Approved no
Call Number Admin @ si @ FGW2019 Serial 3264
Permanent link to this record
 

 
Author (up) Carola Figueroa Flores; Bogdan Raducanu; David Berga; Joost Van de Weijer
Title Hallucinating Saliency Maps for Fine-Grained Image Classification for Limited Data Domains Type Conference Article
Year 2021 Publication 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications Abbreviated Journal
Volume 4 Issue Pages 163-171
Keywords
Abstract arXiv:2007.12562
Most saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline such as image classification. In the current paper, we propose an approach that does not require explicit saliency maps to improve image classification; instead, they are learned implicitly during the training of an end-to-end image classification task. We show that our approach obtains results similar to the case when the saliency maps are provided explicitly. Combining RGB data with saliency maps represents a significant advantage for object recognition, especially when training data is limited. We validate our method on several datasets for fine-grained classification tasks (Flowers, Birds and Cars). In addition, we show that our saliency estimation method, which is trained without any saliency ground-truth data, obtains competitive results on a real-image saliency benchmark (Toronto), and outperforms deep saliency models on a synthetic-image benchmark (SID4VAM).
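A hedged sketch of the hallucination idea, continuing in PyTorch: the saliency branch now sees only the RGB image, so the whole network trains end-to-end with a plain classification loss and the saliency map emerges as a by-product. The branch design and the saliency-weighted pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HallucinatedSaliencyClassifier(nn.Module):
    def __init__(self, backbone, feat_channels, n_classes):
        super().__init__()
        self.backbone = backbone
        self.hallucinate = nn.Sequential(        # RGB in, 1-channel saliency map out
            nn.Conv2d(3, 32, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.head = nn.Linear(feat_channels, n_classes)

    def forward(self, image):
        sal = self.hallucinate(image)            # hallucinated saliency, no ground truth needed
        feats = self.backbone(image)             # (B, C, H, W)
        sal_r = F.interpolate(sal, size=feats.shape[-2:])
        pooled = (feats * sal_r).mean(dim=(2, 3))
        return self.head(pooled), sal            # sal is a free side output

# Training uses only the classification loss:
# logits, sal = model(images); loss = F.cross_entropy(logits, labels)
```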
Address Virtual; February 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes LAMP Approved no
Call Number Admin @ si @ FRB2021c Serial 3540
Permanent link to this record
 

 
Author (up) Carola Figueroa Flores; David Berga; Joost Van de Weijer; Bogdan Raducanu
Title Saliency for free: Saliency prediction as a side-effect of object recognition Type Journal Article
Year 2021 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 150 Issue Pages 1-7
Keywords Saliency maps; Unsupervised learning; Object recognition
Abstract Saliency is the perceptual capacity of our visual system to focus our attention (i.e. gaze) on relevant objects instead of the background. So far, computational methods for saliency estimation have required the explicit generation of a saliency map, a process usually achieved via eye-tracking experiments on still images. This is a tedious process that needs to be repeated for each new dataset. In the current paper, we demonstrate that it is possible to automatically generate saliency maps without ground truth. In our approach, saliency maps are learned as a side effect of object recognition. Extensive experiments carried out on both real and synthetic datasets demonstrate that our approach is able to generate accurate saliency maps, achieving competitive results when compared with supervised methods.
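One standard way to score such side-effect maps against eye-tracking data is the linear correlation coefficient (CC) between the two maps; a minimal NumPy sketch follows (the metric choice here is ours for illustration; the paper's exact evaluation protocol may differ).

```python
import numpy as np

def saliency_cc(pred, gt, eps=1e-8):
    """Pearson correlation between a predicted and a ground-truth saliency map."""
    p = (pred - pred.mean()) / (pred.std() + eps)
    g = (gt - gt.mean()) / (gt.std() + eps)
    return float((p * g).mean())
```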
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.147; 600.120 Approved no
Call Number Admin @ si @ FBW2021 Serial 3559
Permanent link to this record
 

 
Author (up) Carolina Malagelada; F.De Lorio; Fernando Azpiroz; Santiago Segui; Petia Radeva; Anna Accarino; J.Santos; Juan R. Malagelada
Title Intestinal Dysmotility in Patients with Functional Intestinal Disorders Demonstrated by Computer Vision Analysis of Capsule Endoscopy Images Type Conference Article
Year 2010 Publication 18th United European Gastroenterology Week Abbreviated Journal
Volume 56 Issue 3 Pages A19-20
Keywords
Abstract
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference UEGW
Notes MILAB Approved no
Call Number Admin @ si @ MLA2010 Serial 1779
Permanent link to this record
 

 
Author (up) Carolina Malagelada; F.De Lorio; Santiago Segui; S. Mendez; Michal Drozdzal; Jordi Vitria; Petia Radeva; J.Santos; Anna Accarino; Juan R. Malagelada; Fernando Azpiroz
Title Functional gut disorders or disordered gut function? Small bowel dysmotility evidenced by an original technique Type Journal Article
Year 2012 Publication Neurogastroenterology & Motility Abbreviated Journal NEUMOT
Volume 24 Issue 3 Pages 223-230
Keywords capsule endoscopy; computer vision analysis; machine learning technique; small bowel motility
Abstract JCR Impact Factor 2010: 3.349
Background This study aimed to determine the proportion of cases with abnormal intestinal motility among patients with functional bowel disorders. To this end, we applied an original method, previously developed in our laboratory, for analysis of endoluminal images obtained by capsule endoscopy. This novel technology is based on computer vision and machine learning techniques.
Methods The endoscopic capsule (Pillcam SB1; Given Imaging, Yokneam, Israel) was administered to 80 patients with functional bowel disorders and 70 healthy subjects. Endoluminal image analysis was performed with a computer vision program developed for the evaluation of contractile events (luminal occlusions and radial wrinkles), non-contractile patterns (open tunnel and smooth wall patterns), type of content (secretions, chyme) and motion of wall and contents. The normality range and discrimination of abnormal cases were established by a machine learning technique. Specifically, an iterative classifier (one-class support vector machine) was applied with a random population of 50 healthy subjects as a training set and the remaining subjects (20 healthy subjects and 80 patients) as a test set.
Key Results The classifier identified as abnormal 29% of patients with functional diseases of the bowel (23 of 80), and as normal 97% of healthy subjects (68 of 70) (P < 0.05 by chi-squared test). Patients identified as abnormal clustered in two groups, which exhibited either a hyper- or a hypodynamic motility pattern. The motor behavior was unrelated to clinical features.
Conclusions & Inferences With appropriate methodology, abnormal intestinal motility can be demonstrated in a significant proportion of patients with functional bowel disorders, implying a pathologic disturbance of gut physiology.
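The normality-range step can be pictured with a short scikit-learn sketch: a one-class SVM is fitted on descriptors from healthy subjects only, and test subjects falling outside the learned boundary are flagged as abnormal. The descriptors here are random stand-ins and the nu/gamma values are assumptions; the paper's feature extraction from capsule video is not reproduced.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_healthy = rng.normal(size=(50, 8))   # stand-in: descriptors of the 50 training healthy subjects
X_test = rng.normal(size=(100, 8))     # stand-in: 20 held-out healthy subjects + 80 patients

scaler = StandardScaler().fit(X_healthy)
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(scaler.transform(X_healthy))
is_abnormal = ocsvm.predict(scaler.transform(X_test)) == -1  # -1 = outside the normal range
```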
Address
Corporate Author Thesis
Publisher Wiley Online Library Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR; MV Approved no
Call Number Admin @ si @ MLS2012 Serial 1830
Permanent link to this record
 

 
Author (up) Carolina Malagelada; Fosca De Iorio; Fernando Azpiroz; Anna Accarino; Santiago Segui; Petia Radeva; Juan R. Malagelada
Title New Insight Into Intestinal Motor Function via Noninvasive Endoluminal Image Analysis Type Journal
Year 2008 Publication Gastroenterology Abbreviated Journal
Volume 135 Issue 4 Pages 1155–1162
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number BCNPCL @ bcnpcl @ MDA2008 Serial 1040
Permanent link to this record
 

 
Author (up) Carolina Malagelada; Michal Drozdzal; Santiago Segui; Sara Mendez; Jordi Vitria; Petia Radeva; Javier Santos; Anna Accarino; Juan R. Malagelada; Fernando Azpiroz
Title Classification of functional bowel disorders by objective physiological criteria based on endoluminal image analysis Type Journal Article
Year 2015 Publication American Journal of Physiology-Gastrointestinal and Liver Physiology Abbreviated Journal AJPGI
Volume 309 Issue 6 Pages G413--G419
Keywords capsule endoscopy; computer vision analysis; functional bowel disorders; intestinal motility; machine learning
Abstract We have previously developed an original method to evaluate small bowel motor function based on computer vision analysis of endoluminal images obtained by capsule endoscopy. Our aim was to demonstrate intestinal motor abnormalities in patients with functional bowel disorders by endoluminal vision analysis. Patients with functional bowel disorders (n = 205) and healthy subjects (n = 136) ingested the endoscopic capsule (Pillcam-SB2, Given-Imaging) after an overnight fast, and 45 min after gastric exit of the capsule a liquid meal (300 ml, 1 kcal/ml) was administered. Endoluminal image analysis was performed by computer vision and machine learning techniques to define the normal range and to identify clusters of abnormal function. After training the algorithm, we used 196 patients and 48 healthy subjects, completely naive, as the test set. In the test set, 51 patients (26%) were detected outside the normal range (P < 0.001 vs. 3 healthy subjects) and clustered into hypo- and hyperdynamic subgroups compared with healthy subjects. Patients with hypodynamic behavior (n = 38) exhibited fewer luminal closure sequences (41 ± 2% of the recording time vs. 61 ± 2%; P < 0.001) and more static sequences (38 ± 3 vs. 20 ± 2%; P < 0.001); in contrast, patients with hyperdynamic behavior (n = 13) had an increased proportion of luminal closure sequences (73 ± 4 vs. 61 ± 2%; P = 0.029) and more high-motion sequences (3 ± 1 vs. 0.5 ± 0.1%; P < 0.001). Applying an original methodology, we have developed a novel classification of functional gut disorders based on objective, physiological criteria of small bowel function.
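The hypo-/hyperdynamic split reported above could be reproduced in spirit by clustering the abnormal cases on motility descriptors such as the fractions of luminal-closure and static sequences; the two-cluster KMeans below is an illustrative stand-in (with synthetic values), not the paper's actual procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# rows: abnormal subjects; columns: [fraction of luminal-closure seqs, fraction of static seqs]
feats = np.vstack([rng.normal([0.41, 0.38], 0.05, (38, 2)),   # hypodynamic-like stand-ins
                   rng.normal([0.73, 0.05], 0.05, (13, 2))])  # hyperdynamic-like stand-ins
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
```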
Address
Corporate Author Thesis
Publisher American Physiological Society Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR;MV Approved no
Call Number Admin @ si @ MDS2015 Serial 2666
Permanent link to this record
 

 
Author (up) Cesar de Souza
Title Action Recognition in Videos: Data-efficient approaches for supervised learning of human action classification models for video Type Book Whole
Year 2018 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In this dissertation, we explore different ways to perform human action recognition in video clips. We focus on data efficiency, proposing new approaches that alleviate the need for laborious and time-consuming manual data annotation. In the first part of this dissertation, we start by analyzing previous state-of-the-art models, comparing their differences and similarities in order to pinpoint where their real strengths come from. Leveraging this information, we then proceed to boost the classification accuracy of shallow models to levels that rival deep neural networks. We introduce hybrid video classification architectures based on carefully designed unsupervised representations of handcrafted spatiotemporal features classified by supervised deep networks. We show in our experiments that our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10,000 short clips) and yet improves significantly on the state of the art, including deep models trained on millions of manually labeled images and videos. In the second part of this research, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for “Procedural Human Action Videos”. It contains a total of 39,982 videos, with more than 1,000 examples for each action of 35 categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We then introduce deep multi-task representation learning architectures to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF-101 and HMDB-51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, outperforming fine-tuning state-of-the-art unsupervised generative models of videos.
Address April 2018
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Antonio Lopez; Naila Murray
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ Sou2018 Serial 3127
Permanent link to this record
 

 
Author (up) Cesar de Souza; Adrien Gaidon; Eleonora Vig; Antonio Lopez
Title System and method for video classification using a hybrid unsupervised and supervised multi-layer architecture Type Patent
Year 2018 Publication US9946933B2 Abbreviated Journal
Volume Issue Pages
Keywords US9946933B2
Abstract A computer-implemented video classification method and system are disclosed. The method includes receiving an input video including a sequence of frames. At least one transformation of the input video is generated, each transformation including a sequence of frames. For the input video and each transformation, local descriptors are extracted from the respective sequence of frames. The local descriptors of the input video and each transformation are aggregated to form an aggregated feature vector with a first set of processing layers learned using unsupervised learning. An output classification value is generated for the input video, based on the aggregated feature vector with a second set of processing layers learned using supervised learning.
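In outline, the claimed pipeline can be sketched with scikit-learn: unsupervised layers (here PCA plus a GMM soft-assignment encoding, as a simple stand-in for a Fisher-vector layer) aggregate a video's local descriptors into one vector, and a supervised classifier maps that vector to a class. Dimensions, data, and the choice of encoder are illustrative assumptions, not the patent's specification.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

def encode(videos, pca, gmm):
    """Aggregate each video's (n_i, d) local descriptors into one feature vector."""
    return np.stack([gmm.predict_proba(pca.transform(d)).mean(axis=0) for d in videos])

rng = np.random.default_rng(0)
train_videos = [rng.normal(size=(200, 64)) for _ in range(30)]  # stand-in local descriptors
y_train = rng.integers(0, 3, size=30)                           # stand-in action labels

all_desc = np.vstack(train_videos)
pca = PCA(n_components=16).fit(all_desc)                                             # unsupervised
gmm = GaussianMixture(n_components=8, random_state=0).fit(pca.transform(all_desc))  # unsupervised
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(encode(train_videos, pca, gmm), y_train)                                     # supervised
```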
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ SGV2018 Serial 3255
Permanent link to this record
 

 
Author (up) Cesar de Souza; Adrien Gaidon; Eleonora Vig; Antonio Lopez
Title Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition Type Conference Article
Year 2016 Publication 14th European Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 697-716
Keywords
Abstract Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty to acquire and learn on large quantities of video data. Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods using hand-crafted features, even when training on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular benchmarks for action recognition, our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10,000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos.
Address Amsterdam; The Netherlands; October 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECCV
Notes ADAS; 600.076; 600.085 Approved no
Call Number Admin @ si @ SGV2016 Serial 2824
Permanent link to this record
 

 
Author (up) Cesar de Souza; Adrien Gaidon; Yohann Cabon; Antonio Lopez
Title Procedural Generation of Videos to Train Deep Action Recognition Networks Type Conference Article
Year 2017 Publication 30th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 2594-2604
Keywords
Abstract Deep learning for human action recognition in videos is making significant progress, but is slowed down by its dependency on expensive manual labeling of large video collections. In this work, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for “Procedural Human Action Videos”. It contains a total of 39,982 videos, with more than 1,000 examples for each action of 35 categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We introduce a deep multi-task representation learning architecture to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF101 and HMDB51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, significantly outperforming fine-tuning state-of-the-art unsupervised generative models of videos.
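The multi-task mixing of synthetic and real videos can be pictured with a hedged PyTorch sketch: a shared backbone feeds one classification head per dataset, so PHAV clips and, say, UCF101 clips contribute to the same representation even though their label spaces differ. The head layout and training step are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskActionNet(nn.Module):
    def __init__(self, backbone, feat_dim, classes_per_dataset):
        super().__init__()
        self.backbone = backbone  # shared video representation
        self.heads = nn.ModuleList(nn.Linear(feat_dim, c) for c in classes_per_dataset)

    def forward(self, clips, dataset_id):
        return self.heads[dataset_id](self.backbone(clips))

# One training step mixes a real and a synthetic batch:
# loss = F.cross_entropy(model(real_clips, 0), real_labels) \
#      + F.cross_entropy(model(phav_clips, 1), phav_labels)
```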
Address Honolulu; Hawaii; July 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes ADAS; 600.076; 600.085; 600.118 Approved no
Call Number Admin @ si @ SGC2017 Serial 3051
Permanent link to this record
 

 
Author (up) Cesar de Souza; Adrien Gaidon; Yohann Cabon; Naila Murray; Antonio Lopez
Title Generating Human Action Videos by Coupling 3D Game Engines and Probabilistic Graphical Models Type Journal Article
Year 2020 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 128 Issue Pages 1505–1536
Keywords Procedural generation; Human action recognition; Synthetic data; Physics
Abstract Deep video action recognition models have been highly successful in recent years but require large quantities of manually-annotated data, which are expensive and laborious to obtain. In this work, we investigate the generation of synthetic training data for video action recognition, as synthetic data have been successfully used to supervise models for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation, physics models and other components of modern game engines. With this model we generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for “Procedural Human Action Videos”. PHAV contains a total of 39,982 videos, with more than 1000 examples for each of 35 action categories. Our video generation approach is not limited to existing motion capture sequences: 14 of these 35 categories are procedurally-defined synthetic actions. In addition, each video is represented with 6 different data modalities, including RGB, optical flow and pixel-level semantic labels. These modalities are generated almost simultaneously using the Multiple Render Targets feature of modern GPUs. In order to leverage PHAV, we introduce a deep multi-task (i.e. that considers action classes from multiple datasets) representation learning architecture that is able to simultaneously learn from synthetic and real video datasets, even when their action categories differ. Our experiments on the UCF-101 and HMDB-51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance. Our approach also significantly outperforms video representations produced by fine-tuning state-of-the-art unsupervised generative models of videos.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.124; 600.118 Approved no
Call Number Admin @ si @ SGC2019 Serial 3303
Permanent link to this record