Records
Author | E. Bondi; L. Seidenari; Andrew Bagdanov; Alberto del Bimbo | ||||
Title | Real-time people counting from depth imagery of crowded environments | Type | Conference Article | ||
Year | 2014 | Publication | 11th IEEE International Conference on Advanced Video and Signal Based Surveillance | Abbreviated Journal |
Volume | Issue | Pages | 337-342 | |
Keywords | |||||
Abstract | In this paper we describe a system for automatic people counting in crowded environments. The approach we propose is a counting-by-detection method based on depth imagery. It is designed to be deployed as an autonomous appliance for crowd analysis in video surveillance application scenarios. Our system performs foreground/background segmentation on depth image streams in order to coarsely segment persons; depth information is then used to localize head candidates, which are tracked over time on an automatically estimated ground plane. The system runs in real time, at a frame rate of about 20 fps. We collected a dataset of RGB-D sequences representing three typical and challenging surveillance scenarios, including crowds, queuing and groups. An extensive comparative evaluation is given between our system and a more complex, Latent SVM-based head localization approach for person counting applications. | ||||
Address | Seoul; Korea; August 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AVSS | ||
Notes | LAMP; 600.079 | Approved | no
Call Number | Admin @ si @ BSB2014 | Serial | 2540 | ||
Permanent link to this record | |||||
Author | Joost Van de Weijer; Fahad Shahbaz Khan | ||||
Title | An Overview of Color Name Applications in Computer Vision | Type | Conference Article | ||
Year | 2015 | Publication | Computational Color Imaging Workshop | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | color features; color names; object recognition | ||||
Abstract | In this article we provide an overview of color name applications in computer vision. Color names are linguistic labels which humans use to communicate color. Computational color naming learns a mapping from pixel values to color names. In recent years color names have been applied to a wide variety of computer vision applications, including image classification, object recognition, texture classification, visual tracking and action recognition. Here we provide an overview of these results, which show that in general color names outperform photometric invariants as a color representation. | ||||
Address | Saint Etienne; France; March 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CCIW | ||
Notes | LAMP; 600.079; 600.068 | Approved | no
Call Number | Admin @ si @ WeK2015 | Serial | 2586 | ||
Permanent link to this record | |||||
Author | Mikhail Mozerov; Joost Van de Weijer | ||||
Title | Global Color Sparseness and a Local Statistics Prior for Fast Bilateral Filtering | Type | Journal Article | ||
Year | 2015 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 24 | Issue | 12 | Pages | 5842-5853 |
Keywords | |||||
Abstract | The property of smoothing while preserving edges makes the bilateral filter a very popular image processing tool. However, its non-linear nature results in a computationally costly operation. Various works propose fast approximations to the bilateral filter. However, the majority does not generalize to vector input as is the case with color images. We propose a fast approximation to the bilateral filter for color images. The filter is based on two ideas. First, the number of colors which occur in a single natural image is limited. We exploit this color sparseness to rewrite the initial non-linear bilateral filter as a number of linear filter operations. Second, we impose a statistical prior on the image values that are locally present within the filter window. We show that this statistical prior leads to a closed-form solution of the bilateral filter. Finally, we combine both ideas into a single fast and accurate bilateral filter for color images. Experimental results show that our bilateral filter based on the local prior yields an extremely fast bilateral filter approximation, but with limited accuracy, which has potential application in real-time video filtering. Our bilateral filter, which combines color sparseness and local statistics, yields a fast and accurate bilateral filter approximation and obtains state-of-the-art results. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | LAMP; 600.079; ISE | Approved | no
Call Number | Admin @ si @ MoW2015b | Serial | 2689 | ||
Permanent link to this record | |||||
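The color-sparseness idea in the abstract above can be illustrated with a short sketch: quantize the image to a few representative colors, and for each one the non-linear bilateral filter reduces to a pair of linear Gaussian convolutions (a weighted numerator and a normalizer). This is a simplified illustration of the general principle, not the authors' implementation; the cluster centers, the sigma values, and the zero-padded blur are all assumptions of the sketch.

```python
import numpy as np

def _gauss_blur(a, sigma):
    """Separable zero-padded Gaussian blur of a 2-D array.
    The kernel must be shorter than the image side (np.convolve quirk)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    a = np.apply_along_axis(np.convolve, 0, a, k, mode='same')
    a = np.apply_along_axis(np.convolve, 1, a, k, mode='same')
    return a

def sparse_color_bilateral(img, centers, sigma_s=2.0, sigma_r=0.1):
    """Approximate color bilateral filtering via a few color clusters.

    img: HxWx3 floats in [0, 1]; centers: Kx3 representative colors.
    For each cluster the filter becomes two linear blurs (numerator and
    normalizer), so cost grows with K instead of with the range kernel.
    """
    # assign every pixel to its nearest representative color
    d2 = ((img[:, :, None, :] - centers[None, None]) ** 2).sum(-1)
    labels = d2.argmin(-1)                                  # HxW
    out = np.zeros_like(img)
    for k, c in enumerate(centers):
        # range weights w.r.t. this cluster color, computed everywhere
        wk = np.exp(-((img - c) ** 2).sum(-1) / (2 * sigma_r ** 2))
        num = np.stack([_gauss_blur(wk * img[..., ch], sigma_s)
                        for ch in range(3)], axis=-1)
        den = _gauss_blur(wk, sigma_s)[..., None] + 1e-12
        mask = labels == k
        out[mask] = (num / den)[mask]
    return out
```

On a constant-color image the output equals the input, since the range weights are uniform and the two blurs cancel in the ratio; a real deployment would pick the centers by clustering the image's colors.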
Author | Pedro Martins; Paulo Carvalho; Carlo Gatta | ||||
Title | Context-aware features and robust image representations | Type | Journal Article | ||
Year | 2014 | Publication | Journal of Visual Communication and Image Representation | Abbreviated Journal | JVCIR |
Volume | 25 | Issue | 2 | Pages | 339-348 |
Keywords | |||||
Abstract | Local image features are often used to efficiently represent image content. The limited number of types of features that a local feature extractor responds to might be insufficient to provide a robust image representation. To overcome this limitation, we propose a context-aware feature extraction formulated under an information theoretic framework. The algorithm does not respond to a specific type of features; the idea is to retrieve complementary features which are relevant within the image context. We empirically validate the method by investigating the repeatability, the completeness, and the complementarity of context-aware features on standard benchmarks. In a comparison with strictly local features, we show that our context-aware features produce more robust image representations. Furthermore, we study the complementarity between strictly local features and context-aware ones to produce an even more robust representation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.079; MILAB | Approved | no
Call Number | Admin @ si @ MCG2014 | Serial | 2467 | ||
Permanent link to this record | |||||
Author | Adriana Romero; Carlo Gatta; Gustavo Camps-Valls | ||||
Title | Unsupervised Deep Feature Extraction for Remote Sensing Image Classification | Type | Journal Article | ||
Year | 2016 | Publication | IEEE Transactions on Geoscience and Remote Sensing | Abbreviated Journal | TGRS
Volume | 54 | Issue | 3 | Pages | 1349-1362
Keywords | |||||
Abstract | This paper introduces the use of single-layer and deep convolutional networks for remote sensing data analysis. Direct application to multi- and hyperspectral imagery of supervised (shallow or deep) convolutional networks is very challenging given the high input data dimensionality and the relatively small amount of available labeled data. Therefore, we propose the use of greedy layerwise unsupervised pretraining coupled with a highly efficient algorithm for unsupervised learning of sparse features. The algorithm is rooted in sparse representations and enforces both population and lifetime sparsity of the extracted features, simultaneously. We successfully illustrate the expressive power of the extracted representations in several scenarios: classification of aerial scenes, as well as land-use classification in very high resolution or land-cover classification from multi- and hyperspectral images. The proposed algorithm clearly outperforms standard principal component analysis (PCA) and its kernel counterpart (kPCA), as well as current state-of-the-art algorithms of aerial classification, while being extremely computationally efficient at learning representations of data. Results show that single-layer convolutional networks can extract powerful discriminative features only when the receptive field accounts for neighboring pixels and are preferred when the classification requires high resolution and detailed results. However, deep architectures significantly outperform single-layer variants, capturing increasing levels of abstraction and complexity throughout the feature hierarchy. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0196-2892 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | LAMP; 600.079; MILAB | Approved | no
Call Number | Admin @ si @ RGC2016 | Serial | 2723 | ||
Permanent link to this record | |||||
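The population and lifetime sparsity mentioned in the abstract above can be made concrete with a small sketch that scores a feature-activation matrix along its two axes. The Hoyer sparseness measure used here is one common choice, assumed for illustration; it is not necessarily the exact objective enforced in the paper.

```python
import numpy as np

def hoyer_sparsity(v):
    """Hoyer sparseness in [0, 1]: 0 for a uniform vector, 1 for one-hot."""
    n = v.size
    l1 = np.abs(v).sum()
    l2 = np.sqrt((v ** 2).sum()) + 1e-12
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def sparsity_scores(F):
    """F: samples x features activation matrix.

    Population sparsity: few features fire per sample (score the rows).
    Lifetime sparsity: each feature fires on few samples (the columns).
    """
    pop = np.mean([hoyer_sparsity(row) for row in F])
    life = np.mean([hoyer_sparsity(col) for col in F.T])
    return pop, life
```

An identity-like activation matrix (each sample lights up exactly one distinct feature) scores 1.0 on both measures, while a uniformly dense matrix scores near 0.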
Author | M. Campos-Taberner; Adriana Romero; Carlo Gatta; Gustavo Camps-Valls | ||||
Title | Shared feature representations of LiDAR and optical images: Trading sparsity for semantic discrimination | Type | Conference Article | ||
Year | 2015 | Publication | IEEE International Geoscience and Remote Sensing Symposium IGARSS2015 | Abbreviated Journal | |
Volume | Issue | Pages | 4169-4172 | |
Keywords | |||||
Abstract | This paper studies the level of complementary information conveyed by extremely high resolution LiDAR and optical images. We pursue this goal following an indirect approach via unsupervised spatial-spectral feature extraction. We used a recently presented unsupervised convolutional neural network trained to enforce both population and lifetime sparsity in the feature representation. We derived independent and joint feature representations, and analyzed the sparsity scores and the discriminative power. Interestingly, the obtained results revealed that the RGB+LiDAR representation is no longer sparse, and the derived basis functions merge color and elevation yielding a set of more expressive colored edge filters. The joint feature representation is also more discriminative when used for clustering and topological data visualization. | ||||
Address | Milan; Italy; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IGARSS | ||
Notes | LAMP; 600.079; MILAB | Approved | no
Call Number | Admin @ si @ CRG2015 | Serial | 2724 | ||
Permanent link to this record | |||||
Author | Laura Lopez-Fuentes; Joost Van de Weijer; Marc Bolaños; Harald Skinnemoen | ||||
Title | Multi-modal Deep Learning Approach for Flood Detection | Type | Conference Article | ||
Year | 2017 | Publication | MediaEval Benchmarking Initiative for Multimedia Evaluation | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts normally contain some metadata and/or visual information, so we use both to detect floods. The model is based on a Convolutional Neural Network which extracts the visual features and a bidirectional Long Short-Term Memory network which extracts the semantic features from the textual metadata. We validate the method on images extracted from Flickr which contain both visual information and metadata, and compare the results of using both modalities against visual information only or metadata only. This work has been done in the context of the MediaEval Multimedia Satellite Task. | ||||
Address | Dublin; Ireland; September 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MediaEval | ||
Notes | LAMP; 600.084; 600.109; 600.120 | Approved | no
Call Number | Admin @ si @ LWB2017a | Serial | 2974 | ||
Permanent link to this record | |||||
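The multi-modal combination described in the abstract above, CNN visual features plus biLSTM text features, can be sketched schematically as late fusion followed by a linear classifier. The feature dimensions (2048 and 256), the random weights, and the sigmoid head are purely illustrative assumptions; the paper's actual network is trained end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_score(visual_feat, text_feat, w, b):
    """Late fusion: concatenate the two modality vectors and apply a
    linear classifier with a sigmoid to get a flood probability."""
    x = np.concatenate([visual_feat, text_feat])
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# hypothetical sizes: a 2048-d CNN pooling vector, a 256-d biLSTM state
visual = rng.standard_normal(2048)
text = rng.standard_normal(256)
w = rng.standard_normal(2048 + 256) * 0.01
p = fuse_and_score(visual, text, w, b=0.0)
assert 0.0 < p < 1.0   # a valid probability
```

With all-zero inputs and weights the sigmoid returns exactly 0.5, which is a handy sanity check when wiring up a fusion head.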
Author | Laura Lopez-Fuentes; Alessandro Farasin; Harald Skinnemoen; Paolo Garza | ||||
Title | Deep Learning models for passability detection of flooded roads | Type | Conference Article | ||
Year | 2018 | Publication | MediaEval 2018 Multimedia Benchmark Workshop | Abbreviated Journal | |
Volume | 2283 | Issue | Pages | ||
Keywords | |||||
Abstract | In this paper we study and compare several approaches to detect floods and evidence of road passability by conventional means on Twitter. We focus on tweets containing both visual information (a picture shared by the user) and metadata, a combination of text and related extra information intrinsic to the Twitter API. This work has been done in the context of the MediaEval 2018 Multimedia Satellite Task. | ||||
Address | Sophia Antipolis; France; October 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MediaEval | ||
Notes | LAMP; 600.084; 600.109; 600.120 | Approved | no
Call Number | Admin @ si @ LFS2018 | Serial | 3224 | ||
Permanent link to this record | |||||
Author | Laura Lopez-Fuentes; Claudio Rossi; Harald Skinnemoen | ||||
Title | River segmentation for flood monitoring | Type | Conference Article | ||
Year | 2017 | Publication | Data Science for Emergency Management at Big Data 2017 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Floods are major natural disasters which cause deaths and material damage every year. Monitoring these events is crucial in order to reduce both the number of affected people and the economic losses. In this work we train and test three different Deep Learning segmentation algorithms to estimate the water area from river images, and compare their performances. We discuss the implementation of a novel data chain aimed at monitoring river water levels by automatically processing data collected from surveillance cameras, and at giving alerts in case of sharp increases of the water level or flooding. We also create and openly publish the first image dataset for river water segmentation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.084; 600.120 | Approved | no
Call Number | Admin @ si @ LRS2017 | Serial | 3078 | ||
Permanent link to this record | |||||
Author | Mikhail Mozerov; Joost Van de Weijer | ||||
Title | One-view occlusion detection for stereo matching with a fully connected CRF model | Type | Journal Article | ||
Year | 2019 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 28 | Issue | 6 | Pages | 2936-2947 |
Keywords | Stereo matching; energy minimization; fully connected MRF model; geodesic distance filter | ||||
Abstract | In this paper, we extend the standard belief propagation (BP) sequential technique proposed in the tree-reweighted sequential method [15] to fully connected CRF models with geodesic distance affinity. The proposed method has been applied to the stereo matching problem. We also propose a new approach to the BP marginal solution, which we call one-view occlusion detection (OVOD). In contrast to the standard winner-takes-all (WTA) estimation, the proposed OVOD solution makes it possible to find occluded regions in the disparity map and simultaneously improve the matching result. As a result we can perform only one energy minimization process and avoid the cost calculation for the second view and the left-right check procedure. We show that the OVOD approach considerably improves results for cost augmentation and energy minimization techniques in comparison with the standard one-view affinity space implementation. We apply our method to the Middlebury data set and reach state-of-the-art results, especially for the median, average and mean squared error metrics. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.098; 600.109; 602.133; 600.120 | Approved | no
Call Number | Admin @ si @ MoW2019 | Serial | 3221 | ||
Permanent link to this record | |||||
Author | Esteve Cervantes; Long Long Yu; Andrew Bagdanov; Marc Masana; Joost Van de Weijer | ||||
Title | Hierarchical Part Detection with Deep Neural Networks | Type | Conference Article | ||
Year | 2016 | Publication | 23rd IEEE International Conference on Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Object Recognition; Part Detection; Convolutional Neural Networks | ||||
Abstract | Part detection is an important aspect of object recognition. Most approaches apply object proposals to generate hundreds of possible part bounding box candidates which are then evaluated by part classifiers. Recently several methods have investigated directly regressing to a limited set of bounding boxes from deep neural network representations. However, for object parts such methods may be infeasible due to their relatively small size with respect to the image. We propose a hierarchical method for object and part detection. In a single network we first detect the object and then regress to part location proposals based only on the feature representation inside the object. Experiments show that our hierarchical approach outperforms a network which directly regresses the part locations. We also show that our approach obtains part detection accuracy comparable or better than state-of-the-art on the CUB-200 bird and Fashionista clothing item datasets with only a fraction of the number of part proposals. | ||||
Address | Phoenix; Arizona; USA; September 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICIP | ||
Notes | LAMP; 600.106 | Approved | no
Call Number | Admin @ si @ CLB2016 | Serial | 2762 | ||
Permanent link to this record | |||||
Author | Ozan Caglayan; Walid Aransa; Yaxing Wang; Marc Masana; Mercedes García-Martínez; Fethi Bougares; Loic Barrault; Joost Van de Weijer | ||||
Title | Does Multimodality Help Human and Machine for Translation and Image Captioning? | Type | Conference Article | ||
Year | 2016 | Publication | 1st conference on machine translation | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR. | ||||
Address | Berlin; Germany; August 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WMT | ||
Notes | LAMP; 600.106; 600.068 | Approved | no
Call Number | Admin @ si @ CAW2016 | Serial | 2761 | ||
Permanent link to this record | |||||
Author | Xialei Liu; Joost Van de Weijer; Andrew Bagdanov | ||||
Title | RankIQA: Learning from Rankings for No-reference Image Quality Assessment | Type | Conference Article | ||
Year | 2017 | Publication | 17th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | We propose a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA). To address the problem of limited IQA dataset size, we train a Siamese Network to rank images in terms of image quality by using synthetically generated distortions for which relative image quality is known. These ranked image sets can be automatically generated without laborious human labeling. We then use fine-tuning to transfer the knowledge represented in the trained Siamese Network to a traditional CNN that estimates absolute image quality from single images. We demonstrate how our approach can be made significantly more efficient than traditional Siamese Networks by forward propagating a batch of images through a single network and backpropagating gradients derived from all pairs of images in the batch. Experiments on the TID2013 benchmark show that we improve the state-of-the-art by over 5%. Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and that we even outperform the state-of-the-art in full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images to infer IQA. | ||||
Address | Venice; Italy; October 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | LAMP; 600.106; 600.109; 600.120 | Approved | no
Call Number | Admin @ si @ LWB2017b | Serial | 3036 | ||
Permanent link to this record | |||||
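The ranking idea in the abstract above, training on image pairs whose relative quality is known, can be sketched as a margin-based ranking loss formed over all cross pairs in a batch (mirroring the batch-wise pairing the abstract describes). This is a plain illustration of such a loss, with the margin value assumed; it is not the paper's exact training code.

```python
import numpy as np

def pairwise_ranking_loss(scores_high, scores_low, margin=1.0):
    """Margin ranking loss over all cross pairs in a batch: every
    higher-quality image should outscore every lower-quality one by
    at least `margin` (value assumed for illustration)."""
    diff = scores_high[:, None] - scores_low[None, :]   # all cross pairs
    return np.maximum(0.0, margin - diff).mean()
```

When every high-quality score already exceeds every low-quality score by the margin, the loss is exactly zero and no gradient flows; violated pairs contribute linearly.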
Author | Yaxing Wang; Abel Gonzalez-Garcia; Joost Van de Weijer; Luis Herranz | ||||
Title | SDIT: Scalable and Diverse Cross-domain Image Translation | Type | Conference Article | ||
Year | 2019 | Publication | 27th ACM International Conference on Multimedia | Abbreviated Journal | |
Volume | Issue | Pages | 1267–1276 | ||
Keywords | |||||
Abstract | Recently, image-to-image translation research has witnessed remarkable progress. Although current approaches successfully generate diverse outputs or perform scalable image transfer, these properties have not been combined into a single method. To address this limitation, we propose SDIT: Scalable and Diverse image-to-image translation. These properties are combined into a single generator. The diversity is determined by a latent variable which is randomly sampled from a normal distribution. The scalability is obtained by conditioning the network on the domain attributes. Additionally, we also exploit an attention mechanism that permits the generator to focus on the domain-specific attribute. We empirically demonstrate the performance of the proposed method on face mapping and other datasets beyond faces. | ||||
Address | Nice; France; October 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACM-MM | ||
Notes | LAMP; 600.106; 600.109; 600.141; 600.120 | Approved | no
Call Number | Admin @ si @ WGW2019 | Serial | 3363 | ||
Permanent link to this record | |||||
Author | Chenshen Wu; Luis Herranz; Xialei Liu; Joost Van de Weijer; Bogdan Raducanu | ||||
Title | Memory Replay GANs: Learning to Generate New Categories without Forgetting | Type | Conference Article | ||
Year | 2018 | Publication | 32nd Annual Conference on Neural Information Processing Systems | Abbreviated Journal | |
Volume | Issue | Pages | 5966-5976 | ||
Keywords | |||||
Abstract | Previous works on sequential learning address the problem of forgetting in discriminative models. In this paper we consider the case of generative models. In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion. We first show that sequential fine tuning renders the network unable to properly generate images from previous categories (i.e., forgetting). Addressing this problem, we propose Memory Replay GANs (MeRGANs), a conditional GAN framework that integrates a memory replay generator. We study two methods to prevent forgetting by leveraging these replays, namely joint training with replay and replay alignment. Qualitative and quantitative experimental results on the MNIST, SVHN and LSUN datasets show that our memory replay approach can generate competitive images while significantly mitigating the forgetting of previous categories. | ||||
Address | Montreal; Canada; December 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NIPS | ||
Notes | LAMP; 600.106; 600.109; 602.200; 600.120 | Approved | no
Call Number | Admin @ si @ WHL2018 | Serial | 3249 | ||
Permanent link to this record |
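The joint-training-with-replay strategy in the abstract above can be sketched at the data level: samples replayed from a frozen copy of the generator (trained on previous categories) are mixed into each training batch of the current task. The stand-in generator and array shapes here are illustrative assumptions only, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_replay_batch(real_batch, frozen_generator, n_replay):
    """Joint training with replay, at the data level: extend the current
    task's real batch with samples replayed from a frozen copy of the
    generator trained on previous categories."""
    replay = frozen_generator(n_replay)
    return np.concatenate([real_batch, replay], axis=0)

# stand-in frozen generator: returns n fake 4-d samples (illustrative)
fake_generator = lambda n: rng.standard_normal((n, 4))
batch = build_replay_batch(rng.standard_normal((8, 4)), fake_generator, 4)
assert batch.shape == (12, 4)
```

The combined batch then feeds the usual GAN update, so the generator keeps seeing the old categories while learning the new one.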