Records | |||||
---|---|---|---|---|---|
Author | A. Pujol; H. Wechsler; Juan J. Villanueva | ||||
Title | Learning and Caricaturing the Face Space Using Self-Organization and Hebbian Learning for Face Processing. | Type | Miscellaneous | ||
Year | 2001 | Publication | 11th International Conference on Image Analysis and Processing, ICIAP 2001, 273–278. | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Italy. | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | ISE @ ise @ PWV2001 | Serial | 205 | ||
Permanent link to this record | |||||
Author | Javier Marin; David Vazquez; David Geronimo; Antonio Lopez | ||||
Title | Learning Appearance in Virtual Scenarios for Pedestrian Detection | Type | Conference Article | ||
Year | 2010 | Publication | 23rd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 137–144 | ||
Keywords | Pedestrian Detection; Domain Adaptation | ||||
Abstract | Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then appearance-based pedestrian classifiers are learnt using HOG and linear SVM. We test such classifiers in a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real world images acquired from a moving car. The obtained result is compared with the one given by a classifier learnt using samples coming from real images. The comparison reveals that, although virtual samples were not specially selected, both virtual and real based training give rise to classifiers of similar performance. | ||||
Address | San Francisco; CA; USA; June 2010 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | Learning Appearance in Virtual Scenarios for Pedestrian Detection |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4244-6984-0 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ MVG2010 | Serial | 1304 | ||
Permanent link to this record | |||||
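The HOG + linear SVM pipeline this record describes can be illustrated in miniature. The sketch below is a loose approximation, not the paper's implementation: a single global gradient-orientation histogram stands in for the full HOG descriptor, a Pegasos-style subgradient solver stands in for a production SVM, and synthetic 16x16 patches stand in for the virtual-world training data. All names and parameters are illustrative.

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Simplified HOG-style feature: one global histogram of gradient
    orientations weighted by gradient magnitude (the real HOG descriptor
    uses many local cells plus block normalization)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-8)

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Linear SVM trained with a Pegasos-style subgradient descent
    on the hinge loss (a stand-in for a real SVM solver)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:                # margin violated
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:
                w = (1 - lr * lam) * w
    return w, b

# Toy stand-in for pedestrian vs background patches: one class is
# dominated by vertical edges, the other by horizontal edges.
rng = np.random.default_rng(42)
def make_patch(vertical):
    base = np.tile(np.arange(16), (16, 1)) if vertical else np.tile(np.arange(16)[:, None], (1, 16))
    return base + rng.normal(0, 0.5, (16, 16))

X = np.array([orientation_histogram(make_patch(v)) for v in [True] * 20 + [False] * 20])
y = np.array([1] * 20 + [-1] * 20)
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

On these toy patches the two orientation histograms separate cleanly, so the learned hyperplane should classify nearly all of them correctly.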
Author | Meysam Madadi; Hugo Bertiche; Wafa Bouzouita; Isabelle Guyon; Sergio Escalera | ||||
Title | Learning Cloth Dynamics: 3D+Texture Garment Reconstruction Benchmark | Type | Conference Article | ||
Year | 2021 | Publication | Proceedings of Machine Learning Research | Abbreviated Journal | |
Volume | 133 | Issue | Pages | 57-76 | |
Keywords | |||||
Abstract | Human avatars are important targets in many computer applications. Accurately tracking, capturing, reconstructing and animating the human body, face and garments in 3D are critical for human-computer interaction, gaming, special effects and virtual reality. In the past, this has required extensive manual animation. Regardless of the advances in human body and face reconstruction, modeling, learning and analyzing human dynamics still need further attention. In this paper we push the research in this direction, e.g. understanding human dynamics in 2D and 3D, with special attention to garments. We provide a large-scale dataset (more than 2M frames) of animated garments with variable topology and type, called CLOTH3D++. The dataset contains RGBA video sequences paired with their corresponding 3D data. We pay special care to garment dynamics and realistic rendering of RGB data, including lighting, fabric type and texture. With this dataset, we held a competition at NeurIPS 2020. We designed three tracks so participants could compete to develop the best method to perform 3D garment reconstruction in a sequence from (1) 3D-to-3D garments, (2) RGB-to-3D garments, and (3) RGB-to-3D garments plus texture. We also provide a baseline method, based on graph convolutional networks, for each track. Baseline results show that there is a lot of room for improvement. However, due to the challenging nature of the problem, no participant could outperform the baselines. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ MBB2021 | Serial | 3655 | ||
Permanent link to this record | |||||
Author | Joost Van de Weijer; Cordelia Schmid; Jakob Verbeek; Diane Larlus | ||||
Title | Learning Color Names for Real-World Applications | Type | Journal Article | ||
Year | 2009 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP
Volume | 18 | Issue | 7 | Pages | 1512–1524 |
Keywords | |||||
Abstract | Color names are required in real-world applications such as image retrieval and image annotation. Traditionally, they are learned from a collection of labelled color chips. These color chips are labelled with color names within a well-defined experimental setup by human test subjects. However, naming colors in real-world images differs significantly from this experimental setting. In this paper, we investigate how color names learned from color chips compare to color names learned from real-world images. To avoid hand labelling real-world images with color names, we use Google Image to collect a data set. Due to limitations of Google Image, this data set contains a substantial quantity of wrongly labelled data. We propose several variants of the PLSA model to learn color names from this noisy data. Experimental results show that color names learned from real-world images significantly outperform color names learned from labelled color chips for both image retrieval and image annotation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | CAT @ cat @ WSV2009 | Serial | 1195 | ||
Permanent link to this record | |||||
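The end product of the record above is a mapping from pixel values to the eleven basic English color names. The snippet below only illustrates what such a mapping does, via a hard nearest-prototype assignment with hand-picked RGB prototypes; the paper instead learns a probabilistic model (PLSA variants) from noisy web images, which handles ambiguous colors far better. The prototype values here are rough illustrative assumptions, not the learned model.

```python
import numpy as np

# Rough illustrative RGB prototypes for the 11 basic color names
# (the paper learns a probabilistic pixel -> name model instead).
COLOR_NAMES = {
    "black": (0, 0, 0), "blue": (0, 0, 255), "brown": (139, 69, 19),
    "grey": (128, 128, 128), "green": (0, 128, 0), "orange": (255, 165, 0),
    "pink": (255, 192, 203), "purple": (128, 0, 128), "red": (255, 0, 0),
    "white": (255, 255, 255), "yellow": (255, 255, 0),
}

def color_name(rgb):
    """Assign a pixel to the nearest prototype in RGB space."""
    names = list(COLOR_NAMES)
    protos = np.array([COLOR_NAMES[n] for n in names], dtype=float)
    dists = np.linalg.norm(protos - np.asarray(rgb, dtype=float), axis=1)
    return names[int(np.argmin(dists))]
```

For example, `color_name((250, 10, 10))` lands on the "red" prototype because it is by far the closest in Euclidean RGB distance.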
Author | Xavier Boix | ||||
Title | Learning Conditional Random Fields for Stereo | Type | Report | ||
Year | 2009 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 136 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Computer Vision Center | Thesis | Master's thesis | ||
Publisher | Place of Publication | Bellaterra, Barcelona | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ Boi2009 | Serial | 2395 | ||
Permanent link to this record | |||||
Author | Sounak Dey; Anjan Dutta; Suman Ghosh; Ernest Valveny; Josep Llados; Umapada Pal | ||||
Title | Learning Cross-Modal Deep Embeddings for Multi-Object Image Retrieval using Text and Sketch | Type | Conference Article | ||
Year | 2018 | Publication | 24th International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 916 - 921 | ||
Keywords | |||||
Abstract | In this work we introduce a cross-modal image retrieval system that allows both text and sketch as input modalities for the query. A cross-modal deep network architecture is formulated to jointly model the sketch and text input modalities as well as the image output modality, learning a common embedding between text and images and between sketches and images. In addition, an attention model is used to selectively focus the attention on the different objects of the image, allowing for retrieval with multiple objects in the query. Experiments show that the proposed method performs the best in both single and multiple object image retrieval in standard datasets. | ||||
Address | Beijing; China; August 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | DAG; 602.167; 602.168; 600.097; 600.084; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ DDG2018b | Serial | 3152 | ||
Permanent link to this record | |||||
Author | Cristhian A. Aguilera-Carrasco; F. Aguilera; Angel Sappa; C. Aguilera; Ricardo Toledo | ||||
Title | Learning cross-spectral similarity measures with deep convolutional neural networks | Type | Conference Article | ||
Year | 2016 | Publication | 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene, which cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains. | ||||
Address | Las Vegas; USA; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | ADAS; 600.086; 600.076 | Approved | no | ||
Call Number | Admin @ si @ AAS2016 | Serial | 2809 | ||
Permanent link to this record | |||||
Author | B. Zhou; Agata Lapedriza; J. Xiao; A. Torralba; A. Oliva | ||||
Title | Learning Deep Features for Scene Recognition using Places Database | Type | Conference Article | ||
Year | 2014 | Publication | 28th Annual Conference on Neural Information Processing Systems | Abbreviated Journal | |
Volume | Issue | Pages | 487-495 | ||
Keywords | |||||
Abstract | |||||
Address | Montreal; Canada; December 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NIPS | ||
Notes | OR;MV | Approved | no | ||
Call Number | Admin @ si @ ZLX2014 | Serial | 2621 | ||
Permanent link to this record | |||||
Author | Xinhang Song; Shuqiang Jiang; Luis Herranz; Chengpeng Chen | ||||
Title | Learning Effective RGB-D Representations for Scene Recognition | Type | Journal Article | ||
Year | 2019 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 28 | Issue | 2 | Pages | 980-993 |
Keywords | |||||
Abstract | Deep convolutional networks can achieve impressive results on RGB scene recognition thanks to large data sets such as Places. In contrast, RGB-D scene recognition is still underdeveloped, due to two limitations of RGB-D data we address in this paper. The first limitation is the lack of depth data for training deep learning models. Rather than fine-tuning or transferring RGB-specific features, we address this limitation by proposing an architecture and a two-step training approach that directly learns effective depth-specific features using weak supervision via patches. The resulting RGB-D model also benefits from more complementary multimodal features. Another limitation is the short range of depth sensors (typically 0.5 m to 5.5 m), resulting in depth images not capturing distant objects in the scenes that RGB images can. We show that this limitation can be addressed by using RGB-D videos, where more comprehensive depth information is accumulated as the camera travels across the scenes. Focusing on this scenario, we introduce the ISIA RGB-D video data set to evaluate RGB-D scene recognition with videos. Our video recognition architecture combines convolutional and recurrent neural networks that are trained in three steps with increasingly complex data to learn effective features (i.e., patches, frames, and sequences). Our approach obtains state-of-the-art performance on RGB-D image (NYUD2 and SUN RGB-D) and video (ISIA RGB-D) scene recognition. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.141; 600.120 | Approved | no | ||
Call Number | Admin @ si @ SJH2019 | Serial | 3247 | ||
Permanent link to this record | |||||
Author | Raul Gomez; Lluis Gomez; Jaume Gibert; Dimosthenis Karatzas | ||||
Title | Learning from #Barcelona Instagram data what Locals and Tourists post about its Neighbourhoods | Type | Conference Article | ||
Year | 2018 | Publication | 15th European Conference on Computer Vision Workshops | Abbreviated Journal | |
Volume | 11134 | Issue | Pages | 530-544 | |
Keywords | |||||
Abstract | Massive tourism is becoming a big problem for some cities, such as Barcelona, due to its concentration in some neighborhoods. In this work we gather Instagram data related to Barcelona consisting of image-caption pairs and, using the text as a supervisory signal, we learn relations between images, words and neighborhoods. Our goal is to learn which visual elements appear in photos when people post about each neighborhood. We treat the data separately by language and show that this can be extrapolated to a separate analysis of tourists and locals, and that tourism is reflected in Social Media at a neighborhood level. The presented pipeline allows analyzing the differences between the images that tourists and locals associate to the different neighborhoods. The proposed method, which can be extended to other cities or subjects, proves that Instagram data can be used to train multi-modal (image and text) machine learning models that are useful to analyze publications about a city at a neighborhood level. We publish the collected dataset, InstaBarcelona, and the code used in the analysis. | ||||
Address | Munich; Germany; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | DAG; 600.129; 601.338; 600.121 | Approved | no | ||
Call Number | Admin @ si @ GGG2018b | Serial | 3176 | ||
Permanent link to this record | |||||
Author | Pau Riba; Andreas Fischer; Josep Llados; Alicia Fornes | ||||
Title | Learning Graph Distances with Message Passing Neural Networks | Type | Conference Article | ||
Year | 2018 | Publication | 24th International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 2239-2244 | ||
Keywords | ★Best Paper Award★ | ||||
Abstract | Graph representations have been widely used in pattern recognition thanks to their powerful representation formalism and rich theoretical background. A number of error-tolerant graph matching algorithms such as graph edit distance have been proposed for computing a distance between two labelled graphs. However, they typically suffer from a high computational complexity, which makes it difficult to apply these matching algorithms in a real scenario. In this paper, we propose an efficient graph distance based on the emerging field of geometric deep learning. Our method employs a message passing neural network to capture the graph structure and learns a metric with a siamese network approach. The performance of the proposed graph distance is validated in two application cases, graph classification and graph retrieval of handwritten words, and shows a promising performance when compared with (approximate) graph edit distance benchmarks. | ||||
Address | Beijing; China; August 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | DAG; 600.097; 603.057; 601.302; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RFL2018 | Serial | 3168 | ||
Permanent link to this record | |||||
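The core mechanics of the record above (encode each graph with a message passing network, then compare the pooled embeddings with a metric) can be sketched as follows. This is a purely structural illustration with random, untrained weights: the paper learns its network end-to-end with a siamese objective, whereas the layer here is a bare sum-aggregate-and-update step with hypothetical names and shapes.

```python
import numpy as np

def message_passing(A, X, W, n_rounds=2):
    """A bare-bones message passing layer: sum-aggregate neighbor features,
    update node states with a shared linear map plus ReLU, then mean-pool
    the nodes into a single graph embedding."""
    H = X
    for _ in range(n_rounds):
        M = A @ H                        # sum messages from neighbors
        H = np.maximum(0, (H + M) @ W)   # shared linear update + ReLU
    return H.mean(axis=0)                # mean-pool to a graph embedding

def graph_distance(A1, X1, A2, X2, W):
    """Siamese-style distance: L2 between the two pooled embeddings."""
    e1 = message_passing(A1, X1, W)
    e2 = message_passing(A2, X2, W)
    return float(np.linalg.norm(e1 - e2))

rng = np.random.default_rng(0)
W = rng.uniform(0.1, 1.0, size=(4, 4))   # random untrained weights

# A triangle graph and a path graph with identical node features:
# the distance should be zero for identical graphs and nonzero here.
A_tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.ones((3, 4))
d_same = graph_distance(A_tri, X, A_tri, X, W)
d_diff = graph_distance(A_tri, X, A_path, X, W)
```

Even with random weights the aggregation distinguishes the two topologies, which is the structural signal the learned metric builds on.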
Author | Pau Riba; Andreas Fischer; Josep Llados; Alicia Fornes | ||||
Title | Learning graph edit distance by graph neural networks | Type | Journal Article | ||
Year | 2021 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 120 | Issue | Pages | 108132 | |
Keywords | |||||
Abstract | The emergence of geometric deep learning as a novel framework to deal with graph-based representations has displaced traditional approaches in favor of completely new methodologies. In this paper, we propose a new framework able to combine the advances in deep metric learning with traditional approximations of the graph edit distance. Hence, we propose an efficient graph distance based on the novel field of geometric deep learning. Our method employs a message passing neural network to capture the graph structure, and thus leverages this information in the distance computation. The performance of the proposed graph distance is validated on two different scenarios. On the one hand, on graph retrieval of handwritten words, i.e. keyword spotting, showing its superior performance when compared with (approximate) graph edit distance benchmarks. On the other hand, demonstrating competitive results for graph similarity learning when compared with the current state-of-the-art on a recent benchmark dataset. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.140; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RFL2021 | Serial | 3611 | ||
Permanent link to this record | |||||
Author | Pau Riba; Andreas Fischer; Josep Llados; Alicia Fornes | ||||
Title | Learning Graph Edit Distance by Graph Neural Networks | Type | Miscellaneous | ||
Year | 2020 | Publication | arXiv | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The emergence of geometric deep learning as a novel framework to deal with graph-based representations has displaced traditional approaches in favor of completely new methodologies. In this paper, we propose a new framework able to combine the advances in deep metric learning with traditional approximations of the graph edit distance. Hence, we propose an efficient graph distance based on the novel field of geometric deep learning. Our method employs a message passing neural network to capture the graph structure, and thus leverages this information in the distance computation. The performance of the proposed graph distance is validated on two different scenarios. On the one hand, in a graph retrieval of handwritten words, i.e. keyword spotting, showing its superior performance when compared with (approximate) graph edit distance benchmarks. On the other hand, demonstrating competitive results for graph similarity learning when compared with the current state-of-the-art on a recent benchmark dataset. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.121; 600.140; 601.302 | Approved | no | ||
Call Number | Admin @ si @ RFL2020 | Serial | 3555 | ||
Permanent link to this record | |||||
Author | Marco Buzzelli; Joost Van de Weijer; Raimondo Schettini | ||||
Title | Learning Illuminant Estimation from Object Recognition | Type | Conference Article | ||
Year | 2018 | Publication | 25th International Conference on Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | 3234 - 3238 | ||
Keywords | Illuminant estimation; computational color constancy; semi-supervised learning; deep learning; convolutional neural networks | ||||
Abstract | In this paper we present a deep learning method to estimate the illuminant of an image. Our model is not trained with illuminant annotations, but with the objective of improving performance on an auxiliary task such as object recognition. To the best of our knowledge, this is the first example of a deep learning architecture for illuminant estimation that is trained without ground truth illuminants. We evaluate our solution on standard datasets for color constancy, and compare it with state of the art methods. Our proposal is shown to outperform most deep learning methods in a cross-dataset evaluation setup, and to present competitive results in a comparison with parametric solutions. | ||||
Address | Athens; Greece; October 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICIP | ||
Notes | LAMP; 600.109; 600.120 | Approved | no | ||
Call Number | Admin @ si @ BWS2018 | Serial | 3157 | ||
Permanent link to this record | |||||
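For context on the task in the record above: computational color constancy estimates the scene illuminant and then corrects the image with a diagonal (von Kries) transform. The snippet below shows the classic gray-world baseline, one of the simple parametric estimators such papers compare against, not the deep model of this record. It rests on the gray-world assumption, namely that the average reflectance of a scene is achromatic.

```python
import numpy as np

def gray_world_illuminant(img):
    """Gray-world assumption: the average reflectance in a scene is
    achromatic, so the mean RGB of the image estimates the illuminant
    color (its direction only; absolute brightness is ambiguous)."""
    e = img.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

def correct(img, e):
    """Diagonal (von Kries) correction: divide each channel by the
    estimate, rescaled so a neutral illuminant maps to (1, 1, 1)."""
    scale = e * np.sqrt(3)
    return img / scale

# A gray scene rendered under a reddish illuminant.
rng = np.random.default_rng(1)
gray = rng.uniform(0.2, 0.8, size=(32, 32, 1)) * np.ones((1, 1, 3))
tinted = gray * np.array([1.0, 0.6, 0.5])
e = gray_world_illuminant(tinted)
restored = correct(tinted, e)
```

Because the underlying scene really is gray, the estimate recovers the illuminant direction and the corrected image comes out achromatic; on natural images the assumption only holds approximately.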
Author | Vassileios Balntas; Edgar Riba; Daniel Ponsa; Krystian Mikolajczyk | ||||
Title | Learning local feature descriptors with triplets and shallow convolutional neural networks | Type | Conference Article | ||
Year | 2016 | Publication | 27th British Machine Vision Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | It has recently been demonstrated that local feature descriptors based on convolutional neural networks (CNN) can significantly improve the matching performance. Previous work on learning such descriptors has focused on exploiting pairs of positive and negative patches to learn discriminative CNN representations. In this work, we propose to utilize triplets of training samples, together with in-triplet mining of hard negatives. We show that our method achieves state of the art results, without the computational overhead typically associated with mining of negatives and with lower complexity of the network architecture. We compare our approach to recently introduced convolutional local feature descriptors, and demonstrate the advantages of the proposed methods in terms of performance and speed. We also examine different loss functions associated with triplets. | ||||
Address | York; UK; September 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | BMVC | ||
Notes | ADAS; 600.086 | Approved | no | ||
Call Number | Admin @ si @ BRP2016 | Serial | 2818 | ||
Permanent link to this record |
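The in-triplet mining of hard negatives mentioned in the abstract above admits a compact sketch: for an anchor a, positive p and negative n, take the harder (smaller) of the two negative distances d(a, n) and d(p, n) before applying the margin. The function below is an illustrative NumPy reduction of that loss on precomputed descriptor vectors, with made-up inputs; it is not the paper's CNN training code.

```python
import numpy as np

def triplet_loss(a, p, n, margin=1.0):
    """Triplet margin loss with in-triplet hard-negative mining:
    the negative distance is the smaller of d(a, n) and d(p, n),
    i.e. the harder of the two possible negative pairs."""
    d_ap = np.linalg.norm(a - p)
    d_neg = min(np.linalg.norm(a - n), np.linalg.norm(p - n))  # anchor swap
    return max(0.0, margin + d_ap - d_neg)

# Made-up 2-D "descriptors": a matching pair plus two candidate negatives.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
near = np.array([0.2, 0.0])   # hard negative, violates the margin
far = np.array([5.0, 0.0])    # easy negative, already beyond the margin
loss_easy = triplet_loss(a, p, far)
loss_hard = triplet_loss(a, p, near)
```

An easy negative yields zero loss (no gradient), while a hard negative yields a positive loss, which is why mining the harder in-triplet pair speeds up learning.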