Author Jaykishan Patel; Alban Flachot; Javier Vazquez; David H. Brainard; Thomas S. A. Wallis; Marcus A. Brubaker; Richard F. Murray
  Title A deep convolutional neural network trained to infer surface reflectance is deceived by mid-level lightness illusions Type Journal Article
  Year 2023 Publication Journal of Vision Abbreviated Journal JV  
  Volume 23 Issue 9 Pages 4817-4817  
  Abstract A long-standing view is that lightness illusions are by-products of strategies employed by the visual system to stabilize its perceptual representation of surface reflectance against changes in illumination. Computationally, one such strategy is to infer reflectance from the retinal image, and to base the lightness percept on this inference. CNNs trained to infer reflectance from images have proven successful at solving this problem under limited conditions. To evaluate whether these CNNs provide suitable starting points for computational models of human lightness perception, we tested a state-of-the-art CNN on several lightness illusions, and compared its behaviour to prior measurements of human performance. We trained a CNN (Yu & Smith, 2019) to infer reflectance from luminance images. The network had a 30-layer hourglass architecture with skip connections. We trained the network via supervised learning on 100K images, rendered in Blender, each showing randomly placed geometric objects (surfaces, cubes, tori, etc.), with random Lambertian reflectance patterns (solid, Voronoi, or low-pass noise), under randomized point+ambient lighting. The renderer also provided the ground-truth reflectance images required for training. After training, we applied the network to several visual illusions. These included the argyle, Koffka-Adelson, snake, White’s, checkerboard assimilation, and simultaneous contrast illusions, along with their controls where appropriate. The CNN correctly predicted larger illusions in the argyle, Koffka-Adelson, and snake images than in their controls. It also correctly predicted an assimilation effect in White's illusion. It did not, however, account for the checkerboard assimilation or simultaneous contrast effects. These results are consistent with the view that at least some lightness phenomena are by-products of a rational approach to inferring stable representations of physical properties from intrinsically ambiguous retinal images. Furthermore, they suggest that CNN models may be a promising starting point for new models of human lightness perception.  
  Notes MACO; CIC Approved no  
  Call Number Admin @ si @ PFV2023 Serial 3890  
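The network described in this abstract is an encoder-decoder ("hourglass") CNN with skip connections, trained with supervision to map a luminance image to a reflectance image. A minimal PyTorch sketch of that idea follows; the layer counts, channel widths, and L1 loss are illustrative assumptions, not the authors' exact 30-layer architecture.

```python
# Minimal sketch of an hourglass (encoder-decoder) CNN with skip
# connections for per-pixel reflectance regression. Depths, widths, and
# the L1 loss are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class Hourglass(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 64)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2, self.dec1 = block(64 + 64, 32), block(32 + 32, 16)
        self.out = nn.Conv2d(16, 1, 1)  # one-channel reflectance map

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up(b), e2], dim=1))   # skip from e2
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))  # skip from e1
        return self.out(d1)

# Supervised training step: luminance image in, ground-truth reflectance out.
net = Hourglass()
luminance = torch.rand(4, 1, 64, 64)
reflectance_gt = torch.rand(4, 1, 64, 64)
loss = nn.functional.l1_loss(net(luminance), reflectance_gt)
loss.backward()
```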
Author Marco Pedersoli; Jordi Gonzalez; Andrew Bagdanov; Xavier Roca
  Title Efficient Discriminative Multiresolution Cascade for Real-Time Human Detection Applications Type Journal Article
  Year 2011 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 32 Issue 13 Pages 1581-1587  
  Abstract Human detection is fundamental in many machine vision applications, such as video surveillance, driving assistance, action recognition and scene understanding. However, most of these applications require real-time performance, which current detection methods do not yet achieve.

This paper presents a new method for human detection based on a multiresolution cascade of Histograms of Oriented Gradients (HOG) that greatly reduces the computational cost of the detection search without affecting accuracy. The method consists of a cascade of sliding-window detectors. Each detector is a linear Support Vector Machine (SVM) composed of HOG features at different resolutions, from coarse at the first level to fine at the last.

In contrast to previous methods, our approach uses a non-uniform stride of the sliding window that is defined by the feature resolution and allows the detection to be refined incrementally while going from coarse to fine resolution. In this way, the speed-up of the cascade comes not only from the smaller number of features computed at the first levels of the cascade, but also from the reduced number of windows that need to be evaluated at the coarse resolution. Experimental results show that our method reaches a detection rate comparable with state-of-the-art detectors based on HOG features, while the detection search is up to 23 times faster.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ PGB2011a Serial 1707  
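A coarse-to-fine cascade with a stride tied to the feature resolution can be sketched as follows. This is a schematic illustration only: the scoring function and thresholds below are placeholders, not the paper's trained HOG/SVM detectors.

```python
# Schematic coarse-to-fine sliding-window cascade. Coarse levels scan
# few windows; each survivor is expanded at the next, finer stride, so
# later (more expensive) levels evaluate far fewer windows.
import numpy as np

def windows(shape, window, stride):
    H, W = shape
    return [(y, x) for y in range(0, H - window + 1, stride)
                   for x in range(0, W - window + 1, stride)]

def refine(y, x, coarse, fine, window, shape):
    """Finer-stride positions inside the coarse cell anchored at (y, x)."""
    H, W = shape
    return [(yy, xx)
            for yy in range(y, min(y + coarse, H - window + 1), fine)
            for xx in range(x, min(x + coarse, W - window + 1), fine)]

def cascade_detect(image, window=64):
    levels = [{"stride": 16, "thr": 0.55},
              {"stride": 8,  "thr": 0.60},
              {"stride": 4,  "thr": 0.65}]
    # Placeholder score; the real cascade evaluates a linear SVM on HOG
    # features whose resolution grows with the level.
    score = lambda y, x: float(image[y:y + window, x:x + window].mean())
    cand = windows(image.shape, window, levels[0]["stride"])
    for i, lvl in enumerate(levels):
        cand = [(y, x) for (y, x) in cand if score(y, x) > lvl["thr"]]
        if i + 1 < len(levels):  # expand survivors at the next, finer stride
            nxt = levels[i + 1]["stride"]
            cand = sorted({p for (y, x) in cand
                           for p in refine(y, x, lvl["stride"], nxt, window, image.shape)})
    return cand  # detections surviving the finest level

detections = cascade_detect(np.random.rand(240, 320))
```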
Author Victor Ponce; Mario Gorga; Xavier Baro; Sergio Escalera
  Title Human Behavior Analysis from Video Data Using Bag-of-Gestures Type Conference Article
  Year 2011 Publication 22nd International Joint Conference on Artificial Intelligence Abbreviated Journal  
  Volume 3 Issue Pages 2836-2837  
  Abstract Human Behavior Analysis in Uncontrolled Environments can be divided into two main challenges: 1) feature extraction and 2) behavior analysis over a body-language vocabulary. In this work, we present our achievements in characterizing some simple behaviors from visual data in different real applications, and discuss our plan for future work: low-level vocabulary definition from bag-of-gesture units, and high-level modelling and inference of human behaviors.  
  Address Barcelona  
  ISBN 978-1-57735-516-8  
  Conference IJCAI  
  Notes HuPBA;MV Approved no  
  Call Number Admin @ si @ PGB2011b Serial 1770  
Author Victor Ponce; Mario Gorga; Xavier Baro; Petia Radeva; Sergio Escalera
  Title Análisis de la Expresión Oral y Gestual en Proyectos Fin de Carrera Vía un Sistema de Visión Artificial Type Miscellaneous
  Year 2011 Publication Revista electrónica de la asociación de enseñantes universitarios de la informática (AENUI) Abbreviated Journal ReVision  
  Volume 4 Issue 1 Pages 8-18  
  Abstract Oral communication and expression is a competence of particular relevance in the EHEA (European Higher Education Area). Nevertheless, in many degree programmes the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching-innovation project, a software tool has been developed to extract objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyses it using machine-learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results obtained show the viability of the system for suggesting factors that contribute both to the success of the communication and to the evaluation criteria.  
  ISSN 1989-1199  
  Notes MILAB; HuPBA; MV Approved no  
  Call Number Admin @ si @ PGB2011c Serial 1783  
Author Victor Ponce; Mario Gorga; Xavier Baro; Petia Radeva; Sergio Escalera
  Title Análisis de la expresión oral y gestual en proyectos fin de carrera vía un sistema de visión artificial Type Journal Article
  Year 2011 Publication ReVisión Abbreviated Journal  
  Volume 4 Issue 1 Pages  
  Abstract Oral communication and expression is a competence of particular relevance in the EHEA (European Higher Education Area). Nevertheless, in many degree programmes the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching-innovation project, a software tool has been developed to extract objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyses it using machine-learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results obtained show the viability of the system for suggesting factors that contribute both to the success of the communication and to the evaluation criteria.  
  ISSN 1989-1199  
  Notes HuPBA; MILAB; MV Approved no  
  Call Number Admin @ si @ PGB2011d Serial 2514  
Author Y. Patel; Lluis Gomez; Raul Gomez; Marçal Rusiñol; Dimosthenis Karatzas; C.V. Jawahar
  Title TextTopicNet: Self-Supervised Learning of Visual Features Through Embedding Images on Semantic Text Spaces Type Miscellaneous
  Year 2018 Publication Arxiv Abbreviated Journal  
  Abstract The immense success of deep learning based methods in computer vision relies heavily on large-scale training datasets. These richly annotated datasets help the network learn discriminative visual features. Collecting and annotating such datasets requires a tremendous amount of human effort, and annotations are limited to a popular set of classes. As an alternative, learning visual features by designing auxiliary tasks that make use of freely available self-supervision has become increasingly popular in the computer vision community.
In this paper, we put forward the idea of taking advantage of multi-modal context to provide self-supervision for the training of computer vision algorithms. We show that adequate visual features can be learned efficiently by training a CNN to predict the semantic textual context in which a particular image is most likely to appear as an illustration. More specifically, we use popular text embedding techniques to provide the self-supervision for the training of a deep CNN.  
  Notes DAG; 600.084; 601.338; 600.121 Approved no  
  Call Number Admin @ si @ PGG2018 Serial 3177  
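The self-supervision described here amounts to regressing, from image pixels alone, a representation of the text surrounding the image. A minimal sketch of that training objective follows, assuming LDA topic distributions as the "semantic text space"; the tiny CNN, sizes, and soft-target cross-entropy are illustrative stand-ins, not the paper's exact network or loss.

```python
# Sketch of the TextTopicNet idea: a CNN predicts, from pixels alone,
# the topic distribution of the text in which the image appears.
import torch
import torch.nn as nn

NUM_TOPICS = 40

cnn = nn.Sequential(                       # tiny stand-in for the paper's CNN
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, NUM_TOPICS),
)

def loss_fn(logits, topic_target):
    # Cross-entropy with soft targets: match the predicted distribution
    # to the topic distribution of the article containing the image.
    return -(topic_target * torch.log_softmax(logits, dim=1)).sum(dim=1).mean()

images = torch.rand(8, 3, 128, 128)
topics = torch.softmax(torch.rand(8, NUM_TOPICS), dim=1)  # e.g. from LDA on article text
loss = loss_fn(cnn(images), topics)
loss.backward()
```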
Author Karel Paleček; David Geronimo; Frederic Lerasle
  Title Pre-attention cues for person detection Type Conference Article
  Year 2012 Publication Cognitive Behavioural Systems, COST 2102 International Training School Abbreviated Journal  
  Volume Issue Pages 225-235  
  Abstract Current state-of-the-art person detectors have proven reliable and achieve very good detection rates. However, their performance is often far from real time, which limits their use to low-resolution images only. In this paper, we deal with the candidate window generation problem for person detection, i.e. we want to reduce the computational complexity of a person detector by reducing the number of regions that have to be evaluated. We base our work on Alexe's paper [1], which introduced several pre-attention cues for generic object detection. We evaluate these cues in the context of person detection and show that their performance degrades rapidly for scenes containing multiple objects of interest, such as images of urban environments. We extend this set with new cues that better suit our class-specific task. The cues are designed to be simple and efficient, so that they can be used in the pre-attention phase of a more complex sliding-window person detector.  
  Address Dresden, Germany  
  Publisher Springer Berlin Heidelberg  
  Abbreviated Series Title LNCS  
  ISSN 0302-9743 ISBN 978-3-642-34583-8  
  Conference COST-TS  
  Notes ADAS Approved no  
  Call Number Admin @ si @ PGL2012 Serial 2148  
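To make the candidate-window-generation idea concrete, here is a sketch of ranking windows by a cheap pre-attention cue before running the full detector. The edge-density cue and the keep fraction are illustrative assumptions; they are not the specific cues evaluated in the paper.

```python
# Illustration of a cheap pre-attention cue used to prune candidate
# windows before the expensive sliding-window person classifier runs.
import numpy as np

def edge_density_cue(image, box):
    """Fraction of strong horizontal-gradient responses inside the box;
    upright people tend to produce many near-vertical edges."""
    y, x, h, w = box
    patch = image[y:y + h, x:x + w]
    gx = np.abs(np.diff(patch, axis=1))
    return float((gx > 0.1).mean())

def rank_windows(image, boxes, keep=0.2):
    """Keep only the top-scoring fraction of candidate windows, so the
    full classifier evaluates far fewer regions."""
    scores = np.array([edge_density_cue(image, b) for b in boxes])
    order = np.argsort(-scores)
    return [boxes[i] for i in order[: max(1, int(keep * len(boxes)))]]

img = np.random.rand(240, 320)
boxes = [(y, x, 128, 64) for y in range(0, 112, 16) for x in range(0, 256, 16)]
candidates = rank_windows(img, boxes)  # pass these to the person detector
```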
Author Adrien Pavao; Isabelle Guyon; Anne-Catherine Letournel; Dinh-Tuan Tran; Xavier Baro; Hugo Jair Escalante; Sergio Escalera; Tyler Thomas; Zhen Xu
  Title CodaLab Competitions: An Open Source Platform to Organize Scientific Challenges Type Journal Article
  Year 2023 Publication Journal of Machine Learning Research Abbreviated Journal JMLR  
  Abstract CodaLab Competitions is an open source web platform designed to help data scientists and research teams crowd-source the resolution of machine learning problems through the organization of competitions, also called challenges or contests. CodaLab Competitions provides useful features such as multiple phases, results and code submissions, multi-score leaderboards, and jobs running inside Docker containers. The platform is very flexible and can handle large-scale experiments, allowing organizers to upload large datasets and provide their own CPU or GPU compute workers.  
  Notes HUPBA Approved no  
  Call Number Admin @ si @ PGL2023 Serial 3973  
Author Y. Patel; Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas
  Title Dynamic Lexicon Generation for Natural Scene Images Type Conference Article
  Year 2016 Publication 14th European Conference on Computer Vision Workshops Abbreviated Journal  
  Volume Issue Pages 395-410  
  Keywords scene text; photo OCR; scene understanding; lexicon generation; topic modeling; CNN  
  Abstract Many scene text understanding methods approach the end-to-end recognition problem from a word-spotting perspective and benefit greatly from using small per-image lexicons. Such customized lexicons are normally assumed as given, and their source is rarely discussed. In this paper we propose a method that generates contextualized lexicons for scene images using only visual information. For this, we exploit the correlation between visual and textual information in a dataset consisting of images and textual content associated with them. Using the topic modeling framework to discover a set of latent topics in such a dataset allows us to re-rank a fixed dictionary in a way that prioritizes the words that are more likely to appear in a given image. Moreover, we train a CNN that is able to reproduce those word rankings, but using only the raw image pixels as input. We demonstrate that the quality of the automatically obtained custom lexicons is superior to a generic frequency-based baseline.  
  Address Amsterdam; The Netherlands; October 2016  
  Conference ECCVW  
  Notes DAG; 600.084 Approved no  
  Call Number Admin @ si @ PGR2016 Serial 2825  
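The re-ranking step described in this abstract reduces to a matrix-vector product: given a topic-word matrix from the topic model and an image's predicted topic distribution, every dictionary word gets a probability under the mixture, and the top-K words form the per-image lexicon. A small sketch with made-up sizes and a toy vocabulary:

```python
# Sketch of topic-model-based lexicon re-ranking. The vocabulary,
# number of topics, and random matrices are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["coffee", "exit", "hotel", "museum", "pizza", "sale", "station", "taxi"]
topic_word = rng.dirichlet(np.ones(len(vocab)), size=5)  # 5 topics x |vocab|

def rerank_lexicon(image_topics, k=4):
    """image_topics: length-5 topic distribution predicted by the CNN
    from the raw image pixels."""
    word_scores = image_topics @ topic_word  # P(word | image) under the mixture
    top = np.argsort(-word_scores)[:k]
    return [vocab[i] for i in top]

image_topics = rng.dirichlet(np.ones(5))
print(rerank_lexicon(image_topics))  # contextualized per-image lexicon
```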
Author Y. Patel; Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas; C.V. Jawahar
  Title Self-Supervised Visual Representations for Cross-Modal Retrieval Type Conference Article
  Year 2019 Publication ACM International Conference on Multimedia Retrieval Abbreviated Journal  
  Volume Issue Pages 182–186  
  Abstract Cross-modal retrieval methods have improved significantly in recent years with the use of deep neural networks and large-scale annotated datasets such as ImageNet and Places. However, collecting and annotating such datasets requires a tremendous amount of human effort and, besides, their annotations are limited to discrete sets of popular visual classes that may not be representative of the richer semantics found in large-scale cross-modal retrieval datasets. In this paper, we present a self-supervised cross-modal retrieval framework that leverages as training data the correlations between images and text on the entire set of Wikipedia articles. Our method consists of training a CNN to predict: (1) the semantic context of the article in which an image is most likely to appear as an illustration, and (2) the semantic context of its caption. Our experiments demonstrate that the proposed method is not only capable of learning discriminative visual representations for solving vision tasks like classification, but that the learned representations are better for cross-modal retrieval when compared to supervised pre-training of the network on the ImageNet dataset.  
  Address Ottawa; Canada; June 2019  
  Conference ICMR  
  Notes DAG; 600.121; 600.129 Approved no  
  Call Number Admin @ si @ PGR2019 Serial 3288  
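Once images and texts are embedded in a shared semantic space, cross-modal retrieval itself is just a nearest-neighbor search. A minimal sketch, with random vectors standing in for the CNN image features and text-context representations from the abstract:

```python
# Sketch of cross-modal retrieval by cosine similarity in a shared
# embedding space. Dimensions and vectors are placeholders.
import numpy as np

rng = np.random.default_rng(1)
D = 128
text_bank = rng.normal(size=(1000, D))  # candidate text embeddings

def retrieve(image_embedding, bank, k=5):
    a = image_embedding / np.linalg.norm(image_embedding)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ a                     # cosine similarity to every text
    return np.argsort(-sims)[:k]     # indices of the top-k texts

query = rng.normal(size=D)           # CNN embedding of the query image
print(retrieve(query, text_bank))
```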
Author V. Poulain d'Andecy; Emmanuel Hartmann; Marçal Rusiñol
  Title Field Extraction by hybrid incremental and a-priori structural templates Type Conference Article
  Year 2018 Publication 13th IAPR International Workshop on Document Analysis Systems Abbreviated Journal  
  Volume Issue Pages 251 - 256  
  Keywords Layout Analysis; information extraction; incremental learning  
  Abstract In this paper, we present an incremental framework for extracting information fields from administrative documents. First, we demonstrate some limits of existing state-of-the-art methods, such as the delay before the system becomes efficient. This is a concern in an industrial context where only a few samples of each document class are available. Based on this analysis, we propose a hybrid system combining incremental learning, by means of itf-df statistics, with a-priori generic models. In the experimental section, we report the results obtained with a dataset of real invoices.  
  Address Vienna; Austria; April 2018  
  Conference DAS  
  Notes DAG; 600.084; 600.129; 600.121 Approved no  
  Call Number Admin @ si @ PHR2018 Serial 3106  
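The abstract does not define the itf-df statistics, so as a loose illustration of the incremental-learning side of the system, here is a generic tf-idf-style scheme: for each field, accumulate how often contextual keywords appear next to confirmed extractions, then score candidates in new documents. Every name and formula below is an assumption, not the paper's method.

```python
# Generic illustration of incremental field-context statistics for
# information extraction (a plain tf-idf-like stand-in for itf-df).
from collections import Counter, defaultdict

class IncrementalFieldModel:
    def __init__(self):
        self.context_counts = defaultdict(Counter)  # field -> keyword counts
        self.docs_seen = Counter()                  # field -> #documents

    def update(self, field, context_words):
        """Incremental learning: fold one confirmed extraction back in."""
        self.docs_seen[field] += 1
        self.context_counts[field].update(set(context_words))

    def score(self, field, context_words):
        """Higher when a candidate's context matches past documents."""
        n = self.docs_seen[field] or 1
        return sum(self.context_counts[field][w] / n for w in set(context_words))

model = IncrementalFieldModel()
model.update("invoice_number", ["invoice", "no", "date"])
model.update("invoice_number", ["invoice", "number"])
print(model.score("invoice_number", ["invoice", "no"]))
```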
Author Lorenzo Porzi; Markus Hofinger; Idoia Ruiz; Joan Serrat; Samuel Rota Bulo; Peter Kontschieder
  Title Learning Multi-Object Tracking and Segmentation from Automatic Annotations Type Conference Article
  Year 2020 Publication 33rd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 6845-6854  
  Abstract In this work we contribute a novel pipeline to automatically generate training data, and to improve over state-of-the-art multi-object tracking and segmentation (MOTS) methods. Our proposed track mining algorithm turns raw street-level videos into high-fidelity MOTS training data, is scalable and overcomes the need for expensive and time-consuming manual annotation approaches. We leverage state-of-the-art instance segmentation results in combination with optical flow predictions, also trained on automatically harvested training data. Our second major contribution is MOTSNet – a deep learning, tracking-by-detection architecture for MOTS – deploying a novel mask-pooling layer for improved object association over time. Training MOTSNet with our automatically extracted data leads to significantly improved sMOTSA scores on the novel KITTI MOTS dataset (+1.9%/+7.5% on cars/pedestrians), and MOTSNet improves by +4.1% over previously best methods on the MOTSChallenge dataset. Our most impressive finding is that we can improve over previous best-performing works, even in the complete absence of manually annotated MOTS training data.  
  Address virtual; June 2020  
  Conference CVPR  
  Notes ADAS; 600.124; 600.118 Approved no  
  Call Number Admin @ si @ PHR2020 Serial 3402  
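A plausible reading of the mask-pooling layer mentioned above is averaging the backbone feature map over each instance's segmentation mask, yielding one embedding per instance for association across frames. The sketch below follows that reading; it is an assumption, not the paper's exact implementation.

```python
# Sketch of a mask-pooling layer: per-instance average of a feature map
# under each instance's binary segmentation mask.
import torch

def mask_pool(features, masks, eps=1e-6):
    """features: (C, H, W) feature map; masks: (N, H, W) binary masks.
    Returns (N, C) per-instance embeddings."""
    m = masks.float().unsqueeze(1)              # (N, 1, H, W)
    f = features.unsqueeze(0)                   # (1, C, H, W)
    summed = (m * f).sum(dim=(2, 3))            # (N, C) masked feature sums
    area = m.sum(dim=(2, 3)).clamp_min(eps)     # (N, 1) mask areas
    return summed / area

feats = torch.rand(256, 32, 64)
masks = torch.randint(0, 2, (3, 32, 64))
emb = mask_pool(feats, masks)                   # one embedding per instance
print(emb.shape)  # torch.Size([3, 256]) -> used for association over time
```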
Author Monica Piñol
  Title Adaptative Vocabulary Tree for Image Classification using Reinforcement Learning Type Report
  Year 2010 Publication CVC Technical Report Abbreviated Journal  
  Volume 162 Issue Pages  
  Address Bellaterra (Barcelona)  
  Corporate Author Computer Vision Center Thesis Master's thesis  
  Notes ADAS Approved no  
  Call Number Admin @ si @ Piñ2010 Serial 1936  
Author Monica Piñol
  Title Reinforcement Learning of Visual Descriptors for Object Recognition Type Book Whole
  Year 2014 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
  Abstract The human visual system is able to recognize the object in an image even if the object is partially occluded, seen from various points of view, in different colors, or regardless of the distance to the object. To do this, the eye obtains an image and extracts features that are sent to the brain, and then, in the brain, the object is recognized. In computer vision, the object recognition branch tries to learn from the behaviour of the human visual system to achieve its goal. Hence, an algorithm is used to identify representative features of the scene (detection), then another algorithm is used to describe these points (descriptor), and finally the extracted information is used to classify the object in the scene. The selection of this set of algorithms is a very complicated task and thus a very active research field. In this thesis we focus on the selection/learning of the best descriptor for a given image. The state of the art offers several descriptors, but we do not know how to choose the best one because it depends on the scenes to be used (the dataset) and on the algorithm chosen for the classification. We propose a framework based on reinforcement learning and bag of features to choose the best descriptor according to the given image. The system can analyse the behaviour of different learning algorithms and descriptor sets. Furthermore, the proposed framework for improving the classification/recognition ratio can be used, with minor changes, in other computer vision fields, such as video retrieval.  
  Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Editor Ricardo Toledo; Angel Sappa  
  ISBN 978-84-940902-5-7  
  Notes ADAS; 600.076 Approved no  
  Call Number Admin @ si @ Piñ2014 Serial 2464  
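The thesis frames descriptor selection as a reinforcement-learning problem. As a rough illustration, it can be sketched as a contextual bandit: each descriptor is an action, and validation accuracy is the reward. The epsilon-greedy update, the two-state context, and the random reward below are illustrative assumptions, not the thesis' exact scheme.

```python
# Sketch of descriptor selection as an epsilon-greedy contextual bandit.
import random
from collections import defaultdict

DESCRIPTORS = ["SIFT", "SURF", "BRIEF", "ORB"]

class DescriptorSelector:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.q = defaultdict(float)   # (state, descriptor) -> value estimate
        self.n = defaultdict(int)

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(DESCRIPTORS)          # explore
        return max(DESCRIPTORS, key=lambda d: self.q[(state, d)])  # exploit

    def update(self, state, descriptor, reward):
        key = (state, descriptor)
        self.n[key] += 1
        self.q[key] += (reward - self.q[key]) / self.n[key]  # running mean

sel = DescriptorSelector()
for _ in range(100):
    state = random.choice(["indoor", "outdoor"])   # coarse image context
    d = sel.choose(state)
    reward = random.random()   # stand-in for classification accuracy with d
    sel.update(state, d, reward)
print(max(DESCRIPTORS, key=lambda d: sel.q[("indoor", d)]))
```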
Author Xavier Perez Sala; Laura Igual; Sergio Escalera; Cecilio Angulo
  Title Uniform Sampling of Rotations for Discrete and Continuous Learning of 2D Shape Models Type Book Chapter
  Year 2012 Publication Vision Robotics: Technologies for Machine Learning and Vision Applications Abbreviated Journal  
  Volume Issue 2 Pages 23-42  
  Abstract Different methodologies for uniform sampling over the rotation group, SO(3), for building unbiased 2D shape models from 3D objects are introduced and reviewed in this chapter. State-of-the-art non-uniform sampling approaches are discussed, and uniform sampling methods using Euler angles and quaternions are introduced. Moreover, since the presented work is oriented to model-building applications, it is not limited to general discrete methods for obtaining uniform 3D rotations, but also addresses the continuous case, as required by Procrustes Analysis.  
  Publisher IGI-Global  
  Notes MILAB; HuPBA Approved no  
  Call Number Admin @ si @ PIE2012 Serial 2064  
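One of the standard quaternion-based techniques surveyed in such work is Shoemake's construction, which turns three uniform variates into a quaternion distributed uniformly over SO(3). A minimal sketch:

```python
# Uniform sampling of SO(3) via random unit quaternions (Shoemake's
# construction): three U(0,1) variates give a uniformly distributed
# rotation, unlike naive uniform Euler angles, which oversample the poles.
import math
import random

def random_rotation_quaternion():
    u1, u2, u3 = random.random(), random.random(), random.random()
    return (math.sqrt(1 - u1) * math.sin(2 * math.pi * u2),
            math.sqrt(1 - u1) * math.cos(2 * math.pi * u2),
            math.sqrt(u1) * math.sin(2 * math.pi * u3),
            math.sqrt(u1) * math.cos(2 * math.pi * u3))

q = random_rotation_quaternion()
print(q, sum(c * c for c in q))  # unit quaternion: squared norm == 1.0
```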