Author Andres Traumann; Sergio Escalera; Gholamreza Anbarjafari
  Title A New Retexturing Method for Virtual Fitting Room Using Kinect 2 Camera Type Conference Article
  Year 2015 Publication 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) Abbreviated Journal  
  Volume Issue Pages 75-79  
  Keywords  
  Abstract  
  Address Boston; USA; June 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ TEA2015 Serial 2653  
Permanent link to this record
 

 
Author Ramin Irani; Kamal Nasrollahi; Chris Bahnsen; D.H. Lundtoft; Thomas B. Moeslund; Marc O. Simon; Ciprian Corneanu; Sergio Escalera; Tanja L. Pedersen; Maria-Louise Klitgaard; Laura Petrini
  Title Spatio-temporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition Type Conference Article
  Year 2015 Publication 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) Abbreviated Journal  
  Volume Issue Pages 88-95  
  Keywords  
  Abstract Pain is a vital sign of human health and its automatic detection can be of crucial importance in many different contexts, including medical scenarios. While most available computer vision techniques are based on RGB, in this paper, we investigate the effect of combining RGB, depth, and thermal facial images for pain detection and pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain and recognizes between three intensity levels in 82% of the analyzed frames improving more than 6% over RGB only analysis in similar conditions.  
  Address Boston; USA; June 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ INB2015 Serial 2654  
Permanent link to this record
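The spatio-temporal filtering step mentioned in the abstract above is not specified in this record, so the following Python/numpy snippet is only a minimal sketch of one plausible reading of it (per-pixel temporal-gradient energy accumulated over a sliding window); the function name and window size are assumptions, not the paper's implementation.

```python
import numpy as np

def spatiotemporal_energy(frames, window=5):
    """Per-pixel motion energy over a sliding temporal window.

    frames: array of shape (T, H, W), grayscale values in [0, 1].
    Returns an array of shape (T - window, H, W) where each entry is the
    summed squared temporal gradient inside the window -- a crude stand-in
    for the spatio-temporal filtering described in the paper.
    """
    dt = np.diff(frames, axis=0) ** 2          # squared frame-to-frame change
    energies = np.stack([dt[t:t + window].sum(axis=0)
                         for t in range(dt.shape[0] - window + 1)])
    return energies

# toy usage on random data standing in for facial RGB/depth/thermal frames
frames = np.random.rand(30, 64, 64)
e = spatiotemporal_energy(frames)
print(e.shape)  # (25, 64, 64)
```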
 

 
Author Aniol Lidon; Xavier Giro; Marc Bolaños; Petia Radeva; Markus Seidl; Matthias Zeppelzauer
  Title UPC-UB-STP @ MediaEval 2015 diversity task: iterative reranking of relevant images Type Conference Article
  Year 2015 Publication 2015 MediaEval Retrieving Diverse Images Task Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract This paper presents the results of the UPC-UB-STP team in the 2015 MediaEval Retrieving Diverse Images Task. The goal of the challenge is to provide a ranked list of Flickr photos for a predefined set of queries. Our approach firstly generates a ranking of images based on a query-independent estimation of its relevance. Only top results are kept and iteratively re-ranked based on their intra-similarity to introduce diversity.  
  Address Wurzen; Germany; September 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference MediaEval  
  Notes MILAB Approved no  
  Call Number Admin @ si @LGB2016 Serial 2793  
Permanent link to this record
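The strategy summarized above (keep the top-ranked relevant images, then iteratively re-rank them by intra-similarity to introduce diversity) resembles a greedy diversification loop. The sketch below is a hypothetical illustration of that idea, assuming precomputed descriptors and cosine similarity; it is not the UPC-UB-STP code.

```python
import numpy as np

def diversify(features, relevance, keep=50, k=20):
    """Greedy diversity re-ranking sketch.

    features:  (N, D) image descriptors, one row per Flickr photo.
    relevance: (N,) relevance scores from a query-independent model.
    keep:      how many top-relevance candidates to re-rank.
    k:         length of the final diversified list.
    """
    order = np.argsort(-relevance)[:keep]          # keep only top results
    feats = features[order]
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)

    selected = [0]                                  # start from the most relevant
    while len(selected) < min(k, len(order)):
        sims = feats @ feats[selected].T            # cosine similarity to the chosen set
        max_sim = sims.max(axis=1)
        max_sim[selected] = np.inf                  # never re-pick an item
        selected.append(int(np.argmin(max_sim)))    # least similar = most diverse
    return order[selected]

feats = np.random.rand(200, 128)
rel = np.random.rand(200)
print(diversify(feats, rel)[:5])
```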
 

 
Author Mickael Cormier; Andreas Specker; Julio C. S. Jacques; Lucas Florin; Jurgen Metzler; Thomas B. Moeslund; Kamal Nasrollahi; Sergio Escalera; Jurgen Beyerer
  Title UPAR Challenge: Pedestrian Attribute Recognition and Attribute-based Person Retrieval – Dataset, Design, and Results Type Conference Article
  Year 2023 Publication 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops Abbreviated Journal  
  Volume Issue Pages 166-175  
  Keywords  
  Abstract In civilian video security monitoring, retrieving and tracking a person of interest often rely on witness testimony and their appearance description. Deployed systems rely on a large amount of annotated training data and are expected to show consistent performance in diverse areas and generalize well between diverse settings w.r.t. different viewpoints, illumination, resolution, occlusions, and poses for indoor and outdoor scenes. However, for such generalization, the system would require a large amount of various annotated data for training and evaluation. The WACV 2023 Pedestrian Attribute Recognition and Attribute-based Person Retrieval Challenge (UPAR-Challenge) aimed to spotlight the problem of domain gaps in a real-world surveillance context and highlight the challenges and limitations of existing methods. The UPAR dataset, composed of 40 important binary attributes over 12 attribute categories across four datasets, was extended with data captured from a low-flying UAV from the P-DESTRE dataset. To this aim, 0.6M additional annotations were manually labeled and validated. Each track evaluated the robustness of the competing methods to domain shifts by training on limited data from a specific domain and evaluating using data from unseen domains. The challenge attracted 41 registered participants, but only one team managed to outperform the baseline on one track, emphasizing the task's difficulty. This work describes the challenge design, the adopted dataset, obtained results, as well as future directions on the topic.  
  Address Waikoloa; Hawaii; USA; January 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference WACVW  
  Notes HUPBA Approved no  
  Call Number Admin @ si @ CSJ2023 Serial 3902  
Permanent link to this record
 

 
Author Dawid Rymarczyk; Joost van de Weijer; Bartosz Zielinski; Bartlomiej Twardowski
  Title ICICLE: Interpretable Class Incremental Continual Learning Type Conference Article  
  Year 2023 Publication 20th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 1887-1898  
  Keywords  
  Abstract Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free approach that adopts a prototypical part-based approach. It consists of three crucial novelties: interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; proximity-based prototype initialization strategy dedicated to the fine-grained setting; and task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces the interpretability concept drift and outperforms the existing exemplar-free methods of common class-incremental learning when applied to concept-based models.  
  Address Paris; France; October 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes LAMP Approved no  
  Call Number Admin @ si @ RWZ2023 Serial 3947  
Permanent link to this record
 

 
Author Jordy Van Landeghem; Ruben Tito; Lukasz Borchmann; Michal Pietruszka; Pawel Joziak; Rafal Powalski; Dawid Jurkiewicz; Mickael Coustaty; Bertrand Anckaert; Ernest Valveny; Matthew Blaschko; Sien Moens; Tomasz Stanislawek
  Title Document Understanding Dataset and Evaluation (DUDE) Type Conference Article
  Year 2023 Publication 20th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 19528-19540  
  Keywords  
  Abstract We call on the Document AI (DocAI) community to re-evaluate current methodologies and embrace the challenge of creating more practically-oriented benchmarks. Document Understanding Dataset and Evaluation (DUDE) seeks to remediate the halted research progress in understanding visually-rich documents (VRDs). We present a new dataset with novelties related to types of questions, answers, and document layouts based on multi-industry, multi-domain, and multi-page VRDs of various origins and dates. Moreover, we are pushing the boundaries of current methods by creating multi-task and multi-domain evaluation setups that more accurately simulate real-world situations where powerful generalization and adaptation under low-resource settings are desired. DUDE aims to set a new standard as a more practical, long-standing benchmark for the community, and we hope that it will lead to future extensions and contributions that address real-world challenges. Finally, our work illustrates the importance of finding more efficient ways to model language, images, and layout in DocAI.  
  Address Paris; France; October 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes DAG Approved no  
  Call Number Admin @ si @ LTB2023 Serial 3948  
Permanent link to this record
 

 
Author Yuyang Liu; Yang Cong; Dipam Goswami; Xialei Liu; Joost Van de Weijer
  Title Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection Type Conference Article
  Year 2023 Publication 20th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 11367-11377  
  Keywords  
  Abstract In incremental learning, replaying stored samples from previous tasks together with current task samples is one of the most efficient approaches to address catastrophic forgetting. However, unlike incremental classification, image replay has not been successfully applied to incremental object detection (IOD). In this paper, we identify the overlooked problem of foreground shift as the main reason for this. Foreground shift only occurs when replaying images of previous tasks and refers to the fact that their background might contain foreground objects of the current task. To overcome this problem, a novel and efficient Augmented Box Replay (ABR) method is developed that only stores and replays foreground objects and thereby circumvents the foreground shift problem. In addition, we propose an innovative Attentive RoI Distillation loss that uses spatial attention from region-of-interest (RoI) features to constrain current model to focus on the most important information from old model. ABR significantly reduces forgetting of previous classes while maintaining high plasticity in current classes. Moreover, it considerably reduces the storage requirements when compared to standard image replay. Comprehensive experiments on Pascal-VOC and COCO datasets support the state-of-the-art performance of our model.  
  Address Paris; France; October 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes LAMP Approved no  
  Call Number Admin @ si @ LCG2023 Serial 3949  
Permanent link to this record
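Augmented Box Replay, as described above, stores only foreground objects of previous tasks and replays them into current-task images. The snippet below is a deliberately simplified illustration of that replay step (a naive paste of stored crops at random positions); the real method uses more careful mixing strategies and an additional Attentive RoI Distillation loss that is not shown here.

```python
import numpy as np

def augment_with_replayed_boxes(image, boxes_store, rng=np.random.default_rng()):
    """Paste stored foreground crops of old classes into the current image.

    image:       (H, W, 3) current-task training image.
    boxes_store: list of (crop, label) pairs, where crop is an (h, w, 3) array
                 of an old-class object cut out in a previous task.
    Returns the augmented image and the pasted ground-truth boxes.
    """
    H, W, _ = image.shape
    out = image.copy()
    gt = []
    for crop, label in boxes_store:
        h, w, _ = crop.shape
        if h >= H or w >= W:
            continue                                   # skip crops that do not fit
        y = int(rng.integers(0, H - h))
        x = int(rng.integers(0, W - w))
        out[y:y + h, x:x + w] = crop                   # naive paste, no blending
        gt.append((x, y, x + w, y + h, label))
    return out, gt

img = np.zeros((256, 256, 3), dtype=np.uint8)
store = [(np.full((40, 30, 3), 255, dtype=np.uint8), 1)]
aug, boxes = augment_with_replayed_boxes(img, store)
print(boxes)
```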
 

 
Author Rahat Khan; Joost Van de Weijer; Dimosthenis Karatzas; Damien Muselet
  Title Towards multispectral data acquisition with hand-held devices Type Conference Article
  Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages 2053 - 2057  
  Keywords Multispectral; mobile devices; color measurements  
  Abstract We propose a method to acquire multispectral data with handheld devices with front-mounted RGB cameras. We propose to use the display of the device as an illuminant while the camera captures images illuminated by the red, green and blue primaries of the display. Three illuminants and three response functions of the camera lead to nine response values which are used for reflectance estimation. Results are promising and show that the accuracy of the spectral reconstruction improves in the range from 30-40% over the spectral reconstruction based on a single illuminant. Furthermore, we propose to compute sensor-illuminant aware linear basis by discarding the part of the reflectances that falls in the sensor-illuminant null-space. We show experimentally that optimizing reflectance estimation on these new basis functions decreases the RMSE significantly over basis functions that are independent to sensor-illuminant. We conclude that multispectral data acquisition is potentially possible with consumer hand-held devices such as tablets, mobiles, and laptops, opening up applications which are currently considered to be unrealistic.  
  Address Melbourne; Australia; September 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICIP  
  Notes CIC; DAG; 600.048 Approved no  
  Call Number Admin @ si @ KWK2013b Serial 2265  
Permanent link to this record
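The abstract above describes how three display primaries and three camera channels yield nine responses that are used to estimate reflectance on a learned linear basis. The sketch below illustrates that estimation step with entirely synthetic sensitivities, illuminants, and basis (all assumptions, not data from the paper), solving the nine-equation system by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 31                      # number of wavelength samples (e.g. 400-700 nm in 10 nm steps)
K = 6                       # dimensionality of the learned reflectance basis

S = rng.random((3, W))      # camera sensitivities (R, G, B rows) -- synthetic
L = rng.random((3, W))      # display primaries used as illuminants -- synthetic
B = np.linalg.qr(rng.random((W, K)))[0]   # orthonormal reflectance basis -- synthetic

def responses(reflectance):
    """Nine responses: 3 illuminants x 3 channels for one surface reflectance."""
    return np.concatenate([S @ (L[i] * reflectance) for i in range(3)])

def estimate(resp):
    """Least-squares estimate of the reflectance on the basis B."""
    A = np.concatenate([S @ (L[i][:, None] * B) for i in range(3)])  # (9, K) system
    coeffs, *_ = np.linalg.lstsq(A, resp, rcond=None)
    return B @ coeffs

true_r = np.clip(B @ rng.standard_normal(K), 0, None)
est_r = estimate(responses(true_r))
print(np.sqrt(np.mean((true_r - est_r) ** 2)))    # RMSE of the spectral reconstruction
```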
 

 
Author Shida Beigpour; Marc Serra; Joost Van de Weijer; Robert Benavente; Maria Vanrell; Olivier Penacchio; Dimitris Samaras
  Title Intrinsic Image Evaluation On Synthetic Complex Scenes Type Conference Article
  Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages 285 - 289  
  Keywords  
  Abstract Scene decomposition into its illuminant, shading, and reflectance intrinsic images is an essential step for scene understanding. Collecting intrinsic image groundtruth data is a laborious task. The assumptions on which the ground-truth procedures are based limit their application to simple scenes with a single object taken in the absence of indirect lighting and interreflections. We investigate synthetic data for intrinsic image research since the extraction of ground truth is straightforward, and it allows for scenes in more realistic situations (e.g., multiple illuminants and interreflections). With this dataset we aim to motivate researchers to further explore intrinsic image decomposition in complex scenes.  
  Address Melbourne; Australia; September 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICIP  
  Notes CIC; 600.048; 600.052; 600.051 Approved no  
  Call Number Admin @ si @ BSW2013 Serial 2264  
Permanent link to this record
 

 
Author Katerine Diaz; Francesc J. Ferri; W. Diaz
  Title Fast Approximated Discriminative Common Vectors using rank-one SVD updates Type Conference Article
  Year 2013 Publication 20th International Conference On Neural Information Processing Abbreviated Journal  
  Volume 8228 Issue III Pages 368-375  
  Keywords  
  Abstract An efficient incremental approach to the discriminative common vector (DCV) method for dimensionality reduction and classification is presented. The proposal consists of a rank-one update along with an adaptive restriction on the rank of the null space which leads to an approximate but convenient solution. The algorithm can be implemented very efficiently in terms of matrix operations and space complexity, which enables its use in large-scale dynamic application domains. Deep comparative experimentation using publicly available high dimensional image datasets has been carried out in order to properly assess the proposed algorithm against several recent incremental formulations.
  Address Daegu; Korea; November 2013  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-42050-4 Medium  
  Area Expedition Conference ICONIP  
  Notes ADAS Approved no  
  Call Number Admin @ si @ DFD2013 Serial 2439  
Permanent link to this record
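The record above concerns an incremental DCV formulation driven by rank-one SVD updates. For background only, the snippet below shows the standard incremental thin-SVD update for appending one column (a special case of a rank-one update); it is not the paper's full DCV algorithm, and the adaptive null-space rank restriction it mentions is not modeled here.

```python
import numpy as np

def svd_append_column(U, s, Vt, x):
    """Update a thin SVD  X = U @ diag(s) @ Vt  after appending column x.

    Standard incremental-SVD step: project x on the current left basis,
    keep the orthogonal residual, and re-diagonalise a small core matrix.
    """
    p = U.T @ x                       # component of x inside the current subspace
    r = x - U @ p
    rn = np.linalg.norm(r)
    j = r / rn if rn > 1e-12 else np.zeros_like(r)

    k = s.size
    K = np.zeros((k + 1, k + 1))      # small core matrix
    K[:k, :k] = np.diag(s)
    K[:k, -1] = p
    K[-1, -1] = rn

    Uk, sk, Vtk = np.linalg.svd(K)
    U_new = np.hstack([U, j[:, None]]) @ Uk
    V_new = np.block([[Vt.T, np.zeros((Vt.shape[1], 1))],
                      [np.zeros((1, Vt.shape[0])), np.ones((1, 1))]]) @ Vtk.T
    return U_new, sk, V_new.T

# sanity check against a batch SVD
X = np.random.rand(20, 5)
x = np.random.rand(20)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U2, s2, Vt2 = svd_append_column(U, s, Vt, x)
print(np.allclose(U2 @ np.diag(s2) @ Vt2, np.hstack([X, x[:, None]])))  # True
```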
 

 
Author Jaume Amores
  Title Vocabulary-based Approaches for Multiple-Instance Data: a Comparative Study Type Conference Article
  Year 2010 Publication 20th International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 4246–4250  
  Keywords  
  Abstract Multiple Instance Learning (MIL) has become a hot topic and many different algorithms have been proposed in the last years. Despite this fact, there is a lack of comparative studies that shed light into the characteristics of the different methods and their behavior in different scenarios. In this paper we provide such an analysis. We include methods from different families, and pay special attention to vocabulary-based approaches, a new family of methods that has not received much attention in the MIL literature. The empirical comparison includes seven databases from four heterogeneous domains, implementations of eight popular MIL methods, and a study of the behavior under synthetic conditions. Based on this analysis, we show that, with an appropriate implementation, vocabulary-based approaches outperform other MIL methods in most of the cases, showing in general a more consistent performance.  
  Address Istanbul, Turkey  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4244-7542-1 Medium  
  Area Expedition Conference ICPR  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ Amo2010 Serial 1295  
Permanent link to this record
 

 
Author Fadi Dornaika; Bogdan Raducanu
  Title Person-specific face shape estimation under varying head pose from single snapshots Type Conference Article
  Year 2010 Publication 20th International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 3496–3499  
  Keywords  
  Abstract This paper presents a new method for person-specific face shape estimation under varying head pose of a previously unseen person from a single image. We describe a featureless approach based on a deformable 3D model and a learned face subspace. The proposed approach is based on maximizing a likelihood measure associated with a learned face subspace, which is carried out by a stochastic and genetic optimizer. We conducted the experiments on a subset of Honda Video Database showing the feasibility and robustness of the proposed approach. For this reason, our approach could lend itself nicely to complex frameworks involving 3D face tracking and face gesture recognition in monocular videos.  
  Address Istanbul, Turkey  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4244-7542-1 Medium  
  Area Expedition Conference ICPR  
  Notes OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ DoR2010b Serial 1361  
Permanent link to this record
 

 
Author Francesco Ciompi; Oriol Pujol; Petia Radeva
  Title A meta-learning approach to Conditional Random Fields using Error-Correcting Output Codes Type Conference Article
  Year 2010 Publication 20th International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 710–713  
  Keywords  
  Abstract We present a meta-learning framework for the design of potential functions for Conditional Random Fields. The design of both node potential and edge potential is formulated as a classification problem where margin classifiers are used. The set of state transitions for the edge potential is treated as a set of different classes, thus defining a multi-class learning problem. The Error-Correcting Output Codes (ECOC) technique is used to deal with the multi-class problem. Furthermore, the point defined by the combination of margin classifiers in the ECOC space is interpreted in a probabilistic manner, and the obtained distance values are then converted into potential values. The proposed model exhibits very promising results when applied to two real detection problems.  
  Address Istanbul; Turkey  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4244-7542-1 Medium  
  Area Expedition Conference ICPR  
  Notes MILAB;HUPBA Approved no  
  Call Number BCNPCL @ bcnpcl @ CPR2010a Serial 1365  
Permanent link to this record
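The abstract above interprets the point formed by the binary margin classifiers in ECOC space probabilistically and converts the resulting distances into potential values. The snippet below is a loose illustration of such a conversion (a softmax over negative codeword distances); the coding matrix and the temperature are toy assumptions, and the paper's actual calibration may differ.

```python
import numpy as np

def ecoc_potentials(classifier_outputs, codewords, beta=1.0):
    """Map ECOC-space distances to per-class potential values.

    classifier_outputs: (B,) real-valued margins of the B binary classifiers.
    codewords:          (C, B) coding matrix with entries in {-1, +1}.
    Returns a (C,) vector of potentials that sums to one.
    """
    d = np.linalg.norm(codewords - classifier_outputs[None, :], axis=1)  # distance per class
    logits = -beta * d
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits)
    return p / p.sum()

codes = np.array([[+1, +1, -1],                  # toy 3-class, 3-dichotomy coding matrix
                  [+1, -1, +1],
                  [-1, +1, +1]])
outputs = np.array([0.9, 0.4, -0.7])             # margins from the binary classifiers
print(ecoc_potentials(outputs, codes))
```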
 

 
Author David Augusto Rojas; Fahad Shahbaz Khan; Joost Van de Weijer
  Title The Impact of Color on Bag-of-Words based Object Recognition Type Conference Article
  Year 2010 Publication 20th International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 1549–1553  
  Keywords  
  Abstract In recent years several works have aimed at exploiting color information in order to improve the bag-of-words based image representation. There are two stages in which color information can be applied in the bag-of-words framework. Firstly, feature detection can be improved by choosing highly informative color-based regions. Secondly, feature description, typically focusing on shape, can be improved with a color description of the local patches. Although both approaches have been shown to improve results, the combined merits have not yet been analyzed. Therefore, in this paper we investigate the combined contribution of color to both the feature detection and extraction stages. Experiments performed on two challenging data sets, namely Flower and Pascal VOC 2009, clearly demonstrate that incorporating color in both feature detection and extraction significantly improves the overall performance.  
  Address Istanbul (Turkey)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4244-7542-1 Medium  
  Area Expedition Conference ICPR  
  Notes Approved no  
  Call Number CAT @ cat @ RKW2010 Serial 1415  
Permanent link to this record
 

 
Author Murad Al Haj; Andrew Bagdanov; Jordi Gonzalez; Xavier Roca
  Title Reactive object tracking with a single PTZ camera Type Conference Article
  Year 2010 Publication 20th International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 1690–1693  
  Keywords  
  Abstract In this paper we describe a novel approach to reactive tracking of moving targets with a pan-tilt-zoom camera. The approach uses an extended Kalman filter to jointly track the object position in the real world, its velocity in 3D and the camera intrinsics, in addition to the rate of change of these parameters. The filter outputs are used as inputs to PID controllers which continuously adjust the camera motion in order to reactively track the object at a constant image velocity while simultaneously maintaining a desirable target scale in the image plane. We provide experimental results on simulated and real tracking sequences to show how our tracker is able to accurately estimate both 3D object position and camera intrinsics with very high precision over a wide range of focal lengths.  
  Address Istanbul (Turkey)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4244-7542-1 Medium  
  Area Expedition Conference ICPR  
  Notes ISE Approved no  
  Call Number DAG @ dag @ ABG2010 Serial 1418  
Permanent link to this record
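The tracker described above couples an extended Kalman filter with PID controllers that continuously adjust camera motion. The snippet below is only a generic discrete PID loop driving a pan command so that a target's horizontal image offset decays toward zero; the gains, the time step, and the crude scene-response model are arbitrary assumptions, not the paper's tuned controller.

```python
class PID:
    """Minimal discrete PID controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt=1.0):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# toy loop: drive the pan velocity so the target's horizontal image offset decays
pan_pid = PID(kp=0.5, ki=0.05, kd=0.1)
offset = 40.0                                    # target offset from image centre, in pixels
for _ in range(50):
    pan_speed = pan_pid.step(offset)             # controller output = pan velocity command
    offset -= 0.5 * pan_speed                    # crude stand-in for the camera/scene response
print(round(offset, 2))                          # close to zero after 50 steps
```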