
Author Bogdan Raducanu; Fadi Dornaika
  Title Dynamic Facial Expression Recognition Using Laplacian Eigenmaps-Based Manifold Learning Type Conference Article
  Year 2010 Publication IEEE International Conference on Robotics and Automation Abbreviated Journal  
  Volume Issue Pages 156–161  
  Keywords  
  Abstract In this paper, we propose an integrated framework for tracking, modelling and recognition of facial expressions. The main contributions are: (i) a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker; (ii) the complexity of the non-linear facial expression space is modelled through a manifold whose structure is learned using Laplacian Eigenmaps, and the projected facial expressions are afterwards recognized with a Nearest Neighbor classifier; (iii) with the proposed approach, we developed an application for an AIBO robot, in which it mirrors the perceived facial expression.
  Address Anchorage; AK; USA;  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1050-4729 ISBN 978-1-4244-5038-1 Medium  
  Area Expedition Conference ICRA  
  Notes OR; MV Approved no  
  Call Number BCNPCL @ bcnpcl @ RaD2010 Serial 1310  
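
For readers who want the general recipe of the record above (Laplacian Eigenmaps over tracker-derived facial action parameters, then nearest-neighbour recognition in the embedded space), a minimal Python sketch follows. It uses scikit-learn's SpectralEmbedding as a stand-in for Laplacian Eigenmaps; the feature matrix X (facial action parameters from the 3D tracker) and the expression labels y are assumed inputs, and this is not the authors' implementation.

    import numpy as np
    from sklearn.manifold import SpectralEmbedding      # Laplacian Eigenmaps
    from sklearn.neighbors import KNeighborsClassifier

    # Assumed inputs: X holds per-frame facial action parameters from the
    # 3D face tracker, y holds the corresponding expression labels.
    X = np.random.rand(500, 6)
    y = np.random.randint(0, 6, size=500)

    # Learn the non-linear expression manifold with Laplacian Eigenmaps.
    embedder = SpectralEmbedding(n_components=3, affinity="nearest_neighbors", n_neighbors=10)
    Z = embedder.fit_transform(X)

    # Recognise expressions with a Nearest Neighbour classifier in the embedded space.
    clf = KNeighborsClassifier(n_neighbors=1).fit(Z, y)
    print("training accuracy:", clf.score(Z, y))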
 

 
Author Bogdan Raducanu; Fadi Dornaika
  Title Appearance-based Face Recognition Using A Supervised Manifold Learning Framework Type Conference Article
  Year 2012 Publication IEEE Workshop on the Applications of Computer Vision Abbreviated Journal  
  Volume Issue Pages 465-470  
  Keywords  
  Abstract Many natural image sets, depicting objects whose appearance changes due to motion, pose or light variations, can be considered samples of a low-dimensional nonlinear manifold embedded in the high-dimensional observation space (the space of all possible images). The main contribution of our work is a Supervised Laplacian Eigenmaps (S-LE) algorithm, which exploits the class label information for mapping the original data into the embedded space. Our proposed approach benefits from two important properties: i) it is discriminative, and ii) it adaptively selects the neighbors of a sample without using any predefined neighborhood size. Experiments were conducted on four face databases and the results demonstrate that the proposed algorithm significantly outperforms many linear and non-linear embedding techniques. Although we've focused on the face recognition problem, the proposed approach could also be extended to other categories of objects characterized by large variance in their appearance.
  Address Breckenridge; CO; USA  
  Corporate Author Thesis  
  Publisher IEEE Xplore Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1550-5790 ISBN 978-1-4673-0233-3 Medium  
  Area Expedition Conference WACV  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ RaD2012d Serial 1890  
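
The Supervised Laplacian Eigenmaps idea summarised above can be illustrated with a rough, label-aware variant of the standard Laplacian Eigenmaps eigenproblem. The sketch below is only indicative: it boosts within-class affinities and damps between-class ones with fixed factors, whereas the published S-LE also selects neighbours adaptively per sample; the function name and parameters are hypothetical.

    import numpy as np
    from scipy.linalg import eigh
    from scipy.spatial.distance import cdist

    def supervised_laplacian_eigenmaps(X, y, n_components=2, sigma=1.0, within_boost=1.0):
        # Heat-kernel affinities, strengthened for same-class pairs and damped
        # for different-class pairs (adaptive neighbourhood selection omitted).
        W = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma ** 2))
        same = (y[:, None] == y[None, :]).astype(float)
        W = W * (same * (1.0 + within_boost) + (1.0 - same) * 0.1)
        np.fill_diagonal(W, 0.0)
        D = np.diag(W.sum(axis=1))
        L = D - W
        # Generalised eigenproblem L v = lambda D v; drop the trivial first eigenvector.
        vals, vecs = eigh(L, D)
        return vecs[:, 1:n_components + 1]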
 

 
Author Bogdan Raducanu; Fadi Dornaika
  Title Out-of-Sample Embedding by Sparse Representation Type Conference Article
  Year 2012 Publication Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR International Workshop Abbreviated Journal  
  Volume 7626 Issue Pages 336-344  
  Keywords  
  Abstract A critical aspect of non-linear dimensionality reduction techniques is the construction of the adjacency graph. The difficulty resides in finding the optimal parameters, a process which, in general, is heuristically driven. Recently, sparse representation has been proposed as a non-parametric solution to overcome this problem. In this paper, we demonstrate that this approach not only serves for graph construction, but also represents an efficient and accurate alternative for out-of-sample embedding. Taking Laplacian Eigenmaps as a case study, we applied our method to the face recognition problem. Experimental results on several challenging datasets confirmed the robustness of our approach and its superiority over existing techniques.
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-34165-6 Medium  
  Area Expedition Conference SSPR&SPR  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ RaD2012c Serial 2175  
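
The out-of-sample mechanism described above lends itself to a compact sketch: express a new sample as a sparse combination of the training samples and reuse the same coefficients on their embeddings. The version below uses scikit-learn's Lasso as a generic sparse coder; the regularisation value and the normalisation step are illustrative choices, not the paper's.

    import numpy as np
    from sklearn.linear_model import Lasso

    def out_of_sample_embed(x_new, X_train, Z_train, alpha=0.01):
        # x_new: (d,) new sample; X_train: (n, d) training samples already
        # embedded as Z_train: (n, k), e.g. by Laplacian Eigenmaps.
        # 1) sparse code: x_new ~ X_train.T @ w, with w sparse
        coder = Lasso(alpha=alpha, max_iter=5000)
        coder.fit(X_train.T, x_new)
        w = coder.coef_
        # 2) reuse the same (normalised) coefficients on the training embeddings
        if np.abs(w).sum() > 0:
            w = w / np.abs(w).sum()
        return Z_train.T @ w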
 

 
Author Bogdan Raducanu; Fadi Dornaika
  Title Pose-Invariant Face Recognition in Videos for Human-Machine Interaction Type Conference Article
  Year 2012 Publication 12th European Conference on Computer Vision Abbreviated Journal  
  Volume 7584 Issue Pages 566-575  
  Keywords  
  Abstract Human-machine interaction is a hot topic nowadays in the communities of computer vision and robotics. In this context, face recognition algorithms (used as a primary cue for a person’s identity assessment) work well under controlled conditions but degrade significantly when tested in real-world environments. This is mostly due to the difficulty of simultaneously handling variations in illumination, pose, and occlusions. In this paper, we propose a novel approach for robust pose-invariant face recognition for human-robot interaction based on the real-time fitting of a 3D deformable model to input images taken from video sequences. More concretely, our approach generates a rectified face image irrespective of the actual head-pose orientation. Experimental results performed on the Honda video database, using several manifold learning techniques, show a distinct advantage of the proposed method over the standard 2D appearance-based snapshot approach.
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-33867-0 Medium  
  Area Expedition Conference ECCVW  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ RaD2012e Serial 2182  
 

 
Author Bogdan Raducanu; Jordi Vitria
  Title Online Learning for Human-Robot Interaction Type Conference Article
  Year 2007 Publication IEEE Conference on Computer Vision and Pattern Recognition Workshop on Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Minneapolis (USA)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes OR; MV Approved no  
  Call Number BCNPCL @ bcnpcl @ RaV2007a Serial 791  
 

 
Author Bogdan Raducanu; Jordi Vitria
  Title Incremental Subspace Learning for Cognitive Visual Processes Type Conference Article
  Year 2007 Publication Advances in Brain, Vision and Artificial Intelligence, 2nd International Symposium Abbreviated Journal  
  Volume 4729 Issue Pages 214–223  
  Keywords  
  Abstract  
  Address Naples (Italy)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BVAI’07  
  Notes OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ RaV2007b Serial 901  
 

 
Author Bogdan Raducanu; Jordi Vitria; D. Gatica-Perez
  Title You are Fired! Nonverbal Role Analysis in Competitive Meetings Type Conference Article
  Year 2009 Publication IEEE International Conference on Audio, Speech and Signal Processing Abbreviated Journal  
  Volume Issue Pages 1949–1952  
  Keywords  
  Abstract This paper addresses the problem of social interaction analysis in competitive meetings, using nonverbal cues. For our study, we made use of “The Apprentice” reality TV show, which features a competition for a real, highly paid corporate job. Our analysis is centered around two tasks regarding a person's role in a meeting: predicting the person with the highest status and predicting the fired candidates. The current study was carried out using nonverbal audio cues. Results obtained from the analysis of a full season of the show, representing around 90 minutes of audio data, are very promising (up to 85.7% accuracy in the first case and up to 92.8% in the second case). Our approach is based only on the nonverbal interaction dynamics during the meeting, without relying on the spoken words.
  Address Taipei, Taiwan  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1520-6149 ISBN 978-1-4244-2353-8 Medium  
  Area Expedition Conference ICASSP  
  Notes OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ RVG2009 Serial 1154  
 

 
Author Bojana Gajic; Ariel Amato; Ramon Baldrich; Carlo Gatta
  Title Bag of Negatives for Siamese Architectures Type Conference Article
  Year 2019 Publication 30th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Training a Siamese architecture for re-identification with a large number of identities is a challenging task due to the difficulty of finding relevant negative samples efficiently. In this work we present Bag of Negatives (BoN), a method for accelerated and improved training of Siamese networks that scales well on datasets with a very large number of identities. BoN is an efficient and loss-independent method, able to select a bag of high quality negatives, based on a novel online hashing strategy.  
  Address Cardiff; United Kingdom; September 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes CIC; 600.140; 600.118 Approved no  
  Call Number Admin @ si @ GAB2019b Serial 3263  
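
As a loose illustration of the negative-selection idea in the abstract above, the sketch below hashes embeddings online with random hyperplanes and draws negatives that fall in the same bucket as the anchor, so they tend to be hard. This is an assumption-laden reading of "online hashing", not the published BoN procedure; the class name and parameters are hypothetical.

    import numpy as np

    class BagOfNegativesSketch:
        # Toy stand-in: random-hyperplane hashing of embeddings, negatives
        # sampled from the anchor's bucket so they are likely to be hard.
        def __init__(self, dim, n_bits=16, seed=0):
            rng = np.random.default_rng(seed)
            self.planes = rng.standard_normal((n_bits, dim))
            self.buckets = {}                 # hash code -> list of (embedding, identity)

        def _code(self, z):
            return ((self.planes @ z) > 0).tobytes()

        def add(self, z, identity):
            self.buckets.setdefault(self._code(z), []).append((z, identity))

        def sample_negative(self, z_anchor, anchor_identity):
            candidates = [e for e, pid in self.buckets.get(self._code(z_anchor), [])
                          if pid != anchor_identity]
            return candidates[np.random.randint(len(candidates))] if candidates else None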
 

 
Author Bojana Gajic; Ariel Amato; Ramon Baldrich; Joost Van de Weijer; Carlo Gatta
  Title Area Under the ROC Curve Maximization for Metric Learning Type Conference Article
  Year 2022 Publication CVPR 2022 Workshop on Efficient Deep Learning for Computer Vision (ECV 2022, 5th Edition) Abbreviated Journal  
  Volume Issue Pages  
  Keywords Training; Computer vision; Conferences; Area measurement; Benchmark testing; Pattern recognition  
  Abstract Most popular metric learning losses have no direct relation with the evaluation metrics that are subsequently applied to evaluate their performance. We hypothesize that training a metric learning model by maximizing the area under the ROC curve (which is a typical performance measure of recognition systems) can induce an implicit ranking suitable for retrieval problems. This hypothesis is supported by previous work that proved that a curve dominates in ROC space if and only if it dominates in Precision-Recall space. To test this hypothesis, we design and maximize an approximated, derivable relaxation of the area under the ROC curve. The proposed AUC loss achieves state-of-the-art results on two large scale retrieval benchmark datasets (Stanford Online Products and DeepFashion In-Shop). Moreover, the AUC loss achieves comparable performance to more complex, domain specific, state-of-the-art methods for vehicle re-identification.  
  Address New Orleans, USA; 20 June 2022  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes CIC; LAMP; Approved no  
  Call Number Admin @ si @ GAB2022 Serial 3700  
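
The core idea above, training by maximising the area under the ROC curve through a smooth relaxation, can be sketched generically in PyTorch: the AUC counts how often a positive pair scores above a negative pair, and replacing that step function with a sigmoid makes the quantity differentiable. The exact relaxation used in the paper may differ; the temperature and the similarity convention below are assumptions.

    import torch

    def soft_auc_loss(sim_pos, sim_neg, temperature=0.1):
        # sim_pos: (P,) similarities of positive pairs; sim_neg: (N,) of negative pairs.
        diff = sim_pos[:, None] - sim_neg[None, :]           # (P, N) pairwise margins
        soft_auc = torch.sigmoid(diff / temperature).mean()  # ~ P(sim_pos > sim_neg)
        return 1.0 - soft_auc                                # minimise 1 - AUC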
 

 
Author Bojana Gajic; Eduard Vazquez; Ramon Baldrich
  Title Evaluation of Deep Image Descriptors for Texture Retrieval Type Conference Article
  Year 2017 Publication Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) Abbreviated Journal  
  Volume Issue Pages 251-257  
  Keywords Texture Representation; Texture Retrieval; Convolutional Neural Networks; Psychophysical Evaluation  
  Abstract The increasing complexity learnt in the layers of a Convolutional Neural Network has proven to be of great help for the task of classification, and the topic has received great attention in recently published literature. Nonetheless, just a handful of works study low-level representations, commonly associated with lower layers. In this paper, we explore recent findings which conclude, counterintuitively, that the last layer of the VGG convolutional network is the best to describe a low-level property such as texture. To shed some light on this issue, we propose a psychophysical experiment to evaluate the adequacy of different layers of the VGG network for texture retrieval. The results obtained suggest that, whereas the last convolutional layer is a good choice for a specific classification task, it might not be the best choice as a texture descriptor, showing very poor performance on texture retrieval. Intermediate layers show the best performance, combining basic filters, as in the primary visual cortex, with a degree of higher-level information that describes more complex textures.
  Address Porto, Portugal; 27 February – 1 March 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference VISIGRAPP  
  Notes CIC; 600.087 Approved no  
  Call Number Admin @ si @ Serial 3710  
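
A minimal way to probe different VGG layers for texture retrieval, in the spirit of the evaluation above, is to cut torchvision's VGG16 at a chosen depth, pool the activations into a descriptor, and rank a gallery by cosine similarity. The cut index, pooling, and weights enum below are illustrative choices; the paper's exact protocol and its psychophysical evaluation are not reproduced.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Cut VGG16 at an intermediate depth; the index is the experimental variable
    # (here roughly the end of the third convolutional block in torchvision's layout).
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()

    @torch.no_grad()
    def describe(batch):                    # batch: (B, 3, H, W), ImageNet-normalised
        fmap = vgg(batch)                   # (B, C, h, w) activations at the chosen layer
        return F.normalize(fmap.mean(dim=(2, 3)), dim=1)   # pooled, L2-normalised descriptor

    def rank_gallery(query_desc, gallery_desc):
        # Cosine similarity reduces to a dot product on L2-normalised descriptors.
        return (gallery_desc @ query_desc).argsort(descending=True)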
 

 
Author Bojana Gajic; Ramon Baldrich
  Title Cross-domain fashion image retrieval Type Conference Article
  Year 2018 Publication CVPR 2018 Workshop on Women in Computer Vision (WiCV 2018, 4th Edition) Abbreviated Journal  
  Volume Issue Pages 19500-19502  
  Keywords  
  Abstract Cross-domain image retrieval is a challenging task that implies matching images from one domain to their pairs from another domain. In this paper we focus on fashion image retrieval, which involves matching an image of a fashion item taken by users to images of the same item taken under controlled conditions, usually by a professional photographer. When facing this problem, we have different products at train and test time, and we use a triplet loss to train the network. We stress the importance of proper training of a simple architecture, as well as of adapting general models to the specific task.
  Address Salt Lake City, USA; 22 June 2018  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes CIC; 600.087 Approved no  
  Call Number Admin @ si @ Serial 3709  
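
The training signal described above is the standard triplet loss, with a consumer photo as anchor, the studio photo of the same item as positive, and a studio photo of a different item as negative. A minimal PyTorch sketch, with hypothetical tensor names:

    import torch
    import torch.nn.functional as F

    def cross_domain_triplet_loss(user_emb, shop_pos_emb, shop_neg_emb, margin=0.2):
        # user_emb: embeddings of consumer photos (anchors);
        # shop_pos_emb / shop_neg_emb: studio photos of the same / a different item.
        d_pos = F.pairwise_distance(user_emb, shop_pos_emb)
        d_neg = F.pairwise_distance(user_emb, shop_neg_emb)
        return F.relu(d_pos - d_neg + margin).mean()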
 

 
Author Boris N. Oreshkin; Pau Rodriguez; Alexandre Lacoste
  Title TADAM: Task dependent adaptive metric for improved few-shot learning Type Conference Article
  Year 2018 Publication 32nd Annual Conference on Neural Information Processing Systems Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.  
  Address Montreal; Canada; December 2018  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference NIPS  
  Notes ISE; 600.098; 600.119 Approved no  
  Call Number Admin @ si @ ORL2018 Serial 3140  
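
Of the two ingredients discussed above, metric scaling is the simpler to illustrate: class logits are scaled negative squared distances to class prototypes, with the scale alpha learned alongside the network. The sketch below shows only that scaling step, not TADAM's task conditioning or auxiliary co-training; names are hypothetical.

    import torch

    def scaled_prototype_logits(queries, prototypes, alpha):
        # queries: (n_query, d), prototypes: (n_way, d), alpha: learnable scalar.
        # Logits are scaled negative squared distances; pass them, together with
        # the query labels, to cross-entropy.
        d2 = torch.cdist(queries, prototypes) ** 2
        return -alpha * d2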
 

 
Author C. Alejandro Parraga; Arash Akbarinia
  Title Colour Constancy as a Product of Dynamic Centre-Surround Adaptation Type Conference Article
  Year 2016 Publication 16th Annual Meeting of the Vision Sciences Society Abbreviated Journal  
  Volume 16 Issue 12 Pages  
  Keywords  
  Abstract Colour constancy refers to the human visual system's ability to preserve the perceived colour of objects despite changes in the illumination. Its exact mechanisms are unknown, although a number of systems ranging from retinal to cortical and memory are thought to play important roles. The strength of the perceptual shift necessary to preserve these colours is usually estimated by the vectorial distances from an ideal match (or canonical illuminant). In this work we explore how much of the colour constancy phenomenon could be explained by well-known physiological properties of V1 and V2 neurons whose receptive fields (RF) vary according to the contrast and orientation of surround stimuli. Indeed, it has been shown that both RF size and the normalization occurring between centre and surround in cortical neurons depend on the local properties of surrounding stimuli. Our starting point is the construction of a computational model which includes this dynamical centre-surround adaptation by means of two overlapping asymmetric Gaussian kernels whose variances are adjusted to the contrast of surrounding pixels to represent the changes in RF size of cortical neurons, and whose respective contributions are weighted according to differences in centre-surround contrast and orientation. The final output of the model is obtained after convolving an image with this dynamical operator, and an estimation of the illuminant is obtained by considering the contrast of the far surround. We tested our algorithm on naturalistic stimuli from several benchmark datasets. Our results show that although our model does not require any training, its performance against the state-of-the-art is highly competitive, even outperforming learning-based algorithms in some cases. Indeed, these results are very encouraging if we consider that they were obtained with the same parameters for all datasets (i.e. just like the human visual system operates).
  Address Florida; USA; May 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference VSS  
  Notes NEUROBIT Approved no  
  Call Number Admin @ si @ PaA2016b Serial 2901  
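
A very loose sketch of a contrast-adaptive centre-surround operator of the kind described above is given below: the surround Gaussian is widened and re-weighted according to local contrast. The published model uses two overlapping asymmetric kernels per orientation and estimates the illuminant from the far surround; none of that detail is reproduced here, and all parameter values are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def centre_surround_response(channel, sigma_centre=1.0, k=5.0, gain=2.0):
        # Centre and surround Gaussians over a single image channel; the surround
        # width reacts (crudely) to the mean local contrast. A toy stand-in only.
        centre = gaussian_filter(channel, sigma_centre)
        local_contrast = gaussian_filter(np.abs(channel - centre), k * sigma_centre)
        sigma_surround = k * sigma_centre * (1.0 + gain / (1.0 + local_contrast.mean()))
        surround = gaussian_filter(channel, sigma_surround)
        return centre - 0.5 * surround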
 

 
Author C. Alejandro Parraga; Ramon Baldrich; Maria Vanrell
  Title Accurate Mapping of Natural Scenes Radiance to Cone Activation Space: A New Image Dataset Type Conference Article
  Year 2010 Publication 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science Abbreviated Journal  
  Volume Issue Pages 50–57  
  Keywords  
  Abstract The characterization of trichromatic cameras is usually done in terms of a device-independent color space, such as the CIE 1931 XYZ space. This is indeed convenient since it allows the testing of results against colorimetric measures. We have characterized our camera to represent human cone activation by mapping the camera sensor's (RGB) responses to human (LMS) through a polynomial transformation, which can be “customized” according to the types of scenes we want to represent. Here we present a method to test the accuracy of the camera measures and a study on how the choice of training reflectances for the polynomial may alter the results.  
  Address Joensuu, Finland  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 9781617388897 Medium  
  Area Expedition Conference CGIV/MCS  
  Notes CIC Approved no  
  Call Number CAT @ cat @ PBV2010a Serial 1322  
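
The camera characterisation described above amounts to fitting a polynomial mapping from camera RGB to cone LMS responses by least squares. The sketch below uses a second-order polynomial basis as an assumed example; the actual polynomial order and the choice of training reflectances are precisely the design decisions the paper studies.

    import numpy as np

    def _poly_basis(rgb):
        r, g, b = rgb.T
        return np.column_stack([np.ones_like(r), r, g, b,
                                r * g, r * b, g * b, r ** 2, g ** 2, b ** 2])

    def fit_rgb_to_lms(rgb, lms):
        # rgb, lms: (n_samples, 3) matched measurements over the training reflectances.
        coeffs, *_ = np.linalg.lstsq(_poly_basis(rgb), lms, rcond=None)
        return coeffs                               # (10, 3) polynomial coefficients

    def rgb_to_lms(rgb, coeffs):
        return _poly_basis(rgb) @ coeffs            # predicted cone activations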
 

 
Author C. Alejandro Parraga; Robert Benavente; Maria Vanrell
  Title Modelling Inter-Colour Regions of Colour Naming Space Type Conference Article
  Year 2008 Publication 4th European Conference on Colour in Graphics, Imaging and Vision Proceedings Abbreviated Journal  
  Volume Issue Pages 218–222  
  Keywords  
  Abstract  
  Address Terrassa (Spain)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CGIV08  
  Notes CAT;CIC Approved no  
  Call Number CAT @ cat @ PBV2008 Serial 969  