Records
Author Manuel Graña; Bogdan Raducanu
Title Special Issue on Bioinspired and Knowledge-Based Techniques and Applications Type Journal Article
Year 2015 Publication Neurocomputing Abbreviated Journal NEUCOM
Volume Issue Pages 1-3
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; Approved no
Call Number Admin @ si @ GrR2015 Serial 2598
 

 
Author Xavier Otazu; Olivier Penacchio; Xim Cerda-Company
Title Brightness and colour induction through contextual influences in V1 Type Conference Article
Year 2015 Publication Scottish Vision Group 2015 SGV2015 Abbreviated Journal
Volume 12 Issue 9 Pages 1208-2012
Keywords
Abstract
Address Carnoustie; Scotland; March 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference SGV
Notes NEUROBIT; Approved no
Call Number Admin @ si @ OPC2015a Serial 2632
 

 
Author Olivier Penacchio; Xavier Otazu; A. Wilkins; J. Harris
Title Uncomfortable images prevent lateral interactions in the cortex from providing a sparse code Type Conference Article
Year 2015 Publication European Conference on Visual Perception ECVP2015 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Liverpool; UK; August 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECVP
Notes NEUROBIT; Approved no
Call Number Admin @ si @ POW2015 Serial 2633
 

 
Author Xavier Otazu; Olivier Penacchio; Xim Cerda-Company
Title An excitatory-inhibitory firing rate model accounts for brightness induction, colour induction and visual discomfort Type Conference Article
Year 2015 Publication Barcelona Computational, Cognitive and Systems Neuroscience Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Barcelona; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BARCCSYN
Notes NEUROBIT; Approved no
Call Number Admin @ si @ OPC2015b Serial 2634
 

 
Author H. Martin Kjer; Jens Fagertun; Sergio Vera; Debora Gil; Miguel Angel Gonzalez Ballester; Rasmus R. Paulsen
Title Free-form image registration of human cochlear µCT data using skeleton similarity as anatomical prior Type Journal Article
Year 2016 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 76 Issue 1 Pages 76-82
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.060 Approved no
Call Number Admin @ si @ MFV2017b Serial 2941
 

 
Author Javad Zolfaghari Bengar; Bogdan Raducanu; Joost Van de Weijer
Title When Deep Learners Change Their Mind: Learning Dynamics for Active Learning Type Conference Article
Year 2021 Publication 19th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 13052 Issue 1 Pages 403-413
Keywords
Abstract Active learning aims to select samples to be annotated that yield the largest performance improvement for the learning algorithm. Many methods approach this problem by measuring the informativeness of samples, and do so based on the certainty of the network predictions for those samples. However, it is well known that neural networks are overly confident about their predictions and are therefore an untrustworthy source for assessing sample informativeness. In this paper, we propose a new informativeness-based active learning method. Our measure is derived from the learning dynamics of a neural network. More precisely, we track the label assignment of the unlabeled data pool during the training of the algorithm. We capture the learning dynamics with a metric called label-dispersion, which is low when the network consistently assigns the same label to a sample during training and high when the assigned label changes frequently. We show that label-dispersion is a promising predictor of the uncertainty of the network, and show on two benchmark datasets that an active learning algorithm based on label-dispersion obtains excellent results. (A minimal sketch of the measure follows this record.)
Address September 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CAIP
Notes LAMP; Approved no
Call Number Admin @ si @ ZRV2021 Serial 3673
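To make the label-dispersion idea concrete, here is a minimal sketch (not the authors' code; the paper's exact normalization may differ). Given the class label predicted for each unlabeled sample after every training epoch, dispersion is zero when the same label is assigned in every epoch and grows as the assignment fluctuates; the most dispersed samples are the ones queried for annotation.

```python
import numpy as np

def label_dispersion(pred_history: np.ndarray) -> np.ndarray:
    """pred_history: (num_epochs, num_samples) array of class labels predicted
    for each unlabeled sample after every training epoch. Returns one score per
    sample: 0 when the assigned label never changes, approaching 1 when it
    changes constantly."""
    num_epochs, num_samples = pred_history.shape
    dispersion = np.empty(num_samples)
    for i in range(num_samples):
        _, counts = np.unique(pred_history[:, i], return_counts=True)
        dispersion[i] = 1.0 - counts.max() / num_epochs  # 1 - modal-label frequency
    return dispersion

# Query step: annotate the samples the network is least stable on.
history = np.array([[0, 1, 2], [0, 2, 2], [0, 1, 2], [0, 2, 2]])  # 4 epochs, 3 samples
scores = label_dispersion(history)    # -> [0.0, 0.5, 0.0]
query = np.argsort(scores)[::-1][:1]  # sample 1 flips its label most often
```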
 

 
Author Javad Zolfaghari Bengar; Joost Van de Weijer; Bartlomiej Twardowski; Bogdan Raducanu
Title Reducing Label Effort: Self-Supervised Meets Active Learning Type Conference Article
Year 2021 Publication International Conference on Computer Vision Workshops Abbreviated Journal
Volume Issue Pages 1631-1639
Keywords
Abstract Active learning is a paradigm aimed at reducing the annotation effort by training the model on actively selected informative and/or representative samples. Another paradigm for reducing the annotation effort is self-training, which learns from a large amount of unlabeled data in an unsupervised way and fine-tunes on a few labeled samples. Recent developments in self-training have achieved very impressive results, rivaling supervised learning on some datasets. The current work focuses on whether the two paradigms can benefit from each other. We studied object recognition datasets including CIFAR10, CIFAR100 and Tiny ImageNet with several labeling budgets for the evaluations. Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort, that for a low labeling budget active learning offers no benefit to self-training, and finally that the combination of active learning and self-training is fruitful when the labeling budget is high. The performance gap between active learning trained with self-training and active learning trained from scratch diminishes as we approach the point where almost half of the dataset is labeled. (A hypothetical outline of the combined loop follows this record.)
Address October 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes LAMP; Approved no
Call Number Admin @ si @ ZVT2021 Serial 3672
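How the two paradigms combine can be outlined as a budgeted loop. This is a hypothetical sketch, not the paper's experimental code: `model`, `oracle`, and `train` are placeholder callables standing in for a self-supervised pretrained backbone, the human annotator, and the fine-tuning step.

```python
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    """Per-sample predictive entropy, a common informativeness measure."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def active_learning_loop(model, pool, budget, rounds, oracle, train):
    """Hypothetical loop: start from a (self-supervised) pretrained `model`,
    then alternate uncertainty-based selection, annotation, and fine-tuning."""
    labeled, per_round = {}, budget // rounds
    pool = list(pool)
    for _ in range(rounds):
        probs = model.predict_proba(pool)          # softmax outputs on the pool
        ranked = np.argsort(entropy(probs))[::-1]  # most uncertain first
        for i in ranked[:per_round]:
            idx = pool[i]
            labeled[idx] = oracle(idx)             # request annotation
        pool = [p for p in pool if p not in labeled]
        train(model, labeled)                      # fine-tune on the labeled set
    return model, labeled
```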
 

 
Author Gemma Rotger; Felipe Lumbreras; Francesc Moreno-Noguer; Antonio Agudo
Title 2D-to-3D Facial Expression Transfer Type Conference Article
Year 2018 Publication 24th International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 2008-2013
Keywords
Abstract Automatically changing the expression and physical features of a face from an input image is a topic that has been traditionally tackled in a 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes his/her 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape –obtained from standard factorization approaches over the input video– using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops. (A toy sketch of the per-region transfer step follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPR
Notes MSIAU; 600.086; 600.130; 600.118 Approved no
Call Number Admin @ si @ RLM2018 Serial 3232
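As a toy illustration of the per-region mapping (a deliberately simplified stand-in for the paper's geometrically consistent formulation), the sketch below transfers the displacement of each source macro-segment to the semantically equivalent target segment, rescaled to the target's extent:

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_rest, src_regions, tgt_regions):
    """Toy per-region displacement transfer (illustration only). Shapes are
    (N, 3) and (M, 3) vertex arrays; *_regions are integer macro-segment labels
    putting semantically equivalent source/target regions in correspondence."""
    delta = src_expr - src_neutral  # per-vertex source expression offsets
    tgt = tgt_rest.astype(float).copy()
    for r in np.unique(src_regions):
        src_mask, tgt_mask = src_regions == r, tgt_regions == r
        # Rescale by the ratio of region extents so the motion fits the target.
        scale = (np.ptp(tgt_rest[tgt_mask], axis=0) + 1e-9) / \
                (np.ptp(src_neutral[src_mask], axis=0) + 1e-9)
        # Apply the region's mean offset (a full method maps per vertex).
        tgt[tgt_mask] += delta[src_mask].mean(axis=0) * scale
    return tgt
```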
 

 
Author Daniela Rato; Miguel Oliveira; Vitor Santos; Manuel Gomes; Angel Sappa
Title A sensor-to-pattern calibration framework for multi-modal industrial collaborative cells Type Journal Article
Year 2022 Publication Journal of Manufacturing Systems Abbreviated Journal JMANUFSYST
Volume 64 Issue Pages 497-507
Keywords Calibration; Collaborative cell; Multi-modal; Multi-sensor
Abstract Collaborative robotic industrial cells are workspaces where robots collaborate with human operators. In this context safety is paramount, which requires complete perception of the space in which the collaborative robot operates. To ensure this, collaborative cells are equipped with a large set of sensors of multiple modalities, covering the entire work volume. However, fusing the information from all these sensors requires an accurate extrinsic calibration. The calibration of such complex systems is challenging, due to the number of sensors and modalities, and also due to the small overlapping fields of view between the sensors, which are positioned to capture different viewpoints of the cell. This paper proposes a sensor-to-pattern methodology that can calibrate a complex system such as a collaborative cell in a single optimization procedure. Our methodology can tackle RGB and depth cameras, as well as LiDARs. Results show that our methodology is able to accurately calibrate a collaborative cell containing three RGB cameras, a depth camera and three 3D LiDARs. (A sketch of the single-optimization idea follows this record.)
Address
Corporate Author Thesis
Publisher Science Direct Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MSIAU; MACO Approved no
Call Number Admin @ si @ ROS2022 Serial 3750
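The single-optimization idea can be sketched with SciPy. This is an illustration, not the authors' framework: all sensor-to-pattern poses enter one stacked residual and are solved jointly, with pure 3D point residuals assumed for brevity (RGB cameras would contribute reprojection residuals instead, and the full method also handles multiple pattern placements).

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, detections, pattern_pts):
    """Stack, over all sensors, the mismatch between the calibration pattern's
    model points and each sensor's detections mapped into the pattern frame.
    params holds one 6-DoF pose (rotation vector + translation) per sensor."""
    res = []
    for s, pts in enumerate(detections):  # pts: (N, 3) corners seen by sensor s
        rvec, t = params[6 * s:6 * s + 3], params[6 * s + 3:6 * s + 6]
        mapped = Rotation.from_rotvec(rvec).apply(pts) + t
        res.append((mapped - pattern_pts).ravel())
    return np.concatenate(res)

def calibrate(detections, pattern_pts):
    """Jointly estimate every sensor-to-pattern pose in a single optimization."""
    x0 = np.zeros(6 * len(detections))  # identity poses as initial guess
    sol = least_squares(residuals, x0, args=(detections, pattern_pts))
    return sol.x.reshape(-1, 6)         # one (rotvec, translation) row per sensor
```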
 

 
Author Juan Ramon Terven Salinas; Joaquin Salas; Bogdan Raducanu
Title New Opportunities for Computer Vision-Based Assistive Technology Systems for the Visually Impaired Type Journal Article
Year 2014 Publication Computer Abbreviated Journal COMP
Volume 47 Issue 4 Pages 52-58
Keywords
Abstract Computing advances and increased smartphone use give technology system designers greater flexibility in exploiting computer vision to support visually impaired users. Understanding these users' needs will certainly provide insight for the development of more usable computing devices.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0018-9162 ISBN Medium
Area Expedition Conference
Notes LAMP; Approved no
Call Number Admin @ si @ TSR2014a Serial 2317
 

 
Author Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas
Title Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices Type Journal Article
Year 2016 Publication Neurocomputing Abbreviated Journal NEUCOM
Volume 175 Issue B Pages 866–876
Keywords Head gestures recognition; Mirroring detection; Dyadic social interaction analysis; Wearable devices
Abstract During face-to-face human interaction, nonverbal communication plays a fundamental role. A relevant aspect of social interactions is mirroring, in which a person tends to mimic the non-verbal behavior (head and body gestures, vocal prosody, etc.) of the counterpart. In this paper, we introduce a computer vision-based system to detect mirroring in dyadic social interactions with the use of a wearable platform. In our context, mirroring is inferred as simultaneous head nodding displayed by the interlocutors. Our approach consists of the following steps: (1) facial features extraction; (2) facial features stabilization; (3) head nodding recognition; and (4) mirroring detection. Our system achieves a mirroring detection accuracy of 72% on a custom mirroring dataset. (A crude sketch of steps (3)-(4) follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.072; 600.068; Approved no
Call Number Admin @ si @ TRM2016 Serial 2721
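A crude version of steps (3) and (4) can be sketched from head-pitch time series. This is an illustration only; the paper's recognizer and thresholds differ. Nodding is taken as noticeable pitch oscillation within a sliding window, and mirroring as both interlocutors nodding within a small temporal lag of each other.

```python
import numpy as np

def nodding_mask(pitch, fps, win_s=1.0, min_std_deg=2.0):
    """Mark a frame as 'nodding' when head pitch (degrees) oscillates
    noticeably within a sliding window centered on it."""
    win = max(1, int(win_s * fps))
    pad = np.pad(pitch, win // 2, mode="edge")
    stds = np.array([pad[i:i + win].std() for i in range(len(pitch))])
    return stds > min_std_deg

def mirroring_frames(pitch_a, pitch_b, fps, max_lag_s=0.5):
    """Flag frames where both interlocutors nod within max_lag_s of each other."""
    a, b = nodding_mask(pitch_a, fps), nodding_mask(pitch_b, fps)
    lag = int(max_lag_s * fps)
    kernel = np.ones(2 * lag + 1)
    # Dilate each nod mask by the allowed lag, then intersect.
    a_d = np.convolve(a, kernel, mode="same") > 0
    b_d = np.convolve(b, kernel, mode="same") > 0
    return a_d & b_d
```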
 

 
Author Hao Fang; Ajian Liu; Jun Wan; Sergio Escalera; Hugo Jair Escalante; Zhen Lei
Title Surveillance Face Presentation Attack Detection Challenge Type Conference Article
Year 2023 Publication Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal
Volume Issue Pages 6360-6370
Keywords
Abstract Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, most studies have overlooked long-distance scenarios. Specifically, compared with FAS in traditional scenes such as phone unlocking, face payment, and self-service security inspection, FAS at long distances, such as in station squares, parks, and self-service supermarkets, is equally important but has not yet been sufficiently explored. In order to fill this gap in the FAS community, we collect a large-scale Surveillance High-Fidelity Mask dataset (SuHiFiMask). SuHiFiMask contains 10,195 videos from 101 subjects of different age groups, collected by 7 mainstream surveillance cameras. Based on this dataset and protocol-3 for evaluating the robustness of the algorithm under quality changes, we organized a face presentation attack detection challenge in surveillance scenarios. It attracted 180 teams for the development phase, with a total of 37 teams qualifying for the final round. The organization team re-verified and re-ran the submitted code and used the results as the final ranking. In this paper, we present an overview of the challenge, including an introduction to the dataset used, the definition of the protocol, the evaluation metrics, and the announcement of the competition results. Finally, we present the top-ranked algorithms and the research ideas provided by the competition for attack detection in long-range surveillance scenarios.
Address Vancouver; Canada; June 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes HuPBA Approved no
Call Number Admin @ si @ FLW2023 Serial 3917
 

 
Author Cristina Palmero; Javier Selva; Mohammad Ali Bagheri; Sergio Escalera
Title Recurrent CNN for 3D Gaze Estimation using Appearance and Shape Cues Type Conference Article
Year 2018 Publication 29th British Machine Vision Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Gaze behavior is an important non-verbal cue in social signal processing and human-computer interaction. In this paper, we tackle the problem of person- and head pose-independent 3D gaze estimation from remote cameras, using a multi-modal recurrent convolutional neural network (CNN). We propose to combine face, eyes region, and face landmarks as individual streams in a CNN to estimate gaze in still images. Then, we exploit the dynamic nature of gaze by feeding the learned features of all the frames in a sequence to a many-to-one recurrent module that predicts the 3D gaze vector of the last frame. Our multi-modal static solution is evaluated on a wide range of head poses and gaze directions, achieving a significant improvement of 14.6% over the state of the art on the EYEDIAP dataset, further improved by 4% when the temporal modality is included. (A sketch of the multi-stream recurrent design follows this record.)
Address Newcastle; UK; September 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ PSB2018 Serial 3208
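The multi-stream recurrent design can be sketched in PyTorch. Layer sizes and the per-stream encoders below are illustrative, not the paper's architecture: each modality is encoded per frame, the features are concatenated, and a many-to-one GRU regresses the 3D gaze vector of the last frame.

```python
import torch
import torch.nn as nn

class RecurrentGazeNet(nn.Module):
    """Minimal multi-stream recurrent gaze regressor (illustrative sizes)."""
    def __init__(self, feat=128, hidden=256, n_landmarks=68):
        super().__init__()
        def cnn():  # tiny per-frame image encoder for one stream
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat))
        self.face_net, self.eyes_net = cnn(), cnn()
        self.lmk_net = nn.Sequential(nn.Linear(2 * n_landmarks, feat), nn.ReLU())
        self.gru = nn.GRU(3 * feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # 3D gaze vector

    def forward(self, face, eyes, lmk):
        # face, eyes: (B, T, 3, H, W); lmk: (B, T, 2*n_landmarks)
        B, T = face.shape[:2]
        f = self.face_net(face.flatten(0, 1)).view(B, T, -1)
        e = self.eyes_net(eyes.flatten(0, 1)).view(B, T, -1)
        l = self.lmk_net(lmk)
        out, _ = self.gru(torch.cat([f, e, l], dim=-1))  # fuse streams per frame
        return self.head(out[:, -1])  # many-to-one: gaze of the last frame
```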
 

 
Author Fadi Dornaika; Bogdan Raducanu; Alireza Bosaghzadeh
Title Facial expression recognition based on multi observations with application to social robotics Type Book Chapter
Year 2015 Publication Emotional and Facial Expressions: Recognition, Developmental Differences and Social Importance Abbreviated Journal
Volume Issue Pages 153-166
Keywords
Abstract Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this chapter, we propose a novel approach for facial expression recognition, which exploits an efficient and adaptive graph-based label propagation (semi-supervised mode) in a multi-observation framework. The facial features are extracted using an appearance-based 3D face tracker, view- and texture-independent. Our method has been extensively tested on the CMU dataset, and has been conveniently compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression. (A generic sketch of the propagation step follows this record.)
Address
Corporate Author Thesis
Publisher Nova Science publishers Place of Publication Editor Bruce Flores
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; Approved no
Call Number Admin @ si @ DRB2015 Serial 2720
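A generic version of the propagation step can be sketched as follows. This is the standard label-spreading iteration on a kNN graph (in the spirit of Zhou et al.'s local and global consistency method); the chapter's contribution, an efficient adaptive graph construction, is not reproduced here.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def propagate_labels(X, y, n_neighbors=5, alpha=0.99, iters=50):
    """Semi-supervised label propagation on a kNN graph. X: (n, d) features
    (e.g. from a 3D face tracker); y: (n,) class ids with -1 for unlabeled."""
    W = kneighbors_graph(X, n_neighbors, mode="distance").toarray()
    sigma = np.median(W[W > 0])
    W = np.exp(-W**2 / (2 * sigma**2)) * (W > 0)  # RBF edge weights
    W = np.maximum(W, W.T)                        # symmetrize the graph
    D = np.diag(1.0 / np.sqrt(W.sum(axis=1) + 1e-12))
    S = D @ W @ D                                 # normalized affinity
    classes = np.unique(y[y >= 0])
    Y = np.zeros((len(y), len(classes)))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    F = Y.copy()
    for _ in range(iters):                        # spread labels along edges
        F = alpha * S @ F + (1 - alpha) * Y
    return classes[F.argmax(axis=1)]
```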
 

 
Author Bogdan Raducanu; Alireza Bosaghzadeh; Fadi Dornaika
Title Facial Expression Recognition based on Multi-view Observations with Application to Social Robotics Type Conference Article
Year 2014 Publication 1st Workshop on Computer Vision for Affective Computing Abbreviated Journal
Volume Issue Pages 1-8
Keywords
Abstract Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this paper, we propose a novel approach for facial expression recognition, which exploits an efficient and adaptive graph-based label propagation (semi-supervised mode) in a multi-observation framework. The facial features are extracted using an appearance-based 3D face tracker, view- and texture-independent. Our method has been extensively tested on the CMU dataset, and has been conveniently compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression.
Address Singapore; November 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACCV
Notes LAMP; Approved no
Call Number Admin @ si @ RBD2014 Serial 2599