Author
Julio C. S. Jacques Junior; Yagmur Gucluturk; Marc Perez; Umut Guçlu; Carlos Andujar; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon; Marcel A. J. van Gerven; Rob van Lier; Sergio Escalera
Title
First Impressions: A Survey on Vision-Based Apparent Personality Trait Analysis

Type
Journal Article

Year
2022

Publication
IEEE Transactions on Affective Computing

Abbreviated Journal
TAC

Volume
13

Issue
1

Pages
75-95

Keywords
Personality computing; first impressions; person perception; big-five; subjective bias; computer vision; machine learning; nonverbal signals; facial expression; gesture; speech analysis; multi-modal recognition

Abstract
Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most widely used cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches can accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential societal impact of this kind of method, this paper presents an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review the subjectivity involved in data labeling and evaluation, as well as current datasets and the challenges organized to push research in the field forward.
Address
1 Jan.-March 2022
Notes
HuPBA; MV; OR; MILAB

Approved
no

Call Number
Admin @ si @ JGP2022

Serial
3724
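To make the task in this record concrete, here is a minimal, generic sketch of vision-based apparent personality estimation: per-frame visual features are pooled over a clip and regressed to five apparent Big-Five trait scores in [0, 1], as in the ChaLearn First Impressions setup. The feature dimension, pooling, and regression head are illustrative assumptions, not any specific method covered by the survey.

```python
# A minimal, generic sketch of apparent personality (Big-Five) regression from
# per-frame visual features. Illustrative only; not a method from the survey.
import torch
import torch.nn as nn

class ApparentTraitRegressor(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 256, n_traits: int = 5):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, n_traits),
            nn.Sigmoid(),            # apparent trait scores are typically normalised to [0, 1]
        )

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, n_frames, feat_dim), e.g. CNN face/body descriptors per frame.
        clip_feat = frame_feats.mean(dim=1)   # temporal average pooling over the clip
        return self.head(clip_feat)           # (batch, 5): O, C, E, A, N estimates

# Usage: scores = ApparentTraitRegressor()(torch.randn(8, 30, 512))  # -> (8, 5)
```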
Author
Joakim Bruslund Haurum; Meysam Madadi; Sergio Escalera; Thomas B. Moeslund
Title
Multi-scale hybrid vision transformer and Sinkhorn tokenizer for sewer defect classification

Type
Journal Article

Year
2022

Publication
Automation in Construction

Abbreviated Journal
AC

Volume
144

Pages
104614

Keywords
Sewer Defect Classification; Vision Transformers; Sinkhorn-Knopp; Convolutional Neural Networks; Closed-Circuit Television; Sewer Inspection

Abstract
A crucial part of image classification is capturing the non-local spatial semantics of image content. This paper describes the multi-scale hybrid vision transformer (MSHViT), an extension of the classical convolutional neural network (CNN) backbone, for multi-label sewer defect classification. To better model spatial semantics in the images, features are aggregated non-locally at different scales through a lightweight vision transformer, and a smaller set of tokens is produced through a novel Sinkhorn clustering-based tokenizer using distinct cluster centers. The proposed MSHViT and Sinkhorn tokenizer were evaluated on the Sewer-ML multi-label sewer defect classification dataset, showing consistent performance improvements of up to 2.53 percentage points.
Address
Dec 2022
Notes
HuPBA; MILAB

Approved
no

Call Number
Admin @ si @ BME2022c

Serial
3780
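A minimal sketch of the Sinkhorn clustering-based tokenization idea described in this record: flattened CNN feature-map positions are soft-assigned to a small set of learnable cluster centres with a few Sinkhorn-Knopp normalisation steps, then pooled into K tokens for a lightweight vision transformer. The shapes, iteration count, and aggregation rule below are assumptions, not the paper's exact MSHViT tokenizer.

```python
# Sketch of a Sinkhorn clustering-based tokenizer: soft-assign N input tokens to K
# learnable cluster centres via Sinkhorn-Knopp normalisation, then pool into K tokens.
import torch
import torch.nn as nn

def sinkhorn(logits: torch.Tensor, n_iters: int = 3, eps: float = 1e-6) -> torch.Tensor:
    # logits: (batch, N, K) token-to-cluster similarity scores.
    Q = torch.exp(logits)
    for _ in range(n_iters):
        Q = Q / (Q.sum(dim=2, keepdim=True) + eps)   # each token spreads its mass over clusters
        Q = Q / (Q.sum(dim=1, keepdim=True) + eps)   # each cluster receives a comparable share
    return Q / (Q.sum(dim=2, keepdim=True) + eps)    # rows end as valid assignment distributions

class SinkhornTokenizer(nn.Module):
    def __init__(self, dim: int = 256, n_clusters: int = 8):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(n_clusters, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, dim) flattened CNN feature-map positions.
        logits = torch.einsum("bnd,kd->bnk", x, self.centres)  # similarity to each centre
        assign = sinkhorn(logits)                              # (batch, N, K) soft assignments
        return torch.einsum("bnk,bnd->bkd", assign, x)         # (batch, K, dim) tokens for a small ViT

# Usage: SinkhornTokenizer()(torch.randn(2, 196, 256)).shape  # -> (2, 8, 256)
```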
Author
Swathikiran Sudhakaran; Sergio Escalera; Oswald Lanz
Title
Gate-Shift-Fuse for Video Action Recognition

Type
Journal Article

Year
2023

Publication
IEEE Transactions on Pattern Analysis and Machine Intelligence

Abbreviated Journal
TPAMI

Volume
45

Issue
9

Pages
10913-10928

Keywords
Action Recognition; Video Classification; Spatial Gating; Channel Fusion

Abstract
Convolutional Neural Networks are the de facto models for image recognition. However, 3D CNNs, the straightforward extension of 2D CNNs to video recognition, have not achieved the same success on standard action recognition benchmarks. One of the main reasons for this reduced performance is their increased computational complexity, which requires large-scale annotated datasets to train them at scale. 3D kernel factorization approaches have been proposed to reduce the complexity of 3D CNNs, but existing factorizations follow hand-designed and hard-wired techniques. In this paper we propose Gate-Shift-Fuse (GSF), a novel spatio-temporal feature extraction module which controls interactions in the spatio-temporal decomposition and learns to adaptively route features through time and combine them in a data-dependent manner. GSF leverages grouped spatial gating to decompose the input tensor and channel weighting to fuse the decomposed tensors. GSF can be inserted into existing 2D CNNs to convert them into efficient, high-performing spatio-temporal feature extractors with negligible parameter and compute overhead. We perform an extensive analysis of GSF using two popular 2D CNN families and achieve state-of-the-art or competitive performance on five standard action recognition benchmarks.
Address
1 Sept. 2023
Notes
HUPBA; not mentioned; MILAB

Approved
no

Call Number
Admin @ si @ SEL2023

Serial
3814
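A rough sketch in the spirit of the Gate-Shift-Fuse module described above: a spatial gate splits the features, part of the gated channels is shifted along time, and the shifted and residual parts are fused with per-channel weights, keeping the input shape so the block can sit inside a per-frame 2D CNN. The gating convolution, shift pattern, and fusion rule here are assumptions rather than the paper's exact design.

```python
# Gate-Shift-Fuse-style sketch: spatially gate, temporally shift part of the gated
# channels, and fuse with learned per-channel weights. Illustrative assumptions only.
import torch
import torch.nn as nn

class GateShiftFuseSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels)
        self.fuse = nn.Parameter(torch.zeros(1, 1, channels, 1, 1))  # per-channel fusion weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, H, W) features from a per-frame 2D CNN stage.
        b, t, c, h, w = x.shape
        gate = torch.sigmoid(self.gate(x.reshape(b * t, c, h, w))).reshape(b, t, c, h, w)
        gated, residual = gate * x, (1.0 - gate) * x

        # Shift half of the gated channels one step forward in time, the other half backward.
        shifted = torch.zeros_like(gated)
        half = c // 2
        shifted[:, 1:, :half] = gated[:, :-1, :half]    # forward temporal shift
        shifted[:, :-1, half:] = gated[:, 1:, half:]    # backward temporal shift

        # Channel-weighted fusion of temporally shifted and untouched features.
        alpha = torch.sigmoid(self.fuse)
        return alpha * shifted + (1.0 - alpha) * residual  # same shape as x; plugs back into the 2D CNN

# Usage: GateShiftFuseSketch(64)(torch.randn(2, 8, 64, 14, 14)).shape  # -> (2, 8, 64, 14, 14)
```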
Author
Javier Selva; Anders S. Johansen; Sergio Escalera; Kamal Nasrollahi; Thomas B. Moeslund; Albert Clapes
Title
Video transformers: A survey

Type
Journal Article

Year
2023

Publication
IEEE Transactions on Pattern Analysis and Machine Intelligence

Abbreviated Journal
TPAMI

Volume
45

Issue
11

Pages
12922-12943

Keywords
Artificial Intelligence; Computer Vision; Self-Attention; Transformers; Video Representations

Abstract
Transformer models have shown great success in handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated by the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we first delve into how videos are handled at the input level. Then, we study the architectural changes made to process video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even at lower computational complexity.
Address
1 Nov. 2023
Notes
HUPBA; not mentioned; MILAB

Approved
no

Call Number
Admin @ si @ SJE2023

Serial
3823
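A small sketch of the scaling issue this survey highlights: a clip is embedded into tubelet tokens, and vanilla joint space-time self-attention builds a (tokens x tokens) score matrix, so cost grows quadratically with clip length. The patch and embedding sizes below are arbitrary illustrative choices, not taken from any surveyed model.

```python
# Why full space-time attention gets expensive for video: token count grows with
# clip length, and the attention matrix grows with its square. Illustrative sizes only.
import torch
import torch.nn as nn

frames, height, width, dim = 16, 224, 224, 768
tubelet = (2, 16, 16)  # temporal x spatial patch size

# Tubelet embedding: one 3D conv turns the clip into a grid of token embeddings.
embed = nn.Conv3d(3, dim, kernel_size=tubelet, stride=tubelet)
clip = torch.randn(1, 3, frames, height, width)
tokens = embed(clip).flatten(2).transpose(1, 2)          # (1, n_tokens, dim)
n_tokens = tokens.shape[1]                               # (16/2) * (224/16) * (224/16) = 1568

# Full joint space-time self-attention: the score matrix alone has n_tokens^2 entries,
# so doubling the number of frames roughly quadruples this cost.
attn = nn.MultiheadAttention(dim, num_heads=12, batch_first=True)
out, scores = attn(tokens, tokens, tokens)
print(n_tokens, scores.shape)                            # 1568, (1, 1568, 1568)
```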
Author
Hao Fang; Ajian Liu; Jun Wan; Sergio Escalera; Chenxu Zhao; Xu Zhang; Stan Z Li; Zhen Lei
Title
Surveillance Face Anti-spoofing

Type
Journal Article

Year
2024

Publication
IEEE Transactions on Information Forensics and Security

Abbreviated Journal
TIFS

Volume
19

Pages
1535-1546

Abstract
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) and lacks consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured across 40 surveillance scenes, with 101 subjects from different age groups, 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality, from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover discriminative image information by incorporating a super-resolution network; (2) generated sample pairs simulate quality-variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; and (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
Notes
HUPBA; MILAB

Approved
no

Call Number
Admin @ si @ FLW2024

Serial
3869
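A hedged sketch of the general quality-invariant contrastive idea behind CQIL: each face crop is paired with a synthetically degraded (low-resolution, noisy) copy, and a shared encoder is trained with an InfoNCE-style loss so both views embed close together. The encoder, degradation, and loss below are illustrative assumptions, not the paper's IQV/SQN modules.

```python
# Quality-invariant contrastive learning sketch: matched (clean, degraded) face pairs
# are positives, other samples in the batch are negatives. Illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def degrade(x: torch.Tensor, factor: int = 4, noise: float = 0.05) -> torch.Tensor:
    # Simulate a low-quality surveillance view: downsample, upsample back, add noise.
    low = F.interpolate(x, scale_factor=1.0 / factor, mode="bilinear", align_corners=False)
    rec = F.interpolate(low, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return rec + noise * torch.randn_like(rec)

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # Pull each clean embedding toward its own degraded view, push away from the rest.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.shape[0])
    return F.cross_entropy(logits, labels)

encoder = nn.Sequential(                       # stand-in for any face/FAS backbone
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 128),
)

faces = torch.randn(16, 3, 112, 112)           # a batch of face crops
loss = info_nce(encoder(faces), encoder(degrade(faces)))
loss.backward()                                # drives the features toward quality invariance
```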