Records |
Author |
Albert Clapes; Julio C. S. Jacques Junior; Carla Morral; Sergio Escalera |
Title |
ChaLearn LAP 2020 Challenge on Identity-preserved Human Detection: Dataset and Results |
Type |
Conference Article |
Year |
2020 |
Publication |
15th IEEE International Conference on Automatic Face and Gesture Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
801-808 |
Keywords |
|
Abstract |
This paper summarizes the ChaLearn Looking at People 2020 Challenge on Identity-preserved Human Detection (IPHD). For this purpose, we released a large novel dataset containing more than 112K pairs of spatiotemporally aligned depth and thermal frames (and 175K instances of humans) sampled from 780 sequences. The sequences contain hundreds of non-identifiable people appearing in a mix of in-the-wild and scripted scenarios recorded in public and private places. The competition was divided into three tracks depending on the modalities exploited for the detection: (1) depth, (2) thermal, and (3) depth-thermal fusion. Color was also captured but only used to facilitate the groundtruth annotation. Still, the temporal synchronization of three sensory devices is challenging, so bad temporal matches across modalities can occur. Hence, the labels provided should be considered “weak”, although test frames were carefully selected to minimize this effect and ensure the fairest comparison of the participants’ results. Despite this added difficulty, the results obtained by the participants demonstrate that current fully-supervised methods can cope with it and achieve outstanding detection performance when measured in terms of AP@0.50. |
Address |
Virtual; November 2020 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
FG |
Notes |
HUPBA |
Approved |
no |
Call Number |
Admin @ si @ CJM2020 |
Serial |
3501 |
Permanent link to this record |
|
|
|
Author |
Josep Famadas; Meysam Madadi; Cristina Palmero; Sergio Escalera |
Title |
Generative Video Face Reenactment by AUs and Gaze Regularization |
Type |
Conference Article |
Year |
2020 |
Publication |
15th IEEE International Conference on Automatic Face and Gesture Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
444-451 |
Keywords |
|
Abstract |
In this work, we propose an encoder-decoder-like architecture to perform face reenactment in image sequences. Our goal is to transfer the training subject's identity to a given test subject. We regularize face reenactment by facial action unit intensity and 3D gaze vector regression. This way, we enforce the network to transfer subtle facial expressions and eye dynamics, providing a more lifelike result. The proposed encoder-decoder receives as input the previous frame of the sequence stacked with the facial landmark image of the current frame. Thus, the generated frames benefit from both appearance and geometry, while keeping temporal coherence across the generated sequence. At test time, a new target subject is reenacted with the facial performance of the source subject and the appearance of the training subject. Principal component analysis is applied to project the test subject's geometry to the closest training subject geometry before reenactment. Evaluation of our proposal shows faster convergence, and more accurate and realistic results in comparison to other architectures without action unit and gaze regularization. |
Address |
Virtual; November 2020 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
FG |
Notes |
HUPBA |
Approved |
no |
Call Number |
Admin @ si @ FMP2020 |
Serial |
3517 |
Permanent link to this record |