|
Records |
Links |
|
Author |
Pau Baiget; Carles Fernandez; Xavier Roca; Jordi Gonzalez |
|
|
Title |
Generation of Augmented Video Sequences Combining Behavioral Animation and Multi-Object Tracking |
Type |
Journal Article |
|
Year |
2009 |
Publication |
Computer Animation and Virtual Worlds |
Abbreviated Journal |
|
|
|
Volume |
20 |
Issue |
4 |
Pages |
473–489 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we present a novel approach to generate augmented video sequences in real time, involving interactions between virtual and real agents in real scenarios. On the one hand, real agent motion is estimated by means of a multi-object tracking algorithm, which determines each real object's position in the scenario at each time step. On the other hand, virtual agents are provided with behavior models that consider their interaction with the environment and with other agents. The resulting framework allows us to generate video sequences involving behavior-based virtual agents that react to real agent behavior, and has applications in education, simulation, and the game and movie industries. We show the performance of the proposed approach in indoor and outdoor scenarios simulating human and vehicle agents. Copyright © 2009 John Wiley & Sons, Ltd. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ BFR2009 |
Serial |
1170 |
|
Permanent link to this record |
|
|
|
|
Author |
Marcel P. Lucassen; Theo Gevers; Arjan Gijsenij |
|
|
Title |
Texture Affects Color Emotion |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Color Research & Applications |
Abbreviated Journal |
CRA |
|
|
Volume |
36 |
Issue |
6 |
Pages |
426–436 |
|
|
Keywords |
color;texture;color emotion;observer variability;ranking |
|
|
Abstract |
Several studies have recorded color emotions in subjects viewing uniform color (UC) samples. We conduct an experiment to measure and model how these color emotions change when texture is added to the color samples. Using a computer monitor, our subjects arrange samples along four scales: warm–cool, masculine–feminine, hard–soft, and heavy–light. Three sample types of increasing visual complexity are used: UC, grayscale textures, and color textures (CTs). To assess the intraobserver variability, the experiment is repeated after 1 week. Our results show that texture fully determines the responses on the hard–soft scale, and plays a role of decreasing weight for the masculine–feminine, heavy–light, and warm–cool scales. Using some 25,000 observer responses, we derive color emotion functions that predict the group-averaged scale responses from the samples' color and texture parameters. For UC samples, the accuracy of our functions is significantly higher (average R2 = 0.88) than that of previously reported functions applied to our data. The functions derived for CT samples have an accuracy of R2 = 0.80. We conclude that when textured samples are used in color emotion studies, the psychological responses may be strongly affected by texture. © 2010 Wiley Periodicals, Inc. Col Res Appl, 2010 |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ LGG2011 |
Serial |
1844 |
|
Permanent link to this record |
|
|
|
|
Author |
Pau Rodriguez; Diego Velazquez; Guillem Cucurull; Josep M. Gonfaus; Xavier Roca; Seiichi Ozawa; Jordi Gonzalez |
|
|
Title |
Personality Trait Analysis in Social Networks Based on Weakly Supervised Learning of Shared Images |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Applied Sciences |
Abbreviated Journal |
APPLSCI |
|
|
Volume |
10 |
Issue |
22 |
Pages |
8170 |
|
|
Keywords |
sentiment analysis; personality trait analysis; weakly supervised learning; visual classification; OCEAN model; social networks |
|
|
Abstract |
Social networks have attracted the attention of psychologists, as the behavior of users can be used to assess personality traits, and to detect sentiments and critical mental situations such as depression or suicidal tendencies. Recently, the increasing amount of image uploads to social networks has shifted the focus from text- to image-based personality assessment. However, obtaining the ground truth requires giving personality questionnaires to the users, making the process very costly and slow, and hindering research on large populations. In this paper, we demonstrate that it is possible to predict which images are most associated with each personality trait of the OCEAN personality model, without requiring ground-truth personality labels. Namely, we present a weakly supervised framework which shows that the personality scores obtained using specific images textually associated with particular personality traits are highly correlated with scores obtained using standard text-based personality questionnaires. We trained an OCEAN trait model based on Convolutional Neural Networks (CNNs), learned from 120K pictures posted with specific textual hashtags, to infer whether the personality scores from the images uploaded by users are consistent with those scores obtained from text. In order to validate our claims, we performed a personality test on a heterogeneous group of 280 human subjects, showing that our model successfully predicts which kind of image will match a person with a given level of a trait. Looking at the results, we obtained evidence that personality is not only correlated with text, but with image content too. Interestingly, different visual patterns emerged from those images most liked by persons with a particular personality trait: for instance, pictures most associated with high conscientiousness usually contained healthy food, while low-conscientiousness pictures contained injuries, guns, and alcohol. These findings could pave the way to complementing text-based personality questionnaires with image-based questions. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; 600.119 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RVC2020b |
Serial |
3553 |
|
Permanent link to this record |
|
|
|
|
Author |
Wenjuan Gong; Zhang Yue; Wei Wang; Cheng Peng; Jordi Gonzalez |
|
|
Title |
Meta-MMFNet: Meta-Learning Based Multi-Model Fusion Network for Micro-Expression Recognition |
Type |
Journal Article |
|
Year |
2022 |
Publication |
ACM Transactions on Multimedia Computing, Communications, and Applications |
Abbreviated Journal |
ACMTMC |
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Feature Fusion; Model Fusion; Meta-Learning; Micro-Expression Recognition |
|
|
Abstract |
Despite its wide applications in criminal investigations and clinical communications with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and the class imbalance problem. In this study, we proposed a meta-learning based multi-model fusion network (Meta-MMFNet) to solve the existing problems. The proposed method is based on the metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. The frame difference and optical flow features were fused, deep features were extracted from the fused feature, and finally, within the meta-learning-based framework, a weighted-sum model fusion method was applied for micro-expression classification. Meta-MMFNet achieved better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method. |
|
|
Address |
May 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; 600.157 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GYW2022 |
Serial |
3692 |
|
Permanent link to this record |
|
|
|
|
Author |
Wenjuan Gong; Yue Zhang; Wei Wang; Peng Cheng; Jordi Gonzalez |
|
|
Title |
Meta-MMFNet: Meta-learning-based Multi-model Fusion Network for Micro-expression Recognition |
Type |
Journal Article |
|
Year |
2023 |
Publication |
ACM Transactions on Multimedia Computing, Communications, and Applications |
Abbreviated Journal |
TMCCA |
|
|
Volume |
20 |
Issue |
2 |
Pages |
1–20 |
|
|
Keywords |
|
|
|
Abstract |
Despite its wide applications in criminal investigations and clinical communications with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and the class imbalance problem. In this study, we proposed a meta-learning-based multi-model fusion network (Meta-MMFNet) to solve the existing problems. The proposed method is based on the metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. The frame difference and optical flow features were fused, deep features were extracted from the fused feature, and finally, within the meta-learning-based framework, a weighted-sum model fusion method was applied for micro-expression classification. Meta-MMFNet achieved better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ GZW2023 |
Serial |
3862 |
|
Permanent link to this record |