Author |
Ivet Rafegas; Maria Vanrell; Luis A. Alexandre; G. Arias |
|
|
Title |
Understanding trained CNNs by indexing neuron selectivity |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
136 |
Issue |
|
Pages |
318-325 |
|
|
Keywords |
|
|
|
Abstract |
The impressive performance of Convolutional Neural Networks (CNNs) when solving different vision problems is shadowed by their black-box nature and our consequent lack of understanding of the representations they build and how these representations are organized. To help in understanding these issues, we propose to describe the activity of individual neurons by their Neuron Feature visualization and quantify their inherent selectivity with two specific properties. We explore selectivity indexes for: an image feature (color); and an image label (class membership). Our contribution is a framework to seek or classify neurons by indexing on these selectivity properties. It helps to find color-selective neurons, such as a red-mushroom neuron in layer Conv4, or class-selective neurons, such as dog-face neurons in layer Conv5 of VGG-M, and establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically show how features and classes are represented through layers, at a time when the size of trained nets is growing and automatic tools to index neurons can be helpful. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC; 600.087; 600.140; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RVL2019 |
Serial |
3310 |
|
|
|
|
|
Author |
Jordi Gonzalez; Dani Rowe; Javier Varona; Xavier Roca |
|
|
Title |
Understanding Dynamic Scenes based on Human Sequence Evaluation |
Type |
Journal Article |
|
Year |
2009 |
Publication |
Image and Vision Computing |
Abbreviated Journal |
IMAVIS |
|
|
Volume |
27 |
Issue |
10 |
Pages |
1433-1444 |
|
|
Keywords |
Image Sequence Evaluation; High-level processing of monitored scenes; Segmentation and tracking in complex scenes; Event recognition in dynamic scenes; Human motion understanding; Human behaviour interpretation; Natural-language text generation; Realistic demonstrators |
|
|
Abstract |
In this paper, a Cognitive Vision System (CVS) is presented which explains the human behaviour observed in monitored scenes using natural-language texts. This cognitive analysis of human movements recorded in image sequences is here referred to as Human Sequence Evaluation (HSE), which defines a set of transformation modules involved in the automatic generation of semantic descriptions from pixel values. In essence, the trajectories of human agents are obtained to generate textual interpretations of their motion, and also to infer the conceptual relationships of each agent w.r.t. its environment. For this purpose, a human behaviour model based on Situation Graph Trees (SGTs) is considered, which permits both bottom-up (hypothesis generation) and top-down (hypothesis refinement) analysis of dynamic scenes. The resulting system prototype interprets different kinds of behaviour and reports textual descriptions in multiple languages. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ GRV2009 |
Serial |
1211 |
|
|
|
|
|
Author |
Eduardo Aguilar; Petia Radeva |
|
|
Title |
Uncertainty-aware integration of local and flat classifiers for food recognition |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
136 |
Issue |
|
Pages |
237-243 |
|
|
Keywords |
|
|
|
Abstract |
Food image recognition has recently attracted the attention of many researchers due to the challenging problem it poses, the ease of collecting food images, and its numerous applications to health and leisure. In real applications, it is necessary to analyze and recognize thousands of different foods. For this purpose, we propose a novel prediction scheme based on a class hierarchy that considers local classifiers in addition to a flat classifier. In order to decide which approach to use, we define different criteria that take into account both the analysis of the epistemic uncertainty estimated from the ‘children’ classifiers and the prediction from the ‘parent’ classifier. We evaluate our proposal using three uncertainty estimation methods, tested on two public food datasets. The results show that the proposed method reduces parent-child error propagation in hierarchical schemes and improves classification results compared to the single flat classifier, while maintaining good performance regardless of the uncertainty estimation method chosen. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ AgR2020 |
Serial |
3525 |
|
|
|
|
|
Author |
Marta Diez-Ferrer; Arturo Morales; Rosa Lopez Lisbona; Noelia Cubero; Cristian Tebe; Susana Padrones; Samantha Aso; Jordi Dorca; Debora Gil; Antoni Rosell |
|
|
Title |
Ultrathin Bronchoscopy with and without Virtual Bronchoscopic Navigation: Influence of Segmentation on Diagnostic Yield |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Respiration |
Abbreviated Journal |
RES |
|
|
Volume |
97 |
Issue |
3 |
Pages |
252-258 |
|
|
Keywords |
Lung cancer; Peripheral lung lesion; Diagnosis; Bronchoscopy; Ultrathin bronchoscopy; Virtual bronchoscopic navigation |
|
|
Abstract |
Background: Bronchoscopy is a safe technique for diagnosing peripheral pulmonary lesions (PPLs), and virtual bronchoscopic navigation (VBN) helps guide the bronchoscope to PPLs. Objectives: We aimed to compare the diagnostic yield of VBN-guided and unguided ultrathin bronchoscopy (UTB) and explore clinical and technical factors associated with better results. We developed a diagnostic algorithm for deciding whether to use VBN to reach PPLs or choose an alternative diagnostic approach. Methods: We compared diagnostic yield between VBN-UTB (prospective cases) and unguided UTB (historical controls) and analyzed the VBN-UTB subgroup to identify clinical and technical variables that could predict the success of VBN-UTB. Results: Fifty-five cases and 110 controls were included. The overall diagnostic yield did not differ between the VBN-guided and unguided arms (47 and 40%, respectively; p = 0.354). Although the yield was slightly higher for PPLs ≤20 mm in the VBN-UTB arm, the difference was not significant (p = 0.069). No other clinical characteristics were associated with a higher yield in a subgroup analysis, but an 85% diagnostic yield was observed when segmentation was optimal and the PPL was endobronchial (vs. 30% when segmentation was suboptimal and 20% when segmentation was optimal but the PPL was extrabronchial). Conclusions: VBN-guided UTB is not superior to unguided UTB. A greater impact of VBN-guided over unguided UTB is highly dependent on both segmentation quality and an endobronchial location of the PPL. Segmentation quality should be considered before starting a procedure, when an alternative technique that may improve yield can be chosen, saving time and resources. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; 600.145; 600.139 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DML2019 |
Serial |
3134 |
|
|
|
|
|
Author |
Dorota Kaminska; Kadir Aktas; Davit Rizhinashvili; Danila Kuklyanov; Abdallah Hussein Sham; Sergio Escalera; Kamal Nasrollahi; Thomas B. Moeslund; Gholamreza Anbarjafari |
|
|
Title |
Two-stage Recognition and Beyond for Compound Facial Emotion Recognition |
Type |
Journal Article |
|
Year |
2021 |
Publication |
Electronics |
Abbreviated Journal |
ELEC |
|
|
Volume |
10 |
Issue |
22 |
Pages |
2847 |
|
|
Keywords |
compound emotion recognition; facial expression recognition; dominant and complementary emotion recognition; deep learning |
|
|
Abstract |
Facial emotion recognition is an inherently complex problem due to individual diversity in facial features and racial and cultural differences. Moreover, facial expressions typically reflect a mixture of a person's emotional states, which can be expressed as compound emotions. Compound facial emotion recognition makes the problem even more difficult because the discrimination between dominant and complementary emotions is usually weak. To address compound emotion recognition, we have created a database that includes 31,250 facial images with different emotions of 115 subjects whose gender distribution is almost uniform. In addition, we have organized a competition based on the proposed dataset, held at the FG 2020 workshop. This paper analyzes the winner's approach: a two-stage recognition method (1st stage, coarse recognition; 2nd stage, fine recognition), which enhances the classification of symmetrical emotion labels. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ KAR2021 |
Serial |
3642 |
|
|
|
|
|
Author |
A. Auge; Javier Varona; Juan J. Villanueva |
|
|
Title |
Tumour Segmentation in Mammographies with Neural Networks. Application to Tumoural Volume Approximation. |
Type |
Journal Article |
|
Year |
1997 |
Publication |
Proceedings of the VII NSPRIA, Vol. II, CVC–UAB |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
ISE @ ise @ AVV1997 |
Serial |
208 |
|
|
|
|
|
Author |
Shiqi Yang; Yaxing Wang; Joost Van de Weijer; Luis Herranz; Shangling Jui; Jian Yang |
|
|
Title |
Trust Your Good Friends: Source-Free Domain Adaptation by Reciprocal Neighborhood Clustering |
Type |
Journal Article |
|
Year |
2023 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
|
|
Volume |
45 |
Issue |
12 |
Pages |
15883-15895 |
|
|
Keywords |
|
|
|
Abstract |
Domain adaptation (DA) aims to alleviate the domain shift between source domain and target domain. Most DA methods require access to the source data, but often that is not possible (e.g., due to data privacy or intellectual property). In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data. Our method is based on the observation that target data, which might not align with the source domain classifier, still forms clear clusters. We capture this intrinsic structure by defining local affinity of the target data, and encourage label consistency among data with high local affinity. We observe that higher affinity should be assigned to reciprocal neighbors. To aggregate information with more context, we consider expanded neighborhoods with small affinity values. Furthermore, we consider the density around each target sample, which can alleviate the negative impact of potential outliers. In the experimental results we verify that the inherent structure of the target features is an important source of information for domain adaptation. We demonstrate that this local structure can be efficiently captured by considering the local neighbors, the reciprocal neighbors, and the expanded neighborhood. Finally, we achieve state-of-the-art performance on several 2D image and 3D point cloud recognition datasets. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; MACO |
Approved |
no |
|
|
Call Number |
Admin @ si @ YWW2023 |
Serial |
3889 |
|
|
|
|
|
Author |
Antonio Lopez; Gabriel Villalonga; Laura Sellart; German Ros; David Vazquez; Jiaolong Xu; Javier Marin; Azadeh S. Mozafari |
|
|
Title |
Training my car to see using virtual worlds |
Type |
Journal Article |
|
Year |
2017 |
Publication |
Image and Vision Computing |
Abbreviated Journal |
IMAVIS |
|
|
Volume |
38 |
Issue |
|
Pages |
102-118 |
|
|
Keywords |
|
|
|
Abstract |
Computer vision technologies are at the core of different advanced driver assistance systems (ADAS) and will play a key role in oncoming autonomous vehicles too. One of the main challenges for such technologies is to perceive the driving environment, i.e. to detect and track relevant driving information in a reliable manner (e.g. pedestrians in the vehicle route, free space to drive through). Nowadays it is clear that machine learning techniques are essential for developing such a visual perception for driving. In particular, the standard working pipeline consists of collecting data (i.e. on-board images), manually annotating the data (e.g. drawing bounding boxes around pedestrians), learning a discriminative data representation taking advantage of such annotations (e.g. a deformable part-based model, a deep convolutional neural network), and then assessing the reliability of such representation with the acquired data. In the last two decades most of the research efforts have focused on representation learning (first, designing descriptors and learning classifiers; later, doing it end-to-end). Hence, collecting data and, especially, annotating it, is essential for learning good representations. While this has been the case from the very beginning, it became a serious issue only after the disruptive appearance of deep convolutional neural networks, due to their data-hungry nature. In this context, the problem is that manual data annotation is tiresome work that is prone to errors. Accordingly, in the late 2000s we initiated a research line consisting of training visual models using photo-realistic computer graphics, especially focusing on assisted and autonomous driving. In this paper, we summarize this work and show how it has become a growing trend with increasing acceptance. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ LVS2017 |
Serial |
2985 |
|
|
|
|
|
Author |
Xavier Baro; Sergio Escalera; Jordi Vitria; Oriol Pujol; Petia Radeva |
|
|
Title |
Traffic Sign Recognition Using Evolutionary Adaboost Detection and Forest-ECOC Classification |
Type |
Journal Article |
|
Year |
2009 |
Publication |
IEEE Transactions on Intelligent Transportation Systems |
Abbreviated Journal |
TITS |
|
|
Volume |
10 |
Issue |
1 |
Pages |
113-126 |
|
|
Keywords |
|
|
|
Abstract |
The high variability of sign appearance in uncontrolled environments has made the detection and classification of road signs a challenging problem in computer vision. In this paper, we introduce a novel approach for the detection and classification of traffic signs. Detection is based on a cascade of boosted detectors, trained with a novel evolutionary version of Adaboost, which allows the use of large feature spaces. Classification is defined as a multiclass categorization problem. A battery of classifiers is trained to split classes in an Error-Correcting Output Code (ECOC) framework. We propose an ECOC design through a forest of optimal tree structures that are embedded in the ECOC matrix. The novel system offers high performance and better accuracy than state-of-the-art strategies, and is potentially more robust to noise, affine deformation, partial occlusions, and reduced illumination. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1524-9050 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MILAB;HuPBA;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ BEV2008 |
Serial |
1116 |
|
|
|
|
|
Author |
Sergio Escalera; Oriol Pujol; Petia Radeva |
|
|
Title |
Traffic sign recognition system with β-correction |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Machine Vision and Applications |
Abbreviated Journal |
MVA |
|
|
Volume |
21 |
Issue |
2 |
Pages |
99-111 |
|
|
Keywords |
|
|
|
Abstract |
Traffic sign classification represents a classical application of multi-object recognition processing in uncontrolled adverse environments. Lack of visibility, illumination changes, and partial occlusions are just a few of the problems. In this paper, we introduce a novel system for multi-class classification of traffic signs based on error-correcting output codes (ECOC). ECOC is based on an ensemble of binary classifiers that are trained on bi-partitions of classes. We classify a wide set of traffic sign types using robust error-correcting codings. Moreover, we introduce the novel β-correction decoding strategy that outperforms the state-of-the-art decoding techniques, classifying a high number of classes with great success. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer-Verlag |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0932-8092 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB;HUPBA |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ EPR2010a |
Serial |
1276 |
|
|
|
|
|
Author |
David Geronimo; Joan Serrat; Antonio Lopez; Ramon Baldrich |
|
|
Title |
Traffic sign recognition for computer vision project-based learning |
Type |
Journal Article |
|
Year |
2013 |
Publication |
IEEE Transactions on Education |
Abbreviated Journal |
T-EDUC |
|
|
Volume |
56 |
Issue |
3 |
Pages |
364-371 |
|
|
Keywords |
traffic signs |
|
|
Abstract |
This paper presents a graduate course project on computer vision. The aim of the project is to detect and recognize traffic signs in video sequences recorded by an on-board vehicle camera. This is a demanding problem, given that traffic sign recognition is one of the most challenging problems for driving assistance systems. Equally, it is motivating for the students given that it is a real-life problem. Furthermore, it gives them the opportunity to appreciate the difficulty of real-world vision problems and to assess the extent to which this problem can be solved by modern computer vision and pattern classification techniques taught in the classroom. The learning objectives of the course are introduced, as are the constraints imposed on its design, such as the diversity of students' background and the amount of time they and their instructors dedicate to the course. The paper also describes the course contents, schedule, and how the project-based learning approach is applied. The outcomes of the course are discussed, including both the students' marks and their personal feedback. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0018-9359 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ GSL2013; ADAS @ adas @ |
Serial |
2160 |
|
|
|
|
|
Author |
Marc Bolaños; Mariella Dimiccoli; Petia Radeva |
|
|
Title |
Towards Storytelling from Visual Lifelogging: An Overview |
Type |
Journal Article |
|
Year |
2017 |
Publication |
IEEE Transactions on Human-Machine Systems |
Abbreviated Journal |
THMS |
|
|
Volume |
47 |
Issue |
1 |
Pages |
77-90 |
|
|
Keywords |
|
|
|
Abstract |
Visual lifelogging consists of acquiring images that capture the daily experiences of the user by wearing a camera over a long period of time. The pictures taken offer considerable potential for knowledge mining concerning how people live their lives; hence, they open up new opportunities for many potential applications in fields including healthcare, security, leisure and the quantified self. However, automatically building a story from a huge collection of unstructured egocentric data presents major challenges. This paper provides a thorough review of advances made so far in egocentric data analysis, and in view of the current state of the art, indicates new lines of research to move us towards storytelling from visual lifelogging. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; 601.235 |
Approved |
no |
|
|
Call Number |
Admin @ si @ BDR2017 |
Serial |
2712 |
|
|
|
|
|
Author |
Maedeh Aghaei; Mariella Dimiccoli; C. Canton-Ferrer; Petia Radeva |
|
|
Title |
Towards social pattern characterization from egocentric photo-streams |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Computer Vision and Image Understanding |
Abbreviated Journal |
CVIU |
|
|
Volume |
171 |
Issue |
|
Pages |
104-117 |
|
|
Keywords |
Social pattern characterization; Social signal extraction; Lifelogging; Convolutional and recurrent neural networks |
|
|
Abstract |
Following the increasingly popular trend of social interaction analysis in egocentric vision, this article presents a comprehensive pipeline for automatic social pattern characterization of a wearable photo-camera user. The proposed framework relies solely on the visual analysis of egocentric photo-streams and consists of three major steps. The first step is to detect social interactions of the user, where the impact of several social signals on the task is explored. The detected social events are inspected in the second step for categorization into different social meetings. These two steps act at the event level, where each potential social event is modeled as a multi-dimensional time-series whose dimensions correspond to a set of relevant features for each task; finally, an LSTM is employed to classify the time-series. The last step of the framework is to characterize the social patterns of the user. Our goal is to quantify the duration, the diversity and the frequency of the user's social relations in various social situations. This goal is achieved by discovering recurrences of the same people across the whole set of social events related to the user. Experimental evaluation over EgoSocialStyle (the dataset proposed in this work) and EGO-GROUP demonstrates promising results on the task of social pattern characterization from egocentric photo-streams. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ ADC2018 |
Serial |
3022 |
|
|
|
|
|
Author |
Carles Sanchez; Jorge Bernal; F. Javier Sanchez; Antoni Rosell; Marta Diez-Ferrer; Debora Gil |
|
|
Title |
Towards On-line Quantification of Tracheal Stenosis from Videobronchoscopy |
Type |
Journal Article |
|
Year |
2015 |
Publication |
International Journal of Computer Assisted Radiology and Surgery |
Abbreviated Journal |
IJCAR |
|
|
Volume |
10 |
Issue |
6 |
Pages |
935-945 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; MV; 600.075 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBS2015a |
Serial |
2611 |
|
|
|
|
|
Author |
Alina Matei; Andreea Glavan; Petia Radeva; Estefania Talavera |
|
|
Title |
Towards Eating Habits Discovery in Egocentric Photo-Streams |
Type |
Journal Article |
|
Year |
2021 |
Publication |
IEEE Access |
Abbreviated Journal |
ACCESS |
|
|
Volume |
9 |
Issue |
|
Pages |
17495-17506 |
|
|
Keywords |
|
|
|
Abstract |
Eating habits are learned throughout the early stages of our lives. However, it is not easy to be aware of how our food-related routine affects our healthy living. In this work, we address the unsupervised discovery of nutritional habits from egocentric photo-streams. We build a food-related behavioral pattern discovery model, which discloses nutritional routines from the activities performed throughout the days. To do so, we rely on Dynamic Time Warping for the evaluation of similarity among the collected days. Within this framework, we present a simple but robust and fast novel classification pipeline that outperforms the state of the art on food-related image classification, with a weighted accuracy and F-score of 70% and 63%, respectively. Later, using the Isolation Forest method, we identify days composed of nutritional activities that do not describe the person's habits as anomalies in the daily life of the user. Furthermore, we show an application for the identification of food-related scenes when the camera wearer eats in isolation. Results show the good performance of the proposed model and its relevance for visualizing the nutritional habits of individuals. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ MGR2021 |
Serial |
3637 |
|