|
M. Gomez, J. Mauri, E. Fernandez-Nofrerias, O. Rodriguez-Leor, C. Julia, M. Rosales, et al. (2002). Modelo físico para la simulación de ultrasonido intravascular [A physical model for intravascular ultrasound simulation]. XXXVIII Congreso Nacional de la Sociedad Española de Cardiología.
|
|
|
M. Gomez, J. Mauri, E. Fernandez-Nofrerias, O. Rodriguez-Leor, C. Julia, O. Pujol, et al. (2002). Diferenciación de las estructuras del vaso coronario mediante el procesamiento de imágenes y el análisis de las diferentes texturas a partir de la ecografía intracoronaria [Differentiation of coronary vessel structures through image processing and analysis of the different textures in intracoronary ultrasound]. XXXVIII Congreso Nacional de la Sociedad Española de Cardiología.
|
|
|
Maedeh Aghaei, Mariella Dimiccoli, C. Canton-Ferrer, & Petia Radeva. (2018). Towards social pattern characterization from egocentric photo-streams. CVIU - Computer Vision and Image Understanding, 171, 104–117.
Abstract: Following the increasingly popular trend of social interaction analysis in egocentric vision, this article presents a comprehensive pipeline for automatic social pattern characterization of a wearable photo-camera user. The proposed framework relies solely on the visual analysis of egocentric photo-streams and consists of three major steps. The first step is to detect social interactions of the user, where the impact of several social signals on the task is explored. The detected social events are inspected in the second step for categorization into different social meetings. These two steps operate at the event level, where each potential social event is modeled as a multi-dimensional time-series whose dimensions correspond to a set of relevant features for each task; finally, an LSTM is employed to classify the time-series. The last step of the framework is to characterize the social patterns of the user. Our goal is to quantify the duration, the diversity and the frequency of the user's social relations in various social situations. This goal is achieved by discovering recurrences of the same people across the whole set of social events related to the user. Experimental evaluation over EgoSocialStyle (the dataset proposed in this work) and EGO-GROUP demonstrates promising results on the task of social pattern characterization from egocentric photo-streams.
Keywords: Social pattern characterization; Social signal extraction; Lifelogging; Convolutional and recurrent neural networks
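The event-level modeling described in the abstract (a social event as a multi-dimensional time-series fed to an LSTM classifier) can be illustrated with a minimal sketch. The feature dimensions, sequence length, and weights below are placeholders, not the paper's actual configuration; the code only shows the forward pass of a single-layer LSTM over a per-frame feature sequence, implemented directly in numpy.

```python
import numpy as np

def lstm_forward(x, Wx, Wh, b):
    """Run a single-layer LSTM over a (T, d_in) sequence; return final hidden state.

    Gate pre-activations are packed as [input, forget, cell, output] along the last axis.
    """
    T, d_in = x.shape
    d_h = Wh.shape[0]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(T):
        gates = x[t] @ Wx + h @ Wh + b        # shape (4 * d_h,)
        i, f, g, o = np.split(gates, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g                     # cell-state update
        h = o * np.tanh(c)                    # hidden-state update
    return h

# Toy "social event": a 10-step sequence of 6 per-frame social-signal features
rng = np.random.default_rng(0)
d_in, d_h, T = 6, 8, 10
Wx = rng.normal(scale=0.1, size=(d_in, 4 * d_h))
Wh = rng.normal(scale=0.1, size=(d_h, 4 * d_h))
b = np.zeros(4 * d_h)
event = rng.normal(size=(T, d_in))
h_final = lstm_forward(event, Wx, Wh, b)
# A linear read-out on h_final would then score interaction vs. no-interaction
print(h_final.shape)
```

In the actual pipeline the LSTM would be trained end-to-end; here the weights are random and only the sequence-to-vector mechanics are shown.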
|
|
|
Maedeh Aghaei, Mariella Dimiccoli, & Petia Radeva. (2016). Multi-face tracking by extended bag-of-tracklets in egocentric photo-streams. CVIU - Computer Vision and Image Understanding, 149, 146–156.
Abstract: Wearable cameras offer a hands-free way to record egocentric images of daily experiences, where social events are of special interest. The first step towards detecting social events is to track the appearance of the multiple persons involved in them. In this paper, we propose a novel method to find correspondences of multiple faces in low-temporal-resolution egocentric videos acquired through a wearable camera. This kind of photo-stream poses additional challenges for multi-face tracking compared with conventional videos. Due to the free motion of the camera and its low temporal resolution, abrupt changes in the field of view, in illumination conditions and in the target location are highly frequent. To overcome these difficulties, we propose a multi-face tracking method that generates a set of tracklets by finding correspondences along the whole sequence for each detected face, and exploits tracklet redundancy to deal with unreliable ones. Similar tracklets are grouped into the so-called extended bag-of-tracklets (eBoT), each of which is intended to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT, in which occlusions are estimated by relying on a new measure of confidence. We validated our approach over an extensive dataset of egocentric photo-streams and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.
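The grouping step described above (similar tracklets collected into bags, each bag reduced to one prototype) can be sketched in a few lines. This is not the paper's eBoT algorithm: the similarity measure (mean IoU over shared frames), the greedy grouping, and the longest-tracklet prototype rule are simplified stand-ins chosen only to show the overall structure.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def tracklet_similarity(t1, t2):
    """Mean IoU over the frames where both tracklets have a detection."""
    shared = set(t1) & set(t2)
    if not shared:
        return 0.0
    return sum(iou(t1[f], t2[f]) for f in shared) / len(shared)

def group_tracklets(tracklets, thr=0.5):
    """Greedily group tracklets into bags; each bag should cover one person.

    Returns one prototype per bag: the tracklet spanning the most frames.
    """
    bags = []
    for t in tracklets:
        for bag in bags:
            if tracklet_similarity(t, bag[0]) >= thr:
                bag.append(t)
                break
        else:
            bags.append([t])
    return [max(bag, key=len) for bag in bags]

# Tracklets as {frame_index: box}: two overlapping tracks plus one distinct face
t_a = {0: (10, 10, 50, 50), 1: (12, 11, 52, 51)}
t_b = {1: (11, 10, 51, 50), 2: (13, 12, 53, 52)}
t_c = {0: (200, 200, 240, 240)}
protos = group_tracklets([t_a, t_b, t_c])
print(len(protos))  # two distinct people, so two prototype tracklets
```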
|
|
|
Manisha Das, Deep Gupta, Petia Radeva, & Ashwini M. Bakde. (2021). Optimized CT-MR neurological image fusion framework using biologically inspired spiking neural model in hybrid ℓ1 - ℓ0 layer decomposition domain. BSPC - Biomedical Signal Processing and Control, 68, 102535.
Abstract: Medical image fusion plays an important role in the clinical diagnosis of several critical neurological diseases by merging the complementary information available in multimodal images. In this paper, a novel CT-MR neurological image fusion framework is proposed using an optimized biologically inspired feedforward neural model in a two-scale hybrid ℓ1 − ℓ0 decomposition domain, using gray wolf optimization to preserve both the structural and the texture information present in the source CT and MR images. Initially, the source images are subjected to two-scale ℓ1 − ℓ0 decomposition with optimized parameters, giving a scale-1 detail layer, a scale-2 detail layer and a scale-2 base layer. The two detail layers at scales 1 and 2 are fused using an optimized biologically inspired neural model and a weighted-average scheme based on local energy and modified spatial frequency, to maximize the preservation of edges and local textures, respectively, while the scale-2 base layer is fused using the choose-max rule to preserve the background information. To optimize the hyper-parameters of the hybrid ℓ1 − ℓ0 decomposition and the biologically inspired neural model, a fitness function is evaluated based on the spatial frequency and edge index of the resultant fused image obtained by adding all the fused components. The fusion performance is analyzed by conducting extensive experiments on different CT-MR neurological images. Experimental results indicate that the proposed method produces higher-quality fused images and outperforms other state-of-the-art fusion methods in both visual and quantitative assessments.
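The layered fusion scheme in the abstract (decompose each source into base and detail layers, fuse each layer with its own rule, then sum) can be sketched as follows. This is a deliberately simplified stand-in: a box blur replaces the optimized ℓ1 − ℓ0 decomposition, a max-absolute rule replaces the spiking neural model on the detail layer, and the inputs are random arrays rather than registered CT/MR slices. Only the choose-max rule on the base layer matches the paper directly.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box filter: a stand-in for the optimized l1-l0 smoothing."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_scale_fuse(a, b):
    """Fuse two images: detail layers by max-absolute, base layer by choose-max."""
    base_a, base_b = box_blur(a), box_blur(b)
    det_a, det_b = a - base_a, b - base_b
    fused_detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    fused_base = np.maximum(base_a, base_b)   # choose-max rule on the base layer
    return fused_base + fused_detail          # recombine layers into one image

rng = np.random.default_rng(1)
ct = rng.random((16, 16))   # stand-in for a registered CT slice
mr = rng.random((16, 16))   # stand-in for the corresponding MR slice
fused = two_scale_fuse(ct, mr)
print(fused.shape)
```

The design point the sketch illustrates is that each layer gets a fusion rule matched to its content: smooth background layers tolerate a winner-take-all rule, while detail layers need a rule that keeps the strongest edge response from either modality.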
|
|