Author (up) M. Gomez; J. Mauri; E. Fernandez-Nofrerias; O. Rodriguez-Leor; C. Julia; Oriol Pujol; Petia Radeva
  Title Diferenciación de las estructuras del vaso coronario mediante el procesamiento de imágenes y el análisis de las diferentes texturas a partir de la ecografía intracoronaria Type Journal
  Year 2002 Publication XXXVIII Congreso Nacional de la Sociedad Española de Cardiología Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Madrid  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB;HuPBA Approved no  
  Call Number BCNPCL @ bcnpcl @ GMF2002f Serial 433  
 

 
Author (up) M. Gomez; J. Mauri; E. Fernandez-Nofrerias; O. Rodriguez-Leor; C. Julia; Petia Radeva; David Rotger; V. Valle
  Title Nuevos avances para la correlación de imágenes angiográficas y de ecografía intracoronaria. Type Miscellaneous
  Year 2002 Publication Congreso de las Enfermedades Cardiovasculares, XXXVIII Congreso de la Sociedad Española de Cardiología. Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Madrid, Spain
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number BCNPCL @ bcnpcl @ GMF2002c Serial 309  
 

 
Author (up) M. Gomez; Josefina Mauri; Eduard Fernandez-Nofrerias; Oriol Rodriguez-Leor; Carme Julia; Debora Gil; Petia Radeva
  Title Reconstrucción de un modelo espacio-temporal de la luz del vaso a partir de secuencias de ecografía intracoronaria Type Conference Article
  Year 2002 Publication XXXVIII Congreso Nacional de la Sociedad Española de Cardiología. Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes IAM;ADAS;MILAB Approved no  
  Call Number IAM @ iam @ GMF2002d Serial 1516  
 

 
Author (up) M.J. Yzuel; J. Pladellorens; Joan Serrat; A. Dupuy
  Title Application of restoration and edge detection techniques to the calculation of left ventricular volumes. Type Conference Article
  Year 1993 Publication Optics in Medicine, Biology and Environmental Research : Selected contributions to the first International Conference on Optics within Life Sciences (OWLS I) Abbreviated Journal  
  Volume Issue Pages 374-375  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ YPS1993 Serial 244  
 

 
Author (up) Maciej Wielgosz; Antonio Lopez; Muhamad Naveed Riaz
  Title CARLA-BSP: a simulated dataset with pedestrians Type Miscellaneous
  Year 2023 Publication Arxiv Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract We present a sample dataset featuring pedestrians, generated using ARCANE, a new framework for creating datasets in CARLA (0.9.13). We provide use cases for pedestrian detection, autoencoding, pose estimation, and pose lifting. We also showcase baseline results.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number Admin @ si @ WLN2023 Serial 3866  
 

 
Author (up) Maedeh Aghaei; Mariella Dimiccoli; C. Canton-Ferrer; Petia Radeva
  Title Towards social pattern characterization from egocentric photo-streams Type Journal Article
  Year 2018 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 171 Issue Pages 104-117  
  Keywords Social pattern characterization; Social signal extraction; Lifelogging; Convolutional and recurrent neural networks  
  Abstract Following the increasingly popular trend of social interaction analysis in egocentric vision, this article presents a comprehensive pipeline for automatic social pattern characterization of a wearable photo-camera user. The proposed framework relies solely on the visual analysis of egocentric photo-streams and consists of three major steps. The first step is to detect social interactions of the user, where the impact of several social signals on the task is explored. The detected social events are inspected in the second step for categorization into different social meetings. These two steps act at event-level, where each potential social event is modeled as a multi-dimensional time-series whose dimensions correspond to a set of relevant features for each task; finally, an LSTM is employed to classify the time-series. The last step of the framework is to characterize the social patterns of the user. Our goal is to quantify the duration, the diversity and the frequency of the user's social relations in various social situations. This goal is achieved by the discovery of recurrences of the same people across the whole set of social events related to the user. Experimental evaluation over EgoSocialStyle (the dataset proposed in this work) and EGO-GROUP demonstrates promising results on the task of social pattern characterization from egocentric photo-streams.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; no proj Approved no  
  Call Number Admin @ si @ ADC2018 Serial 3022  
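The final step described in the abstract above, quantifying the duration, diversity and frequency of the user's social relations, can be sketched as a per-person aggregation over detected events. The event representation below (sets of person identifiers plus a duration) is an illustrative assumption, not the authors' actual data structure:

```python
from collections import defaultdict

def characterize_social_patterns(events):
    """Aggregate per-person statistics over detected social events.

    `events` is a list of (set_of_person_ids, duration_minutes) pairs,
    a simplified stand-in for the paper's richer event features.
    """
    stats = defaultdict(lambda: {"frequency": 0, "total_duration": 0.0})
    for people, duration in events:
        for pid in people:
            stats[pid]["frequency"] += 1          # how often this person recurs
            stats[pid]["total_duration"] += duration
    diversity = len(stats)                        # number of distinct people met
    return dict(stats), diversity

# Hypothetical month of lifelogging reduced to three events:
events = [({"ana", "bob"}, 30.0), ({"ana"}, 10.0), ({"carl"}, 5.0)]
stats, diversity = characterize_social_patterns(events)
```

Here "ana" recurs in two events for 40 minutes in total, and three distinct people appear overall.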
 

 
Author (up) Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
  Title Towards social interaction detection in egocentric photo-streams Type Conference Article
  Year 2015 Publication Proceedings of SPIE, 8th International Conference on Machine Vision , ICMV 2015 Abbreviated Journal  
  Volume 9875 Issue Pages  
  Keywords  
  Abstract Detecting social interaction in videos relying solely on visual cues is a valuable task that is receiving increasing attention in recent years. In this work, we address this problem in the challenging domain of egocentric photo-streams captured by a low temporal resolution wearable camera (2fpm). The major difficulties to be handled in this context are the sparsity of observations as well as the unpredictability of camera motion and attention orientation, due to the fact that the camera is worn as part of clothing. Our method consists of four steps: multi-face localization and tracking, 3D localization, pose estimation, and analysis of f-formations. By estimating pair-to-pair interaction probabilities over the sequence, our method states the presence or absence of interaction with the camera wearer and specifies which people are more involved in the interaction. We tested our method over a dataset of 18,000 images, showing its reliability for the considered purpose.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICMV  
  Notes MILAB Approved no  
  Call Number Admin @ si @ ADR2015a Serial 2702  
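The f-formation analysis step in the abstract above tests whether two people's bird-view positions and orientations define a shared interaction space. A minimal pairwise sketch (the distance and angle thresholds are illustrative assumptions, not the paper's calibrated values):

```python
import math

def shares_o_space(p1, theta1, p2, theta2, max_dist=2.0, max_angle=math.pi / 4):
    """Crude pairwise f-formation test on a bird-view plane.

    p1, p2: (x, y) positions in metres; theta1, theta2: facing directions
    in radians. Two people are taken to form an f-formation when they stand
    close enough and each roughly faces the other.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) > max_dist:
        return False

    def facing(theta, tx, ty):
        # smallest angular difference between facing direction and the
        # direction towards the other person
        diff = abs((math.atan2(ty, tx) - theta + math.pi) % (2 * math.pi) - math.pi)
        return diff <= max_angle

    return facing(theta1, dx, dy) and facing(theta2, -dx, -dy)

# Two people one metre apart, facing each other along the x axis:
ok = shares_o_space((0, 0), 0.0, (1, 0), math.pi)
```

Aggregating such pairwise decisions over time yields the per-pair interaction probabilities the method builds on.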
 

 
Author (up) Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
  Title Multi-Face Tracking by Extended Bag-of-Tracklets in Egocentric Videos Type Miscellaneous
  Year 2015 Publication Arxiv Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Egocentric images offer a hands-free way to record daily experiences and special events, where social interactions are of special interest. A natural question that arises is how to extract and track the appearance of multiple persons in a social event captured by a wearable camera. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric sequences acquired through a wearable camera. This kind of sequence imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution (2 fpm), abrupt changes in the field of view, in illumination conditions and in the target location are very frequent. To overcome such difficulties, we propose to generate, for each detected face, a set of correspondences along the whole sequence that we call a tracklet, and to take advantage of their redundancy to deal with both false positive face detections and unreliable tracklets. Similar tracklets are grouped into the so-called extended bag-of-tracklets (eBoT), which are aimed to correspond to specific persons. Finally, a prototype tracklet is extracted for each eBoT. We validated our method over a dataset of 18,000 images from 38 egocentric sequences with 52 trackable persons and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number Admin @ si @ ADR2015b Serial 2713  
 

 
Author (up) Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
  Title Multi-face tracking by extended bag-of-tracklets in egocentric photo-streams Type Journal Article
  Year 2016 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 149 Issue Pages 146-156  
  Keywords  
  Abstract Wearable cameras offer a hands-free way to record egocentric images of daily experiences, where social events are of special interest. The first step towards detection of social events is to track the appearance of the multiple persons involved in them. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric videos acquired through a wearable camera. This kind of photo-stream imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution, abrupt changes in the field of view, in illumination conditions and in the target location are highly frequent. To overcome such difficulties, we propose a multi-face tracking method that generates a set of tracklets by finding correspondences along the whole sequence for each detected face, and takes advantage of the tracklets' redundancy to deal with unreliable ones. Similar tracklets are grouped into the so-called extended bag-of-tracklets (eBoT), which is aimed to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT, where the occurred occlusions are estimated by relying on a new measure of confidence. We validated our approach over an extensive dataset of egocentric photo-streams and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no
  Call Number Admin @ si @ ADR2016b Serial 2742  
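The eBoT idea described above, grouping redundant tracklets of the same face and keeping one prototype per group, can be sketched with a toy similarity measure. The frame-wise centre comparison and the longest-member prototype rule below are illustrative simplifications, not the paper's actual confidence measure:

```python
def tracklet_similarity(t1, t2, tol=50.0):
    """Fraction of co-occurring frames on which two tracklets agree spatially.
    Tracklets are dicts mapping frame index -> (x, y) face centre (pixels)."""
    shared = set(t1) & set(t2)
    if not shared:
        return 0.0
    close = sum(1 for f in shared
                if abs(t1[f][0] - t2[f][0]) + abs(t1[f][1] - t2[f][1]) <= tol)
    return close / len(shared)

def build_ebots(tracklets, sim_threshold=0.5):
    """Greedily group similar tracklets into extended bags-of-tracklets and
    return one prototype (here simply the longest member) per bag."""
    bags = []
    for t in tracklets:
        for bag in bags:
            if tracklet_similarity(t, bag[0]) >= sim_threshold:
                bag.append(t)
                break
        else:
            bags.append([t])          # no similar bag found: new identity
    return [max(bag, key=len) for bag in bags]

a = {0: (10, 10), 1: (12, 11), 2: (13, 12)}   # person 1, long tracklet
b = {1: (14, 12), 2: (15, 13)}                # person 1 again, shorter
c = {0: (300, 300), 1: (305, 301)}            # a different person
prototypes = build_ebots([a, b, c])
```

The two overlapping tracklets collapse into one bag, so two prototypes remain, one per person.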
 

 
Author (up) Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
  Title With whom do I interact? Social interaction detection in egocentric photo-streams Type Conference Article
  Year 2016 Publication 23rd International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Given a user wearing a low frame rate wearable camera during a day, this work aims to automatically detect the moments when the user engages in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the appearing individuals, with respect to the user, in the scene from a bird-view perspective. As a result, the interaction pattern over the sequence can be understood as a two-dimensional time series that corresponds to the temporal evolution of the distance and orientation features over time. A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results for social interaction detection in egocentric photo-streams.
  Address Cancun; Mexico; December 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICPR  
  Notes MILAB Approved no  
  Call Number Admin @ si @ ADR2016a Serial 2791
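The classification step above runs an LSTM over a two-dimensional (distance, orientation) time series. A minimal NumPy sketch of that data flow, a single LSTM cell unrolled over the sequence with a logistic readout; the weights here are random and untrained, so only the shapes and gating logic are meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 2, 8            # (distance, orientation) features -> hidden size

# One weight matrix and bias per LSTM gate: input, forget, cell, output.
W = {g: rng.normal(0, 0.1, (n_hid, n_in + n_hid)) for g in "ifco"}
b = {g: np.zeros(n_hid) for g in "ifco"}
w_out = rng.normal(0, 0.1, n_hid)             # logistic readout weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_classify(series):
    """Run an LSTM cell over a (T, 2) series of (distance, orientation)
    features and return an interaction probability from the final state."""
    h = np.zeros(n_hid)
    c = np.zeros(n_hid)
    for x_t in series:
        z = np.concatenate([x_t, h])          # input concatenated with state
        i = sigmoid(W["i"] @ z + b["i"])      # input gate
        f = sigmoid(W["f"] @ z + b["f"])      # forget gate
        g = np.tanh(W["c"] @ z + b["c"])      # candidate cell update
        o = sigmoid(W["o"] @ z + b["o"])      # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return sigmoid(w_out @ h)

# A hypothetical sequence: a person closing in and turning towards the user.
series = np.column_stack([np.linspace(3.0, 1.0, 10),
                          np.linspace(1.0, 0.1, 10)])
p = lstm_classify(series)
```

In the paper this network is trained on labelled sequences; here the output is just a well-formed probability.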
 

 
Author (up) Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
  Title With Whom Do I Interact? Detecting Social Interactions in Egocentric Photo-streams Type Conference Article
  Year 2016 Publication 23rd International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Given a user wearing a low frame rate wearable camera during a day, this work aims to automatically detect the moments when the user engages in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the appearing individuals, with respect to the user, in the scene from a bird-view perspective. As a result, the interaction pattern over the sequence can be understood as a two-dimensional time series that corresponds to the temporal evolution of the distance and orientation features over time. A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results for social interaction detection in egocentric photo-streams.
  Address Cancun; Mexico; December 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICPR  
  Notes MILAB Approved no  
  Call Number Admin @ si @ ADR2016d Serial 2835  
 

 
Author (up) Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
  Title All the people around me: face clustering in egocentric photo streams Type Conference Article
  Year 2017 Publication 24th International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages  
  Keywords face discovery; face clustering; deepmatching; bag-of-tracklets; egocentric photo-streams  
  Abstract Given an unconstrained stream of images captured by a wearable photo-camera (2fpm), we propose an unsupervised bottom-up approach for automatically clustering the appearing faces into the individual identities present in these data. The problem is challenging since images are acquired under real-world conditions; hence the visible appearance of the people in the images undergoes intensive variations. Our proposed pipeline consists of first arranging the photo-stream into events, later localizing the appearance of multiple people in them, and finally grouping the various appearances of the same person across different events. Experimental results on a dataset acquired by wearing a photo-camera during one month demonstrate the effectiveness of the proposed approach for the considered purpose (arXiv:1703.01790).
  Address Beijing; China; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICIP  
  Notes MILAB; no menciona Approved no  
  Call Number Admin @ si @ EDR2017 Serial 3025  
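The grouping step above, merging appearances of the same person across events, amounts to clustering face descriptors without knowing the number of identities in advance. A greedy threshold-based sketch over toy 2-D embeddings (the cosine-distance rule, the threshold, and the running-centroid update are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def cluster_faces(embeddings, threshold=0.5):
    """Greedy identity clustering of face embeddings: each face joins the
    first cluster whose centroid is within `threshold` cosine distance,
    otherwise it starts a new identity."""
    centroids, labels = [], []
    for e in embeddings:
        e = e / np.linalg.norm(e)             # work with unit vectors
        for k, c in enumerate(centroids):
            if 1.0 - float(e @ (c / np.linalg.norm(c))) <= threshold:
                labels.append(k)
                centroids[k] = c + e          # running (unnormalised) centroid
                break
        else:
            labels.append(len(centroids))     # unseen person: new cluster
            centroids.append(e.copy())
    return labels

# Two similar toy "faces" and one very different one:
faces = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
labels = cluster_faces(faces)
```

The first two descriptors fall into one identity and the third opens a new one, mirroring the unsupervised, bottom-up nature of the approach.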
 

 
Author (up) Maedeh Aghaei; Petia Radeva
  Title Bag-of-Tracklets for Person Tracking in Life-Logging Data Type Conference Article
  Year 2014 Publication 17th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume 269 Issue Pages 35-44  
  Keywords  
  Abstract With the increasing popularity of wearable cameras, life-logging data analysis is becoming more and more important and useful for deriving significant events out of this substantial collection of images. In this study, we introduce a new tracking method applied to visual life-logging, called bag-of-tracklets, which is based on detecting, localizing and tracking people. Given the low spatial and temporal resolution of the image data, our model generates and groups tracklets in an unsupervised framework and extracts image sequences of person appearance according to a similarity score of the bag-of-tracklets. The model output is a meaningful sequence of events expressing human appearance and tracking them in life-logging data. The achieved results prove the robustness of our model in terms of efficiency and accuracy despite the low spatial and temporal resolution of the data.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-61499-451-0 Medium  
  Area Expedition Conference CCIA  
  Notes MILAB Approved no  
  Call Number Admin @ si @ AgR2015 Serial 2607  
 

 
Author (up) Manisha Das; Deep Gupta; Petia Radeva; Ashwini M. Bakde
  Title Optimized CT-MR neurological image fusion framework using biologically inspired spiking neural model in hybrid ℓ1 - ℓ0 layer decomposition domain Type Journal Article
  Year 2021 Publication Biomedical Signal Processing and Control Abbreviated Journal BSPC  
  Volume 68 Issue Pages 102535  
  Keywords  
  Abstract Medical image fusion plays an important role in the clinical diagnosis of several critical neurological diseases by merging complementary information available in multimodal images. In this paper, a novel CT-MR neurological image fusion framework is proposed using an optimized biologically inspired feedforward neural model in two-scale hybrid ℓ1 − ℓ0 decomposition domain using gray wolf optimization to preserve the structural as well as texture information present in source CT and MR images. Initially, the source images are subjected to two-scale ℓ1 − ℓ0 decomposition with optimized parameters, giving a scale-1 detail layer, a scale-2 detail layer and a scale-2 base layer. Two detail layers at scale-1 and 2 are fused using an optimized biologically inspired neural model and weighted average scheme based on local energy and modified spatial frequency to maximize the preservation of edges and local textures, respectively, while the scale-2 base layer gets fused using choose max rule to preserve the background information. To optimize the hyper-parameters of hybrid ℓ1 − ℓ0 decomposition and biologically inspired neural model, a fitness function is evaluated based on spatial frequency and edge index of the resultant fused image obtained by adding all the fused components. The fusion performance is analyzed by conducting extensive experiments on different CT-MR neurological images. Experimental results indicate that the proposed method provides better-fused images and outperforms the other state-of-the-art fusion methods in both visual and quantitative assessments.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; no proj Approved no  
  Call Number Admin @ si @ DGR2021b Serial 3636  
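The fusion rules in the abstract above (a choose-max rule for base layers, an energy-driven rule for detail layers) can be illustrated on a crude two-scale split. The box-filter decomposition and the max-absolute detail rule below are deliberate stand-ins for the paper's hybrid ℓ1 − ℓ0 decomposition and optimized spiking neural model:

```python
import numpy as np

def two_scale_decompose(img, k=3):
    """Crude two-scale split: box-filter base layer plus residual detail
    layer (an illustrative stand-in for hybrid l1-l0 decomposition)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    base = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    return base, img - base

def fuse(ct, mr):
    """Choose-max rule for base layers, max-absolute rule for details."""
    b1, d1 = two_scale_decompose(ct)
    b2, d2 = two_scale_decompose(mr)
    base = np.maximum(b1, b2)                         # keep brighter background
    detail = np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # keep stronger edges
    return base + detail

# Tiny synthetic "CT" (a vertical edge) and flat "MR" patch:
ct = np.array([[0.2, 0.8], [0.2, 0.8]])
mr = np.array([[0.5, 0.5], [0.5, 0.5]])
fused = fuse(ct, mr)
```

The fused patch keeps the CT edge while raising the dark side towards the MR background level, which is the qualitative behaviour the rules aim for.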
 

 
Author (up) Manisha Das; Deep Gupta; Petia Radeva; Ashwini M. Bakde
  Title Multi-scale decomposition-based CT-MR neurological image fusion using optimized bio-inspired spiking neural model with meta-heuristic optimization Type Journal Article
  Year 2021 Publication International Journal of Imaging Systems and Technology Abbreviated Journal IMA  
  Volume 31 Issue 4 Pages 2170-2188  
  Keywords  
  Abstract Multi-modal medical image fusion plays an important role in clinical diagnosis and works as an assistance model for clinicians. In this paper, a computed tomography-magnetic resonance (CT-MR) image fusion model is proposed using an optimized bio-inspired spiking feedforward neural network in different decomposition domains. First, source images are decomposed into base (low-frequency) and detail (high-frequency) layer components. Low-frequency subbands are fused using texture energy measures to capture the local energy, contrast, and small edges in the fused image. High-frequency coefficients are fused using firing maps obtained by pixel-activated neural model with the optimized parameters using three different optimization techniques such as differential evolution, cuckoo search, and gray wolf optimization, individually. In the optimization model, a fitness function is computed based on the edge index of resultant fused images, which helps to extract and preserve sharp edges available in the source CT and MR images. To validate the fusion performance, a detailed comparative analysis is presented among the proposed and state-of-the-art methods in terms of quantitative and qualitative measures along with computational complexity. Experimental results show that the proposed method produces a significantly better visual quality of fused images meanwhile outperforms the existing methods.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; no menciona Approved no  
  Call Number Admin @ si @ DGR2021a Serial 3630  
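The optimization loop described in the abstract above tunes fusion parameters by maximizing an edge-based fitness on the fused image. A toy sketch of that loop, with plain random search standing in for differential evolution, cuckoo search or grey wolf optimization, and a deliberately simplified fitness and fusion rule (all names and formulas here are illustrative assumptions):

```python
import numpy as np

def edge_index(img):
    """Simple edge-strength score: mean absolute gradient magnitude
    (an illustrative stand-in for the paper's edge-index fitness)."""
    gx = np.abs(np.diff(img, axis=1)).mean()
    gy = np.abs(np.diff(img, axis=0)).mean()
    return gx + gy

def optimize_weight(ct, mr, n_iters=200, seed=0):
    """Random-search stand-in for the meta-heuristic optimizers: pick the
    detail-boost weight that maximizes the fused image's edge index."""
    rng = np.random.default_rng(seed)
    best_w, best_fit = None, -np.inf
    for _ in range(n_iters):
        w = rng.uniform(0.0, 2.0)                       # candidate parameter
        fused = np.maximum(ct, mr) + w * (ct - ct.mean())  # toy fusion rule
        fit = edge_index(fused)
        if fit > best_fit:
            best_w, best_fit = w, fit
    return best_w, best_fit

# Synthetic source patches: a CT ramp and a flat MR patch.
ct = np.linspace(0, 1, 16).reshape(4, 4)
mr = np.ones((4, 4)) * 0.5
w, fit = optimize_weight(ct, mr)
```

Population-based optimizers replace the blind sampling with guided search, but the fitness-driven selection of fusion parameters is the same idea.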