Records
Author Francesco Ciompi; Simone Balocco; Juan Rigla; Xavier Carrillo; J. Mauri; Petia Radeva
Title Computer-Aided Detection of Intra-Coronary Stent in Intravascular Ultrasound Sequences Type Journal Article
Year 2016 Publication Medical Physics Abbreviated Journal MP
Volume 43 Issue 10 Pages
Keywords
Abstract Purpose: An intraluminal coronary stent is a metal mesh tube deployed in a stenotic artery during Percutaneous Coronary Intervention (PCI) in order to prevent acute vessel occlusion. The identification of strut locations and the definition of the stent shape are relevant for PCI planning and for patient follow-up. We present a fully automatic framework for Computer-Aided Detection (CAD) of intra-coronary stents in Intravascular Ultrasound (IVUS) image sequences. The CAD system is able to detect stent struts and estimate the stent shape.

Methods: The proposed CAD uses machine learning to provide a comprehensive interpretation of the local structure of the vessel by means of semantic classification. The output of the classification stage is then used to detect struts and to estimate the stent shape. The proposed approach is validated on a multi-centric dataset of 1,015 images from 107 IVUS sequences containing both metallic and bio-absorbable stents.

Results: The method was able to detect struts both in metallic stents, with an overall F-measure of 77.7% and a mean distance of 0.15 mm from manually annotated struts, and in bio-absorbable stents, with an overall F-measure of 77.4% and a mean distance of 0.09 mm from manually annotated struts.
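For illustration, the sketch below shows one way the reported figures (F-measure and mean distance to manually annotated struts) can be computed from detected and annotated strut positions; the 2D millimetre coordinates, the greedy matching and the 0.3 mm tolerance are assumptions of this sketch, not the evaluation protocol of the paper.

    # Illustrative sketch only: greedy matching of detected struts to manual
    # annotations, then F-measure and mean matched distance. Coordinates are
    # assumed to be 2D points in millimetres; the 0.3 mm tolerance is made up.
    import numpy as np

    def evaluate_struts(detected, annotated, tol_mm=0.3):
        detected = np.asarray(detected, dtype=float)
        annotated = np.asarray(annotated, dtype=float)
        matched, distances, tp = set(), [], 0
        for d in detected:
            if annotated.size == 0:
                break
            dists = np.linalg.norm(annotated - d, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= tol_mm and j not in matched:
                matched.add(j)
                distances.append(dists[j])
                tp += 1
        precision = tp / len(detected) if len(detected) else 0.0
        recall = tp / len(annotated) if len(annotated) else 0.0
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        mean_dist = float(np.mean(distances)) if distances else float("nan")
        return f, mean_dist

    # Toy example with made-up coordinates (mm).
    print(evaluate_struts([(1.0, 2.0), (3.1, 0.9), (5.0, 5.0)],
                          [(1.05, 2.1), (3.0, 1.0)]))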

Conclusions: The results are close to the inter-observer variability and suggest that the system has the potential to be used as a method for aiding percutaneous interventions.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ CBR2016 Serial 2819
 

 
Author Simone Balocco; Francesco Ciompi; Juan Rigla; Xavier Carrillo; J. Mauri; Petia Radeva
Title Assessment of intracoronary stent location and extension in intravascular ultrasound sequences Type Journal Article
Year 2019 Publication Medical Physics Abbreviated Journal MEDPHYS
Volume 46 Issue 2 Pages 484-493
Keywords IVUS; malapposition; stent; ultrasound
Abstract PURPOSE:

An intraluminal coronary stent is a metal scaffold deployed in a stenotic artery during percutaneous coronary intervention (PCI). In order to have an effective deployment, a stent should be optimally placed with regard to anatomical structures such as bifurcations and stenoses. Intravascular ultrasound (IVUS) is a catheter-based imaging technique generally used for PCI guiding and assessing the correct placement of the stent. A novel approach that automatically detects the boundaries and the position of the stent along the IVUS pullback is presented. Such a technique aims at optimizing the stent deployment.
METHODS:

The method requires the identification of the stable frames of the sequence and the reliable detection of stent struts. Using these data, a measure of the likelihood that a frame contains a stent is computed. Then, a robust binary representation of the presence of the stent along the pullback is obtained by applying an iterative and multiscale quantization of the signal to symbols using the Symbolic Aggregate approXimation (SAX) algorithm.
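For illustration, the sketch below applies a basic Symbolic Aggregate approXimation (SAX) to a 1D per-frame stent-likelihood signal; the number of segments, the alphabet size and the final binarisation rule are assumptions of this sketch rather than the parameters of the published pipeline.

    # Illustrative sketch only: z-normalise the per-frame likelihood, reduce it
    # with Piecewise Aggregate Approximation (PAA), and map each segment to a
    # symbol using Gaussian breakpoints, as in basic SAX.
    import numpy as np
    from scipy.stats import norm

    def sax(signal, n_segments=20, alphabet_size=4):
        x = np.asarray(signal, dtype=float)
        x = (x - x.mean()) / (x.std() + 1e-12)                  # z-normalise
        paa = np.array([s.mean() for s in np.array_split(x, n_segments)])
        breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
        return np.searchsorted(breakpoints, paa)                # 0 .. alphabet_size-1

    # Toy likelihood signal: low outside the stent, high inside it.
    likelihood = np.r_[0.2 * np.random.rand(40),
                       0.8 + 0.2 * np.random.rand(30),
                       0.2 * np.random.rand(40)]
    symbols = sax(likelihood)
    binary_profile = (symbols >= 2).astype(int)   # crude stent / no-stent decision
    print(symbols, binary_profile)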
RESULTS:

The technique was extensively validated on a set of 103 IVUS sequences of in vivo coronary arteries containing metallic and bioabsorbable stents, acquired through an international multicentric collaboration across five clinical centers. The method was able to detect the stent position in metallic stents with an overall F-measure of 86.4%, a Jaccard index of 75%, and a mean distance of 2.5 mm from manually annotated stent boundaries, and in bioabsorbable stents with an overall F-measure of 88.6%, a Jaccard index of 77.7%, and a mean distance of 1.5 mm from manually annotated stent boundaries. Additionally, a map indicating the distance between the lumen and the stent along the pullback is created in order to show the angular sectors of the sequence in which malapposition is present.
CONCLUSIONS:

Results obtained by comparing the automatic output with the manual annotations of two observers show that the method approaches the inter-observer variability. Similar performance is obtained on both metallic and bioabsorbable stents, showing the flexibility and robustness of the method.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ BCR2019 Serial 3231
 

 
Author Misael Rosales; Petia Radeva; J. Mauri; Oriol Pujol
Title Simulation Model of Intravascular Ultrasound Images Type Miscellaneous
Year 2004 Publication MICCAI 2004, Saint-Malo, France Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Springer Verlag
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB;HuPBA Approved no
Call Number BCNPCL @ bcnpcl @ RRM2004b Serial 464
 

 
Author B. Moghaddam; David Guillamet; Jordi Vitria
Title Local Appearance-Based Models using High-Order Statistics of Image Features Type Miscellaneous
Year 2003 Publication Mitsubishi Electric Research Laboratories Technical Report Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ TR2003-85 Serial 396
 

 
Author Xavier Otazu; M. Ribo; M. Peracaula; J.M. Paredes; J. Nuñez
Title Detection of superimposed periodic signals using wavelets Type Journal
Year 2002 Publication Monthly Notices of the Royal Astronomical Society, 333, 2: 365–372 (IF: 4.671) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number CAT @ cat @ ORP2002 Serial 272
 

 
Author Xavier Otazu; M. Ribo; J.M. Paredes; M. Peracaula; J. Nuñez
Title Multiresolution approach for period determination on unevenly sampled data Type Journal
Year 2004 Publication Monthly Notices of the Royal Astronomical Society, 351:251–219 (IF: 5.238) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number CAT @ cat @ ORP2004 Serial 451
 

 
Author Raul Gomez; Lluis Gomez; Jaume Gibert; Dimosthenis Karatzas
Title Self-Supervised Learning from Web Data for Multimodal Retrieval Type Book Chapter
Year 2019 Publication Multi-Modal Scene Understanding Book Abbreviated Journal
Volume Issue Pages 279-306
Keywords self-supervised learning; webly supervised learning; text embeddings; multimodal retrieval; multimodal embedding
Abstract Self-supervised learning from multimodal image and text data allows deep neural networks to learn powerful features with no need of human-annotated data. Web and social media platforms provide a virtually unlimited amount of this multimodal data. In this work we propose to exploit this freely available data to learn a multimodal image and text embedding, aiming to leverage the semantic knowledge learnt in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the proposed pipeline can learn from images with associated text without supervision and analyze the semantic structure of the learnt joint image and text embedding space. We perform a thorough analysis and performance comparison of five different state-of-the-art text embeddings in three different benchmarks. We show that the embeddings learnt with Web and social media data have competitive performance over supervised methods in the text-based image retrieval task, and we clearly outperform the state of the art on the MIRFlickr dataset when training in the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learnt embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts, that can be used for fair comparison of image-text embeddings.
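For illustration, the sketch below shows one way such webly supervised training can be set up: a small CNN is trained to regress the pre-computed, frozen text embedding of the text that accompanies each image, using a cosine loss. The backbone, the embedding dimensionality and the loss are assumptions of this sketch, not the configuration used in the chapter.

    # Illustrative sketch only: an image branch regressing frozen text embeddings.
    import torch
    import torch.nn as nn

    EMB_DIM = 300  # assumed dimensionality of the chosen text embedding

    class ImageToTextEmbedding(nn.Module):
        def __init__(self, emb_dim=EMB_DIM):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(64, emb_dim)

        def forward(self, images):
            return self.head(self.features(images))

    model = ImageToTextEmbedding()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    cosine_loss = nn.CosineEmbeddingLoss()

    # One toy training step; random tensors stand in for a real batch of images
    # and the frozen text embeddings of their associated captions.
    images = torch.randn(8, 3, 224, 224)
    text_embeddings = torch.randn(8, EMB_DIM)
    optimizer.zero_grad()
    loss = cosine_loss(model(images), text_embeddings, torch.ones(8))
    loss.backward()
    optimizer.step()
    print(float(loss))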
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.129; 601.338; 601.310 Approved no
Call Number Admin @ si @ GGG2019 Serial 3266
 

 
Author Gabriela Ramirez; Esau Villatoro; Bogdan Ionescu; Hugo Jair Escalante; Sergio Escalera; Martha Larson; Henning Muller; Isabelle Guyon
Title Overview of the Multimedia Information Processing for Personality & Social Networks Analysis Contest Type Conference Article
Year 2018 Publication Multimedia Information Processing for Personality and Social Networks Analysis (MIPPSNA 2018) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Beijing; China; August 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPRW
Notes HUPBA Approved no
Call Number Admin @ si @ RVI2018 Serial 3211
 

 
Author Bogdan Raducanu; D. Gatica-Perez
Title Inferring competitive role patterns in reality TV show through nonverbal analysis Type Journal Article
Year 2012 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 56 Issue 1 Pages 207-226
Keywords
Abstract This paper introduces a new facet of social media, namely that depicting social interaction. More concretely, we address this problem from the perspective of nonverbal behavior-based analysis of competitive meetings. For our study, we made use of “The Apprentice” reality TV show, which features a competition for a real, highly paid corporate job. Our analysis is centered around two tasks regarding a person's role in a meeting: predicting the person with the highest status, and predicting the fired candidates. We address this problem by adopting both supervised and unsupervised strategies. The current study was carried out using nonverbal audio cues. Our approach is based only on the nonverbal interaction dynamics during the meeting without relying on the spoken words. The analysis is based on two types of data: individual and relational measures. Results obtained from the analysis of a full season of the show are promising (up to 85.7% of accuracy in the first case and up to 92.8% in the second case). Our approach has been conveniently compared with the Influence Model, demonstrating its superiority.
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1380-7501 ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ RaG2012 Serial 1360
 

 
Author Palaiahnakote Shivakumara; Anjan Dutta; Chew Lim Tan; Umapada Pal
Title Multi-oriented scene text detection in video based on wavelet and angle projection boundary growing Type Journal Article
Year 2014 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 72 Issue 1 Pages 515-539
Keywords
Abstract In this paper, we address two complex issues: 1) Text frame classification and 2) Multi-oriented text detection in video text frame. We first divide a video frame into 16 blocks and propose a combination of wavelet and median-moments with k-means clustering at the block level to identify probable text blocks. For each probable text block, the method applies the same combination of feature with k-means clustering over a sliding window running through the blocks to identify potential text candidates. We introduce a new idea of symmetry on text candidates in each block based on the observation that pixel distribution in text exhibits a symmetric pattern. The method integrates all blocks containing text candidates in the frame and then all text candidates are mapped on to a Sobel edge map of the original frame to obtain text representatives. To tackle the multi-orientation problem, we present a new method called Angle Projection Boundary Growing (APBG) which is an iterative algorithm and works based on a nearest neighbor concept. APBG is then applied on the text representatives to fix the bounding box for multi-oriented text lines in the video frame. Directional information is used to eliminate false positives. Experimental results on a variety of datasets such as non-horizontal, horizontal, publicly available data (Hua’s data) and ICDAR-03 competition data (camera images) show that the proposed method outperforms existing methods proposed for video and the state of the art methods for scene text as well.
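For illustration, the sketch below mimics the block-level text-candidate selection step with a 2-class k-means; the simple gradient and variance features stand in for the wavelet and median-moment features of the paper and are assumptions of this sketch.

    # Illustrative sketch only: split a frame into 16 blocks, compute a small
    # feature vector per block, and use 2-class k-means to separate probable
    # text blocks from background blocks.
    import numpy as np
    from sklearn.cluster import KMeans

    def block_features(frame, grid=4):
        h, w = frame.shape
        feats = []
        for by in range(grid):
            for bx in range(grid):
                block = frame[by * h // grid:(by + 1) * h // grid,
                              bx * w // grid:(bx + 1) * w // grid].astype(float)
                gy, gx = np.gradient(block)
                feats.append([np.abs(gx).mean(), np.abs(gy).mean(), block.var()])
        return np.array(feats)

    # Toy frame: low-contrast noise with a high-contrast "texty" pattern
    # in the top-left block.
    frame = (np.random.rand(128, 128) * 20).astype(np.uint8)
    frame[4:28, 4:28:2] = 255

    feats = block_features(frame)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    # Take the cluster with the higher mean horizontal-gradient energy as text.
    text_cluster = int(np.argmax([feats[labels == k][:, 0].mean() for k in (0, 1)]))
    print(np.where(labels == text_cluster)[0])  # indices of probable text blocks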
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1380-7501 ISBN Medium
Area Expedition Conference
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ SDT2014 Serial 2357
 

 
Author Cesar Isaza; Joaquin Salas; Bogdan Raducanu
Title Rendering ground truth data sets to detect shadows cast by static objects in outdoors Type Journal Article
Year 2014 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 70 Issue 1 Pages 557-571
Keywords Synthetic ground truth data set; Sun position; Shadow detection; Static objects shadow detection
Abstract In our work, we are particularly interested in studying the shadows cast by static objects in outdoor environments, during daytime. To assess the accuracy of a shadow detection algorithm, we need ground truth information. The collection of such information is a very tedious task because it is a process that requires manual annotation. To overcome this severe limitation, we propose in this paper a methodology to automatically render ground truth using a virtual environment. To increase the degree of realism and usefulness of the simulated environment, we incorporate in the scenario the precise longitude, latitude and elevation of the actual location of the object, as well as the sun’s position for a given time and day. To evaluate our method, we consider a qualitative and a quantitative comparison. In the quantitative one, we analyze the shadow cast by a real object in a particular geographical location and its corresponding rendered model. To evaluate qualitatively the methodology, we use some ground truth images obtained both manually and automatically.
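For illustration, the sketch below computes an approximate sun position (elevation and azimuth) from latitude, day of year and local solar time, which is the kind of input a renderer needs to cast synthetic shadows; the declination approximation and the use of local solar time (ignoring longitude and equation-of-time corrections) are simplifying assumptions, not the model used in the paper.

    # Illustrative sketch only: approximate solar elevation and azimuth.
    import math

    def sun_position(latitude_deg, day_of_year, solar_time_h):
        phi = math.radians(latitude_deg)
        # Approximate solar declination for the given day of year.
        decl = math.radians(-23.44 * math.cos(math.radians(360.0 / 365.0 *
                                                           (day_of_year + 10))))
        hour_angle = math.radians(15.0 * (solar_time_h - 12.0))  # 15 deg per hour
        sin_el = (math.sin(phi) * math.sin(decl) +
                  math.cos(phi) * math.cos(decl) * math.cos(hour_angle))
        elevation = math.asin(sin_el)
        cos_az = ((math.sin(decl) - sin_el * math.sin(phi)) /
                  (math.cos(elevation) * math.cos(phi)))
        azimuth = math.acos(max(-1.0, min(1.0, cos_az)))   # from north, clockwise
        if hour_angle > 0:                                  # afternoon: sun west of north
            azimuth = 2 * math.pi - azimuth
        return math.degrees(elevation), math.degrees(azimuth)

    # Example: mid-latitude site, 10:00 solar time, around the summer solstice.
    print(sun_position(latitude_deg=41.4, day_of_year=172, solar_time_h=10.0))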
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1380-7501 ISBN Medium
Area Expedition Conference
Notes LAMP; Approved no
Call Number Admin @ si @ ISR2014 Serial 2229
 

 
Author Naveen Onkarappa; Angel Sappa
Title Synthetic sequences and ground-truth flow field generation for algorithm validation Type Journal Article
Year 2015 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 74 Issue 9 Pages 3121-3135
Keywords Ground-truth optical flow; Synthetic sequence; Algorithm validation
Abstract Research in computer vision is advancing through the availability of good datasets that help to improve algorithms, validate results and obtain comparative analyses. The datasets can be real or synthetic. For some computer vision problems, such as optical flow, it is not possible to obtain highly accurate ground-truth optical flow in natural outdoor real scenarios directly with any sensor, although it is possible to obtain ground-truth data of real scenarios in a laboratory setup with limited motion. In this difficult situation, computer graphics offers a viable option for creating realistic virtual scenarios. In the current work we present a framework to design virtual scenes and generate sequences as well as ground-truth flow fields. In particular, we generate a dataset containing sequences of driving scenarios. The sequences in the dataset vary in the speed of the on-board vision system, the road texture, the complexity of the vehicle motion and the presence of independently moving vehicles in the scene. This dataset enables the analysis and adaptation of existing optical flow methods, and leads to the invention of new approaches, particularly for driver assistance systems.
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1380-7501 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.055; 601.215; 600.076 Approved no
Call Number Admin @ si @ OnS2014b Serial 2472
 

 
Author Svebor Karaman; Andrew Bagdanov; Lea Landucci; Gianpaolo D'Amico; Andrea Ferracani; Daniele Pezzatini; Alberto del Bimbo
Title Personalized multimedia content delivery on an interactive table by passive observation of museum visitors Type Journal Article
Year 2016 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 75 Issue 7 Pages 3787-3811
Keywords Computer vision; Video surveillance; Cultural heritage; Multimedia museum; Personalization; Natural interaction; Passive profiling
Abstract The amount of multimedia data collected in museum databases is growing fast, while the capacity of museums to display information to visitors is acutely limited by physical space. Museums must seek the perfect balance of information given on individual pieces in order to provide sufficient information to aid visitor understanding while maintaining sparse usage of the walls and guaranteeing high appreciation of the exhibit. Moreover, museums often target the interests of average visitors instead of the entire spectrum of different interests each individual visitor might have. Finally, visiting a museum should not be an experience contained in the physical space of the museum but a door opened onto a broader context of related artworks, authors, artistic trends, etc. In this paper we describe the MNEMOSYNE system that attempts to address these issues through a new multimedia museum experience. Based on passive observation, the system builds a profile of the artworks of interest for each visitor. These profiles of interest are then used to drive an interactive table that personalizes multimedia content delivery. The natural user interface on the interactive table uses the visitor’s profile, an ontology of museum content and a recommendation system to personalize exploration of multimedia content. At the end of their visit, the visitor can take home a personalized summary of their visit on a custom mobile application. In this article we describe in detail each component of our approach as well as the first field trials of our prototype system built and deployed at our permanent exhibition space at LeMurate (http://www.lemurate.comune.fi.it/lemurate/) in Florence together with the first results of the evaluation process during the official installation in the National Museum of Bargello (http://www.uffizi.firenze.it/musei/?m=bargello).
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1380-7501 ISBN Medium
Area Expedition Conference
Notes LAMP; 601.240; 600.079 Approved no
Call Number Admin @ si @ KBL2016 Serial 2520
 

 
Author Anastasios Doulamis; Nikolaos Doulamis; Marco Bertini; Jordi Gonzalez; Thomas B. Moeslund
Title Introduction to the Special Issue on the Analysis and Retrieval of Events/Actions and Workflows in Video Streams Type Journal Article
Year 2016 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 75 Issue 22 Pages 14985-14990
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; HUPBA Approved no
Call Number Admin @ si @ DDB2016 Serial 2934
 

 
Author Marçal Rusiñol; J. Chazalon; Katerine Diaz
Title Augmented Songbook: an Augmented Reality Educational Application for Raising Music Awareness Type Journal Article
Year 2018 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 77 Issue 11 Pages 13773-13798
Keywords Augmented reality; Document image matching; Educational applications
Abstract This paper presents the development of an Augmented Reality mobile application which aims at raising young children's awareness of abstract concepts of music. Such concepts are, for instance, musical notation or the idea of rhythm. Recent studies in Augmented Reality for education suggest that such technologies have multiple benefits for students, including younger ones. As document image acquisition and processing gain maturity on mobile platforms, we explore how it is possible to build a markerless and real-time application to augment physical documents with didactic animations and interactive virtual content. Given a standard image processing pipeline, we compare the performance of different local descriptors at two key stages of the process. Results suggest alternatives to the SIFT local descriptors, regarding result quality and computational efficiency, both for document model identification and perspective transform estimation. All experiments are performed on an original and public dataset we introduce here.
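For illustration, the sketch below runs the two stages discussed above with OpenCV: local-descriptor matching for document model identification and a RANSAC homography for perspective-transform estimation. ORB is used here as one of the lighter alternatives to SIFT, and the synthetic test images, match count threshold and ratio-test value are assumptions of this sketch.

    # Illustrative sketch only: identify a document model by descriptor matching
    # and estimate the perspective transform with RANSAC.
    import cv2
    import numpy as np

    # Synthetic stand-ins for a reference page and a camera frame: a random
    # high-contrast "page" and a perspective-warped view of it.
    rng = np.random.default_rng(0)
    model = (rng.random((480, 360)) > 0.5).astype(np.uint8) * 255
    H_true = np.array([[0.9, 0.05, 30.0], [0.02, 0.95, 20.0], [1e-4, 1e-4, 1.0]])
    frame = cv2.warpPerspective(model, H_true, (480, 480))

    orb = cv2.ORB_create(nfeatures=1000)
    kp_m, des_m = orb.detectAndCompute(model, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_m, des_f, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # ratio test

    if len(good) >= 10:   # enough consistent matches: page considered identified
        src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        print("page recognised; estimated homography:\n", H)
    else:
        print("page not recognised")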
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; ADAS; 600.084; 600.121; 600.118; 600.129 Approved no
Call Number Admin @ si @ RCD2018 Serial 2996