Author T. Widemann; Xavier Otazu
  Title Titania's radius and an upper limit on its atmosphere from the September 8, 2001 stellar occultation Type Journal Article
  Year 2009 Publication International Journal of Solar System Studies Abbreviated Journal  
  Volume 199 Issue 2 Pages 458–476  
  Keywords Occultations; Uranus, satellites; Satellites, shapes; Satellites, dynamics; Ices; Satellites, atmospheres  
  Abstract On September 8, 2001 around 2 h UT, the largest uranian moon, Titania, occulted Hipparcos star 106829 (alias SAO 164538, a V=7.2, K0 III star). This was the first-ever observed occultation by this satellite, a rare event as Titania subtends only 0.11 arcsec on the sky. The star's unusual brightness allowed many observers, both amateurs and professionals, to monitor this unique event, providing fifty-seven occultation chords over three continents, all reported here. Selecting the best 27 occultation chords, and assuming a circular limb, we derive Titania's radius: View the MathML source (1-σ error bar). This implies a density of View the MathML source using the value View the MathML source derived by Taylor [Taylor, D.B., 1998. Astron. Astrophys. 330, 362–374]. We do not detect any significant difference between equatorial and polar radii, in the limit View the MathML source, in agreement with Voyager limb image retrieval during the 1986 flyby. Titania's offset with respect to the DE405 + URA027 (based on GUST86 theory) ephemeris is derived: ΔαTcos(δT)=−108±13 mas and ΔδT=−62±7 mas (ICRF J2000.0 system). Most of this offset is attributable to an offset of Uranus' barycenter with respect to DE405, which we estimate to be View the MathML source and ΔδU=−85±25 mas at the moment of occultation. This offset is confirmed by another Titania stellar occultation observed on August 1st, 2003, which provides an offset of ΔαTcos(δT)=−127±20 mas and ΔδT=−97±13 mas for the satellite. The combined ingress and egress data do not show any significant hint of atmospheric refraction, allowing us to set surface pressure limits at the level of 10–20 nbar. More specifically, we find an upper limit of 13 nbar (1-σ level) at 70 K and 17 nbar at 80 K, for a putative isothermal CO2 atmosphere. We also provide an upper limit of 8 nbar for a possible CH4 atmosphere, and 22 nbar for pure N2, again at the 1-σ level. 
We finally constrain the stellar size using the time-resolved star disappearance and reappearance at ingress and egress. We find an angular diameter of 0.54±0.03 mas (corresponding to View the MathML source projected at Titania). With a distance of 170±25 parsecs, this corresponds to a radius of 9.8±0.2 solar radii for HIP 106829, typical of a K0 III giant.  
  Address  
  Corporate Author Thesis  
  Publisher ELSEVIER Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0019-1035 ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number CAT @ cat @ Wid2009 Serial 1052  
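The radius derivation in the record above reduces to fitting a circle to the occultation chord endpoints. A minimal sketch of such a fit, using the algebraic Kåsa least-squares method on synthetic limb points (the radius and center below are arbitrary example values, not the paper's results):

```python
import numpy as np

def fit_circle(x, y):
    """Kasa least-squares circle fit: find center (a, b) and radius r from the
    linearized circle equation x^2 + y^2 = 2ax + 2by + (r^2 - a^2 - b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# Synthetic limb points on a circle of radius 788 km centered at (10, -5):
# arbitrary illustrative values, standing in for projected chord endpoints
theta = np.linspace(0.0, 2.0 * np.pi, 54, endpoint=False)
x = 788.0 * np.cos(theta) + 10.0
y = 788.0 * np.sin(theta) - 5.0
a, b, r = fit_circle(x, y)
```

With real data, the points would come from the timed star disappearances and reappearances projected into the sky plane, and the scatter of the chords would set the quoted 1-σ error bar.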
 

 
Author Joan Marc Llargues Asensio; Juan Peralta; Raul Arrabales; Manuel Gonzalez Bedia; Paulo Cortez; Antonio Lopez
  Title Artificial Intelligence Approaches for the Generation and Assessment of Believable Human-Like Behaviour in Virtual Characters Type Journal Article
  Year 2014 Publication Expert Systems With Applications Abbreviated Journal EXSY  
  Volume 41 Issue 16 Pages 7281–7290  
  Keywords Turing test; Human-like behaviour; Believability; Non-player characters; Cognitive architectures; Genetic algorithm; Artificial neural networks  
  Abstract Having artificial agents autonomously produce human-like behaviour is one of the most ambitious original goals of Artificial Intelligence (AI) and remains an open problem today. The imitation game originally proposed by Turing constitutes a very effective method to prove the indistinguishability of an artificial agent. The behaviour of an agent is said to be indistinguishable from that of a human when observers (the so-called judges in the Turing test) cannot tell apart humans and non-human agents. Different environments, testing protocols, scopes and problem domains can be established to develop limited versions or variants of the original Turing test. In this paper we use a specific version of the Turing test, based on the international BotPrize competition, built in a First-Person Shooter video game, where both human players and non-player characters interact in complex virtual environments. Based on our past experience both in the BotPrize competition and in other robotics and computer game AI applications, we have developed three new, more advanced controllers for believable agents: two based on a combination of the CERA–CRANIUM and SOAR cognitive architectures and another based on ADANN, a system for the automatic evolution and adaptation of artificial neural networks. These new agents have been put to the test jointly with CCBot3, the winner of the BotPrize 2010 competition (Arrabales et al., 2012), and have shown a significant improvement in the humanness ratio. Additionally, we have subjected all these bots to both first-person believability assessment (the original BotPrize judging protocol) and third-person believability assessment, demonstrating that the active involvement of the judge has a great impact on the recognition of human-like behaviour.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.055; 600.057; 600.076 Approved no  
  Call Number Admin @ si @ LPA2014 Serial 2500  
 

 
Author Carles Fernandez; Pau Baiget; Xavier Roca; Jordi Gonzalez
  Title Determining the Best Suited Semantic Events for Cognitive Surveillance Type Journal Article
  Year 2011 Publication Expert Systems with Applications Abbreviated Journal EXSY  
  Volume 38 Issue 4 Pages 4068–4079  
  Keywords Cognitive surveillance; Event modeling; Content-based video retrieval; Ontologies; Advanced user interfaces  
  Abstract State-of-the-art systems for cognitive surveillance identify and describe complex events in selected domains, thus providing end-users with tools to easily access the contents of massive video footage. Nevertheless, as the complexity of events increases in semantics and the types of indoor/outdoor scenarios diversify, it becomes difficult to assess which events better describe the scene, and how to model them at a pixel level to fulfill natural language requests. We present an ontology-based methodology that guides the identification, step-by-step modeling, and generalization of the events most relevant to a specific domain. Our approach considers three steps: (1) end-users provide textual evidence from surveilled video sequences; (2) transcriptions are analyzed top-down to build the knowledge bases for event description; and (3) the obtained models are used to generalize event detection to different image sequences from the surveillance domain. This framework produces user-oriented knowledge that improves on existing advanced interfaces for video indexing and retrieval by determining the events best suited for video understanding according to end-users. We have conducted experiments with outdoor and indoor scenes showing thefts, chases, and vandalism, demonstrating the feasibility and generalization of this proposal.  
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ FBR2011a Serial 1722  
 

 
Author Kaida Xiao; Chenyang Fu; Dimosthenis Karatzas; Sophie Wuerger
  Title Visual Gamma Correction for LCD Displays Type Journal Article
  Year 2011 Publication Displays Abbreviated Journal DIS  
  Volume 32 Issue 1 Pages 17-23  
  Keywords Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration  
  Abstract An improved method for visual gamma correction is developed for LCD displays to increase the accuracy of digital colour reproduction. Rather than utilising a photometric measurement device, we use observers' visual luminance judgements for gamma correction. Eight half-tone patterns were designed to generate relative luminances from 1/9 to 8/9 for each colour channel. A psychophysical experiment was conducted on an LCD display to find the digital signals corresponding to each relative luminance by visually matching the half-tone background to a uniform colour patch. Both inter- and intra-observer variability for the eight luminance matches in each channel were assessed, and the luminance matches proved to be consistent across observers (ΔE00 < 3.5) and repeatable (ΔE00 < 2.2). Based on the individual observer judgements, the display opto-electronic transfer function (OETF) was estimated by using either a 3rd-order polynomial regression or linear interpolation for each colour channel. The performance of the proposed method is evaluated by predicting the CIE tristimulus values of a set of coloured patches (using the observer-based OETFs) and comparing them to the expected CIE tristimulus values (using the OETF obtained from spectro-radiometric luminance measurements). The resulting colour differences range from 2 to 4.6 ΔE00. We conclude that this observer-based method of visual gamma correction is useful for estimating the OETF of LCD displays. Its major advantage is that no particular functional relationship between digital inputs and luminance outputs has to be assumed.  
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ XFK2011 Serial 1815  
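The OETF estimation step above fits a 3rd-order polynomial through the eight visual luminance matches of each channel. A minimal NumPy sketch; the digital-input values below are made up (a toy gamma-2.2 display), not measured observer data:

```python
import numpy as np

# Relative luminances 1/9 .. 8/9 targeted by the eight half-tone patterns
target_lum = np.arange(1, 9) / 9.0

# Hypothetical digital inputs (0-1) an observer matched to each luminance,
# generated here from an assumed gamma-2.2 response for illustration
digital = target_lum ** (1 / 2.2)

# 3rd-order polynomial OETF: luminance as a function of digital input
coeffs = np.polyfit(digital, target_lum, deg=3)
oetf = np.poly1d(coeffs)

# Predict the relative luminance produced by an arbitrary digital value
lum_mid = oetf(0.5)
```

The fitted `oetf` plays the role of the per-channel transfer function that the record's method would then use to predict CIE tristimulus values.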
 

 
Author Gerard Canal; Sergio Escalera; Cecilio Angulo
  Title A Real-time Human-Robot Interaction system based on gestures for assistive scenarios Type Journal Article
  Year 2016 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 149 Issue Pages 65-77  
  Keywords Gesture recognition; Human Robot Interaction; Dynamic Time Warping; Pointing location estimation  
  Abstract Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture, consisting of pointing at an object, is also recognized. The pointed location is then estimated in order to detect candidate objects the user may refer to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill would allow the robot to pick up an object on behalf of a user who might have difficulty doing so. The overall system, which is composed of NAO and Wifibot robots, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests has been completed, which allows us to assess correct performance in terms of recognition rates, ease of use and response times.  
  Address  
  Corporate Author Thesis  
  Publisher Elsevier B.V. Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA;MILAB; Approved no  
  Call Number Admin @ si @ CEA2016 Serial 2768  
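The dynamic gestures in the record above are matched with Dynamic Time Warping. A textbook DTW distance between two 1-D feature sequences (the paper's gesture-specific depth-map features are not reproduced here):

```python
import numpy as np

def dtw_distance(s, t):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy of a sequence aligns at zero cost, while a
# dissimilar sequence does not
a = [0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0]
d_same = dtw_distance(a, b)
d_diff = dtw_distance(a, [3.0, 3.0, 3.0])
```

In a recognizer of this kind, a gesture is typically accepted when its DTW distance to a reference sequence falls below a learned threshold.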
 

 
Author Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
  Title Multi-face tracking by extended bag-of-tracklets in egocentric photo-streams Type Journal Article
  Year 2016 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 149 Issue Pages 146-156  
  Keywords  
  Abstract Wearable cameras offer a hands-free way to record egocentric images of daily experiences, where social events are of special interest. The first step towards the detection of social events is to track the appearance of the multiple persons involved in them. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric videos acquired through a wearable camera. This kind of photo-stream imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution, abrupt changes in the field of view, in illumination conditions and in the target location are highly frequent. To overcome such difficulties, we propose a multi-face tracking method that generates a set of tracklets by finding correspondences along the whole sequence for each detected face, and takes advantage of the redundancy among tracklets to deal with unreliable ones. Similar tracklets are grouped into a so-called extended bag-of-tracklets (eBoT), each of which is aimed at corresponding to a specific person. Finally, a prototype tracklet is extracted for each eBoT, where occurring occlusions are estimated by relying on a new measure of confidence. We validated our approach over an extensive dataset of egocentric photo-streams and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; Approved no  
  Call Number Admin @ si @ ADR2016b Serial 2742  
 

 
Author Josep M. Gonfaus; Marco Pedersoli; Jordi Gonzalez; Andrea Vedaldi; Xavier Roca
  Title Factorized appearances for object detection Type Journal Article
  Year 2015 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 138 Issue Pages 92–101  
  Keywords Object recognition; Deformable part models; Learning and sharing parts; Discovering discriminative parts  
  Abstract Deformable object models capture variations in an object’s appearance that can be represented as image deformations. Other effects such as out-of-plane rotations, three-dimensional articulations, and self-occlusions are often captured by considering mixture of deformable models, one per object aspect. A more scalable approach is representing instead the variations at the level of the object parts, applying the concept of a mixture locally. Combining a few part variations can in fact cheaply generate a large number of global appearances.

A limited version of this idea was proposed by Yang and Ramanan [1] for human pose detection. In this paper we apply it to the task of generic object category detection and extend it in several ways. First, we propose a model for the relationship between part appearances more general than the tree of Yang and Ramanan [1], which is more suitable for generic categories. Second, we treat part locations, as well as their appearance, as latent variables, so that training does not need part annotations but only the object bounding boxes. Third, we modify the weakly-supervised learning of Felzenszwalb et al. and Girshick et al. [2], [3] to handle a significantly more complex latent structure.
Our model is evaluated on standard object detection benchmarks and is found to improve over existing approaches, yielding state-of-the-art results for several object categories.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.063; 600.078 Approved no  
  Call Number Admin @ si @ GPG2015 Serial 2705  
 

 
Author Jordi Gonzalez; Thomas B. Moeslund; Liang Wang
  Title Semantic Understanding of Human Behaviors in Image Sequences: From video-surveillance to video-hermeneutics Type Journal Article
  Year 2012 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 116 Issue 3 Pages 305–306  
  Keywords  
  Abstract Purpose: Atheromatic plaque progression is affected by, among other phenomena, biomechanical, biochemical, and physiological factors. In this paper, the authors introduce a novel framework able to provide both morphological (vessel radius, plaque thickness, and type) and biomechanical (wall shear stress and Von Mises stress) indices of coronary arteries. Methods: First, the approach reconstructs the three-dimensional morphology of the vessel from intravascular ultrasound (IVUS) and angiographic sequences, requiring minimal user interaction. Then, a computational pipeline allows fluid-dynamic and mechanical indices to be assessed automatically. Ten coronary arteries are analyzed, illustrating the capabilities of the tool and confirming previous technical and clinical observations. Results: The relations between the arterial indices obtained by IVUS measurement and simulations have been quantitatively analyzed along the whole surface of the artery, extending the analysis of the coronary arteries shown in previous state-of-the-art studies. Additionally, for the first time in the literature, the framework allows the computation of the membrane stresses using a simplified mechanical model of the arterial wall. Conclusions: Circumferentially (within a given frame), statistical analysis shows an inverse relation between wall shear stress and plaque thickness. At the global level (comparing a frame within the entire vessel), it is observed that heavy plaque accumulations are in general calcified and are located in areas of the vessel having high wall shear stress. Finally, in their experiments the inverse proportionality between fluid and structural stresses is observed.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1077-3142 ISBN Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ GMW2012 Serial 2005  
 

 
Author Bhaskar Chakraborty; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez
  Title Selective Spatio-Temporal Interest Points Type Journal Article
  Year 2012 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 116 Issue 3 Pages 396-410  
  Keywords  
  Abstract Recent progress in the field of human action recognition points towards the use of Spatio-Temporal Interest Points (STIPs) for local descriptor-based recognition strategies. In this paper, we present a novel approach for robust and selective STIP detection, by applying surround suppression combined with local and temporal constraints. This new method is significantly different from existing STIP detection techniques and improves performance by detecting more repeatable, stable and distinctive STIPs for human actors, while suppressing unwanted background STIPs. For action representation we use a bag-of-video-words (BoV) model of local N-jet features to build a vocabulary of visual words. To this end, we introduce a novel vocabulary building strategy by combining spatial pyramid and vocabulary compression techniques, resulting in improved performance and efficiency. Action-class-specific Support Vector Machine (SVM) classifiers are trained for the categorization of human actions. A comprehensive set of experiments on popular benchmark datasets (KTH and Weizmann), more challenging datasets of complex scenes with background clutter and camera motion (CVC and CMU), movie and YouTube video clips (Hollywood 2 and YouTube), and complex scenes with multiple actors (MSR I and Multi-KTH) validates our approach and shows state-of-the-art performance. Due to the unavailability of ground-truth action annotation data for the Multi-KTH dataset, we introduce an actor-specific spatio-temporal clustering of STIPs to address the problem of automatic action annotation of multiple simultaneous actors. Additionally, we perform cross-data action recognition by training on source datasets (KTH and Weizmann) and testing on completely different and more challenging target datasets (CVC, CMU, MSR I and Multi-KTH). This documents the robustness of our proposed approach in realistic scenarios, using separate training and test datasets, which in general has been a shortcoming in the performance evaluation of human action recognition techniques.  
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1077-3142 ISBN Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ CHM2012 Serial 1806  
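The bag-of-video-words representation used above follows the usual recipe: cluster local descriptors into a vocabulary, then describe a clip as a normalized histogram of nearest words. A generic sketch with random stand-in descriptors (the paper's N-jet features, spatial pyramid and vocabulary compression are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain k-means for building a small visual vocabulary."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bov_histogram(descriptors, centers):
    """Represent a clip as a normalized histogram of visual-word counts."""
    d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

X = rng.normal(size=(200, 5))          # stand-in local STIP descriptors
vocab = kmeans(X, k=8)
h = bov_histogram(rng.normal(size=(30, 5)), vocab)
```

Histograms like `h` would then feed the per-class SVM classifiers mentioned in the abstract.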
 

 
Author Debora Gil; Petia Radeva
  Title Extending anisotropic operators to recover smooth shapes Type Journal Article
  Year 2005 Publication Computer Vision and Image Understanding Abbreviated Journal  
  Volume 99 Issue 1 Pages 110-125  
  Keywords Contour completion; Functional extension; Differential operators; Riemannian manifolds; Snake segmentation  
  Abstract Anisotropic differential operators are widely used in image enhancement processes. Recently, their property of smoothly extending functions to the whole image domain has begun to be exploited. Strong ellipticity of differential operators is a requirement that ensures the existence of a unique solution. This condition is too restrictive for operators designed to extend image level sets: their own functionality implies that they should restrict to some vector field. The diffusion tensor that defines the diffusion operator links anisotropic processes with Riemannian manifolds. In this context, degeneracy implies restricting diffusion to the varieties generated by the vector fields of positive eigenvalues, provided that an integrability condition is satisfied. We use the fact that any smooth vector field fulfills this integrability requirement to design line connection algorithms for contour completion. As an application, we present a segmentation strategy that assures convergent snakes whatever the geometry of the object to be modelled.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1077-3142 ISBN Medium  
  Area Expedition Conference  
  Notes IAM;MILAB Approved no  
  Call Number IAM @ iam @ GIR2005 Serial 1530  
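For reference, degenerate anisotropic diffusion of the kind discussed above is commonly written as a heat equation with a rank-deficient diffusion tensor; this is a standard textbook formulation and may differ in detail from the paper's exact operator:

```latex
\frac{\partial u}{\partial t} = \operatorname{div}\!\left(J\,\nabla u\right),
\qquad J = \xi\,\xi^{\top},
```

where $\xi$ is the unit vector field along which diffusion is allowed. Since $J$ has a zero eigenvalue across the field, level sets are extended only along $\xi$; this loss of strong ellipticity is the degeneracy the abstract refers to.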
 

 
Author Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Debora Gil; Cristina Rodriguez de Miguel; Fernando Vilariño
  Title WM-DOVA Maps for Accurate Polyp Highlighting in Colonoscopy: Validation vs. Saliency Maps from Physicians Type Journal Article
  Year 2015 Publication Computerized Medical Imaging and Graphics Abbreviated Journal CMIG  
  Volume 43 Issue Pages 99-111  
  Keywords Polyp localization; Energy Maps; Colonoscopy; Saliency; Valley detection  
  Abstract We introduce in this paper a novel polyp localization method for colonoscopy videos. Our method is based on a model of appearance for polyps which defines polyp boundaries in terms of valley information. We propose the integration of valley information in a robust way, fostering the complete, concave and continuous boundaries typically associated with polyps. This integration is done by using a window of radial sectors which accumulate valley information to create WM-DOVA energy maps related to the likelihood of polyp presence. We perform a double validation of our maps, which includes the introduction of two new databases, among them, to our knowledge, the first fully annotated database with associated clinical metadata. First, we assess that the highest value corresponds to the location of the polyp in the image. Second, we show that WM-DOVA energy maps are comparable with saliency maps obtained from physicians' fixations recorded via an eye-tracker. Finally, we prove that our method outperforms state-of-the-art computational saliency results. Our method shows good performance, particularly for small polyps, which are reported to be the main source of the polyp miss-rate, indicating the potential applicability of our method in clinical practice.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0895-6111 ISBN Medium  
  Area Expedition Conference  
  Notes MV; IAM; 600.047; 600.060; 600.075;SIAI Approved no  
  Call Number Admin @ si @ BSF2015 Serial 2609  
 

 
Author Simeon Petkov; Xavier Carrillo; Petia Radeva; Carlo Gatta
  Title Diaphragm border detection in coronary X-ray angiographies: New method and applications Type Journal Article
  Year 2014 Publication Computerized Medical Imaging and Graphics Abbreviated Journal CMIG  
  Volume 38 Issue 4 Pages 296-305  
  Keywords  
  Abstract X-ray angiography is widely used in cardiac disease diagnosis during or prior to intravascular interventions. The diaphragm motion and the heart beating induce gray-level changes, which are one of the main obstacles in quantitative analysis of myocardial perfusion. In this paper we focus on detecting the diaphragm border in both single images and whole X-ray angiography sequences. We show that the proposed method outperforms state-of-the-art approaches. We extend a previous, publicly available data set, adding new ground truth data. We also compose another set of more challenging images, thus obtaining two separate data sets of increasing difficulty. Finally, we show three applications of our method: (1) a strategy to reduce false positives in vessel-enhanced images; (2) a digital diaphragm removal algorithm; (3) an improvement in semi-automatic estimation of the Myocardial Blush Grade.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; LAMP; 600.079 Approved no  
  Call Number Admin @ si @ PCR2014 Serial 2468  
 

 
Author Simone Balocco; Carlo Gatta; Francesco Ciompi; A. Wahle; Petia Radeva; S. Carlier; G. Unal; E. Sanidas; J. Mauri; X. Carillo; T. Kovarnik; C. Wang; H. Chen; T. P. Exarchos; D. I. Fotiadis; F. Destrempes; G. Cloutier; Oriol Pujol; Marina Alberti; E. G. Mendizabal-Ruiz; M. Rivera; T. Aksoy; R. W. Downe; I. A. Kakadiaris
  Title Standardized evaluation methodology and reference database for evaluating IVUS image segmentation Type Journal Article
  Year 2014 Publication Computerized Medical Imaging and Graphics Abbreviated Journal CMIG  
  Volume 38 Issue 2 Pages 70-90  
  Keywords IVUS (intravascular ultrasound); Evaluation framework; Algorithm comparison; Image segmentation  
  Abstract This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. This framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated.
We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; LAMP; HuPBA; 600.046; 600.063; 600.079 Approved no  
  Call Number Admin @ si @ BGC2013 Serial 2314  
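Evaluation frameworks like the one above score automatic segmentations against a reference standard with quantitative overlap measures. As an illustration only (the paper defines its own three measures, which are not reproduced here), a common choice is the Jaccard index between binary masks:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index (intersection over union) between two boolean masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(a, b).sum() / union

# Toy example: two overlapping square "lumen" masks on a 10x10 grid
auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True   # 36 px
ref  = np.zeros((10, 10), dtype=bool); ref[3:9, 3:9] = True    # 36 px
score = jaccard(auto, ref)
```

A score of 1.0 means the automatic and reference regions coincide exactly; values near manual inter-observer agreement are typically taken as success.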
 

 
Author Michal Drozdzal; Santiago Segui; Carolina Malagelada; Fernando Azpiroz; Petia Radeva
  Title Adaptable image cuts for motility inspection using WCE Type Journal Article
  Year 2013 Publication Computerized Medical Imaging and Graphics Abbreviated Journal CMIG  
  Volume 37 Issue 1 Pages 72-80  
  Keywords  
  Abstract The Wireless Capsule Endoscopy (WCE) technology allows the visualization of the whole small intestine tract. Since the capsule is freely moving, mainly by means of peristalsis, the data acquired during the study give a lot of information about intestinal motility. However, due to: (1) the huge number of frames, (2) the complex appearance of the intestinal scene and (3) intestinal dynamics that hinder the visualization of the physiological phenomena of the small intestine, the analysis of WCE data requires computer-aided systems to speed up the analysis. In this paper, we propose an efficient algorithm for building a novel representation of the WCE video data, optimal for motility analysis and inspection. The algorithm transforms the 3D video data into a 2D longitudinal view by choosing the part of each frame that is most informative from the intestinal motility point of view. This step maximizes the lumen visibility in its longitudinal extension. The task of finding “the best longitudinal view” has been defined as a cost-function optimization problem whose global minimum is obtained by using Dynamic Programming. Validation on both synthetic data and WCE data shows that the adaptive longitudinal view is a good alternative to the traditional motility analysis done by video analysis. The proposed novel data representation offers a new, holistic insight into small intestine motility, allowing one to easily define and analyze motility events that are difficult to spot by analyzing the WCE video. Moreover, the visual inspection of small intestine motility is 4 times faster than by means of video skimming of the WCE.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; OR; 600.046; 605.203 Approved no  
  Call Number Admin @ si @ DSM2012 Serial 2151  
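The "best longitudinal view" above is found by cost-function optimization via Dynamic Programming: one cut is chosen per frame so that the summed per-frame cost plus a smoothness penalty between consecutive frames is minimal. A toy sketch (the costs below are arbitrary; the paper's lumen-visibility cost is not reproduced):

```python
import numpy as np

def best_path(cost, smooth=1.0):
    """Dynamic programming: choose one index per frame minimizing the sum of
    cost[t, k] plus smooth * |k_t - k_{t-1}| between consecutive frames."""
    T, K = cost.shape
    D = np.zeros((T, K))          # D[t, k]: best total cost ending at (t, k)
    back = np.zeros((T, K), dtype=int)
    D[0] = cost[0]
    ks = np.arange(K)
    for t in range(1, T):
        # trans[k_new, k_old]: cost of arriving at k_new from k_old
        trans = D[t - 1][None, :] + smooth * np.abs(ks[:, None] - ks[None, :])
        back[t] = np.argmin(trans, axis=1)
        D[t] = cost[t] + np.min(trans, axis=1)
    # backtrack the globally optimal path
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmin(D[-1]))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# Toy cost: the per-frame minimum drifts from index 0 to index 2
cost = np.array([[0., 5., 5.],
                 [5., 0., 5.],
                 [5., 5., 0.]])
path = best_path(cost, smooth=1.0)
```

Because the smoothness term is small relative to the per-frame costs here, the optimal path follows the drifting minimum, yielding a view that changes gradually from frame to frame.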
 

 
Author Joan Serrat; Felipe Lumbreras; Antonio Lopez
  Title Cost estimation of custom hoses from STL files and CAD drawings Type Journal Article
  Year 2013 Publication Computers in Industry Abbreviated Journal COMPUTIND  
  Volume 64 Issue 3 Pages 299-309  
  Keywords On-line quotation; STL format; Regression; Gaussian process  
  Abstract We present a method for the cost estimation of custom hoses from CAD models. These can come in two formats, which are easy to generate: an STL file or the image of a CAD drawing showing several orthogonal projections. The challenges in either case are, first, to obtain from them a high-level 3D description of the shape, and second, to learn a regression function for the prediction of the manufacturing time, based on geometric features of the reconstructed shape. The chosen description is the 3D line along the medial axis of the tube and the diameter of the circular sections along it. In order to extract it from STL files, we have adapted RANSAC, a robust parametric fitting algorithm. As for CAD drawing images, we propose a new technique for 3D reconstruction from data entered on any number of orthogonal projections. The regression function is a Gaussian process, which does not constrain the function to adopt any specific form and is governed by just two parameters. We assess the accuracy of the manufacturing time estimation by k-fold cross-validation on 171 STL file models for which the time is provided by an expert. The results show the feasibility of the method, whereby the relative error for 80% of the testing samples is below 15%.  
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.057; 600.054; 605.203 Approved no  
  Call Number Admin @ si @ SLL2013; ADAS @ adas @ Serial 2161  
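The regression step above is a Gaussian process over geometric features. A minimal GP posterior-mean sketch with an RBF kernel in plain NumPy; the single feature (a tube length) and the training values are invented for illustration, not the paper's features or hyperparameters:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-6):
    """GP regression posterior mean with an RBF kernel and zero prior mean."""
    def rbf(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length**2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)   # K^{-1} y
    return K_star @ alpha

# Toy training data: manufacturing time grows with total tube length
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([10.0, 19.0, 31.0, 42.0])
pred = gp_predict(X, y, np.array([2.5]))
```

In practice the kernel hyperparameters (here the length scale and the noise level) would be fitted to the expert-quoted training set, e.g. by maximizing the marginal likelihood.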