Author: David Dueñas; Mostafa Kamal; Petia Radeva
Title: Efficient Deep Learning Ensemble for Skin Lesion Classification
Type: Conference Article
Year: 2023
Publication: Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Pages: 303-314
Abstract: Vision Transformers (ViTs) are deep learning architectures that have been gaining popularity in recent years. In this work, we study the performance of ViTs and Convolutional Neural Networks (CNNs) on skin lesion classification tasks, specifically melanoma diagnosis. We show that, regardless of the individual performance of the two architectures, an ensemble of them can improve their generalization. We also present an adaptation of the Gram-OOD* method (detection of out-of-distribution (OOD) samples using Gram matrices) to skin lesion images. Moreover, the integration of super-convergence was critical to building models under strict computing and training-time constraints. We evaluated our ensemble of ViTs and CNNs, demonstrating the enhanced generalization by placing first in the 2019 and third in the 2020 ISIC Challenge Live Leaderboards (available at https://challenge.isic-archive.com/leaderboards/live/).
Address: Lisboa, Portugal; February 2023
Conference: VISIGRAPP
Notes: MILAB
Approved: no
Call Number: Admin @ si @ DKR2023
Serial: 3928
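The ensemble described in this abstract combines predictions from heterogeneous models. A minimal sketch of that idea, assuming two pretrained classifiers whose outputs are averaged at the probability level (the shapes and random logits below are illustrative stand-ins, not the authors' exact pipeline):

    import numpy as np

    def softmax(logits: np.ndarray) -> np.ndarray:
        """Row-wise softmax over class logits."""
        z = logits - logits.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Hypothetical per-model logits for a batch of 4 images and 8 lesion classes.
    vit_logits = np.random.randn(4, 8)   # stand-in for a ViT's outputs
    cnn_logits = np.random.randn(4, 8)   # stand-in for a CNN's outputs

    # Probability-level ensemble: average the softmax outputs, then argmax.
    ensemble_probs = (softmax(vit_logits) + softmax(cnn_logits)) / 2.0
    predictions = ensemble_probs.argmax(axis=1)

Averaging probabilities rather than logits is a common choice when the member models are calibrated differently, which is typical when mixing ViTs and CNNs.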
 

 
Author: Reuben Dorent; Aaron Kujawa; Marina Ivory; Spyridon Bakas; Nikola Rieke; Samuel Joutard; Ben Glocker; Jorge Cardoso; Marc Modat; Kayhan Batmanghelich; Arseniy Belkov; Maria Baldeon Calisto; Jae Won Choi; Benoit M. Dawant; Hexin Dong; Sergio Escalera; Yubo Fan; Lasse Hansen; Mattias P. Heinrich; Smriti Joshi; Victoriya Kashtanova; Hyeon Gyu Kim; Satoshi Kondo; Christian N. Kruse; Susana K. Lai-Yuen; Hao Li; Han Liu; Buntheng Ly; Ipek Oguz; Hyungseob Shin; Boris Shirokikh; Zixian Su; Guotai Wang; Jianghao Wu; Yanwu Xu; Kai Yao; Li Zhang; Sebastien Ourselin
Title: CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation
Type: Journal Article
Year: 2023
Publication: Medical Image Analysis
Abbreviated Journal: MIA
Volume: 83
Pages: 102628
Keywords: Domain Adaptation; Segmentation; Vestibular Schwannoma
Abstract: Domain Adaptation (DA) has recently attracted strong interest in the medical imaging community. While a large variety of DA techniques has been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA. The challenge's goal is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, diagnosis and surveillance in patients with VS are performed using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore, we created an unsupervised cross-modality segmentation benchmark. The training set provides annotated ceT1 (N=105) and unpaired non-annotated hrT2 (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 as provided in the testing set (N=137). A total of 16 teams submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice – VS: 88.4%; cochleas: 85.7%) and close to full supervision (median Dice – VS: 92.5%; cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
Notes: HUPBA
Approved: no
Call Number: Admin @ si @ DKI2023
Serial: 3706
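The Dice scores quoted in this abstract measure volumetric overlap between a predicted and a reference segmentation. A minimal sketch of the metric on binary masks (illustrative only; the challenge's official evaluation additionally handles per-structure scoring and median aggregation across cases):

    import numpy as np

    def dice(pred: np.ndarray, ref: np.ndarray) -> float:
        """Dice overlap between two boolean masks: 2|A∩B| / (|A| + |B|)."""
        pred = pred.astype(bool)
        ref = ref.astype(bool)
        denom = pred.sum() + ref.sum()
        if denom == 0:
            return 1.0  # both masks empty: define as perfect agreement
        return 2.0 * np.logical_and(pred, ref).sum() / denom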
 

 
Author: M. Danelljan; Fahad Shahbaz Khan; Michael Felsberg; Joost Van de Weijer
Title: Adaptive color attributes for real-time visual tracking
Type: Conference Article
Year: 2014
Publication: 27th IEEE Conference on Computer Vision and Pattern Recognition
Pages: 1090-1097
Abstract: Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features, when combined with luminance, have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.
Address: Columbus, OH, USA; June 2014
Conference: CVPR
Notes: CIC; LAMP; 600.074; 600.079
Approved: no
Call Number: Admin @ si @ DKF2014
Serial: 2509
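The "distance precision" reported above is a standard tracking metric: the fraction of frames whose predicted target center lies within a pixel threshold of the ground-truth center. A minimal sketch (the 20-pixel threshold is the common convention in this literature; the paper's exact protocol is an assumption here):

    import numpy as np

    def distance_precision(pred_centers, gt_centers, threshold=20.0):
        """Fraction of frames with center localization error <= threshold pixels."""
        pred = np.asarray(pred_centers, dtype=float)   # shape (n_frames, 2)
        gt = np.asarray(gt_centers, dtype=float)
        errors = np.linalg.norm(pred - gt, axis=1)     # per-frame center error
        return float((errors <= threshold).mean())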
 

 
Author: Mariella Dimiccoli; Jean-Pascal Jacob; Lionel Moisan
Title: Particle detection and tracking in fluorescence time-lapse imaging: a contrario approach
Type: Journal Article
Year: 2016
Publication: Journal of Machine Vision and Applications
Abbreviated Journal: MVAP
Volume: 27
Pages: 511-527
Keywords: particle detection; particle tracking; a-contrario approach; time-lapse fluorescence imaging
Abstract: In this work, we propose a probabilistic approach for the detection and tracking of particles in biological images. In the presence of very noisy and poor-quality data, particles and trajectories can be characterized by an a-contrario model, which estimates the probability of observing the structures of interest in random data. This approach, first introduced in the modeling of human visual perception and then successfully applied to many image processing tasks, leads to algorithms that require neither a prior learning stage nor tedious parameter tuning, and that are very robust to noise. Comparative evaluations against a well-established baseline show that the proposed approach outperforms the state of the art.
Notes: MILAB
Approved: no
Call Number: Admin @ si @ DJM2016
Serial: 2735
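The a-contrario principle described in this abstract declares a structure meaningful when its expected number of occurrences in pure noise, the Number of False Alarms (NFA), falls below a threshold (usually 1). A toy illustration for spot detection under a binomial background model (the paper's actual model for fluorescence data is more elaborate; the numbers below are hypothetical):

    from math import comb

    def binomial_tail(k: int, n: int, p: float) -> float:
        """P[X >= k] for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def nfa(k: int, n: int, p: float, n_tests: int) -> float:
        """Expected number of noise-only windows at least this bright."""
        return n_tests * binomial_tail(k, n, p)

    # A candidate window with 17 of 25 pixels above threshold, background rate 0.2,
    # tested over 10^4 windows: declared a detection if NFA < 1.
    print(nfa(17, 25, 0.2, 10_000) < 1.0)

Because the NFA already accounts for the number of tests, a single threshold (NFA < 1) replaces the per-image parameter tuning that the abstract says the method avoids.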
 

 
Author: Ferran Diego
Title: Alignment of Videos Recorded from Moving Vehicles
Type: Report
Year: 2007
Publication: CVC Technical Report #111
Address: CVC (UAB)
Notes: ADAS
Approved: no
Call Number: Admin @ si @ DiS2007
Serial: 825
 

 
Author: Mariella Dimiccoli
Title: Figure-ground segregation: A fully nonlocal approach
Type: Journal Article
Year: 2016
Publication: Vision Research
Abbreviated Journal: VR
Volume: 126
Pages: 308-317
Keywords: Figure-ground segregation; Nonlocal approach; Directional linear voting; Nonlinear diffusion
Abstract: We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas.
Notes: MILAB
Approved: no
Call Number: Admin @ si @ Dim2016b
Serial: 2623
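The nonlinear diffusion step described in this abstract smooths the figural-status map while preserving discontinuities between regions. A generic Perona-Malik-style explicit iteration conveys the idea (this is a textbook scheme offered as an illustration, not the paper's exact formulation):

    import numpy as np

    def nonlinear_diffusion_step(u: np.ndarray, kappa: float = 0.1, dt: float = 0.2) -> np.ndarray:
        """One explicit nonlinear-diffusion update on a 2D map u (periodic borders for brevity)."""
        dn = np.roll(u, -1, axis=0) - u   # finite differences toward the
        ds = np.roll(u, 1, axis=0) - u    # four nearest neighbours
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping conductance
        return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

The conductance g shrinks toward zero across large jumps in figural status, so estimates homogenize within regions while boundaries between figure and ground stay sharp, which matches the coherence-enforcing role the abstract assigns to this stage.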
 

 
Author: Katerine Diaz; Francesc J. Ferri
Title: Extensiones del método de vectores comunes discriminantes aplicadas a la clasificación de imágenes
Type: Book Whole
Year: 2013
Publication: Extensiones del método de vectores comunes discriminantes aplicadas a la clasificación de imágenes
ISBN: 978-3-639-55339-0
Abstract (translated from Spanish): Subspace-based methods are a widely used tool in computer vision applications. Here we present and validate several algorithms that we have proposed in this field of research. The first algorithm is an extension of the kernel discriminative common vectors method, which reinterprets the null space of the within-class scatter matrix of the training set to obtain the discriminative features. Among subspace-based methods there are different types of training. One of the most popular, though not one of the most efficient, is batch learning, in which all training samples must be available from the start; whenever new samples become available, the system has to be retrained from scratch. An alternative to this kind of training is incremental learning. Here we propose several incremental algorithms for the discriminative common vectors method.
Notes: ADAS
Approved: no
Call Number: Admin @ si @ DiF2013
Serial: 2440
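The discriminative common vectors (DCV) family discussed in this abstract works in the null space of the within-class scatter matrix, where all training samples of a class collapse onto a single "common vector". A compact numpy sketch of that projection, in the batch setting only (the incremental variants are the book's contribution and are not reproduced here; this is the standard construction, assuming more feature dimensions than samples):

    import numpy as np

    def common_vectors(X: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Project each sample onto the null space of the within-class scatter."""
        Xc = X.astype(float).copy()
        for c in np.unique(y):
            Xc[y == c] -= Xc[y == c].mean(axis=0)      # center each class
        # Orthonormal basis of the range of the within-class scatter, via SVD.
        U, s, _ = np.linalg.svd(Xc.T, full_matrices=False)
        B = U[:, s > 1e-10]
        # Remove each sample's range component; the remainder is its common vector.
        return X - (X @ B) @ B.T

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 50))            # 6 samples, 50-dim features (d > n)
    y = np.array([0, 0, 0, 1, 1, 1])
    cv = common_vectors(X, y)
    # Within each class, all rows of cv coincide up to numerical precision.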
 

 
Author: Ferran Diego
Title: Probabilistic Alignment of Video Sequences Recorded by Moving Cameras
Type: Book Whole
Year: 2011
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Joan Serrat
Abstract: Video alignment consists of integrating multiple video sequences recorded independently into a single video sequence. This means registering them both in time (synchronizing frames) and in space (image registration) so that the two video sequences can be fused or compared pixel-wise. In spite of being relatively unknown, many applications today may benefit from the availability of robust and efficient video alignment methods. For instance, video surveillance requires integrating video sequences recorded of the same scene at different times in order to detect changes. The problem of aligning videos has been addressed before, but in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, most works rely on restrictive assumptions that reduce the difficulty of the problem, such as a linear time correspondence or knowledge of the complete trajectories of corresponding scene points in the images; to some extent, these assumptions limit the practical applicability of the solutions developed until now. In this thesis, we focus on the challenging problem of aligning sequences recorded at different times from independent moving cameras following similar but not coincident trajectories. More precisely, this thesis covers four studies that advance the state of the art in video alignment. First, we analyze and develop a probabilistic framework for video alignment, that is, a principled way to integrate multiple observations and prior information. Two different approaches are presented to exploit the combination of several purely visual features (image intensities, visual words and a dense motion field descriptor) with global positioning system (GPS) information. Second, we reformulate the problem into a single alignment framework, since previous works on video alignment adopt a divide-and-conquer strategy: first solve the synchronization, and then register corresponding frames. This also generalizes the 'classic' case of a fixed geometric transform and linear time mapping. Third, we exploit the time domain of the video sequences directly in order to avoid an exhaustive cross-frame search; this provides relevant information for learning the temporal mapping between pairs of video sequences. Finally, we adapt these methods to the online setting for road detection and vehicle geolocation. The qualitative and quantitative results presented in this thesis on a variety of real-world pairs of video sequences show that the proposed method is robust to varying imaging conditions, different image content (e.g., incoming and outgoing vehicles), variations in camera velocity, and different scenarios (indoor and outdoor), going beyond the state of the art. Moreover, the online video alignment has been successfully applied to road detection and vehicle geolocation, achieving promising results.
Notes: ADAS
Approved: no
Call Number: Admin @ si @ Die2011
Serial: 1787
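Synchronization here means recovering a generally nonlinear temporal mapping between two sequences. The thesis proposes a probabilistic framework for this; as a simpler, well-known stand-in, classic dynamic time warping illustrates what such a mapping looks like when computed from a per-frame feature distance matrix:

    import numpy as np

    def dtw(cost: np.ndarray):
        """Cheapest monotonic path through a frame-to-frame cost matrix."""
        n, m = cost.shape
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = cost[i - 1, j - 1] + min(
                    acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
        # Backtrack to recover the frame-to-frame correspondence.
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
            i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
        return acc[n, m], path[::-1]

Unlike this exhaustive O(nm) search, the thesis argues for exploiting the time domain directly to avoid scoring every cross-frame pair.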
 

 
Author: Katerine Diaz; Aura Hernandez-Sabate; Antonio Lopez
Title: A reduced feature set for driver head pose estimation
Type: Journal Article
Year: 2016
Publication: Applied Soft Computing
Abbreviated Journal: ASOC
Volume: 45
Pages: 98-107
Keywords: Head pose estimation; driving performance evaluation; subspace-based methods; linear regression
Abstract: Evaluation of driving performance is of utmost importance in order to reduce the road accident rate. Since driving ability includes visual-spatial and operational attention, among others, head pose estimation of the driver is a crucial indicator of driving performance. This paper proposes a new automatic method for coarse and fine estimation of the driver's head yaw angle. We rely on a set of geometric features computed from just three representative facial keypoints, namely the centers of the eyes and the tip of the nose. With these geometric features, our method combines two manifold-embedding methods and a linear regression one. In addition, the method includes a confidence mechanism to detect samples whose classification is not reliable. The approach has been tested using the CMU-PIE dataset and our own driver dataset. Despite the very few facial keypoints required, the results are comparable to state-of-the-art techniques. The low computational cost of the method and its robustness make it feasible to integrate into mass consumer devices as a real-time application.
Notes: ADAS; 600.085; 600.076
Approved: no
Call Number: Admin @ si @ DHL2016
Serial: 2760
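This abstract's pipeline derives geometric features from only three facial keypoints and feeds them to a regressor. A hedged sketch of that idea (the specific features below are plausible stand-ins, and the paper's manifold-embedding stages are not reproduced; the synthetic training data is purely for illustration):

    import numpy as np

    def geometric_features(left_eye, right_eye, nose_tip):
        """Yaw-sensitive ratios from three 2D facial keypoints."""
        le, re, nt = (np.asarray(p, dtype=float) for p in (left_eye, right_eye, nose_tip))
        inter_ocular = np.linalg.norm(re - le)
        mid = (le + re) / 2.0
        # Normalized horizontal and total offset of the nose tip from the eye midpoint.
        return np.array([(nt[0] - mid[0]) / inter_ocular,
                         np.linalg.norm(nt - mid) / inter_ocular])

    # Least-squares fit of yaw (degrees) on synthetic feature/angle pairs.
    rng = np.random.default_rng(0)
    yaw = rng.uniform(-60.0, 60.0, size=100)
    feats = np.stack([np.sin(np.deg2rad(yaw)) + rng.normal(0, 0.05, 100),
                      np.abs(np.sin(np.deg2rad(yaw))) + 0.6], axis=1)
    w, *_ = np.linalg.lstsq(np.c_[feats, np.ones(100)], yaw, rcond=None)

Normalizing by the inter-ocular distance makes the features invariant to face scale, which is one reason so few keypoints can suffice.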
 

 
Author: Hamdi Dibeklioglu; M.O. Hortas; I. Kosunen; P. Zuzánek; Albert Ali Salah; Theo Gevers
Title: Design and implementation of an affect-responsive interactive photo frame
Type: Journal Article
Year: 2011
Publication: Journal on Multimodal User Interfaces
Abbreviated Journal: JMUI
Volume: 4
Issue: 2
Pages: 81-95
Abstract: This paper describes an affect-responsive interactive photo-frame application that offers its user a different experience with every use. It relies on visual analysis of activity levels and facial expressions of its users to select responses from a database of short video segments. This ever-growing database is automatically prepared by an offline analysis of user-uploaded videos. The resulting system matches its user's affect along dimensions of valence and arousal, and gradually adapts its response to each specific user. In an extended mode, two such systems are coupled and feed each other with visual content. The strengths and weaknesses of the system are assessed through a usability study, where a Wizard-of-Oz response logic is contrasted with the fully automatic system that uses affective and activity-based features, either alone or in tandem.
Publisher: Springer-Verlag
ISSN: 1783-7677
Notes: ALTRES; ISE
Approved: no
Call Number: Admin @ si @ DHK2011
Serial: 1842
 

 
Author: Arka Ujjal Dey; Suman Ghosh; Ernest Valveny; Gaurav Harit
Title: Beyond Visual Semantics: Exploring the Role of Scene Text in Image Understanding
Type: Journal Article
Year: 2021
Publication: Pattern Recognition Letters
Abbreviated Journal: PRL
Volume: 149
Pages: 164-171
Abstract: Images with visual and scene text content are ubiquitous in everyday life. However, current image interpretation systems are mostly limited to using only the visual features, neglecting to leverage the scene text content. In this paper, we propose to jointly use scene text and visual channels for robust semantic interpretation of images. We do not only extract and encode visual and scene text cues, but also model their interplay to generate a contextual joint embedding with richer semantics. The contextual embedding thus generated is applied to retrieval and classification tasks on multimedia images with scene text content, to demonstrate its effectiveness. In the retrieval framework, we augment our learned text-visual semantic representation with scene text cues, to mitigate vocabulary misses that may have occurred during the semantic embedding. To deal with irrelevant or erroneous recognition of scene text, we also apply query-based attention to our text channel. We show how the multi-channel approach, involving visual semantics and scene text, improves upon the state of the art.
Notes: DAG; 600.121
Approved: no
Call Number: Admin @ si @ DGV2021
Serial: 3364
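The paper fuses a visual embedding with embeddings of recognized scene-text tokens, using query-based attention to down-weight irrelevant or misrecognized tokens. A schematic numpy version of such a fusion (the dimensions, pooling scheme, and attention form are assumptions for illustration, not the authors' exact model):

    import numpy as np

    def fuse(visual: np.ndarray, tokens: np.ndarray, query: np.ndarray) -> np.ndarray:
        """Attention-pool OCR token embeddings, then concatenate with the image embedding."""
        scores = tokens @ query / np.sqrt(query.size)   # dot-product relevance to the query
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                        # softmax over tokens
        text = weights @ tokens                         # weighted token pooling
        return np.concatenate([visual, text])

    d = 64
    rng = np.random.default_rng(0)
    joint = fuse(rng.normal(size=d), rng.normal(size=(5, d)), rng.normal(size=d))

The softmax weighting is what lets a noisy OCR token contribute little to the joint embedding when it is unrelated to the query, matching the robustness the abstract claims.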
 

 
Author: Arka Ujjal Dey; Suman Ghosh; Ernest Valveny
Title: Don't only Feel Read: Using Scene text to understand advertisements
Type: Conference Article
Year: 2018
Publication: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Abstract: We propose a framework for the automated classification of advertisement images, using not just visual features but also textual cues extracted from embedded text. Our approach takes inspiration from the assumption that ad images contain meaningful textual content that can provide a discriminative semantic interpretation, and can thus aid in classification tasks. To this end, we develop a framework using off-the-shelf components, and demonstrate the effectiveness of textual cues in semantic classification tasks.
Address: Salt Lake City, Utah, USA; June 2018
Conference: CVPRW
Notes: DAG; 600.121; 600.129
Approved: no
Call Number: Admin @ si @ DGV2018
Serial: 3551
 

 
Author: Marta Diez-Ferrer; Debora Gil; Cristian Tebe; Carles Sanchez
Title: Positive Airway Pressure to Enhance Computed Tomography Imaging for Airway Segmentation for Virtual Bronchoscopic Navigation
Type: Journal Article
Year: 2018
Publication: Respiration
Abbreviated Journal: RES
Volume: 96
Issue: 6
Pages: 525-534
Keywords: Multidetector computed tomography; Bronchoscopy; Continuous positive airway pressure; Image enhancement; Virtual bronchoscopic navigation
Abstract:
Rationale: Virtual bronchoscopic navigation (VBN) guidance to peripheral pulmonary lesions is often limited by insufficient segmentation of the peripheral airways.
Objectives: To test the effect of applying positive airway pressure (PAP) during CT acquisition to improve segmentation, particularly at end-expiration.
Methods: CT acquisitions in inspiration and expiration with 4 PAP protocols were recorded prospectively and compared to baseline inspiratory acquisitions in 20 patients. The 4 protocols explored differences between devices (flow vs. turbine), exposures (within seconds vs. 15 min) and pressure levels (10 vs. 14 cmH2O). Segmentation quality was evaluated with the number of airways and the number of endpoints reached. A generalized mixed-effects model explored the estimated effect of each protocol.
Measurements and Main Results: Patient characteristics and lung function did not significantly differ between protocols. Compared to baseline inspiratory acquisitions, expiratory acquisitions after 15 min of 14 cmH2O PAP segmented 1.63-fold more airways (95% CI 1.07-2.48; p = 0.018) and reached 1.34-fold more endpoints (95% CI 1.08-1.66; p = 0.004). Inspiratory acquisitions performed immediately under 10 cmH2O PAP reached 1.20-fold (95% CI 1.09-1.33; p < 0.001) more endpoints; after 15 min the increase was 1.14-fold (95% CI 1.05-1.24; p < 0.001).
Conclusions: CT acquisitions with PAP segment more airways and reach more endpoints than baseline inspiratory acquisitions. The improvement is particularly evident at end-expiration after 15 min of 14 cmH2O PAP. Further studies must confirm that the improvement increases diagnostic yield when using VBN to evaluate peripheral pulmonary lesions.
Notes: IAM; 600.145
Approved: no
Call Number: Admin @ si @ DGT2018
Serial: 3135
 

 
Author: Hamdi Dibeklioglu; Theo Gevers; Albert Ali Salah
Title: Are You Really Smiling at Me? Spontaneous versus Posed Enjoyment Smiles
Type: Conference Article
Year: 2012
Publication: 12th European Conference on Computer Vision
Volume: 7574
Issue: III
Pages: 525-538
Abstract: Smiling is an indispensable element of nonverbal social interaction. Besides, automatic distinction between spontaneous and posed expressions is important for visual analysis of social signals. Therefore, in this paper, we propose a method to distinguish between spontaneous and posed enjoyment smiles by using the dynamics of eyelid, cheek, and lip corner movements. The discriminative power of these movements, and the effect of different fusion levels, are investigated on multiple databases. Our results improve the state of the art. We also introduce the largest spontaneous/posed enjoyment smile database collected to date, and report new empirical and conceptual findings on smile dynamics. The collected database consists of 1240 samples of 400 subjects. Moreover, it has the unique property of having an age range from 8 to 76 years. Large-scale experiments on the new database indicate that eyelid dynamics are highly relevant for smile classification, and that there are age-related differences in smile dynamics.
Address: Florence, Italy
Publisher: Springer Berlin Heidelberg
Abbreviated Series Title: LNCS
ISSN: 0302-9743
ISBN: 978-3-642-33711-6
Conference: ECCV
Notes: ALTRES; ISE
Approved: no
Call Number: Admin @ si @ DGS2012
Serial: 2024
 

 
Author: Manisha Das; Deep Gupta; Petia Radeva; Ashwini M. Bakde
Title: Optimized CT-MR neurological image fusion framework using biologically inspired spiking neural model in hybrid ℓ1 - ℓ0 layer decomposition domain
Type: Journal Article
Year: 2021
Publication: Biomedical Signal Processing and Control
Abbreviated Journal: BSPC
Volume: 68
Pages: 102535
Abstract: Medical image fusion plays an important role in the clinical diagnosis of several critical neurological diseases by merging complementary information available in multimodal images. In this paper, a novel CT-MR neurological image fusion framework is proposed using an optimized biologically inspired feedforward neural model in a two-scale hybrid ℓ1 − ℓ0 decomposition domain, using gray wolf optimization to preserve both the structural and the texture information present in the source CT and MR images. Initially, the source images are subjected to two-scale ℓ1 − ℓ0 decomposition with optimized parameters, giving a scale-1 detail layer, a scale-2 detail layer and a scale-2 base layer. The two detail layers at scales 1 and 2 are fused using an optimized biologically inspired neural model and a weighted-average scheme based on local energy and modified spatial frequency to maximize the preservation of edges and local textures, respectively, while the scale-2 base layer is fused using a choose-max rule to preserve the background information. To optimize the hyper-parameters of the hybrid ℓ1 − ℓ0 decomposition and the biologically inspired neural model, a fitness function is evaluated based on the spatial frequency and edge index of the resultant fused image obtained by adding all the fused components. The fusion performance is analyzed by conducting extensive experiments on different CT-MR neurological images. Experimental results indicate that the proposed method provides better-fused images and outperforms the other state-of-the-art fusion methods in both visual and quantitative assessments.
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ DGR2021b
Serial: 3636
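The fusion pipeline in this abstract splits each source image into detail and base layers, fuses them with different rules, and sums the results. A simplified two-scale version, using Gaussian smoothing as a stand-in decomposition (the paper's hybrid ℓ1 − ℓ0 decomposition and spiking neural model are replaced here by max-absolute and choose-max rules purely for illustration):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fuse_two_scale(ct: np.ndarray, mr: np.ndarray, sigma: float = 2.0) -> np.ndarray:
        """Fuse two registered images: max-abs rule on details, choose-max on the base."""
        base_ct, base_mr = gaussian_filter(ct, sigma), gaussian_filter(mr, sigma)
        det_ct, det_mr = ct - base_ct, mr - base_mr      # detail = image minus base
        detail = np.where(np.abs(det_ct) >= np.abs(det_mr), det_ct, det_mr)
        base = np.maximum(base_ct, base_mr)              # choose-max base rule
        return base + detail

The split lets each rule do what it is good at: the detail rule keeps the strongest edges and textures from either modality, while the base rule preserves the brighter background structure, mirroring the division of labor the abstract describes.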