|
Records |
Links |
|
Author |
Rozenn Dahyot; Fernando Vilariño; Gerard Lacey |
|
|
Title |
Improving the Quality of Color Colonoscopy Videos |
Type |
Journal Article |
|
Year |
2008 |
Publication |
EURASIP Journal on Image and Video Processing |
Abbreviated Journal |
EURASIP JIVP |
|
|
Volume |
139429 |
Issue |
1 |
Pages |
1-9 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
|
|
|
Notes |
MV;SIAI |
Approved |
no |
|
|
Call Number |
fernando @ fernando @ |
Serial |
2422 |
|
Permanent link to this record |
|
|
|
|
Author |
Mirko Arnold; Anarta Ghosh; Stephen Ameling; G Lacey |
|
|
Title |
Automatic segmentation and inpainting of specular highlights for endoscopic imaging |
Type |
Journal Article |
|
Year |
2010 |
Publication |
EURASIP Journal on Image and Video Processing |
Abbreviated Journal |
EURASIP JIVP |
|
|
Volume |
2010 |
Issue |
9 |
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
|
|
|
Notes |
MV |
Approved |
no |
|
|
Call Number |
fernando @ fernando @ |
Serial |
2423 |
|
Permanent link to this record |
|
|
|
|
Author |
Ernest Valveny; Enric Marti |
|
|
Title |
Deformable Template Matching within a Bayesian Framework for Hand-Written Graphic Symbol Recognition |
Type |
Journal Article |
|
Year |
2000 |
Publication |
Graphics Recognition Recent Advances |
Abbreviated Journal |
|
|
|
Volume |
1941 |
Issue |
|
Pages |
193-208 |
|
|
Keywords |
|
|
|
Abstract |
We describe a method for hand-drawn symbol recognition based on deformable template matching, able to handle the uncertainty and imprecision inherent in hand-drawing. Symbols are represented as a set of straight lines and their deformations as geometric transformations of these lines. Matching, however, is done over the original binary image to avoid loss of information during line detection. It is defined as an energy minimization problem, using a Bayesian framework which allows us to combine fidelity to the ideal shape of the symbol with the flexibility to modify the symbol in order to get the best fit to the binary input image. Prior to matching, we find the best global transformation of the symbol to start the recognition process, based on the distance between symbol lines and image lines. We have applied this method to the recognition of dimensions and symbols in architectural floor plans and we show its flexibility in recognizing distorted symbols. |
|
|
Address |
|
|
|
Corporate Author |
Springer Verlag |
Thesis |
|
|
|
Publisher |
Springer Verlag |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG;IAM; |
Approved |
no |
|
|
Call Number |
IAM @ iam @ MVA2000 |
Serial |
1655 |
|
Permanent link to this record |
|
|
|
|
Author |
Patricia Suarez; Dario Carpio; Angel Sappa |
|
|
Title |
Enhancement of guided thermal image super-resolution approaches |
Type |
Journal Article |
|
Year |
2024 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
573 |
Issue |
127197 |
Pages |
1-17 |
|
|
Keywords |
|
|
|
Abstract |
Guided image processing techniques are widely used to extract meaningful information from a guiding image and facilitate the enhancement of the guided one. This paper specifically addresses the challenge of guided thermal image super-resolution, where a low-resolution thermal image is enhanced using a high-resolution visible spectrum image. We propose a new strategy that enhances outcomes from current guided super-resolution methods. This is achieved by transforming the initial guiding data into a representation resembling a thermal-like image, which is more closely in sync with the intended output. Experimental results with upscale factors of 8 and 16 demonstrate the outstanding performance of our approach to guided thermal image super-resolution, obtained by mapping the original guiding information to a thermal-like image representation. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU |
Approved |
no |
|
|
Call Number |
Admin @ si @ SCS2024 |
Serial |
3998 |
|
Permanent link to this record |
|
|
|
|
Author |
Fei Yang; Yaxing Wang; Luis Herranz; Yongmei Cheng; Mikhail Mozerov |
|
|
Title |
A Novel Framework for Image-to-image Translation and Image Compression |
Type |
Journal Article |
|
Year |
2022 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
508 |
Issue |
|
Pages |
58-70 |
|
|
Keywords |
|
|
|
Abstract |
Data-driven paradigms using machine learning are becoming ubiquitous in image processing and communications. In particular, image-to-image (I2I) translation is a generic and widely used approach to image processing problems, such as image synthesis, style transfer, and image restoration. At the same time, neural image compression has emerged as a data-driven alternative to traditional coding approaches in visual communications. In this paper, we study the combination of these two paradigms into a joint I2I compression and translation framework, focusing on multi-domain image synthesis. We first propose distributed I2I translation by integrating quantization and entropy coding into an I2I translation framework (i.e. I2Icodec). In practice, the image compression functionality (i.e. autoencoding) is also desirable, requiring a regular image codec to be deployed alongside I2Icodec. Thus, we further propose a unified framework that allows both translation and autoencoding capabilities in a single codec. Adaptive residual blocks conditioned on the translation/compression mode provide flexible adaptation to the desired functionality. The experiments show promising results in both I2I translation and image compression using a single model. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ YWH2022 |
Serial |
3679 |
|
Permanent link to this record |
|
|
|
|
Author |
David Berga; Xavier Otazu |
|
|
Title |
Modeling Bottom-Up and Top-Down Attention with a Neurodynamic Model of V1 |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
417 |
Issue |
|
Pages |
270-289 |
|
|
Keywords |
|
|
|
Abstract |
Previous studies suggested that lateral interactions of V1 cells are responsible, among other visual effects, for bottom-up visual attention (alternatively named visual salience or saliency). Our objective is to mimic these connections with a neurodynamic network of firing-rate neurons in order to predict visual attention. Early visual subcortical processes (i.e. retinal and thalamic) are functionally simulated. An implementation of the cortical magnification function is included to define the retinotopical projections towards V1, processing neuronal activity for each distinct view during scene observation. Novel computational definitions of top-down inhibition (in terms of inhibition of return, oculomotor and selection mechanisms) are also proposed to predict attention in Free-Viewing and Visual Search tasks. Results show that our model outperforms other biologically inspired models of saliency prediction while predicting visual saccade sequences with the same model. We also show how temporal and spatial characteristics of saccade amplitude and inhibition of return can improve the prediction of saccades, as well as how distinct search strategies (in terms of feature-selective or category-specific inhibition) can predict attention in distinct image contexts. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
NEUROBIT |
Approved |
no |
|
|
Call Number |
Admin @ si @ BeO2020c |
Serial |
3444 |
|
Permanent link to this record |
|
|
|
|
Author |
Carolina Malagelada; Michal Drozdzal; Santiago Segui; Sara Mendez; Jordi Vitria; Petia Radeva; Javier Santos; Anna Accarino; Juan R. Malagelada; Fernando Azpiroz |
|
|
Title |
Classification of functional bowel disorders by objective physiological criteria based on endoluminal image analysis |
Type |
Journal Article |
|
Year |
2015 |
Publication |
American Journal of Physiology-Gastrointestinal and Liver Physiology |
Abbreviated Journal |
AJPGI |
|
|
Volume |
309 |
Issue |
6 |
Pages |
G413-G419 |
|
|
Keywords |
capsule endoscopy; computer vision analysis; functional bowel disorders; intestinal motility; machine learning |
|
|
Abstract |
We have previously developed an original method to evaluate small bowel motor function based on computer vision analysis of endoluminal images obtained by capsule endoscopy. Our aim was to demonstrate intestinal motor abnormalities in patients with functional bowel disorders by endoluminal vision analysis. Patients with functional bowel disorders (n = 205) and healthy subjects (n = 136) ingested the endoscopic capsule (Pillcam-SB2, Given-Imaging) after an overnight fast, and 45 min after gastric exit of the capsule a liquid meal (300 ml, 1 kcal/ml) was administered. Endoluminal image analysis was performed by computer vision and machine learning techniques to define the normal range and to identify clusters of abnormal function. After training the algorithm, we used 196 patients and 48 healthy subjects, completely naive, as the test set. In the test set, 51 patients (26%) were detected outside the normal range (P < 0.001 vs. 3 healthy subjects) and clustered into hypo- and hyperdynamic subgroups compared with healthy subjects. Patients with hypodynamic behavior (n = 38) exhibited fewer luminal closure sequences (41 ± 2% of the recording time vs. 61 ± 2%; P < 0.001) and more static sequences (38 ± 3 vs. 20 ± 2%; P < 0.001); in contrast, patients with hyperdynamic behavior (n = 13) had an increased proportion of luminal closure sequences (73 ± 4 vs. 61 ± 2%; P = 0.029) and more high-motion sequences (3 ± 1 vs. 0.5 ± 0.1%; P < 0.001). Applying an original methodology, we have developed a novel classification of functional gut disorders based on objective, physiological criteria of small bowel function. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
American Physiological Society |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ MDS2015 |
Serial |
2666 |
|
Permanent link to this record |
|
|
|
|
Author |
Marta Ligero; Alonso Garcia Ruiz; Cristina Viaplana; Guillermo Villacampa; Maria V Raciti; Jaid Landa; Ignacio Matos; Juan Martin Liberal; Maria Ochoa de Olza; Cinta Hierro; Joaquin Mateo; Macarena Gonzalez; Rafael Morales Barrera; Cristina Suarez; Jordi Rodon; Elena Elez; Irene Braña; Eva Muñoz-Couselo; Ana Oaknin; Roberta Fasani; Paolo Nuciforo; Debora Gil; Carlota Rubio Perez; Joan Seoane; Enriqueta Felip; Manuel Escobar; Josep Tabernero; Joan Carles; Rodrigo Dienstmann; Elena Garralda; Raquel Perez Lopez |
|
|
Title |
A CT-based radiomics signature is associated with response to immune checkpoint inhibitors in advanced solid tumors |
Type |
Journal Article |
|
Year |
2021 |
Publication |
Radiology |
Abbreviated Journal |
|
|
|
Volume |
299 |
Issue |
1 |
Pages |
109-119 |
|
|
Keywords |
|
|
|
Abstract |
Background Reliable predictive imaging markers of response to immune checkpoint inhibitors are needed. Purpose To develop and validate a pretreatment CT-based radiomics signature to predict response to immune checkpoint inhibitors in advanced solid tumors. Materials and Methods In this retrospective study, a radiomics signature was developed in patients with advanced solid tumors (including breast, cervix, gastrointestinal) treated with anti-programmed cell death-1 or programmed cell death ligand-1 monotherapy from August 2012 to May 2018 (cohort 1). This was tested in patients with bladder and lung cancer (cohorts 2 and 3). Radiomics variables were extracted from all metastases delineated at pretreatment CT and selected by using an elastic-net model. A regression model combined radiomics and clinical variables with response as the end point. Biologic validation of the radiomics score with RNA profiling of cytotoxic cells (cohort 4) was assessed with Mann-Whitney analysis. Results The radiomics signature was developed in 85 patients (cohort 1: mean age, 58 years ± 13 [standard deviation]; 43 men) and tested on 46 patients (cohort 2: mean age, 70 years ± 12; 37 men) and 47 patients (cohort 3: mean age, 64 years ± 11; 40 men). Biologic validation was performed in a further cohort of 20 patients (cohort 4: mean age, 60 years ± 13; 14 men). The radiomics signature was associated with clinical response to immune checkpoint inhibitors (area under the curve [AUC], 0.70; 95% CI: 0.64, 0.77; P < .001). In cohorts 2 and 3, the AUC was 0.67 (95% CI: 0.58, 0.76) and 0.67 (95% CI: 0.56, 0.77; P < .001), respectively. A radiomics-clinical signature (including baseline albumin level and lymphocyte count) improved on radiomics-only performance (AUC, 0.74 [95% CI: 0.63, 0.84; P < .001]; Akaike information criterion, 107.00 and 109.90, respectively). Conclusion A pretreatment CT-based radiomics signature is associated with response to immune checkpoint inhibitors, likely reflecting the tumor immunophenotype. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Summers in this issue. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; 600.145 |
Approved |
no |
|
|
Call Number |
Admin @ si @ LGV2021 |
Serial |
3593 |
|
Permanent link to this record |
|
|
|
|
Author |
Tao Wu; Kai Wang; Chuanming Tang; Jianlin Zhang |
|
|
Title |
Diffusion-based network for unsupervised landmark detection |
Type |
Journal Article |
|
Year |
2024 |
Publication |
Knowledge-Based Systems |
Abbreviated Journal |
|
|
|
Volume |
292 |
Issue |
|
Pages |
111627 |
|
|
Keywords |
|
|
|
Abstract |
Landmark detection is a fundamental task aiming at identifying specific landmarks that serve as representations of distinct object features within an image. However, present landmark detection algorithms often adopt complex architectures and are trained in a supervised manner using large datasets to achieve satisfactory performance. When faced with limited data, these algorithms tend to experience a notable decline in accuracy. To address these drawbacks, we propose a novel diffusion-based network (DBN) for unsupervised landmark detection, which leverages the generation ability of diffusion models to detect landmark locations. In particular, we introduce a dual-branch encoder (DualE) for extracting visual features and predicting landmarks. Additionally, we lighten the decoder structure for faster inference, referred to as LightD. By this means, we avoid relying on extensive data comparison and the necessity of designing complex architectures as in previous methods. Experiments on the CelebA, AFLW, 300W and Deepfashion benchmarks show that DBN achieves state-of-the-art performance compared to existing methods. Furthermore, DBN remains robust even when faced with limited data. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ WWT2024 |
Serial |
4024 |
|
Permanent link to this record |
|
|
|
|
Author |
Olivier Penacchio |
|
|
Title |
Mixed Hodge Structures and Equivariant Sheaves on the Projective Plane |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Mathematische Nachrichten |
Abbreviated Journal |
MN |
|
|
Volume |
284 |
Issue |
4 |
Pages |
526-542 |
|
|
Keywords |
Mixed Hodge structures, equivariant sheaves, MSC (2010) Primary: 14C30, Secondary: 14F05, 14M25 |
|
|
Abstract |
We describe an equivalence of categories between the category of mixed Hodge structures and a category of equivariant vector bundles on a toric model of the complex projective plane which verify some semistability condition. We then apply this correspondence to define an invariant which generalizes the notion of R-split mixed Hodge structure and give calculations for the first group of cohomology of possibly non smooth or non-complete curves of genus 0 and 1. Finally, we describe some extension groups of mixed Hodge structures in terms of equivariant extensions of coherent sheaves. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
WILEY-VCH Verlag |
Place of Publication |
|
Editor |
R. Mennicken |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1522-2616 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ Pen2011 |
Serial |
1721 |
|
Permanent link to this record |
|
|
|
|
Author |
Mireia Forns-Nadal; Federico Sem; Anna Mane; Laura Igual; Dani Guinart; Oscar Vilarroya |
|
|
Title |
Increased Nucleus Accumbens Volume in First-Episode Psychosis |
Type |
Journal Article |
|
Year |
2017 |
Publication |
Psychiatry Research-Neuroimaging |
Abbreviated Journal |
PRN |
|
|
Volume |
263 |
Issue |
|
Pages |
57-60 |
|
|
Keywords |
|
|
|
Abstract |
Nucleus accumbens has been reported as a key structure in the neurobiology of schizophrenia. Studies analyzing structural abnormalities have shown conflicting results, possibly related to confounding factors. We investigated the nucleus accumbens volume using manual delimitation in first-episode psychosis (FEP) controlling for age, cannabis use and medication. Thirty-one FEP subjects who were naive or minimally exposed to antipsychotics and a control group were MRI scanned and clinically assessed from baseline to 6 months of follow-up. FEP showed increased relative and total accumbens volumes. Clinical correlations with negative symptoms, duration of untreated psychosis and cannabis use were not significant. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ FSM2017 |
Serial |
3028 |
|
Permanent link to this record |
|
|
|
|
Author |
Yecong Wan; Yuanshuo Cheng; Mingwen Shao; Jordi Gonzalez |
|
|
Title |
Image rain removal and illumination enhancement done in one go |
Type |
Journal Article |
|
Year |
2022 |
Publication |
Knowledge-Based Systems |
Abbreviated Journal |
KBS |
|
|
Volume |
252 |
Issue |
|
Pages |
109244 |
|
|
Keywords |
|
|
|
Abstract |
Rain removal plays an important role in the restoration of degraded images. Recently, CNN-based methods have achieved remarkable success. However, these approaches neglect that the appearance of real-world rain is often accompanied by low-light conditions, which further degrade image quality, thereby hindering the restoration mission. Therefore, it is indispensable to jointly remove the rain and enhance illumination for real-world rain image restoration. To this end, we propose a novel spatially-adaptive network, dubbed SANet, which can remove the rain and enhance illumination in one go with the guidance of a degradation mask. Meanwhile, to fully utilize negative samples, a contrastive loss is proposed to preserve more natural textures and consistent illumination. In addition, we present a new synthetic dataset, named DarkRain, to boost the development of rain image restoration algorithms in practical scenarios. DarkRain not only contains different degrees of rain, but also considers different lighting conditions, more realistically simulating real-world rainfall scenarios. SANet is extensively evaluated on the proposed dataset and attains new state-of-the-art performance against other methods that combine both tasks. Moreover, after a simple transformation, our SANet surpasses the existing state-of-the-art algorithms in both rain removal and low-light image enhancement. |
|
|
Address |
Sept 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; 600.157; 600.168 |
Approved |
no |
|
|
Call Number |
Admin @ si @ WCS2022 |
Serial |
3744 |
|
Permanent link to this record |
|
|
|
|
Author |
Sounak Dey; Palaiahnakote Shivakumara; K.S. Raghunandan; Umapada Pal; Tong Lu; G. Hemantha Kumar; Chee Seng Chan |
|
|
Title |
Script independent approach for multi-oriented text detection in scene image |
Type |
Journal Article |
|
Year |
2017 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
242 |
Issue |
|
Pages |
96-112 |
|
|
Keywords |
|
|
|
Abstract |
Developing a text detection method which is invariant to scripts in natural scene images is a challenging task due to the different geometrical structures of various scripts. Besides, the multiple orientations of text lines in natural scene images make the problem more challenging. This paper proposes to explore the ring radius transform (RRT) for text detection in multi-oriented and multi-script environments. The method finds component regions based on the convex hull to generate radius matrices using RRT. It is a fact that RRT provides low radius values for the pixels that are near to edges, constant radius values for the pixels that represent stroke width, and high radius values that represent holes created in the background and convex hull because of the regular structures of text components. We apply k-means clustering on the radius matrices to group such spatially coherent regions into individual clusters. Then the proposed method studies the radius values of the cluster components that are close to the centroid and far from the centroid to detect text components. Furthermore, we have developed a Bangla dataset (named the ISI-UM dataset) and propose a semi-automatic system for generating its ground truth for text detection of arbitrary orientations, which can be used by researchers for text detection and recognition in the future. The ground truth will be released to the public. Experimental results on our ISI-UM data and other standard datasets, namely, the ICDAR 2013 scene, SVT and MSRA data, show that the proposed method outperforms the existing methods in terms of multi-lingual and multi-oriented text detection ability. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DSR2017 |
Serial |
3260 |
|
Permanent link to this record |
|
|
|
|
Author |
Shiqi Yang; Yaxing Wang; Luis Herranz; Shangling Jui; Joost Van de Weijer |
|
|
Title |
Casting a BAIT for offline and online source-free domain adaptation |
Type |
Journal Article |
|
Year |
2023 |
Publication |
Computer Vision and Image Understanding |
Abbreviated Journal |
CVIU |
|
|
Volume |
234 |
Issue |
|
Pages |
103747 |
|
|
Keywords |
|
|
|
Abstract |
We address the source-free domain adaptation (SFDA) problem, where only the source model is available during adaptation to the target domain. We consider two settings: the offline setting, where all target data can be visited multiple times (epochs) to arrive at a prediction for each target sample, and the online setting, where the target data needs to be classified directly upon arrival. Inspired by diverse-classifier-based domain adaptation methods, in this paper we introduce a second classifier, but with another classifier head fixed. When adapting to the target domain, the additional classifier, initialized from the source classifier, is expected to find misclassified features. Next, when updating the feature extractor, those features will be pushed towards the right side of the source decision boundary, thus achieving source-free domain adaptation. Experimental results show that the proposed method achieves competitive results for offline SFDA on several benchmark datasets compared with existing DA and SFDA methods, and our method surpasses other SFDA methods by a large margin under the online source-free domain adaptation setting. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; MACO |
Approved |
no |
|
|
Call Number |
Admin @ si @ YWH2023 |
Serial |
3874 |
|
Permanent link to this record |
|
|
|
|
Author |
Juan Borrego-Carazo; Carles Sanchez; David Castells; Jordi Carrabina; Debora Gil |
|
|
Title |
BronchoPose: an analysis of data and model configuration for vision-based bronchoscopy pose estimation |
Type |
Journal Article |
|
Year |
2023 |
Publication |
Computer Methods and Programs in Biomedicine |
Abbreviated Journal |
CMPB |
|
|
Volume |
228 |
Issue |
|
Pages |
107241 |
|
|
Keywords |
Videobronchoscopy guiding; Deep learning; Architecture optimization; Datasets; Standardized evaluation framework; Pose estimation |
|
|
Abstract |
Vision-based bronchoscopy (VB) models require the registration of the virtual lung model with the frames from the video bronchoscopy to provide effective guidance during the biopsy. The registration can be achieved by either tracking the position and orientation of the bronchoscopy camera or by calibrating its deviation from the pose (position and orientation) simulated in the virtual lung model. Recent advances in neural networks and temporal image processing have provided new opportunities for guided bronchoscopy. However, such progress has been hindered by the lack of comparative experimental conditions. In the present paper, we share a novel synthetic dataset allowing for a fair comparison of methods. Moreover, this paper investigates several neural network architectures for the learning of temporal information at different levels of subject personalization. In order to improve orientation measurement, we also present a standardized comparison framework and a novel metric for camera orientation learning. Results on the dataset show that the proposed metric and architectures, as well as the standardized conditions, provide notable improvements to current state-of-the-art camera pose estimation in video bronchoscopy. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; |
Approved |
no |
|
|
Call Number |
Admin @ si @ BSC2023 |
Serial |
3702 |
|
Permanent link to this record |