Author: Oscar Argudo; Marc Comino; Antonio Chica; Carlos Andujar; Felipe Lumbreras
Title: Segmentation of aerial images for plausible detail synthesis
Type: Journal Article
Year: 2018
Publication: Computers & Graphics (CG)
Volume: 71
Pages: 23-34
Keywords: Terrain editing; Detail synthesis; Vegetation synthesis; Terrain rendering; Image segmentation
Abstract: The visual enrichment of digital terrain models with plausible synthetic detail requires the segmentation of aerial images into a suitable collection of categories. In this paper we present a complete pipeline for segmenting high-resolution aerial images into a user-defined set of categories distinguishing e.g. terrain, sand, snow, water, and different types of vegetation. This segmentation-for-synthesis problem implies that per-pixel categories must be established according to the algorithms chosen for rendering the synthetic detail. This precludes the definition of a universal set of labels and hinders the construction of large training sets. Since artists might choose to add new categories on the fly, the whole pipeline must be robust against unbalanced datasets, and fast on both training and inference. Under these constraints, we analyze the contribution of common per-pixel descriptors, and compare the performance of state-of-the-art supervised learning algorithms. We report the findings of two user studies. The first one was conducted to analyze human accuracy when manually labeling aerial images. The second user study compares detailed terrains built using different segmentation strategies, including official land cover maps. These studies demonstrate that our approach can be used to turn digital elevation models into fully-featured, detailed terrains with minimal authoring efforts.
ISSN: 0097-8493
Notes: MSIAU; 600.086; 600.118
Call Number: Admin @ si @ ACC2018 (Serial 3147)
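The constraints named in the abstract (user-defined labels, unbalanced data, fast training and inference) map naturally onto a classical per-pixel classifier. Below is a minimal sketch in that spirit, assuming scikit-learn, SciPy, and two illustrative descriptors (color plus local variance); the feature set and the random-forest choice are stand-ins, not the paper's exact pipeline.

```python
# Illustrative per-pixel segmenter in the spirit of the paper's constraints:
# user-defined categories, unbalanced labels, fast training and inference.
# Descriptors (RGB + local variance) and the random forest are stand-ins.
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.ensemble import RandomForestClassifier

def per_pixel_descriptors(rgb):
    """Per-pixel features: the three color channels plus a crude texture cue."""
    gray = rgb.mean(axis=2)
    texture = generic_filter(gray, np.var, size=5)  # local intensity variance
    return np.dstack([rgb, texture[..., None]]).reshape(-1, 4)

def train_segmenter(rgb, labels):
    """rgb: (H, W, 3) in [0, 1]; labels: (H, W) ints, -1 = unlabeled pixel."""
    feats = per_pixel_descriptors(rgb)
    flat = labels.reshape(-1)
    mask = flat >= 0  # artists label only a few strokes per category
    clf = RandomForestClassifier(
        n_estimators=50,
        class_weight="balanced",  # guards against unbalanced category sizes
        n_jobs=-1,
    )
    clf.fit(feats[mask], flat[mask])
    return clf

def segment(clf, rgb):
    h, w, _ = rgb.shape
    return clf.predict(per_pixel_descriptors(rgb)).reshape(h, w)
```

A forest with balanced class weights tolerates categories of very different sizes and retrains quickly on stroke-sized label sets, which is what would let an artist add a new category on the fly.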
 

 
Author: Henry Velesaca; Patricia Suarez; Raul Mira; Angel Sappa
Title: Computer Vision based Food Grain Classification: A Comprehensive Survey
Type: Journal Article
Year: 2021
Publication: Computers and Electronics in Agriculture (CEA)
Volume: 187
Pages: 106287
Abstract: This manuscript presents a comprehensive survey on recent computer vision based food grain classification techniques. It includes state-of-the-art approaches intended for different grain varieties. The approaches proposed in the literature are analyzed according to the processing stages considered in the classification pipeline, making it easier to identify common techniques and comparisons. Additionally, the type of images considered by each approach (i.e., images from the visible, infrared, multispectral, and hyperspectral bands) together with the strategy used to generate ground truth data (i.e., real and synthetic images) are reviewed. Finally, conclusions highlighting future needs and challenges are presented.
Notes: MSIAU; 600.130; 600.122
Call Number: Admin @ si @ VSM2021 (Serial 3576)
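The survey organizes the literature by processing stage; the skeleton below makes those stages concrete (kernel segmentation, per-kernel descriptors, classification). All thresholds, features, and the nearest-centroid classifier are hypothetical placeholders, not methods taken from the paper.

```python
# Schematic of the grain-classification pipeline stages the survey uses to
# organize approaches: segment kernels -> per-kernel features -> classify.
# Everything here is an illustrative placeholder, not the paper's method.
import numpy as np
from scipy import ndimage

def segment_kernels(gray, thresh=0.5):
    """Separate grain kernels from the background and label each one."""
    labeled, n = ndimage.label(gray > thresh)
    return labeled, n

def kernel_features(gray, labeled, n):
    """Two toy per-kernel descriptors: area and mean intensity."""
    idx = np.arange(1, n + 1)
    areas = ndimage.sum(np.ones_like(gray), labeled, index=idx)
    means = ndimage.mean(gray, labeled, index=idx)
    return np.column_stack([areas, means])

def classify(features, centroids):
    """Assign each kernel to the nearest class centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)
```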
 

 
Author: Xavier Soria; Gonzalo Pomboza-Junez; Angel Sappa
Title: LDC: Lightweight Dense CNN for Edge Detection
Type: Journal Article
Year: 2022
Publication: IEEE Access (ACCESS)
Volume: 10
Pages: 68281-68290
Abstract: This paper presents a Lightweight Dense Convolutional (LDC) neural network for edge detection. The proposed model is an adaptation of two state-of-the-art approaches, but it requires less than 4% of the parameters of those approaches. The proposed architecture generates thin edge maps and reaches the highest score (i.e., ODS) when compared with lightweight models (models with less than 1 million parameters), and reaches a similar performance when compared with heavy architectures (models with about 35 million parameters). Both quantitative and qualitative results and comparisons with state-of-the-art models, using different edge detection datasets, are provided. The proposed LDC does not use pre-trained weights and requires straightforward hyper-parameter settings. The source code is released at https://github.com/xavysp/LDC.
Published: 27 June 2022
Publisher: IEEE
Notes: MSIAU; MACO; 600.160; 600.167
Call Number: Admin @ si @ SPS2022 (Serial 3751)
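For readers unfamiliar with the "lightweight dense" idea, here is a generic PyTorch sketch of a densely connected edge network kept under the sub-1M-parameter budget the abstract mentions. This is not the actual LDC architecture (that code lives at the GitHub URL above); layer counts and widths are illustrative.

```python
# Generic lightweight dense CNN for edge maps (illustrative; not the real LDC
# from https://github.com/xavysp/LDC). Dense connectivity means each conv
# layer sees the concatenation of all previous feature maps.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=16, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class TinyEdgeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, 3, padding=1)
        self.dense = DenseBlock(16)
        self.head = nn.Conv2d(self.dense.out_ch, 1, 1)  # 1-channel edge map

    def forward(self, x):
        return torch.sigmoid(self.head(self.dense(self.stem(x))))

net = TinyEdgeNet()
n_params = sum(p.numel() for p in net.parameters())
assert n_params < 1_000_000  # the "lightweight" regime the paper targets
```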
 

 
Author: Armin Mehri; Parichehr Behjati; Angel Sappa
Title: TnTViT-G: Transformer in Transformer Network for Guidance Super Resolution
Type: Journal Article
Year: 2023
Publication: IEEE Access (ACCESS)
Volume: 11
Pages: 11529-11540
Abstract: Image Super Resolution is a potential approach that can improve the image quality of low-resolution optical sensors, leading to improved performance in various industrial applications. It is important to emphasize that most state-of-the-art super resolution algorithms often use a single channel of input data for training and inference. However, this practice ignores the fact that the cost of acquiring high-resolution images in various spectral domains can differ a lot from one another. In this paper, we attempt to exploit complementary information from a low-cost channel (visible image) to increase the image quality of an expensive channel (infrared image). We propose a dual stream Transformer-based super resolution approach that uses the visible image as a guide to super-resolve another spectral band image. To this end, we introduce Transformer in Transformer network for Guidance super resolution, named TnTViT-G, an efficient and effective method that extracts the features of input images via different streams and fuses them together at various stages. In addition, unlike other guidance super resolution approaches, TnTViT-G is not limited to a fixed upsample size and it can generate super-resolved images of any size. Extensive experiments on various datasets show that the proposed model outperforms other state-of-the-art super resolution approaches. TnTViT-G surpasses state-of-the-art methods by up to 0.19∼2.3 dB, while it is memory efficient.
Notes: MSIAU
Call Number: Admin @ si @ MBS2023 (Serial 3876)
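The dual-stream, guide-then-fuse idea can be sketched compactly. In the sketch below, small convolutional encoders stand in for the paper's Transformer-in-Transformer streams, and the runtime scale argument mirrors the claim that the model is not tied to a fixed upsample size; all names and sizes are illustrative.

```python
# Dual-stream guided super resolution, sketched with small conv encoders in
# place of the paper's Transformer-in-Transformer streams. The visible image
# (guide) is full resolution; the infrared input is low resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

def encoder(in_ch, ch=32):
    return nn.Sequential(
        nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class GuidedSR(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.ir_stream = encoder(1, ch)       # low-res infrared stream
        self.guide_stream = encoder(3, ch)    # high-res visible guide stream
        self.fuse = nn.Conv2d(2 * ch, ch, 1)  # merge the two streams
        self.head = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, ir_lr, guide_hr, scale):
        # Upsample IR features to the guide's resolution, then fuse; `scale`
        # is a runtime argument, so the model is not fixed to one factor.
        ir = F.interpolate(self.ir_stream(ir_lr), scale_factor=scale,
                           mode="bilinear", align_corners=False)
        g = self.guide_stream(guide_hr)
        fused = F.relu(self.fuse(torch.cat([ir, g], dim=1)))
        return self.head(fused)

# e.g. x4 super resolution of a 64x64 IR image guided by a 256x256 RGB image
sr = GuidedSR()(torch.rand(1, 1, 64, 64), torch.rand(1, 3, 256, 256), scale=4)
```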
 

 
Author: Armin Mehri; Parichehr Behjati; Dario Carpio; Angel Sappa
Title: SRFormer: Efficient Yet Powerful Transformer Network for Single Image Super Resolution
Type: Journal Article
Year: 2023
Publication: IEEE Access (ACCESS)
Volume: 11
Abstract: Recent breakthroughs in single image super resolution have investigated the potential of deep Convolutional Neural Networks (CNNs) to improve performance. However, CNN-based models suffer from their limited receptive fields and their inability to adapt to the input content. Recently, Transformer-based models were presented, which demonstrated major performance gains in Natural Language Processing and Vision tasks while mitigating the drawbacks of CNNs. Nevertheless, Transformer computational complexity can increase quadratically for high-resolution images, and the fact that it ignores the original structure of the image by converting it to a 1D structure can make it problematic to capture the local context information and adapt it for real-time applications. In this paper we present SRFormer, an efficient yet powerful Transformer-based architecture, by making several key designs in the building of Transformer blocks and Transformer layers that allow us to consider the original structure of the image (i.e., 2D structure) while capturing both local and global dependencies without raising computational demands or memory consumption. We also present a Gated Multi-Layer Perceptron (MLP) Feature Fusion module to aggregate the features of different stages of Transformer blocks by focusing on inter-spatial relationships while adding minor computational costs to the network. We have conducted extensive experiments on several super-resolution benchmark datasets to evaluate our approach. SRFormer demonstrates superior performance compared to state-of-the-art methods from both Transformer and Convolutional networks, with an improvement margin of 0.1∼0.53 dB. Furthermore, while SRFormer has almost the same model size, it outperforms SwinIR by 0.47% and halves SwinIR's inference time. The code will be available on GitHub.
Notes: MSIAU
Call Number: Admin @ si @ MBC2023 (Serial 3887)
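A plausible reading of the Gated MLP Feature Fusion module, sketched in PyTorch: stage features are concatenated, a position-wise MLP (1x1 convolutions, so the 2D structure is preserved) produces gates, and a projection merges the gated features back to one stream. This is an assumption-laden illustration of the abstract, not SRFormer's actual code.

```python
# Sketch of gated MLP feature fusion: features from several Transformer
# stages are concatenated, gated by a position-wise MLP (1x1 convs keep the
# 2D structure), and projected back to the working width. Illustrative only.
import torch
import torch.nn as nn

class GatedMLPFusion(nn.Module):
    def __init__(self, ch, n_stages):
        super().__init__()
        total = ch * n_stages
        self.gate = nn.Sequential(   # per-position MLP producing gates
            nn.Conv2d(total, total // 4, 1),
            nn.GELU(),
            nn.Conv2d(total // 4, total, 1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(total, ch, 1)  # merge back to one stream

    def forward(self, stage_feats):
        # stage_feats: list of (B, ch, H, W) tensors from different stages
        x = torch.cat(stage_feats, dim=1)
        return self.proj(x * self.gate(x))   # element-wise gated fusion

fused = GatedMLPFusion(ch=64, n_stages=3)(
    [torch.rand(2, 64, 32, 32) for _ in range(3)])
```

Because the gate is computed per spatial position, the module stays cheap (only 1x1 convolutions) while still letting the fusion weight features differently across the image, which is one way to read the abstract's "inter-spatial relationships" at "minor computational cost".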