
Author: Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Michael Felsberg; Jorma Laaksonen
Title: Compact color texture description for texture classification
Type: Journal Article
Year: 2015
Publication: Pattern Recognition Letters
Abbreviated Journal: PRL
Volume: 51
Pages: 16-22
Abstract: Describing textures is a challenging problem in computer vision and pattern recognition. The classification problem involves assigning a category label to the texture class it belongs to. Several factors, such as variations in scale, illumination and viewpoint, make the problem of texture description extremely challenging. A variety of histogram-based texture representations exist in the literature. However, combining multiple texture descriptors and assessing their complementarity is still an open research problem. In this paper, we first show that combining multiple local texture descriptors significantly improves recognition performance compared to using the single best method alone. This gain in performance is achieved at the cost of a high-dimensional final image representation. To counter this problem, we propose to use an information-theoretic compression technique to obtain a compact texture description without any significant loss in accuracy. In addition, we perform a comprehensive evaluation of pure color descriptors, popular in object recognition, for the problem of texture classification. Experiments are performed on four challenging texture datasets, namely KTH-TIPS-2a, KTH-TIPS-2b, FMD and Texture-10. The experiments clearly demonstrate that our proposed compact multi-texture approach outperforms the single best texture method alone. In all cases, discriminative color names outperform other color features for texture classification. Finally, we show that combining discriminative color names with the compact texture representation outperforms state-of-the-art methods by 7.8%, 4.3% and 5.0% on the KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets, respectively.
Notes: LAMP; 600.068; 600.079; ADAS
Approved: no
Call Number: Admin @ si @ KRW2015a
Serial: 2587
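The abstract above describes two computable steps: concatenating several local texture histograms into one descriptor, then compressing the high-dimensional result. A minimal Python sketch of that pipeline follows; PCA stands in for the paper's information-theoretic compression technique, and the descriptor count and bin sizes are illustrative assumptions, not the authors' actual setup.

# Hedged sketch: combine multiple per-image texture histograms, then compress.
# PCA is a stand-in for the paper's information-theoretic compression step.
import numpy as np
from sklearn.decomposition import PCA  # assumes scikit-learn is installed

def combined_descriptor(histograms):
    """Concatenate L1-normalized texture histograms (e.g. LBP, color names)."""
    return np.concatenate([h / (h.sum() + 1e-12) for h in histograms])

rng = np.random.default_rng(0)
# 200 hypothetical training images, each with two 256-bin texture histograms
X = np.stack([combined_descriptor([rng.random(256), rng.random(256)])
              for _ in range(200)])              # (200, 512) joint representation
compact = PCA(n_components=64).fit_transform(X)  # compact 64-D description
print(compact.shape)                             # (200, 64)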
 
Author: Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Matthieu Molinier; Jorma Laaksonen
Title: Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification
Type: Journal Article
Year: 2018
Publication: ISPRS Journal of Photogrammetry and Remote Sensing
Abbreviated Journal: ISPRS J
Volume: 138
Pages: 74-85
Keywords: Remote sensing; Deep learning; Scene classification; Local Binary Patterns; Texture analysis
Abstract: Designing discriminative, powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information, provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state of the art for remote sensing scene classification.
Notes: LAMP; 600.109; 600.106; 600.120
Approved: no
Call Number: Admin @ si @ RKW2018
Serial: 3158
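The record above describes feeding an LBP-coded version of the image to a second CNN stream and fusing its predictions with an RGB stream (late fusion). A minimal sketch of that idea follows, assuming a toy CNN and fusion by logit averaging; the paper's actual TEX-Net architectures and LBP coding scheme are not reproduced here.

# Hedged sketch of two-stream late fusion: one stream sees RGB, one sees an
# LBP map of the same image; logits are averaged. The tiny CNN is illustrative.
import torch
import torch.nn as nn
from skimage.feature import local_binary_pattern

def tiny_cnn(in_ch, n_classes):
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(16, n_classes))

rgb = torch.rand(1, 3, 64, 64)                            # hypothetical input image
gray = (rgb.mean(1).squeeze(0).numpy() * 255).astype("uint8")
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp = torch.from_numpy(lbp).float()[None, None] / float(lbp.max())

rgb_net, tex_net = tiny_cnn(3, 21), tiny_cnn(1, 21)       # e.g. 21 UC-Merced classes
fused_logits = (rgb_net(rgb) + tex_net(lbp)) / 2          # late fusion by averaging
print(fused_logits.shape)                                 # torch.Size([1, 21])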
 
Author: Shiqi Yang; Yaxing Wang; Joost Van de Weijer; Luis Herranz; Shangling Jui; Jian Yang
Title: Trust Your Good Friends: Source-Free Domain Adaptation by Reciprocal Neighborhood Clustering
Type: Journal Article
Year: 2023
Publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
Abbreviated Journal: TPAMI
Volume: 45
Issue: 12
Pages: 15883-15895
Abstract: Domain adaptation (DA) aims to alleviate the domain shift between source domain and target domain. Most DA methods require access to the source data, but often that is not possible (e.g., due to data privacy or intellectual property). In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data. Our method is based on the observation that target data, which might not align with the source domain classifier, still forms clear clusters. We capture this intrinsic structure by defining local affinity of the target data, and encourage label consistency among data with high local affinity. We observe that higher affinity should be assigned to reciprocal neighbors. To aggregate information with more context, we consider expanded neighborhoods with small affinity values. Furthermore, we consider the density around each target sample, which can alleviate the negative impact of potential outliers. In the experimental results, we verify that the inherent structure of the target features is an important source of information for domain adaptation. We demonstrate that this local structure can be efficiently captured by considering the local neighbors, the reciprocal neighbors, and the expanded neighborhood. Finally, we achieve state-of-the-art performance on several 2D image and 3D point cloud recognition datasets.
Notes: LAMP; MACO
Approved: no
Call Number: Admin @ si @ YWW2023
Serial: 3889
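The key mechanism in the abstract above is assigning higher affinity to reciprocal neighbors (samples that appear in each other's k-nearest-neighbor lists) and small affinity to expanded or plain neighbors. A minimal sketch follows, under assumed feature dimensions and k; it illustrates the affinity construction only, not the paper's full objective.

# Hedged sketch: build an affinity matrix in which reciprocal nearest neighbors
# get high affinity and one-directional neighbors get a small affinity value.
import numpy as np

def reciprocal_affinity(feats, k=5, low=0.1):
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)   # cosine features
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)                             # exclude self
    knn = np.argsort(-sim, axis=1)[:, :k]                      # k nearest neighbors
    in_knn = np.zeros_like(sim, dtype=bool)
    rows = np.repeat(np.arange(len(f)), k)
    in_knn[rows, knn.ravel()] = True
    reciprocal = in_knn & in_knn.T                # i in kNN(j) AND j in kNN(i)
    return np.where(reciprocal, 1.0, np.where(in_knn, low, 0.0))

A = reciprocal_affinity(np.random.default_rng(0).normal(size=(100, 32)))
# A[i, j] == 1.0 for reciprocal neighbors, `low` for plain neighbors, else 0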
 
Author: Qingshan Chen; Zhenzhen Quan; Yujun Li; Chao Zhai; Mikhail Mozerov
Title: An Unsupervised Domain Adaption Approach for Cross-Modality RGB-Infrared Person Re-Identification
Type: Journal Article
Year: 2023
Publication: IEEE Sensors Journal
Abbreviated Journal: IEEE-SENS
Volume: 23
Issue: 24
Abstract: Dual-camera systems commonly employed in surveillance serve as the foundation for RGB-infrared (IR) cross-modality person re-identification (ReID). However, significant modality differences give rise to inferior performance compared to single-modality scenarios. Furthermore, most existing studies in this area rely on supervised training with meticulously labeled datasets. Labeling RGB-IR image pairs is more complex than labeling conventional image data, and deploying pretrained models on unlabeled datasets can lead to catastrophic performance degradation. In contrast to previous solutions that focus solely on cross-modality or domain adaptation issues, this article presents an end-to-end unsupervised domain adaptation (UDA) framework for cross-modality person ReID, which can simultaneously address both of these challenges. This model employs source domain classes, target domain clusters, and unclustered instance samples for training, maximizing the comprehensive use of the dataset. Moreover, it addresses the problem of mismatched clustering labels between the two modalities in the target domain by incorporating a label matching module that reassigns reliable clusters with labels, ensuring correspondence between different modality labels. We construct the loss function by incorporating distinctiveness loss and multiplicity loss, both of which are determined by the similarity of neighboring features in the predicted feature space and the difference between distant features. This approach enables efficient feature clustering and cluster class assignment to occur concurrently. Eight UDA cross-modality person ReID experiments are conducted on three real datasets and six synthetic datasets. The experimental results unequivocally demonstrate that the proposed model outperforms the existing state-of-the-art algorithms to a significant degree. Notably, in RegDB → RegDB_light, the Rank-1 accuracy exhibits a remarkable improvement of 8.24%.
Notes: LAMP
Approved: no
Call Number: Admin @ si @ CQL2023
Serial: 3884
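The label matching module described above reassigns cluster labels so that clusters found independently in the RGB and IR modalities correspond. A minimal sketch follows, using Hungarian matching on centroid similarity via scipy's linear_sum_assignment; treating the matching as a linear assignment over centroids is an assumption about the mechanics, not the paper's exact procedure.

# Hedged sketch: align IR cluster labels to RGB cluster labels by maximizing
# centroid cosine similarity with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cluster_labels(rgb_centroids, ir_centroids):
    """Return an IR->RGB cluster index mapping maximizing cosine similarity."""
    r = rgb_centroids / np.linalg.norm(rgb_centroids, axis=1, keepdims=True)
    i = ir_centroids / np.linalg.norm(ir_centroids, axis=1, keepdims=True)
    cost = -(i @ r.T)                       # negate: Hungarian minimizes cost
    ir_idx, rgb_idx = linear_sum_assignment(cost)
    return dict(zip(ir_idx.tolist(), rgb_idx.tolist()))

rng = np.random.default_rng(0)
mapping = match_cluster_labels(rng.normal(size=(10, 64)), rng.normal(size=(10, 64)))
# mapping[c] is the RGB cluster label reassigned to IR cluster c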
 
Author: Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas
Title: Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices
Type: Journal Article
Year: 2016
Publication: Neurocomputing
Abbreviated Journal: NEUCOM
Volume: 175
Issue: B
Pages: 866–876
Keywords: Head gestures recognition; Mirroring detection; Dyadic social interaction analysis; Wearable devices
Abstract: During face-to-face human interaction, nonverbal communication plays a fundamental role. A relevant aspect of social interactions is mirroring, in which a person tends to mimic the nonverbal behavior (head and body gestures, vocal prosody, etc.) of the counterpart. In this paper, we introduce a computer vision-based system to detect mirroring in dyadic social interactions with the use of a wearable platform. In our context, mirroring is inferred as simultaneous head nodding displayed by the interlocutors. Our approach consists of the following steps: (1) facial features extraction; (2) facial features stabilization; (3) head nodding recognition; and (4) mirroring detection. Our system achieves a mirroring detection accuracy of 72% on a custom mirroring dataset.
Notes: LAMP; 600.072; 600.068
Approved: no
Call Number: Admin @ si @ TRM2016
Serial: 2721
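Step (4) of the pipeline above, mirroring detection, reduces to deciding when both interlocutors nod at roughly the same time. A minimal sketch follows, assuming per-person binary nod signals from step (3) and an illustrative tolerance window; the paper's actual decision rule is not reproduced here.

# Hedged sketch: flag mirroring frames where both people nod within a short
# temporal window of each other. Window length (in frames) is an assumption.
import numpy as np

def mirroring_events(nods_a, nods_b, window=15):
    """Frames where both people nod within `window` frames of each other."""
    kernel = np.ones(2 * window + 1)
    near_a = np.convolve(nods_a, kernel, mode="same") > 0   # a nod of A nearby
    near_b = np.convolve(nods_b, kernel, mode="same") > 0   # a nod of B nearby
    return np.flatnonzero(near_a & near_b)

a = np.zeros(300); a[100:110] = 1          # person A nods around frame 100
b = np.zeros(300); b[105:118] = 1          # person B nods shortly after
print(mirroring_events(a, b)[:3])          # overlapping frames -> mirroring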