|
Armin Mehri, Parichehr Behjati Ardakani, & Angel Sappa. (2021). LiNet: A Lightweight Network for Image Super Resolution. In 25th International Conference on Pattern Recognition (pp. 7196–7202).
Abstract: This paper proposes a new lightweight network, LiNet, that improves efficiency in lightweight super resolution, performing comparably to very large and costly networks while using far fewer network parameters and operations. The proposed architecture allows the network to learn more abstract properties by bypassing low-level information via multiple links. LiNet introduces a Compact Dense Module, which contains a set of inner and outer blocks, to efficiently extract meaningful information, to better leverage multi-level representations before the upsampling stage, and to allow efficient information and gradient flow within the network. Experiments on benchmark datasets show that the proposed LiNet achieves favorable performance against lightweight state-of-the-art methods.
|
|
|
Alejandro Cartas, Petia Radeva, & Mariella Dimiccoli. (2021). Modeling long-term interactions to enhance action recognition. In 25th International Conference on Pattern Recognition (pp. 10351–10358).
Abstract: In this paper, we propose a new approach to understand actions in egocentric videos that exploits the semantics of object interactions at both frame and temporal levels. At the frame level, we use a region-based approach that takes as input a primary region roughly corresponding to the user's hands and a set of secondary regions potentially corresponding to the interacting objects, and calculates the action score through a CNN formulation. This information is then fed to a Hierarchical Long Short-Term Memory Network (HLSTM) that captures temporal dependencies between actions within and across shots. Ablation studies thoroughly validate the proposed approach, showing in particular that both levels of the HLSTM architecture contribute to performance improvement. Furthermore, quantitative comparisons show that the proposed approach outperforms the state of the art in terms of action recognition on standard benchmarks, without relying on motion information.
|
|
|
Quentin Angermann, Jorge Bernal, Cristina Sanchez Montes, Maroua Hammami, Gloria Fernandez Esparrach, Xavier Dray, et al. (2017). Clinical Usability Quantification Of a Real-Time Polyp Detection Method In Videocolonoscopy. In 25th United European Gastroenterology Week.
|
|
|
Cristina Sanchez Montes, F. Javier Sanchez, Cristina Rodriguez de Miguel, Henry Cordova, Jorge Bernal, Maria Lopez Ceron, et al. (2017). Histological Prediction Of Colonic Polyps By Computer Vision. Preliminary Results. In 25th United European Gastroenterology Week.
Abstract: During colonoscopy, clinicians perform visual inspection of polyps to predict their histology. Kudo's pit pattern classification is one of the most commonly used schemes for optical diagnosis. These surface patterns present a contrast with respect to their neighboring regions and can be considered as bright regions in the image that can attract the attention of computational methods.
Keywords: polyps; histology; computer vision
|
|
|
Xinhang Song, Shuqiang Jiang, & Luis Herranz. (2017). Combining Models from Multiple Sources for RGB-D Scene Recognition. In 26th International Joint Conference on Artificial Intelligence (pp. 4523–4529).
Abstract: Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained with a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they use only low-level filters learned from RGB data, and thus cannot properly exploit depth-specific patterns, and 2) RGB and depth features are combined only at high levels but rarely at lower levels. In this paper, we propose a framework that leverages knowledge acquired from large RGB datasets together with depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities.
Keywords: Robotics and Vision; Vision and Perception
|
|
|
Victor Ponce, Hugo Jair Escalante, Sergio Escalera, & Xavier Baro. (2015). Gesture and Action Recognition by Evolved Dynamic Subgestures. In 26th British Machine Vision Conference (pp. 129.1–129.13).
Abstract: This paper introduces a framework for gesture and action recognition based on the evolution of temporal gesture primitives, or subgestures. Our work is inspired by the principle of producing genetic variations within a population of gesture subsequences, with the goal of obtaining a set of gesture units that enhance the generalization capability of standard gesture recognition approaches. In our context, gesture primitives are evolved over time using dynamic programming and generative models in order to recognize complex actions. In a few generations, the proposed subgesture-based representation of actions and gestures outperforms state-of-the-art results on the MSRDaily3D and MSRAction3D datasets.
|
|
|
Huamin Ren, Weifeng Liu, Soren Ingvor Olsen, Sergio Escalera, & Thomas B. Moeslund. (2015). Unsupervised Behavior-Specific Dictionary Learning for Abnormal Event Detection. In 26th British Machine Vision Conference.
|
|
|
Mohammad Ali Bagheri, Qigang Gao, & Sergio Escalera. (2013). Logo recognition Based on the Dempster-Shafer Fusion of Multiple Classifiers. In 26th Canadian Conference on Artificial Intelligence (Vol. 7884, pp. 1–12). Springer Berlin Heidelberg.
Note: Best paper award.
Abstract: The performance of different feature extraction and shape description methods in trademark image recognition systems has been studied by several researchers. However, the potential improvement in classification through feature fusion by ensemble-based methods has remained unattended. In this work, we evaluate the performance of an ensemble of three classifiers, each trained on a different feature set. Three promising shape description techniques, namely Zernike moments, generic Fourier descriptors, and shape signature, are used to extract informative features from logo images, and each set of features is fed into an individual classifier. In order to reduce recognition error, a powerful combination strategy based on the Dempster-Shafer theory is utilized to fuse the three classifiers trained on different sources of information. This combination strategy can effectively make use of the diversity of base learners generated with different sets of features. The recognition results of the individual classifiers are compared with those obtained from fusing the classifiers' outputs, showing significant performance improvements of the proposed methodology.
Keywords: Logo recognition; ensemble classification; Dempster-Shafer fusion; Zernike moments; generic Fourier descriptor; shape signature
|
|
|
Hassan Ahmed Sial, S. Sancho, Ramon Baldrich, Robert Benavente, & Maria Vanrell. (2018). Color-based data augmentation for Reflectance Estimation. In 26th Color Imaging Conference (pp. 284–289).
Abstract: Deep convolutional architectures have proven to be successful frameworks for solving generic computer vision problems. The estimation of intrinsic reflectance from a single image is not yet a solved problem. Encoder-decoder architectures are a natural approach for pixel-wise reflectance estimation, although they usually suffer from the lack of large datasets. The lack of data can be partially alleviated with data augmentation; however, the usual techniques focus on geometric changes, which do not help reflectance estimation. In this paper, we propose a color-based data augmentation technique that extends the training data by increasing the variability of chromaticity. Rotations on the red-green/blue-yellow plane of an opponent color space enlarge the training set in a coherent and sound way that improves the network's generalization capability for reflectance estimation. We perform experiments on the Sintel dataset showing that our color-based augmentation increases performance and outperforms one of the state-of-the-art methods.
|
|
|
Emanuel Sanchez Aimar, Petia Radeva, & Mariella Dimiccoli. (2019). Social Relation Recognition in Egocentric Photostreams. In 26th International Conference on Image Processing (pp. 3227–3231).
Abstract: This paper proposes an approach to automatically categorize the social interactions of a user wearing a photo-camera (2 fpm), relying solely on what the camera is seeing. The problem is challenging due to the overwhelming complexity of social life and the extreme intra-class variability of social interactions captured under unconstrained conditions. We adopt the formalization proposed in Bugental's social theory, which groups human relations into five social domains with related categories. Our method is a new deep learning architecture that exploits the hierarchical structure of the label space and relies on a set of social attributes estimated at frame level to provide a semantic representation of social interactions. Experimental results on the new EgoSocialRelation dataset demonstrate the effectiveness of our proposal.
|
|
|
Mohamed Ali Souibgui, Sanket Biswas, Sana Khamekhem Jemni, Yousri Kessentini, Alicia Fornes, Josep Llados, et al. (2022). DocEnTr: An End-to-End Document Image Enhancement Transformer. In 26th International Conference on Pattern Recognition (pp. 1699–1705).
Abstract: Document images can be affected by many degradation scenarios, which cause recognition and processing difficulties. In this age of digitization, it is important to denoise them for proper usage. To address this challenge, we present a new encoder-decoder architecture based on vision transformers to enhance both machine-printed and handwritten document images, in an end-to-end fashion. The encoder operates directly on the pixel patches with their positional information, without the use of any convolutional layers, while the decoder reconstructs a clean image from the encoded patches. Conducted experiments show the superiority of the proposed model compared to state-of-the-art methods on several DIBCO benchmarks. Code and models will be publicly available at: https://github.com/dali92002/DocEnTR
Keywords: Degradation; Optical character recognition; Self-supervised learning; Benchmark testing; Transformers
|
|
|
Carlos Boned Riera, & Oriol Ramos Terrades. (2022). Discriminative Neural Variational Model for Unbalanced Classification Tasks in Knowledge Graph. In 26th International Conference on Pattern Recognition (pp. 2186–2191).
Abstract: Nowadays, link discovery methods have shown significant improvements on knowledge graphs. However, their performance is harmed by the unbalanced nature of this classification problem, since many methods are easily biased toward not finding proper links. In this paper we present a discriminative neural variational auto-encoder model, called DNVAE from now on, in which we introduce latent variables to serve as embedding vectors. As a result, the learnt generative model better approximates the underlying distribution and, at the same time, better differentiates the types of relations in the knowledge graph. We have evaluated this approach on a benchmark knowledge graph and on census records. Results on the latter dataset are quite impressive, since we reach the highest possible score in the evaluation metrics. However, further experiments are still needed to evaluate more deeply the performance of the method on more challenging tasks.
Keywords: Measurement; Semantics; Benchmark testing; Data models; Pattern recognition
|
|
|
Vacit Oguz Yazici, Joost Van de Weijer, & Longlong Yu. (2022). Visual Transformers with Primal Object Queries for Multi-Label Image Classification. In 26th International Conference on Pattern Recognition.
Abstract: Multi-label image classification is about predicting a set of class labels that can be considered as orderless sequential data. Transformers process sequential data as a whole and are therefore inherently good at set prediction. The first vision-based transformer model, which was proposed for the object detection task, introduced the concept of object queries. Object queries are learnable positional encodings that are used by attention modules in decoder layers to decode the object classes or bounding boxes using the regions of interest in an image. However, inputting the same set of object queries to different decoder layers hinders the training: it results in lower performance and delays convergence. In this paper, we propose the usage of primal object queries that are provided only at the start of the transformer decoder stack. In addition, we improve the mixup technique proposed for multi-label classification. The proposed transformer model with primal object queries improves the state-of-the-art class-wise F1 metric by 2.1% and 1.8%, and speeds up the convergence by 79.0% and 38.6% on the MS-COCO and NUS-WIDE datasets, respectively.
|
|
|
Ayan Banerjee, Palaiahnakote Shivakumara, Parikshit Acharya, Umapada Pal, & Josep Llados. (2022). TWD: A New Deep E2E Model for Text Watermark Detection in Video Images. In 26th International Conference on Pattern Recognition.
Abstract: Text watermark detection in video images is challenging because text watermark characteristics differ from those of caption and scene texts in video images. Developing a successful model for detecting text watermark, caption, and scene texts is an open challenge. This study aims at developing a new deep end-to-end model for Text Watermark Detection (TWD), caption, and scene text in video images. To standardize non-uniform contrast, quality, and resolution, we explore the U-Net3+ model for enhancing poor-quality text without affecting high-quality text. Similarly, to address the challenges of arbitrary orientation, text shapes, and complex backgrounds, we explore a Stacked Hourglass Encoded Fourier Contour Embedding Network (SFCENet) by feeding the output of the U-Net3+ model as input. Furthermore, the proposed work integrates the enhancement and detection models as an end-to-end model for detecting multi-type text in video images. To validate the proposed model, we create our own dataset (named TW-866), which provides video images containing text watermark, caption (subtitles), as well as scene text. The proposed model is also evaluated on standard natural scene text detection datasets, namely, ICDAR 2019 MLT, CTW1500, Total-Text, and DAST1500. The results show that the proposed method outperforms the existing methods. To the best of our knowledge, this is the first work on text watermark detection in video images.
Keywords: Deep learning; U-Net; FCENet; Scene text detection; Video text detection; Watermark text detection
|
|
|
Yaxing Wang, Abel Gonzalez-Garcia, Joost Van de Weijer, & Luis Herranz. (2019). SDIT: Scalable and Diverse Cross-domain Image Translation. In 27th ACM International Conference on Multimedia (pp. 1267–1276).
Abstract: Recently, image-to-image translation research has witnessed remarkable progress. Although current approaches successfully generate diverse outputs or perform scalable image transfer, these properties have not been combined into a single method. To address this limitation, we propose SDIT: Scalable and Diverse image-to-image translation, which combines both properties within a single generator. The diversity is determined by a latent variable that is randomly sampled from a normal distribution. The scalability is obtained by conditioning the network on the domain attributes. Additionally, we exploit an attention mechanism that permits the generator to focus on the domain-specific attributes. We empirically demonstrate the performance of the proposed method on face mapping and other datasets beyond faces.
|
|