|
Rao Muhammad Anwer, Fahad Shahbaz Khan, Joost Van de Weijer, & Jorma Laaksonen. (2017). TEX-Nets: Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition. In 19th International Conference on Multimodal Interaction.
Abstract: Recognizing materials and textures in realistic imaging conditions is a challenging computer vision problem. For many years, orderless representations based on local features were the dominant approach for texture recognition. Recently, deep local features extracted from the intermediate layers of a Convolutional Neural Network (CNN) have been used as filter banks. These dense local descriptors from a deep model, when encoded with Fisher Vectors, have been shown to provide excellent results for texture recognition. The CNN models employed in such approaches take RGB patches as input and train on a large number of labeled images. We show that CNN models, which we call TEX-Nets, trained using mapped coded images with explicit texture information provide complementary information to the standard deep models trained on RGB patches. We further investigate two deep architectures, namely early and late fusion, to combine the texture and color information. Experiments on benchmark texture datasets clearly demonstrate that TEX-Nets provide complementary information to the standard RGB deep network. Our approach provides a large gain of 4.8%, 3.5%, 2.6% and 4.1% in accuracy on the DTD, KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets respectively, compared to the standard RGB network of the same architecture. Further, our final combination leads to consistent improvements over the state-of-the-art on all four datasets.
Keywords: Convolutional Neural Networks; Texture Recognition; Local Binary Patterns
|
|
|
Dimosthenis Karatzas, Marçal Rusiñol, Coen Antens, & Miquel Ferrer. (2008). Segmentation Robust to the Vignette Effect for Machine Vision Systems. In 19th International Conference on Pattern Recognition.
Abstract: The vignette effect (radial fall-off) is commonly encountered in images obtained through certain image acquisition setups and can seriously hinder automatic analysis processes. In this paper we present a fast and efficient method for dealing with vignetting in the context of object segmentation in an existing industrial inspection setup. The vignette effect is modelled here as a circular, non-linear gradient. The method estimates the gradient parameters and employs them to perform segmentation. Segmentation results on a variety of images indicate that the presented method is able to successfully tackle the vignette effect.
|
|
|
Partha Pratim Roy, Umapada Pal, Josep Llados, & F. Kimura. (2008). Convex Hull based Approach for Multi-oriented Character Recognition from Graphical Documents. In 19th International Conference on Pattern Recognition.
|
|
|
H. Chouaib, Oriol Ramos Terrades, Salvatore Tabbone, F. Cloppet, & N. Vincent. (2008). Feature Selection Combining Genetic Algorithm and Adaboost Classifiers. In 19th International Conference on Pattern Recognition (pp. 1–4).
|
|
|
Salvatore Tabbone, Oriol Ramos Terrades, & S. Barrat. (2008). Histogram of Radon Transform. A useful descriptor for shape retrieval. In 19th International Conference on Pattern Recognition (pp. 1–4).
|
|
|
Ariel Amato, Mikhail Mozerov, Ivan Huerta, Jordi Gonzalez, & Juan J. Villanueva. (2008). Background Subtraction Technique Based on Chromaticity and Intensity Patterns. In 19th International Conference on Pattern Recognition (pp. 1–4).
|
|
|
Miquel Ferrer, Ernest Valveny, F. Serratosa, K. Riesen, & Horst Bunke. (2008). An Approximate Algorithm for Median Graph Computation using Graph Embedding. In 19th International Conference on Pattern Recognition.
|
|
|
Murad Al Haj, Francisco Javier Orozco, Jordi Gonzalez, & Juan J. Villanueva. (2008). Automatic Face and Facial Features Initialization for Robust and Accurate Tracking. In 19th International Conference on Pattern Recognition (pp. 1–4).
|
|
|
Patricia Suarez, & Angel Sappa. (2024). A Generative Model for Guided Thermal Image Super-Resolution. In 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications.
Abstract: This paper presents a novel approach to thermal image super-resolution based on a fusion prior that combines the low-resolution thermal image with the brightness channel of the corresponding visible spectrum image. The method combines bicubic interpolation of the ×8 scale target image with the brightness component. To enable the guidance process, the original RGB image is converted to HSV and the brightness channel is extracted. Bicubic interpolation is then applied to the low-resolution thermal image, resulting in a bicubic-brightness channel blend. This luminance-bicubic fusion is used as an input image to aid the training process. With this fused image, the cyclic generative adversarial network obtains high-resolution thermal image results. Experimental evaluations show that the proposed approach significantly improves spatial resolution and pixel intensity levels compared to other state-of-the-art techniques, making it a promising method for obtaining high-resolution thermal images.
|
|
|
Hector Laria Mantecon, Kai Wang, Joost Van de Weijer, & Bogdan Raducanu. (2024). NeRF-Diffusion for 3D-Consistent Face Generation and Editing. In 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications.
Abstract: Generating high-fidelity 3D-aware images without 3D supervision is a valuable capability in various applications. Current methods based on NeRF features, SDF information, or triplane features have limited variation after training. To address this, we propose a novel approach that combines pretrained models for shape and content generation. Our method leverages a pretrained Neural Radiance Field as a shape prior and a diffusion model for content generation. By conditioning the diffusion model with 3D features, we enhance its ability to generate novel views with 3D awareness. We introduce a consistency token shared between the NeRF module and the diffusion model to maintain 3D consistency during sampling. Moreover, our framework allows for text editing of 3D-aware image generation, enabling users to modify the style over 3D views while preserving semantic content. Our contributions include incorporating 3D awareness into a text-to-image model, addressing identity consistency in 3D view synthesis, and enabling text editing of 3D-aware image generation. We provide detailed explanations, including the shape prior based on the NeRF model and the content generation process using the diffusion model. We also discuss challenges such as shape consistency and sampling saturation. Experimental results demonstrate the effectiveness and visual quality of our approach.
|
|
|
Mohamed Ramzy Ibrahim, Robert Benavente, Daniel Ponsa, & Felipe Lumbreras. (2024). SWViT-RRDB: Shifted Window Vision Transformer Integrating Residual in Residual Dense Block for Remote Sensing Super-Resolution. In 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications.
Abstract: Remote sensing applications, impacted by acquisition season and sensor variety, require high-resolution images. Transformer-based models improve satellite image super-resolution but are less effective than convolutional neural networks (CNNs) at extracting local details, crucial for image clarity. This paper introduces SWViT-RRDB, a new deep learning model for satellite imagery super-resolution. The SWViT-RRDB, combining transformer with convolution and attention blocks, overcomes the limitations of existing models by better representing small objects in satellite images. In this model, a pipeline of residual fusion group (RFG) blocks is used to combine the multi-headed self-attention (MSA) with residual in residual dense block (RRDB). This combines global and local image data for better super-resolution. Additionally, an overlapping cross-attention block (OCAB) is used to enhance fusion and allow interaction between neighboring pixels to maintain long-range pixel dependencies across the image. The SWViT-RRDB model and its larger variants outperform state-of-the-art (SoTA) models on two different satellite datasets in terms of PSNR and SSIM.
|
|
|
Clement Guerin, Christophe Rigaud, Karell Bertet, Jean-Christophe Burie, Arnaud Revel, & Jean-Marc Ogier. (2014). Réduction de l’espace de recherche pour les personnages de bandes dessinées. In 19th National Congress Reconnaissance de Formes et l'Intelligence Artificielle.
Abstract: Comics represent an important cultural heritage in many countries, and their large-scale digitization makes it possible to search within the content of the images. To date, mainly page structure and textual content have been studied, and little work has addressed the graphical content. We propose to build on elements that have already been studied, such as the positions of panels and speech balloons, to reduce the search space and locate characters based on the balloon tails. Evaluation of our contributions on the eBDtheque dataset shows a balloon-tail detection rate of 81.2%, character localization of up to 85%, and a reduction of the search space by more than 50%.
Keywords: contextual search; document analysis; comics characters
|
|
|
Agnes Borras, Francesc Tous, Josep Llados, & Maria Vanrell. (2003). High-Level Clothes Description Based on Colour-Texture and Structural Features. In 1st Iberian Conference on Pattern Recognition and Image Analysis IbPRIA 2003.
|
|
|
David Lloret, Joan Serrat, Antonio Lopez, & Juan J. Villanueva. (2003). Ultrasound to magnetic resonance volume registration for brain sinking measurement.
|
|
|
J. Chazalon, P. Gomez-Kramer, Jean-Christophe Burie, M. Coustaty, S. Eskenazi, Muhammad Muzzamil Luqman, et al. (2017). SmartDoc 2017 Video Capture: Mobile Document Acquisition in Video Mode. In 1st International Workshop on Open Services and Tools for Document Analysis.
Abstract: As mobile document acquisition using smartphones is getting more and more common, along with the continuous improvement of mobile devices (both in terms of computing power and image quality), we can wonder to what extent mobile phones can replace desktop scanners. Modern applications can cope with perspective distortion and normalize the contrast of a document page captured with a smartphone, and in some cases, such as bottle labels or posters, smartphones even have the advantage of allowing the acquisition of non-flat or large documents. However, several cases remain hard to handle, such as reflective documents (identity cards, badges, glossy magazine covers, etc.) or large documents for which some regions require an important amount of detail. This paper introduces the SmartDoc 2017 benchmark (named “SmartDoc Video Capture”), which aims at assessing whether capturing documents using the video mode of a smartphone could solve those issues. The task under evaluation is both a stitching and a reconstruction problem, as the user can move the device over different parts of the document to capture details or try to erase highlights. The material released consists of a dataset, an evaluation method and the associated tool, a sample method, and the tools required to extend the dataset. All the components are released publicly under very permissive licenses, and we particularly cared about maximizing the ease of understanding, usage and improvement.
|
|