|
Rafael E. Rivadeneira, Henry Velesaca, & Angel Sappa. (2023). Object Detection in Very Low-Resolution Thermal Images through a Guided-Based Super-Resolution Approach. In 17th International Conference on Signal-Image Technology & Internet-Based Systems.
Abstract: This work proposes a novel approach that integrates super-resolution techniques with off-the-shelf object detection methods to tackle the problem of handling very low-resolution thermal images. The suggested approach begins by enhancing the low-resolution (LR) thermal images through a guided super-resolution strategy that leverages a high-resolution (HR) visible spectrum image. Subsequently, object detection is performed on the resulting high-resolution thermal image. The experimental results demonstrate significant improvements over both baselines: performing object detection on the LR thermal image alone, and performing it on the up-sampled LR thermal image. Moreover, the proposed approach proves highly valuable in camouflage scenarios where objects might remain undetected in visible spectrum images.
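The guided super-resolution step can be illustrated with a classic guided filter applied to a naively up-sampled thermal image, using the HR visible image as guide. This is a minimal stand-in for the paper's own strategy, not its actual method; the function names and the `r` and `eps` parameters are illustrative assumptions.

```python
import numpy as np

def box_mean(a, r):
    """Mean filter over a (2r+1)x(2r+1) window, edge-padded."""
    k = 2 * r + 1
    pad = np.pad(a.astype(float), r, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def guided_upsample(lr_thermal, hr_visible, scale, r=4, eps=1e-3):
    """Nearest-neighbour upsample the LR thermal image, then refine it
    with a guided filter that transfers edge structure from the HR
    visible-spectrum guide image."""
    up = np.kron(lr_thermal.astype(float), np.ones((scale, scale)))
    I, p = hr_visible.astype(float), up
    mI, mp = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mI * mI
    cov_Ip = box_mean(I * p, r) - mI * mp
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)
```

The off-the-shelf detector would then run on the returned HR thermal image rather than on the raw LR input.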
|
|
|
Patricia Suarez, Angel Sappa, Dario Carpio, Henry Velesaca, Francisca Burgos, & Patricia Urdiales. (2022). Deep Learning Based Shrimp Classification. In 17th International Symposium on Visual Computing (Vol. 13598, pp. 36–45).
Abstract: This work proposes a novel deep learning approach to classify shrimp (Penaeus vannamei) into two classes, according to the level of pigmentation accepted by shrimp commerce. The main goal of this study is to support the shrimp industry in terms of pricing and processing. An efficient CNN architecture is proposed to perform image classification through a program that can be deployed either on mobile devices or on fixed supports in the shrimp supply chain. The proposed approach is a lightweight model that uses shrimp images in the HSV color space. A simple pipeline shows the most important stages performed to determine a pattern that identifies the class to which each shrimp belongs based on its pigmentation. For the experiments, a database of shrimp images acquired with mobile devices of various brands and models has been used. The results obtained with images in the RGB and HSV color spaces allow testing the effectiveness of the proposed model.
Keywords: Pigmentation; Color space; Lightweight network
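The colour-space preprocessing the abstract describes can be sketched with the standard-library `colorsys` conversion; the lightweight CNN itself is omitted, and the function name is an illustrative assumption.

```python
import colorsys
import numpy as np

def rgb_image_to_hsv(img):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to the HSV
    colour space that the pigmentation classifier consumes."""
    flat = img.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(r, g, b) for r, g, b in flat])
    return hsv.reshape(img.shape)
```

HSV separates hue from brightness, which is why a pigmentation-based classifier can benefit from it under the varying capture conditions of different mobile devices.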
|
|
|
Andre Litvin, Kamal Nasrollahi, Sergio Escalera, Cagri Ozcinar, Thomas B. Moeslund, & Gholamreza Anbarjafari. (2019). A Novel Deep Network Architecture for Reconstructing RGB Facial Images from Thermal for Face Recognition. MTAP - Multimedia Tools and Applications, 78(18), 25259–25271.
Abstract: This work proposes a fully convolutional network architecture for generating an RGB face image from a given input thermal face image, to be applied in face recognition scenarios. The proposed method is based on the FusionNet architecture and increases robustness against overfitting by using dropout after bridge connections, randomised leaky ReLUs (RReLUs), and orthogonal regularization. Furthermore, we propose to use a decoding block with resize convolution instead of transposed convolution to improve the final RGB face image generation. To validate the proposed network architecture, we train a face classifier and compare its face recognition rate on RGB images reconstructed by the proposed architecture with the rates obtained when reconstructing images with the original FusionNet, as well as when using the original RGB images. As a result, we introduce a new architecture that leads to a more accurate network.
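The resize-convolution decoding block mentioned above can be sketched in numpy for a single channel: nearest-neighbour upsampling followed by an ordinary convolution, which avoids the checkerboard artifacts of transposed convolution. The kernel and scale here are placeholders, not the paper's trained weights.

```python
import numpy as np

def conv2d_same(x, k):
    """Zero-padded 'same' 2-D correlation with kernel k."""
    kh, kw = k.shape
    pad = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * pad[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def resize_conv_block(x, k, scale=2):
    """Resize convolution: nearest-neighbour upsample, then convolve.
    Unlike transposed convolution, every output pixel receives the same
    number of input contributions, so no checkerboard pattern appears."""
    up = np.kron(x, np.ones((scale, scale)))
    return conv2d_same(up, k)
```

In the full network this block would replace each transposed-convolution layer of the FusionNet decoder.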
Keywords: Fully convolutional networks; FusionNet; Thermal imaging; Face recognition
|
|
|
Javier Marin, & Sergio Escalera. (2021). SSSGAN: Satellite Style and Structure Generative Adversarial Networks. Remote Sensing, 13(19), 3984.
Abstract: This work presents the Satellite Style and Structure Generative Adversarial Network (SSSGAN), a generative model of high-resolution satellite imagery to support image segmentation. Based on spatially adaptive denormalization (SPADE) modules that modulate the activations with respect to the segmentation map structure, together with global descriptor vectors that capture the semantic information of Open Street Maps (OSM) classes, the model is able to produce consistent aerial imagery. By decoupling the generation of aerial images into a structure map and a carefully defined style vector, we were able to improve the realism and geodiversity of the synthesis with respect to the state-of-the-art baseline. The proposed model therefore allows us to control the generation not only with respect to the desired structure, but also with respect to a geographic area.
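The SPADE mechanism the abstract relies on can be sketched for a single channel: normalize the activation map, then apply a per-pixel affine modulation whose parameters come from the segmentation map. In the real model, `gamma` and `beta` are predicted by small convolutions over the segmentation map; here they are passed in directly as a simplifying assumption.

```python
import numpy as np

def spade_modulate(x, gamma, beta, eps=1e-5):
    """Spatially adaptive denormalization (SPADE), single-channel sketch:
    normalize the activations, then modulate them per pixel with
    (1 + gamma) and beta derived from the segmentation structure."""
    x_norm = (x - x.mean()) / (x.std() + eps)
    return x_norm * (1.0 + gamma) + beta
```

Because `gamma` and `beta` vary per pixel, the segmentation structure survives normalization instead of being washed out, which is what lets the generator respect the input structure map.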
|
|
|
Ricard Borras, Agata Lapedriza, & Laura Igual. (2012). Depth Information in Human Gait Analysis: An Experimental Study on Gender Recognition. In 9th International Conference on Image Analysis and Recognition (Vol. 7325, pp. 98–105). Springer Berlin Heidelberg.
Abstract: This work presents DGait, a new gait database acquired with a depth camera. The database contains videos of 53 subjects walking in different directions. Its intent is to provide a public dataset for exploring whether depth can be used as an additional information source for gait classification purposes. Each video is labelled according to subject, gender and age, and for each subject and viewpoint we provide the initial and final frames of an entire walk cycle. We also perform gait-based gender classification experiments with the DGait database in order to illustrate the usefulness of depth information for this purpose. In our experiments, we extract 2D and 3D gait features based on shape descriptors and compare the performance of these features for gender identification using a kernel SVM. The obtained results show that depth can be an information source of great relevance for gait classification problems.
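The kernel at the heart of the kernel SVM classifier can be sketched as an RBF Gram matrix over gait feature vectors; `gamma` is an illustrative choice, and the full SVM solver is out of scope here.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    between gait feature sets X (n x d) and Y (m x d). A kernel SVM then
    classifies with f(x) = sum_i alpha_i y_i k(x_i, x) + b, where the
    alpha_i come from the (omitted) SVM training step."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)
```

The same kernel applies unchanged whether the feature vectors are 2D or 3D shape descriptors, which is what makes the 2D-versus-3D comparison in the experiments direct.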
|
|
|
Hugo Bertiche, Meysam Madadi, & Sergio Escalera. (2020). CLOTH3D: Clothed 3D Humans. In 16th European Conference on Computer Vision.
Abstract: This work presents CLOTH3D, the first large-scale synthetic dataset of 3D clothed human sequences. CLOTH3D contains large variability in garment type, topology, shape, size, tightness and fabric. Clothes are simulated on top of thousands of different pose sequences and body shapes, generating realistic cloth dynamics. We release the dataset together with a generative model for cloth generation: a Conditional Variational Auto-Encoder based on graph convolutions (GCVAE) that learns garment latent spaces. This allows realistic generation of 3D garments on top of the SMPL model for any pose and shape.
|
|
|
Daniel Hernandez, Lukas Schneider, P. Cebrian, A. Espinosa, David Vazquez, Antonio Lopez, et al. (2019). Slanted Stixels: A way to represent steep streets. IJCV - International Journal of Computer Vision, 127, 1643–1658.
Abstract: This work presents and evaluates a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach overcomes the previous rather restrictive geometric assumptions for Stixels by introducing a novel depth model to account for non-flat roads and slanted objects. Both semantic and depth cues are used jointly to infer the scene representation in a sound global energy minimization formulation. Furthermore, a novel approximation scheme is introduced to significantly reduce the computational complexity of the Stixel algorithm, thereby achieving real-time computation. The idea is to first perform an over-segmentation of the image, discard the unlikely Stixel cuts, and apply the algorithm only to the remaining cuts. This work presents a novel over-segmentation strategy based on a fully convolutional network, which outperforms an approach based on local extrema of the disparity map. We evaluate the proposed methods in terms of semantic and geometric accuracy as well as run-time on four publicly available benchmark datasets. Our approach maintains accuracy on flat road scene datasets while improving substantially on a novel non-flat road dataset.
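The disparity-based baseline for proposing Stixel cuts can be sketched in a few lines: keep only the rows where the disparity gradient changes sign. This is the local-extrema heuristic the paper's FCN over-segmentation is reported to outperform, not the learned strategy itself.

```python
import numpy as np

def candidate_cuts(disparity_column):
    """Propose Stixel cut rows for one image column as the local
    extrema of its 1-D disparity profile, i.e. rows where the
    disparity gradient changes sign. Restricting the Stixel dynamic
    program to these candidates is what reduces its complexity."""
    sign = np.sign(np.diff(disparity_column))
    return np.where(sign[:-1] * sign[1:] < 0)[0] + 1
```

The Stixel energy minimization then only evaluates segmentations whose boundaries lie on the returned rows, discarding all other cut positions.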
|
|
|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2011). Integration of Valley Orientation Distribution for Polyp Region Identification in Colonoscopy. In MICCAI 2011 Workshop on Computational and Clinical Applications in Abdominal Imaging (Vol. 6668, pp. 76–83). Lecture Notes in Computer Science. Springer.
Abstract: This work presents a region descriptor based on integrating the information that the depth-of-valleys image provides. The depth-of-valleys image captures the intensity valleys that appear around polyps due to the image acquisition process. Our proposed method consists of defining, for each point, a series of radial sectors around it and then accumulating the maxima of the depth-of-valleys image only where the orientation of the intensity valley coincides with the orientation of the corresponding sector. We apply our descriptor to a prior segmentation of the images and present promising results on polyp detection, outperforming other approaches that also integrate depth-of-valleys information.
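The sector accumulation described above can be sketched as follows. The sector count, radius and orientation tolerance are illustrative assumptions, as is the convention that valley orientations are undirected (compared modulo pi).

```python
import numpy as np

def dv_sector_descriptor(dv, ori, center, n_sectors=8, radius=10,
                         tol=np.pi / 8):
    """For a candidate point, accumulate per radial sector the maximum
    of the depth-of-valleys image dv, counting a pixel only when its
    valley orientation ori agrees (mod pi, within tol) with the
    direction of the sector it falls in."""
    cy, cx = center
    acc = np.zeros(n_sectors)
    width = 2 * np.pi / n_sectors
    for y in range(dv.shape[0]):
        for x in range(dv.shape[1]):
            r = np.hypot(y - cy, x - cx)
            if r == 0 or r > radius:
                continue
            ang = np.arctan2(y - cy, x - cx) % (2 * np.pi)
            s = min(int(ang // width), n_sectors - 1)
            sector_dir = (s + 0.5) * width
            # orientation distance; valleys are undirected (mod pi)
            d = abs((ori[y, x] - sector_dir + np.pi / 2) % np.pi - np.pi / 2)
            if d <= tol:
                acc[s] = max(acc[s], dv[y, x])
    return acc
```

A point surrounded by a closed intensity valley (as around a polyp boundary) accumulates strong responses in many sectors, while spurious valleys only fire a few.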
|
|
|
Angel Sappa, Patricia Suarez, Henry Velesaca, & Dario Carpio. (2022). Domain Adaptation in Image Dehazing: Exploring the Usage of Images from Virtual Scenarios. In 16th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing (pp. 85–92).
Abstract: This work presents a novel domain adaptation strategy for deep learning-based approaches to the image dehazing problem. First, a large set of synthetic images with different haze densities is generated using a realistic 3D graphics simulator; these images are used to train a model that is later adapted to any real scenario. The adaptation process requires just a few images to fine-tune the model parameters, overcoming the limitation of training a given model with few images. In other words, the proposed strategy adapts a haze removal model trained on synthetic images to real scenarios. It should be noted that it is quite difficult, if not impossible, to gather large sets of paired real-world images (with and without haze) to train dehazing algorithms in a supervised way. Experimental results are provided showing the validity of the proposed domain adaptation strategy.
Keywords: Domain adaptation; Synthetic hazed dataset; Dehazing
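The two-phase scheme — pre-train on abundant synthetic pairs, then fine-tune on a handful of real ones — can be demonstrated on a deliberately tiny stand-in for the dehazing network: a one-parameter linear model. The data, learning rates and step counts are all illustrative.

```python
import numpy as np

def mse_grad_step(w, x, y, lr):
    """One gradient-descent step on the mean squared error of y ~ w * x."""
    grad = 2.0 * np.mean((w * x - y) * x)
    return w - lr * grad

# Phase 1: pre-train on abundant synthetic pairs (here y = 2x).
x_syn = np.linspace(0.1, 1.0, 50); y_syn = 2.0 * x_syn
w = 0.0
for _ in range(200):
    w = mse_grad_step(w, x_syn, y_syn, lr=0.5)

# Phase 2: adapt with just a few real pairs (here y = 2.5x),
# fine-tuning the pre-trained parameters rather than starting over.
x_real = np.array([0.2, 0.5, 0.9]); y_real = 2.5 * x_real
loss_before = np.mean((w * x_real - y_real) ** 2)
for _ in range(50):
    w = mse_grad_step(w, x_real, y_real, lr=0.2)
loss_after = np.mean((w * x_real - y_real) ** 2)
```

Because the model starts near a good solution learned from synthetic data, the three "real" samples are enough to close the domain gap — the same argument the paper makes for its dehazing network.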
|
|
|
Bhaskar Chakraborty, Andrew Bagdanov, & Jordi Gonzalez. (2009). Towards Real-Time Human Action Recognition. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: This work presents a novel approach to human-detection-based action recognition in real time. To realize this goal, our method first detects humans in different poses using a correlation-based approach. Actions are then recognized based on the change of the angular values subtended by various body parts. Real-time human detection and action recognition are very challenging, and most state-of-the-art approaches employ complex feature extraction and classification techniques, which ultimately become a handicap for real-time recognition. Our correlation-based method, on the other hand, is computationally efficient and uses very simple gradient-based features. For action recognition, angular features of body parts are extracted using a skeleton technique. Results for action recognition are comparable with the present state of the art.
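The correlation-based detection core can be sketched as an exhaustive normalised cross-correlation search. This illustrates only the matching mechanism; the paper applies it to gradient-based pose templates, which are abstracted away here.

```python
import numpy as np

def ncc_match(img, tpl):
    """Slide the template over the image and return the top-left
    position with the highest normalised cross-correlation score.
    Normalisation makes the score invariant to local brightness
    and contrast changes."""
    th, tw = tpl.shape
    t = tpl - tpl.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -2.0, (0, 0)
    for y in range(img.shape[0] - th + 1):
        for x in range(img.shape[1] - tw + 1):
            w = img[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum()) * t_norm
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

In practice the search would run over a small bank of pose templates, keeping the detection with the highest score.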
|
|
|
Agnes Borras, & Josep Llados. (2007). Similarity-Based Object Retrieval Using Appearance and Geometric Feature Combination. In 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007) (Vol. 4478, pp. 33–39). LNCS. Springer.
Abstract: This work presents a general-purpose content-based image retrieval system that deals with cluttered scenes containing a given query object. The system is flexible enough to handle a single image of an object despite rotation, translation and scale variations. The image content is divided into parts that are described with a combination of features based on geometric and color properties. The idea behind the feature combination is to benefit from a fuzzy similarity computation that provides robustness and tolerance to the retrieval process. The features can be computed independently, and the image parts can be easily indexed by using a table structure on every feature value. Finally, a process inspired by alignment strategies is used to check the coherence of the object parts found in a scene. Our work presents an easily implemented system that uses an open set of features and can suit a wide variety of applications.
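The per-feature table indexing described above can be sketched with a dictionary keyed by (feature, quantised value) pairs; the bin size and class name are illustrative assumptions.

```python
from collections import defaultdict

class PartIndex:
    """Index image parts by quantised feature values, one table entry
    per (feature, bin) pair, so each feature is indexed independently
    and a query tolerates deviations that stay within a bin."""

    def __init__(self, bin_size=0.1):
        self.bin_size = bin_size
        self.table = defaultdict(list)

    def _key(self, i, value):
        return (i, round(value / self.bin_size))

    def add(self, part_id, features):
        for i, v in enumerate(features):
            self.table[self._key(i, v)].append(part_id)

    def query(self, features):
        """Return part ids ranked by how many of their features fall
        in the same bin as the query's."""
        votes = defaultdict(int)
        for i, v in enumerate(features):
            for pid in self.table[self._key(i, v)]:
                votes[pid] += 1
        return sorted(votes, key=votes.get, reverse=True)
```

The retrieved candidates would then pass through the alignment-inspired coherence check before being reported as an object occurrence.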
|
|
|
Agnes Borras, Francesc Tous, Josep Llados, & Maria Vanrell. (2003). High-Level Clothes Description Based on Color-Texture and Structural Features. In Lecture Notes in Computer Science (Vol. 2652, pp. 108–116).
Abstract: This work is part of a surveillance system where content-based image retrieval is performed in terms of people's appearance. Given an image of a person, our work provides an automatic description of their clothing according to the colour, texture and structural composition of the garments. We present a two-stage process composed of image segmentation and region-based interpretation. We segment an image by modelling it as an attributed graph and applying a hybrid method that follows a split-and-merge strategy. We propose the interpretation of five cloth combinations that are modelled in a graph structure in terms of region features. The interpretation is viewed as a graph matching, with an associated cost, between the segmentation and the cloth models. Finally, we have tested the process on a ground truth of one hundred images.
|
|
|
Pau Riba, Sounak Dey, Ali Furkan Biten, & Josep Llados. (2021). Localizing Infinity-shaped fishes: Sketch-guided object localization in the wild.
Abstract: This work investigates the problem of sketch-guided object localization (SGOL), where human sketches are used as queries to localize objects in natural images. In this cross-modal setting, we first contribute a tough-to-beat baseline that, without any specific SGOL training, is able to outperform previous works on a fixed set of classes. The baseline is useful to analyze the performance of SGOL approaches built on available simple yet powerful methods. We advance prior art by proposing a sketch-conditioned DETR (DEtection TRansformer) architecture that avoids hard classification and alleviates the domain gap between sketches and images to localize object instances. Although the main goal of SGOL is object detection, we also explored its natural extension to sketch-guided instance segmentation. This novel task moves towards identifying objects at the pixel level, which is of key importance in several applications. We experimentally demonstrate that our model and its variants significantly advance over previous state-of-the-art results. All training and testing code of our model will be released to facilitate future research: https://github.com/priba/sgol_wild.
|
|
|
Dani Rowe, Jordi Gonzalez, Marco Pedersoli, & Juan J. Villanueva. (2010). On Tracking Inside Groups. MVA - Machine Vision and Applications, 21(2), 113–127.
Abstract: This work develops a new architecture for multiple-target tracking in unconstrained dynamic scenes, consisting of a detection level that feeds a two-stage tracking system. A remarkable characteristic of the system is its ability to track several targets while they group and split, without using 3D information. Thus, special attention is given to the feature-selection and appearance-computation modules, and to the modules involved in tracking through groups. The system aims to work as a stand-alone application in complex and dynamic scenarios. No a priori knowledge about either the scene or the targets, based on a previous training period, is used; hence, the scenario is completely unknown beforehand. Successful tracking has been demonstrated on well-known databases of both indoor and outdoor scenarios. Accurate and robust localisations have been obtained during long-term target merging and occlusions.
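The detection-to-track association step that such an architecture needs can be sketched with a generic greedy overlap matcher. This is a stand-in for the paper's two-stage, appearance-based association, not its actual method; the threshold is an illustrative choice.

```python
def iou(a, b):
    """Intersection over union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, threshold=0.3):
    """Greedily match each track to its best-overlapping unused
    detection. Tracks left unmatched (e.g. inside a group, where one
    detection covers several targets) would be handed to the
    group-tracking logic rather than terminated."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        scored = [(iou(tbox, d), j) for j, d in enumerate(detections)
                  if j not in used]
        if scored:
            best, j = max(scored)
            if best >= threshold:
                matches[tid] = j
                used.add(j)
    return matches
```

The grouping-and-splitting behaviour the abstract highlights is exactly the case where this simple matcher fails, which motivates the appearance models the paper adds on top.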
|
|
|
Antonio Esteban Lansaque. (2014). 3D reconstruction and recognition using structured light (Vol. 179). Master's thesis.
Abstract: This work covers the problem of 3D reconstruction, recognition and 6DOF pose estimation. The goal of this project is to reconstruct a 3D scene and to align object models of industrial pieces with the reconstructed scene. The reconstruction algorithm is based on stereo techniques, and the recognition algorithm is based on SHOT descriptors computed on a set of uniform keypoints. Correspondences are used to estimate an initial 6DOF transformation that maps the model onto the scene, and the ICP algorithm is then used to refine the transformation. To check the effectiveness of the proposed algorithm, several experiments were performed. These experiments were conducted in a lab environment in order to obtain results under the same conditions in all of them. Although the obtained results are not real-time, the proposed algorithm achieves high object recognition rates.
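The 6DOF estimation step can be sketched as the closed-form rigid alignment (Kabsch/SVD) that both the correspondence-based initialization and each ICP iteration rely on; the nearest-neighbour correspondence search that ICP alternates with is omitted here.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping points P (n x 3)
    onto their correspondences Q (n x 3), via SVD of the cross-
    covariance matrix, with a reflection fix so R is a proper rotation."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

ICP would loop: find nearest scene point for each model point, solve this alignment, apply it, and repeat until the residual stops decreasing.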
|
|