Cristhian A. Aguilera-Carrasco, Angel Sappa, & Ricardo Toledo. (2015). LGHD: a Feature Descriptor for Matching Across Non-Linear Intensity Variations. In 22nd IEEE International Conference on Image Processing (pp. 178–181).
|
Esteve Cervantes, Long Long Yu, Andrew Bagdanov, Marc Masana, & Joost Van de Weijer. (2016). Hierarchical Part Detection with Deep Neural Networks. In 23rd IEEE International Conference on Image Processing.
Abstract: Part detection is an important aspect of object recognition. Most approaches apply object proposals to generate hundreds of possible part bounding box candidates which are then evaluated by part classifiers. Recently, several methods have investigated directly regressing to a limited set of bounding boxes from a deep neural network representation. However, for object parts such methods may be unfeasible due to their relatively small size with respect to the image. We propose a hierarchical method for object and part detection. In a single network we first detect the object and then regress to part location proposals based only on the feature representation inside the object. Experiments show that our hierarchical approach outperforms a network which directly regresses the part locations. We also show that our approach obtains part detection accuracy comparable to or better than the state of the art on the CUB-200 bird and Fashionista clothing item datasets with only a fraction of the number of part proposals.
Keywords: Object Recognition; Part Detection; Convolutional Neural Networks
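A minimal sketch (not the authors' network) of the hierarchical idea in the abstract: given an already-detected object box, pool the backbone features inside that box and regress part boxes from those pooled features only. Module names and sizes below are illustrative assumptions, using torchvision's roi_align.

    # Illustrative sketch only: regress part boxes from features pooled
    # inside a detected object box; names and sizes are assumptions.
    import torch
    import torch.nn as nn
    from torchvision.ops import roi_align

    class PartRegressionHead(nn.Module):
        def __init__(self, in_channels=256, num_parts=4, pool_size=7):
            super().__init__()
            self.pool_size = pool_size
            self.mlp = nn.Sequential(
                nn.Flatten(),
                nn.Linear(in_channels * pool_size * pool_size, 512),
                nn.ReLU(),
                nn.Linear(512, num_parts * 4),  # (x1, y1, x2, y2) per part
            )

        def forward(self, feature_map, object_boxes, spatial_scale):
            # feature_map: (N, C, H, W); object_boxes: list of (K_i, 4) tensors.
            pooled = roi_align(feature_map, object_boxes,
                               output_size=self.pool_size,
                               spatial_scale=spatial_scale, aligned=True)
            # Part locations are predicted only from the object's own features.
            return self.mlp(pooled).view(pooled.size(0), -1, 4)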
|
Victor Vaquero, German Ros, Francesc Moreno-Noguer, Antonio Lopez, & Alberto Sanfeliu. (2017). Joint coarse-and-fine reasoning for deep optical flow. In 24th IEEE International Conference on Image Processing (pp. 2558–2562).
Abstract: We propose a novel representation for dense pixel-wise estimation tasks using CNNs that boosts accuracy and reduces training time, by explicitly exploiting joint coarse-and-fine reasoning. The coarse reasoning is performed over a discrete classification space to obtain a general rough solution, while the fine details of the solution are obtained over a continuous regression space. In our approach both components are jointly estimated, which proved to be beneficial for improving estimation accuracy. Additionally, we propose a new network architecture, which combines coarse and fine components by treating the fine estimation as a refinement built on top of the coarse solution, and therefore adding details to the general prediction. We apply our approach to the challenging problem of optical flow estimation and empirically validate it against state-of-the-art CNN-based solutions trained from scratch and tested on large optical flow datasets.
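A generic illustration of the joint coarse-and-fine idea, under the assumption of a simple per-pixel head (not the paper's exact architecture): a classification branch over K discretized flow values gives the coarse solution, and a regression branch adds a continuous residual on top of it.

    # Generic coarse-and-fine head: classification over discretized flow
    # bins plus a continuous residual refinement. Illustrative only.
    import torch
    import torch.nn as nn

    class CoarseFineFlowHead(nn.Module):
        def __init__(self, in_channels, bins):
            # bins: (K, 2) tensor holding the 2-D flow value of each class.
            super().__init__()
            self.register_buffer("bins", bins)
            self.cls = nn.Conv2d(in_channels, bins.size(0), kernel_size=1)
            self.res = nn.Conv2d(in_channels, 2, kernel_size=1)

        def forward(self, feats):
            probs = self.cls(feats).softmax(dim=1)              # (N, K, H, W)
            # Coarse flow: expectation over the discrete flow bins.
            coarse = torch.einsum("nkhw,kc->nchw", probs, self.bins)
            # Fine flow: continuous residual added to the coarse prediction.
            return coarse + self.res(feats)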
|
Andrei Polzounov, Artsiom Ablavatski, Sergio Escalera, Shijian Lu, & Jianfei Cai. (2017). WordFences: Text Localization and Recognition. In 24th IEEE International Conference on Image Processing.
|
Maedeh Aghaei, Mariella Dimiccoli, & Petia Radeva. (2017). All the people around me: face clustering in egocentric photo streams. In 24th IEEE International Conference on Image Processing. arXiv:1703.01790.
Abstract: Given an unconstrained stream of images captured by a wearable photo-camera (2 fpm), we propose an unsupervised bottom-up approach for automatically clustering the appearing faces into the individual identities present in these data. The problem is challenging since images are acquired under real-world conditions; hence the visible appearance of the people in the images undergoes intensive variations. Our proposed pipeline consists of first arranging the photo-stream into events, then localizing the appearance of multiple people in them, and finally grouping the various appearances of the same person across different events. Experimental results on a dataset acquired by wearing a photo-camera for one month demonstrate the effectiveness of the proposed approach for the considered purpose.
Keywords: face discovery; face clustering; deepmatching; bag-of-tracklets; egocentric photo-streams
|
Marco Buzzelli, Joost Van de Weijer, & Raimondo Schettini. (2018). Learning Illuminant Estimation from Object Recognition. In 25th IEEE International Conference on Image Processing (pp. 3234–3238).
Abstract: In this paper we present a deep learning method to estimate the illuminant of an image. Our model is not trained with illuminant annotations, but with the objective of improving performance on an auxiliary task such as object recognition. To the best of our knowledge, this is the first example of a deep learning architecture for illuminant estimation that is trained without ground-truth illuminants. We evaluate our solution on standard datasets for color constancy, and compare it with state-of-the-art methods. Our proposal is shown to outperform most deep learning methods in a cross-dataset evaluation setup, and to present competitive results in a comparison with parametric solutions.
Keywords: Illuminant estimation; computational color constancy; semi-supervised learning; deep learning; convolutional neural networks
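A minimal sketch of the training signal the abstract describes, with assumed module names: the illuminant estimate is applied as a diagonal (von Kries) correction and supervised only through the object-recognition loss, so no ground-truth illuminants are needed.

    # Sketch only: illum_net and classifier are assumed, untrained modules.
    import torch
    import torch.nn as nn

    def von_kries_correct(image, illuminant, eps=1e-6):
        # image: (N, 3, H, W); illuminant: (N, 3) per-image RGB estimate.
        return image / illuminant.clamp_min(eps).view(-1, 3, 1, 1)

    def training_step(illum_net, classifier, images, labels):
        illuminant = illum_net(images)          # no illuminant labels used
        corrected = von_kries_correct(images, illuminant)
        logits = classifier(corrected)
        # The recognition loss is the only signal reaching illum_net.
        return nn.functional.cross_entropy(logits, labels)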
|
Patricia Suarez, Angel Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2018). Near InfraRed Imagery Colorization. In 25th IEEE International Conference on Image Processing (pp. 2237–2241).
Abstract: This paper proposes a stacked conditional Generative Adversarial Network-based method for Near InfraRed (NIR) imagery colorization. We propose a variant architecture of Generative Adversarial Network (GAN) that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss-function yields better generalization and representation of the generated colored IR images. The proposed approach is evaluated on a large test dataset and compared to recent state-of-the-art methods using standard metrics.
Keywords: Convolutional Neural Networks (CNN), Generative Adversarial Network (GAN), Infrared Imagery colorization
|
Emanuel Sanchez Aimar, Petia Radeva, & Mariella Dimiccoli. (2019). Social Relation Recognition in Egocentric Photostreams. In 26th IEEE International Conference on Image Processing (pp. 3227–3231).
Abstract: This paper proposes an approach to automatically categorize the social interactions of a user wearing a photo-camera (2 fpm), by relying solely on what the camera is seeing. The problem is challenging due to the overwhelming complexity of social life and the extreme intra-class variability of social interactions captured under unconstrained conditions. We adopt the formalization proposed in Bugental's social theory, which groups human relations into five social domains with related categories. Our method is a new deep learning architecture that exploits the hierarchical structure of the label space and relies on a set of social attributes estimated at frame level to provide a semantic representation of social interactions. Experimental results on the new EgoSocialRelation dataset demonstrate the effectiveness of our proposal.
|
Patricia Suarez, Angel Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2021). Cycle Generative Adversarial Network: Towards A Low-Cost Vegetation Index Estimation. In 28th IEEE International Conference on Image Processing (pp. 19–22).
Abstract: This paper presents a novel unsupervised approach to estimate the Normalized Difference Vegetation Index (NDVI). The NDVI is obtained as the ratio between information from the visible and near-infrared spectral bands; in the current work, the NDVI is estimated just from an image of the visible spectrum through a Cyclic Generative Adversarial Network (CyclicGAN). This unsupervised architecture learns to estimate the NDVI by means of an image translation between the red channel of a given RGB image and the unpaired NDVI image. The translation is obtained by means of a ResNet architecture and a multiple loss function. Experimental results obtained with this unsupervised scheme show the validity of the implemented model. Additionally, comparisons with state-of-the-art approaches are provided, showing improvements obtained with the proposed approach.
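For reference, the standard NDVI definition the abstract alludes to is NDVI = (NIR - Red) / (NIR + Red); the paper's CyclicGAN instead learns to estimate this map from the visible (RGB) image alone. A minimal implementation of the definition itself (not the paper's model):

    # Standard NDVI definition from NIR and red bands.
    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        """Normalized Difference Vegetation Index."""
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        return (nir - red) / (nir + red + eps)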
|
Aitor Alvarez-Gila, Joost Van de Weijer, Yaxing Wang, & Estibaliz Garrote. (2022). MVMO: A Multi-Object Dataset for Wide Baseline Multi-View Semantic Segmentation. In 29th IEEE International Conference on Image Processing.
Abstract: We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes and captured from 25 camera locations in the upper hemisphere. MVMO comprises photorealistic, path-traced image renders, together with semantic segmentation ground truth for every view. Unlike existing multi-view datasets, MVMO features wide baselines between cameras and a high density of objects, which lead to large disparities, heavy occlusions and view-dependent object appearance. Single-view semantic segmentation is hindered by self- and inter-object occlusions that could benefit from additional viewpoints. Therefore, we expect that MVMO will propel research in multi-view semantic segmentation and cross-view semantic transfer. We also provide baselines that show that new research is needed in such fields to exploit the complementary information of multi-view setups.
Keywords: multi-view; cross-view; semantic segmentation; synthetic dataset
|
Ahmed M. A. Salih, Ilaria Boscolo Galazzo, Federica Cruciani, Lorenza Brusini, & Petia Radeva. (2022). Investigating Explainable Artificial Intelligence for MRI-based Classification of Dementia: a New Stability Criterion for Explainable Methods. In 29th IEEE International Conference on Image Processing.
Abstract: Individuals diagnosed with Mild Cognitive Impairment (MCI) have shown an increased risk of developing Alzheimer’s Disease (AD). As such, early identification of dementia represents a key prognostic element, though hampered by complex disease patterns. Increasing efforts have focused on Machine Learning (ML) to build accurate classification models relying on a multitude of clinical/imaging variables. However, ML itself does not provide sensible explanations related to the model mechanism and feature contribution. Explainable Artificial Intelligence (XAI) represents the enabling technology in this framework, making it possible to understand ML outcomes and derive human-understandable explanations. In this study, we aimed at exploring ML combined with MRI-based features and XAI to solve this classification problem and interpret the outcome. In particular, we propose a new method to assess the robustness of feature rankings provided by XAI methods, especially when multicollinearity exists. Our findings indicate that our method was able to disentangle the list of the informative features underlying dementia, with important implications for aiding personalized monitoring plans.
Keywords: Image processing; Stability criteria; Machine learning; Robustness; Alzheimer's disease; Monitoring
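The abstract does not spell out the stability criterion, so the snippet below is only a generic illustration of the idea: one common way to quantify the stability of feature rankings across repeated runs (e.g., bootstrap resamples) is the average pairwise Kendall's tau between the rankings.

    # Generic ranking-stability measure, not the authors' criterion.
    from itertools import combinations
    import numpy as np
    from scipy.stats import kendalltau

    def ranking_stability(rankings):
        """rankings: list of 1-D arrays scoring the same set of features."""
        taus = []
        for a, b in combinations(rankings, 2):
            tau, _ = kendalltau(a, b)
            taus.append(tau)
        return float(np.mean(taus))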
|
Chengyi Zou, Shuai Wan, Marta Mrak, Marc Gorriz Blanch, Luis Herranz, & Tiannan Ji. (2022). Towards Lightweight Neural Network-based Chroma Intra Prediction for Video Coding. In 29th IEEE International Conference on Image Processing.
Abstract: In video compression the luma channel can be useful for predicting the chroma channels (Cb, Cr), as has been demonstrated with the Cross-Component Linear Model (CCLM) used in the Versatile Video Coding (VVC) standard. More recently, it has been shown that neural networks can even better capture the relationship among different channels. In this paper, a new attention-based neural network is proposed for cross-component intra prediction. With the goal of simplifying neural network design, the new framework consists of four branches: a boundary branch and a luma branch for extracting features from reference samples, an attention branch for fusing the first two branches, and a prediction branch for computing the predicted chroma samples. The proposed scheme is integrated into the VVC test model together with one additional binary block-level syntax flag which indicates whether a given block makes use of the proposed method. Experimental results demonstrate 0.31%/2.36%/2.00% BD-rate reductions on the Y/Cb/Cr components, respectively, on top of the VVC Test Model (VTM) 7.0 which uses CCLM.
Keywords: Video coding; Quantization (signal); Computational modeling; Neural networks; Predictive models; Video compression; Syntactics
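For context, the CCLM baseline mentioned in the abstract predicts chroma from co-located reconstructed luma with a simple linear model, pred_C = alpha * rec_L + beta, whose parameters are derived from neighbouring reconstructed samples (VVC uses a simplified min/max derivation). The sketch below fits the same linear model by least squares purely for illustration; the paper replaces this hand-crafted model with an attention-based neural network.

    # Illustrative cross-component linear model (least-squares variant).
    import numpy as np

    def cclm_predict(rec_luma_block, neigh_luma, neigh_chroma):
        # neigh_luma / neigh_chroma: reconstructed reference samples
        # around the current block, used to derive alpha and beta.
        alpha, beta = np.polyfit(neigh_luma.ravel(), neigh_chroma.ravel(), 1)
        return alpha * rec_luma_block + beta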
|
Arnau Ramisa, Ramon Lopez de Mantaras, & Ricardo Toledo. (2007). Comparing Combinations of Feature Regions for Panoramic VSLAM. In 4th International Conference on Informatics in Control, Automation and Robotics (pp. 292–297).
|
Bogdan Raducanu, & Fadi Dornaika. (2011). A Discriminative Non-Linear Manifold Learning Technique for Face Recognition. In Informatics Engineering and Information Science (Vol. 254, pp. 339–353). Springer Berlin Heidelberg.
Abstract: In this paper we propose a novel non-linear discriminative analysis technique for manifold learning. The proposed approach is a discriminant version of Laplacian Eigenmaps which takes into account the class label information in order to guide the procedure of non-linear dimensionality reduction. By following the large-margin concept, the graph Laplacian is split into two components, a within-class graph and a between-class graph, to better characterize the discriminant property of the data. Our approach has been tested on several challenging face databases and compared with other linear and non-linear techniques. The experimental results confirm that our method outperforms, in general, the existing ones. Although we have concentrated in this paper on the face recognition problem, the proposed approach could also be applied to other categories of objects characterized by large variance in their appearance.
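Sketched in LaTeX under the assumption of the usual trace form (the abstract only states that the Laplacian is split into within-class and between-class graphs), a discriminant Laplacian embedding of this kind solves something like:

    \min_{Y}\ \operatorname{tr}\left(Y^{\top} L_{w} Y\right) - \beta\, \operatorname{tr}\left(Y^{\top} L_{b} Y\right)
    \quad \text{s.t.}\quad Y^{\top} D_{w} Y = I,
    \qquad L_{w} = D_{w} - W_{w},\quad L_{b} = D_{b} - W_{b},

where W_{w} and W_{b} are the within-class and between-class affinity graphs, D_{w} and D_{b} their degree matrices, and the rows of Y give the low-dimensional embedding.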
|
Fernando Alonso, Xavier Baro, Sergio Escalera, Jordi Gonzalez, Martha Mackay, & Anna Serrahima. (2016). CARE RESPITE: Taking Care of the Caregivers (Theme 5: The Strategic Use of Mobile and Digital Health and Care Solutions). In 16th International Conference for Integrated Care.
|