|
Marcos V. Conde, Florin Vasluianu, Javier Vazquez, & Radu Timofte. (2023). Perceptual image enhancement for smartphone real-time applications. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1848–1858).
Abstract: Recent advances in camera designs and imaging pipelines allow us to capture high-quality images using smartphones. However, due to the small size and lens limitations of smartphone cameras, we commonly find artifacts or degradation in the processed images. The most common unpleasant effects are noise artifacts, diffraction artifacts, blur, and HDR overexposure. Deep learning methods for image restoration can successfully remove these artifacts. However, most approaches are not suitable for real-time applications on mobile devices due to their heavy computation and memory requirements. In this paper, we propose LPIENet, a lightweight network for perceptual image enhancement, with a focus on deploying it on smartphones. Our experiments show that, with far fewer parameters and operations, our model can deal with the mentioned artifacts and achieve competitive performance compared with state-of-the-art methods on standard benchmarks. Moreover, to prove the efficiency and reliability of our approach, we deployed the model directly on commercial smartphones and evaluated its performance. Our model can process 2K resolution images in under 1 second on mid-level commercial smartphones.
|
|
|
Marco Pedersoli, Andrea Vedaldi, Jordi Gonzalez, & Xavier Roca. (2015). A coarse-to-fine approach for fast deformable object detection. PR - Pattern Recognition, 48(5), 1844–1853.
Abstract: We present a method that can dramatically accelerate object detection with part based models. The method is based on the observation that the cost of detection is likely to be dominated by the cost of matching each part to the image, and not by the cost of computing the optimal configuration of the parts as commonly assumed. Therefore, accelerating detection requires minimizing the number of part-to-image comparisons. To this end we propose a multiple-resolutions hierarchical part based model and a corresponding coarse-to-fine inference procedure that recursively eliminates from the search space unpromising part placements. The method yields a ten-fold speedup over the standard dynamic programming approach and is complementary to the cascade-of-parts approach of [9]. Compared to the latter, our method does not have parameters to be determined empirically, which simplifies its use during the training of the model. Most importantly, the two techniques can be combined to obtain a very significant speedup, of two orders of magnitude in some cases. We evaluate our method extensively on the PASCAL VOC and INRIA datasets, demonstrating a very high increase in the detection speed with little degradation of the accuracy.
|
|
|
Debora Gil, Aura Hernandez-Sabate, Mireia Brunat, Steven Jansen, & Jordi Martinez-Vilalta. (2011). Structure-preserving smoothing of biomedical images. PR - Pattern Recognition, 44(9), 1842–1851.
Abstract: Smoothing of biomedical images should preserve gray-level transitions between adjacent tissues while restoring contours consistent with anatomical structures. Anisotropic diffusion operators are based on image appearance discontinuities (either local or contextual) and might fail at weak inter-tissue transitions. Meanwhile, the output of block-wise and morphological operations is prone to present a block structure due to the shape and size of the considered pixel neighborhood. In this contribution, we use differential geometry concepts to define a diffusion operator that restricts to image-consistent level sets. In this manner, the final state is a non-uniform intensity image presenting homogeneous inter-tissue transitions along anatomical structures, while smoothing intra-structure texture. Experiments on different types of medical images (magnetic resonance, computerized tomography) illustrate its benefit for the further processing of images (such as segmentation).
Keywords: Non-linear smoothing; Differential geometry; Anatomical structures; Segmentation; Cardiac magnetic resonance; Computerized tomography
|
|
|
Lichao Zhang, Abel Gonzalez-Garcia, Joost Van de Weijer, Martin Danelljan, & Fahad Shahbaz Khan. (2019). Synthetic Data Generation for End-to-End Thermal Infrared Tracking. TIP - IEEE Transactions on Image Processing, 28(4), 1837–1850.
Abstract: The usage of both off-the-shelf and end-to-end trained deep networks has significantly improved the performance of visual tracking on RGB videos. However, the lack of large labeled datasets hampers the usage of convolutional neural networks for tracking in thermal infrared (TIR) images. Therefore, most state-of-the-art methods for TIR tracking are still based on handcrafted features. To address this problem, we propose to use image-to-image translation models. These models allow us to translate the abundantly available labeled RGB data to synthetic TIR data. We explore the usage of both paired and unpaired image translation models for this purpose. These methods provide us with a large labeled dataset of synthetic TIR sequences, on which we can train end-to-end optimal features for tracking. To the best of our knowledge, we are the first to train end-to-end features for TIR tracking. We perform extensive experiments on the VOT-TIR2017 dataset. We show that a network trained on a large dataset of synthetic TIR data obtains better performance than one trained on the available real TIR data. Combining both data sources leads to further improvement. In addition, when we combine the network with motion features, we outperform the state of the art with a relative gain of over 10%, clearly showing the efficiency of using synthetic data to train end-to-end TIR trackers.
|
|
|
Santiago Segui, Michal Drozdzal, Ekaterina Zaytseva, Fernando Azpiroz, Petia Radeva, & Jordi Vitria. (2014). Detection of wrinkle frames in endoluminal videos using betweenness centrality measures for images. TITB - IEEE Transactions on Information Technology in Biomedicine, 18(6), 1831–1838.
Abstract: Intestinal contractions are one of the most important events for diagnosing motility pathologies of the small intestine. When visualized by wireless capsule endoscopy (WCE), the sequence of frames that represents a contraction is characterized by a clear wrinkle structure in the central frames, corresponding to the folding of the intestinal wall. In this paper we present a new method to robustly detect wrinkle frames in full WCE videos using a new mid-level image descriptor based on a centrality measure proposed for graphs. We present an extended validation, carried out on a very large database, showing that the proposed method achieves state-of-the-art performance for this task.
Keywords: Wireless Capsule Endoscopy; Small Bowel Motility Dysfunction; Contraction Detection; Structured Prediction; Betweenness Centrality
|
|
|
Matej Kristan, Jiri Matas, Martin Danelljan, Michael Felsberg, Hyung Jin Chang, Luka Cehovin Zajc, et al. (2023). The First Visual Object Tracking Segmentation VOTS2023 Challenge Results. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops (pp. 1796–1818).
Abstract: The Visual Object Tracking Segmentation VOTS2023 challenge is the eleventh annual tracker benchmarking activity of the VOT initiative. This challenge is the first to merge short-term and long-term as well as single-target and multiple-target tracking, with segmentation masks as the only target location specification. A new dataset was created; the ground truth has been withheld to prevent overfitting. New performance measures and evaluation protocols have been created, along with a new toolkit and an evaluation server. Results of the 47 presented trackers indicate that modern tracking frameworks are well-suited to deal with the convergence of short-term and long-term tracking, and that multiple- and single-target tracking can be considered a single problem. A leaderboard with participating trackers' details, the source code, the datasets, and the evaluation kit are publicly available at the challenge website: https://www.votchallenge.net/vots2023/.
|
|
|
R. Valenti, & Theo Gevers. (2012). Accurate Eye Center Location through Invariant Isocentric Patterns. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9), 1785–1798.
Abstract: Locating the center of the eyes allows valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance have been proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery.
|
|
|
Bhaskar Chakraborty, Michael Holte, Thomas B. Moeslund, Jordi Gonzalez, & Xavier Roca. (2011). A Selective Spatio-Temporal Interest Point Detector for Human Action Recognition in Complex Scenes. In 13th IEEE International Conference on Computer Vision (pp. 1776–1783).
Abstract: Recent progress in the field of human action recognition points towards the use of Spatio-Temporal Interest Points (STIPs) for local descriptor-based recognition strategies. In this paper we present a new approach for STIP detection by applying surround suppression combined with local and temporal constraints. Our method is significantly different from existing STIP detectors and improves performance by detecting more repeatable, stable and distinctive STIPs for human actors, while suppressing unwanted background STIPs. For action representation we use a bag-of-visual-words (BoV) model of local N-jet features to build a vocabulary of visual words. To this end, we introduce a novel vocabulary building strategy by combining spatial pyramid and vocabulary compression techniques, resulting in improved performance and efficiency. Action class specific Support Vector Machine (SVM) classifiers are trained for categorization of human actions. A comprehensive set of experiments on existing benchmark datasets, and more challenging datasets of complex scenes, validate our approach and show state-of-the-art performance.
|
|
|
Katerine Diaz, Francesc J. Ferri, & W. Diaz. (2015). Incremental Generalized Discriminative Common Vectors for Image Classification. TNNLS - IEEE Transactions on Neural Networks and Learning Systems, 26(8), 1761–1775.
Abstract: Subspace-based methods have become popular due to their ability to appropriately represent complex data in such a way that both dimensionality is reduced and discriminativeness is enhanced. Several recent works have concentrated on the discriminative common vector (DCV) method and other closely related algorithms also based on the concept of null space. In this paper, we present a generalized incremental formulation of the DCV methods, which allows the update of a given model by considering the addition of new examples even from unseen classes. Having efficient incremental formulations of well-behaved batch algorithms allows us to conveniently adapt previously trained classifiers without the need of recomputing them from scratch. The proposed generalized incremental method has been empirically validated in different case studies from different application domains (faces, objects, and handwritten digits) considering several different scenarios in which new data are continuously added at different rates starting from an initial model.
|
|
|
E. Provenzi, Carlo Gatta, M. Fierro, & A. Rizzi. (2008). A Spatially Variant White-Patch and Gray-World Method for Color Image Enhancement Driven by Local Contrast. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 1757–1770.
|
|
|
O. Fors, J. Nuñez, Xavier Otazu, A. Prades, & Robert D. Cardinal. (2010). Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques. SENS - Sensors, 10(3), 1743–1752.
Abstract: In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.
Keywords: image processing; image deconvolution; faint stars; space debris; wavelet transform
|
|
|
Zhaocheng Liu, Luis Herranz, Fei Yang, Saiping Zhang, Shuai Wan, Marta Mrak, et al. (2022). Slimmable Video Codec. In CVPR 2022 Workshop and Challenge on Learned Image Compression (CLIC 2022, 5th Edition) (pp. 1742–1746).
Abstract: Neural video compression has emerged as a novel paradigm combining trainable multilayer neural networks and machine learning, achieving competitive rate-distortion (RD) performance, but still remaining impractical due to heavy neural architectures with large memory and computational demands. In addition, models are usually optimized for a single RD tradeoff. Recent slimmable image codecs can dynamically adjust their model capacity to gracefully reduce the memory and computation requirements, without harming RD performance. In this paper we propose a slimmable video codec (SlimVC), by integrating a slimmable temporal entropy model in a slimmable autoencoder. Despite a significantly more complex architecture, we show that slimming remains a powerful mechanism to control rate, memory footprint, computational cost and latency, all being important requirements for practical video compression.
|
|
|
Reza Azad, Maryam Asadi-Aghbolaghi, Shohreh Kasaei, & Sergio Escalera. (2019). Dynamic 3D Hand Gesture Recognition by Learning Weighted Depth Motion Maps. TCSVT - IEEE Transactions on Circuits and Systems for Video Technology, 29(6), 1729–1740.
Abstract: Hand gesture recognition from sequences of depth maps is a challenging computer vision task because of the low inter-class and high intra-class variability, the different execution rates of each gesture, and the highly articulated nature of the human hand. In this paper, a multilevel temporal sampling (MTS) method is first proposed, based on the motion energy of key frames of depth sequences. As a result, long, middle, and short sequences are generated that contain the relevant gesture information. The MTS increases intra-class similarity while raising inter-class dissimilarity. The weighted depth motion map (WDMM) is then proposed to extract spatio-temporal information from the generated summarized sequences via an accumulated weighted absolute difference of consecutive frames. The histogram of oriented gradients (HOG) and local binary patterns (LBP) are exploited to extract features from the WDMM. The obtained results define the current state of the art for 3D hand gesture recognition on three public benchmark datasets: MSR Gesture 3D, SKIG, and MSR Action 3D. We also achieve competitive results on the NTU action dataset.
Keywords: Hand gesture recognition; Multilevel temporal sampling; Weighted depth motion map; Spatio-temporal description; VLAD encoding
|
|
|
F. Moreso, D. Seron, Jordi Vitria, J.M. Grinyo, F.M. Colome-Serra, N. Pares, et al. (1994). Quantification of Interstitial Chronic Renal Damage by Means of Texture Analysis. Kidney International, 46(6), 1721–1727.
|
|
|
Jaume Amores, & Petia Radeva. (2005). Registration and Retrieval of Highly Elastic Bodies using Contextual Information. PRL - Pattern Recognition Letters, 26(11), 1720–1731.
|
|