Kaida Xiao, Chenyang Fu, Dimosthenis Karatzas, & Sophie Wuerger. (2011). Visual Gamma Correction for LCD Displays. DIS - Displays, 32(1), 17–23.
Abstract: An improved method for visual gamma correction is developed for LCD displays to increase the accuracy of digital colour reproduction. Rather than utilising a photometric measurement device, we use observers' visual luminance judgements for gamma correction. Eight half-tone patterns were designed to generate relative luminances from 1/9 to 8/9 for each colour channel. A psychophysical experiment was conducted on an LCD display to find the digital signals corresponding to each relative luminance by visually matching the half-tone background to a uniform colour patch. Both inter- and intra-observer variability for the eight luminance matches in each channel were assessed, and the luminance matches proved to be consistent across observers (ΔE00 < 3.5) and repeatable (ΔE00 < 2.2). Based on the individual observer judgements, the display opto-electronic transfer function (OETF) was estimated using either a 3rd-order polynomial regression or linear interpolation for each colour channel. The performance of the proposed method is evaluated by predicting the CIE tristimulus values of a set of coloured patches (using the observer-based OETFs) and comparing them to the expected CIE tristimulus values (using the OETF obtained from spectro-radiometric luminance measurements). The resulting colour differences range from 2 to 4.6 ΔE00. We conclude that this observer-based method of visual gamma correction is useful for estimating the OETF for LCD displays. Its major advantage is that no particular functional relationship between digital inputs and luminance outputs has to be assumed.
Keywords: Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration
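The two OETF estimates the abstract describes (3rd-order polynomial regression and linear interpolation through the visual matches) can be sketched as follows; the digital match values below are hypothetical illustration data, not measurements from the study:

```python
import numpy as np

# Hypothetical visual-match data for one colour channel: the digital counts
# (0-255) an observer judged to match relative luminances 1/9 .. 8/9.
digital_matches = np.array([83, 118, 143, 163, 181, 197, 212, 226], dtype=float)
rel_luminance = np.arange(1, 9) / 9.0

# Normalise digital counts to [0, 1] and pin the endpoints
# (input 0 -> luminance 0, input 1 -> luminance 1).
d = np.concatenate(([0.0], digital_matches / 255.0, [1.0]))
y = np.concatenate(([0.0], rel_luminance, [1.0]))

# Estimate 1: 3rd-order polynomial regression of luminance on digital input.
oetf_poly = np.poly1d(np.polyfit(d, y, deg=3))

# Estimate 2: linear interpolation through the matches -- no functional
# relationship between digital inputs and luminance outputs is assumed.
def oetf_interp(x):
    return np.interp(x, d, y)
```

Either estimate maps a normalised digital input to a relative luminance, e.g. `oetf_interp(0.5)`.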
|
Wenlong Deng, Yongli Mou, Takahiro Kashiwa, Sergio Escalera, Kohei Nagai, Kotaro Nakayama, et al. (2020). Vision based Pixel-level Bridge Structural Damage Detection Using a Link ASPP Network. AC - Automation in Construction, 110, 102973.
Abstract: Structural Health Monitoring (SHM) has greatly benefited from computer vision. Recently, deep learning approaches have been widely used to accurately estimate the state of deterioration of infrastructure. In this work, we focus on the problem of bridge surface structural damage detection, such as delamination and rebar exposure. It is well known that the quality of a deep learning model is highly dependent on the quality of the training dataset. Bridge damage detection, our application domain, has the following main challenges: (i) labeling the damage requires knowledgeable civil engineering professionals, which makes it difficult to collect a large annotated dataset; (ii) the damage area can be very small while the background area is large, which creates an unbalanced training environment; (iii) due to the difficulty of exactly determining the extent of the damage, there is often variation among the labelers who perform pixel-wise labeling. In this paper, we propose a novel model for bridge structural damage detection that addresses the first two challenges. Building on the idea of the atrous spatial pyramid pooling (ASPP) module, we design a novel network for bridge damage detection. Further, we introduce a weight-balanced Intersection over Union (IoU) loss function to achieve accurate segmentation on a highly unbalanced small dataset. The experimental results show that (i) the IoU loss function improves the overall performance of damage detection compared to cross-entropy loss or focal loss, and (ii) the proposed model detects minority classes better than other light segmentation networks.
Keywords: Semantic image segmentation; Deep learning
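A minimal sketch of a weight-balanced soft IoU loss in the spirit of the one described above; the function shape and per-class weighting are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def weighted_soft_iou_loss(probs, target, class_weights, eps=1e-7):
    """Weight-balanced soft (differentiable) IoU loss.

    probs:  (C, H, W) predicted class probabilities
    target: (C, H, W) one-hot ground truth
    class_weights: (C,) larger weights for minority (damage) classes
    """
    # Soft intersection and union, summed per class over all pixels.
    inter = (probs * target).sum(axis=(1, 2))
    union = (probs + target - probs * target).sum(axis=(1, 2))
    iou = inter / (union + eps)
    # Weighted mean over classes lets rare damage classes dominate the loss.
    w = np.asarray(class_weights, dtype=float)
    return float(1.0 - (w * iou).sum() / w.sum())
```

A perfect prediction drives the loss to zero; up-weighting the damage class counteracts the large background area the abstract mentions.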
|
Guillermo Torres, Debora Gil, Antoni Rosell, S. Mena, & Carles Sanchez. (2023). Virtual Radiomics Biopsy for the Histological Diagnosis of Pulmonary Nodules – Intermediate Results of the RadioLung Project. IJCARS - International Journal of Computer Assisted Radiology and Surgery.
|
David Vazquez, Javier Marin, Antonio Lopez, Daniel Ponsa, & David Geronimo. (2014). Virtual and Real World Adaptation for Pedestrian Detection. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(4), 797–809.
Abstract: Pedestrian detection is of paramount interest for many applications. The most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? The conducted experiments show that virtual-world-based training can provide excellent testing accuracy in the real world, but it can also suffer from the dataset shift problem, as real-world-based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as training with many human-provided pedestrian annotations and testing on real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.
Keywords: Domain Adaptation; Pedestrian Detection
|
Razieh Rastgoo, Kourosh Kiani, & Sergio Escalera. (2020). Video-based Isolated Hand Sign Language Recognition Using a Deep Cascaded Model. MTAP - Multimedia Tools and Applications, 79, 22965–22987.
Abstract: In this paper, we propose an efficient cascaded model for sign language recognition that benefits from spatio-temporal hand-based information in videos using deep learning approaches, in particular the Single Shot Detector (SSD), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM). Our simple yet efficient and accurate model includes two main parts: hand detection and sign recognition. Three types of spatial features, including hand features, Extra Spatial Hand Relation (ESHR) features, and Hand Pose (HP) features, are fused in the model and fed to an LSTM for temporal feature extraction. We train the SSD model for hand detection using videos collected from five online sign dictionaries. Our model is evaluated on our proposed dataset (Rastgoo et al., Expert Syst Appl 150: 113336, 2020), including 10,000 sign videos of 100 Persian signs performed by 10 contributors in 10 different backgrounds, and on the isoGD dataset. Using 5-fold cross-validation, our model outperforms state-of-the-art alternatives in sign language recognition.
|
Javier Selva, Anders S. Johansen, Sergio Escalera, Kamal Nasrollahi, Thomas B. Moeslund, & Albert Clapes. (2023). Video transformers: A survey. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11), 12922–12943.
Abstract: Transformer models have shown great success handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated when dealing with the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we delve into how videos are handled at the input level first. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with less computational complexity.
Keywords: Artificial Intelligence; Computer Vision; Self-Attention; Transformers; Video Representations
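The quadratic scaling the survey identifies comes from the (T, T) attention matrix; a minimal NumPy sketch of scaled dot-product self-attention (a generic illustration, not any specific surveyed model) makes that cost explicit:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a token sequence x of shape (T, D)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    # The (T, T) score matrix is where cost grows quadratically with input
    # length T -- the bottleneck video exacerbates via the temporal dimension.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=-1, keepdims=True)        # row-wise softmax
    return attn @ v                                       # (T, D)
```

Doubling T quadruples the score matrix, which is why the surveyed designs restrict, factorise, or approximate attention for video.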
|
Ferran Diego, Daniel Ponsa, Joan Serrat, & Antonio Lopez. (2011). Video Alignment for Change Detection. TIP - IEEE Transactions on Image Processing, 20(7), 1858–1869.
Abstract: In this work, we address the problem of aligning two video sequences. Such alignment refers to synchronization, i.e., the establishment of temporal correspondence between frames of the first and second video, followed by spatial registration of all the temporally corresponding frames. Video synchronization and alignment have been attempted before, but most often in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, restrictive assumptions have been applied, including linear time correspondence or knowledge of the complete trajectories of corresponding scene points; to some extent, these assumptions limit the practical applicability of any solutions developed. We intend to solve the more general problem of aligning video sequences recorded by independently moving cameras that follow similar trajectories, based only on the fusion of image intensity and GPS information. The novelty of our approach is to pose the synchronization as a MAP inference problem on a Bayesian network including the observations from these two sensor types, which have proved complementary. Alignment results are presented in the context of videos recorded from vehicles driving along the same track at different times, for different road types. In addition, we explore two applications of the proposed video alignment method, both based on change detection between aligned videos. One is the detection of vehicles, which could be of use in ADAS. The other is online difference spotting in videos of surveillance rounds.
Keywords: video alignment
|
Cristina Cañero, & Petia Radeva. (2003). Vesselness enhancement diffusion. PRL - Pattern Recognition Letters, 24(16), 3141–3151.
|
Daniel Ponsa, & Antonio Lopez. (2009). Variance reduction techniques in particle-based visual contour tracking. PR - Pattern Recognition, 42(11), 2372–2391.
Abstract: This paper presents a comparative study of three different strategies to improve the performance of particle filters in the context of visual contour tracking: the unscented particle filter, the Rao-Blackwellized particle filter, and the partitioned sampling technique. The tracking problem analyzed is the joint estimation of the global and local transformation of the outline of a given target, represented following the active shape model approach. The main contributions of the paper are the novel adaptations of the considered techniques to this generic problem, and the quantitative assessment of their performance in extensive experimental work.
Keywords: Contour tracking; Active shape models; Kalman filter; Particle filter; Importance sampling; Unscented particle filter; Rao-Blackwellization; Partitioned sampling
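As context for the compared techniques, a generic 1-D bootstrap particle-filter step (the baseline they all improve upon) might look like this; the random-walk dynamics and Gaussian likelihood are illustrative assumptions, not the paper's contour model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, dynamics_noise, obs_noise):
    """One predict-weight-resample step of a 1-D bootstrap particle filter."""
    # Predict: propagate each particle through random-walk dynamics.
    particles = particles + rng.normal(0.0, dynamics_noise, size=particles.shape)
    # Update: re-weight by the Gaussian likelihood of the observation.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights = weights / weights.sum()
    # Systematic resampling: a low-variance selection of surviving particles.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```

The three techniques in the paper reduce the variance of exactly this kind of estimate, e.g. by sampling sub-parts of the state in turn (partitioned sampling) or integrating out linear sub-states analytically (Rao-Blackwellization).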
|
Fei Yang, Luis Herranz, Joost Van de Weijer, Jose Antonio Iglesias, Antonio Lopez, & Mikhail Mozerov. (2020). Variable Rate Deep Image Compression with Modulated Autoencoder. SPL - IEEE Signal Processing Letters, 27, 331–335.
Abstract: Variable rate is a requirement for flexible and adaptable image and video compression. However, deep image compression (DIC) methods are optimized for a single fixed rate-distortion (R-D) tradeoff. While this can be addressed by training multiple models for different tradeoffs, the memory requirements increase proportionally to the number of models. Scaling the bottleneck representation of a shared autoencoder can provide variable rate compression with a single shared autoencoder. However, the R-D performance of this simple mechanism degrades at low bitrates, and it also shrinks the effective range of bitrates. To address these limitations, we formulate the problem of variable R-D optimization for DIC and propose modulated autoencoders (MAEs), in which the representations of a shared autoencoder are adapted to the specific R-D tradeoff via a modulation network. Jointly training the shared autoencoder and the modulation network provides an effective way to navigate the R-D operational curve. Our experiments show that the proposed method can achieve almost the same R-D performance as independent models with significantly fewer parameters.
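The bottleneck-scaling baseline that the abstract says degrades at low bitrates can be sketched in a few lines; the latent values here are arbitrary illustration data:

```python
import numpy as np

def encode_at_rate(latent, scale):
    # Scale, then quantise to integers: a larger scale keeps more quantisation
    # levels (higher rate), a smaller scale quantises coarsely (lower rate).
    return np.round(latent * scale)

def decode(code, scale):
    return code / scale

latent = np.array([0.13, -1.57, 0.88, 2.41])
coarse = decode(encode_at_rate(latent, 1.0), 1.0)  # low rate, high distortion
fine = decode(encode_at_rate(latent, 8.0), 8.0)    # high rate, low distortion
```

MAEs replace the single scalar `scale` with per-tradeoff modulation vectors produced by a learned network, which is what recovers R-D performance at low bitrates.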
|
Jaume Garcia, Debora Gil, Sandra Pujades, & Francesc Carreras. (2008). Valoración de la Función del Ventrículo Izquierdo mediante Modelos Regionales Hiperparamétricos. Revista Española de Cardiología, 61(3), 79.
Abstract: Most cardiovascular diseases affect the contractile properties of the helical ventricular band. This is reflected in a deviation from the normal behaviour of ventricular function. Local parameters such as strains, or the deformation undergone by the tissue, are indicators capable of detecting functional anomalies in specific territories. These parameters are often considered separately. In this work we present a computational framework (the Normalized Parametric Domain, NPD) that allows them to be integrated into functional hyperparameters and their normality ranges to be studied. These ranges allow an objective assessment of the regional function of any new patient. To this end, we consider tagged magnetic resonance sequences at the basal, mid, and apical levels. The hyperparameters are obtained from the intramural motion of the LV estimated with the Harmonic Phase Flow method. The NPD is defined from a parameterization of the Left Ventricle (LV) in its radial and circumferential coordinates based on anatomical criteria. Mapping the hyperparameters onto the NPD makes comparison between different patients possible. The normality ranges are defined through statistical analysis of values from healthy volunteers in 45 regions of the NPD over 9 systolic phases. A set of 19 healthy volunteers (14 male; age: 30.7±7.5) was used to build the normality patterns, which were validated using 2 healthy controls and 3 patients with reduced global contractility. For the controls, the regional results fell within normality, while for the patients abnormal values were obtained in the described zones, thereby localizing and quantifying the empirical diagnosis.
|
Oriol Rodriguez-Leor, J. Mauri, Eduard Fernandez-Nofrerias, Antonio Tovar, Vicente del Valle, Aura Hernandez-Sabate, et al. (2004). Utilización de la Estructura de los Campos Vectoriales para la Detección de la Adventicia en Imágenes de Ecografía Intracoronaria. Revista Española de Cardiología, 57(2), 100.
|
Jose Antonio Rodriguez, Florent Perronnin, Gemma Sanchez, & Josep Llados. (2010). Unsupervised writer adaptation of whole-word HMMs with application to word-spotting. PRL - Pattern Recognition Letters, 31(8), 742–749.
Abstract: In this paper, we propose a novel approach for writer adaptation in a handwritten word-spotting task. The method exploits the fact that the semi-continuous hidden Markov model separates the word model parameters into (i) a codebook of shapes and (ii) a set of word-specific parameters. Our main contribution is to employ this property to derive writer-specific word models by statistically adapting an initial universal codebook to each document. This process is unsupervised and does not even require the appearance of the keyword(s) in the searched document. Experimental results show an increase in performance when this adaptation technique is applied. To the best of our knowledge, this is the first work dealing with adaptation for word-spotting. The preliminary version of this paper obtained an IBM Best Student Paper Award at the 19th International Conference on Pattern Recognition.
Keywords: Word-spotting; Handwriting recognition; Writer adaptation; Hidden Markov model; Document analysis
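A sketch of the general MAP-adaptation idea behind adapting a universal codebook to one document; the hard assignments, update rule, and relevance factor are generic illustrations, not the paper's exact equations:

```python
import numpy as np

def adapt_codebook(means, doc_frames, assignments, tau=10.0):
    """Adapt universal codebook means toward one document's statistics.

    means: (K, D) universal codebook of shape prototypes
    doc_frames: (N, D) feature frames extracted from the searched document
    assignments: (N,) index of the codeword each frame is assigned to
    tau: relevance factor -- more document data pulls a mean further away
    from its universal value.
    """
    adapted = means.copy()
    for k in range(means.shape[0]):
        frames = doc_frames[assignments == k]
        n = len(frames)
        if n:
            # Interpolate between document mean and universal mean.
            adapted[k] = (n * frames.mean(axis=0) + tau * means[k]) / (n + tau)
    return adapted
```

Codewords that receive no frames keep their universal values, which is what lets the adaptation run without the keyword appearing in the document.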
|
Adriana Romero, Carlo Gatta, & Gustavo Camps-Valls. (2016). Unsupervised Deep Feature Extraction for Remote Sensing Image Classification. TGRS - IEEE Transactions on Geoscience and Remote Sensing, 54(3), 1349–1362.
Abstract: This paper introduces the use of single-layer and deep convolutional networks for remote sensing data analysis. Direct application of supervised (shallow or deep) convolutional networks to multi- and hyperspectral imagery is very challenging given the high input data dimensionality and the relatively small amount of available labeled data. Therefore, we propose the use of greedy layerwise unsupervised pretraining coupled with a highly efficient algorithm for unsupervised learning of sparse features. The algorithm is rooted in sparse representations and simultaneously enforces both population and lifetime sparsity of the extracted features. We successfully illustrate the expressive power of the extracted representations in several scenarios: classification of aerial scenes, land-use classification in very high resolution images, and land-cover classification from multi- and hyperspectral images. The proposed algorithm clearly outperforms standard principal component analysis (PCA) and its kernel counterpart (kPCA), as well as current state-of-the-art algorithms for aerial classification, while being extremely computationally efficient at learning representations of data. Results show that single-layer convolutional networks can extract powerful discriminative features only when the receptive field accounts for neighboring pixels, and are preferred when the classification requires high resolution and detailed results. However, deep architectures significantly outperform single-layer variants, capturing increasing levels of abstraction and complexity throughout the feature hierarchy.
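Population and lifetime sparsity, the two properties the algorithm enforces jointly, can be made concrete by measuring them on an activation matrix; this sketch only measures the two quantities and does not implement the learning algorithm itself:

```python
import numpy as np

def sparsity_stats(features, thresh=1e-6):
    """Population vs. lifetime sparsity of a (samples, units) activation matrix.

    Population sparsity: fraction of units inactive for each sample
    (few features fire for any one input). Lifetime sparsity: fraction of
    samples on which each unit is inactive (each feature fires rarely).
    """
    active = np.abs(features) > thresh
    population = 1.0 - active.mean(axis=1)  # per sample, averaged below
    lifetime = 1.0 - active.mean(axis=0)    # per unit, averaged below
    return population.mean(), lifetime.mean()
```

Enforcing both at once rules out degenerate solutions where, for example, one always-on unit dominates (good population sparsity for no sample, poor lifetime sparsity).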
|
Kaida Xiao, Chenyang Fu, D. Mylonas, Dimosthenis Karatzas, & S. Wuerger. (2013). Unique Hue Data for Colour Appearance Models. Part II: Chromatic Adaptation Transform. CRA - Color Research & Application, 38(1), 22–29.
Abstract: Unique hue settings of 185 observers under three room-lighting conditions were used to evaluate the accuracy of the full and mixed chromatic adaptation transform models of CIECAM02 in terms of unique hue reproduction. Perceptual hue shifts in CIECAM02 were evaluated for both models using the current Commission Internationale de l'Éclairage (CIE) recommendation for the mixed chromatic adaptation ratio, with no clear difference between them. Using our large dataset of unique hue data as a benchmark, an optimised parameter is proposed for chromatic adaptation under mixed illumination conditions that produces more accurate results in unique hue reproduction.
|