Records |
Author |
Wenlong Deng; Yongli Mou; Takahiro Kashiwa; Sergio Escalera; Kohei Nagai; Kotaro Nakayama; Yutaka Matsuo; Helmut Prendinger |
Title |
Vision-based Pixel-level Bridge Structural Damage Detection Using a Link ASPP Network |
Type |
Journal Article |
Year |
2020 |
Publication |
Automation in Construction |
Abbreviated Journal |
AC |
Volume |
110 |
Issue |
|
Pages |
102973 |
Keywords |
Semantic image segmentation; Deep learning |
Abstract |
Structural Health Monitoring (SHM) has greatly benefited from computer vision. Recently, deep learning approaches have been widely used to accurately estimate the state of deterioration of infrastructure. In this work, we focus on the problem of bridge surface structural damage detection, such as delamination and rebar exposure. It is well known that the quality of a deep learning model is highly dependent on the quality of the training dataset. Bridge damage detection, our application domain, poses the following main challenges: (i) labeling the damage requires knowledgeable civil engineering professionals, which makes it difficult to collect a large annotated dataset; (ii) the damage area can be very small, whereas the background area is large, which creates an unbalanced training environment; (iii) due to the difficulty of exactly determining the extent of the damage, there is often variation among the different labelers who perform pixel-wise labeling. In this paper, we propose a novel model for bridge structural damage detection that addresses the first two challenges. Building on the atrous spatial pyramid pooling (ASPP) module, we design a novel network for bridge damage detection. Further, we introduce a weight-balanced Intersection over Union (IoU) loss function to achieve accurate segmentation on a highly unbalanced small dataset. The experimental results show that (i) the IoU loss function improves the overall performance of damage detection compared to cross-entropy loss or focal loss, and (ii) the proposed model detects a minority class better than other lightweight segmentation networks. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HuPBA; no proj |
Approved |
no |
Call Number |
Admin @ si @ DMK2020 |
Serial |
3314 |
Permanent link to this record |
|
|
|
Author |
Guillermo Torres; Debora Gil; Antoni Rosell; S. Mena; Carles Sanchez |
Title |
Virtual Radiomics Biopsy for the Histological Diagnosis of Pulmonary Nodules – Intermediate Results of the RadioLung Project |
Type |
Journal Article |
Year |
2023 |
Publication |
International Journal of Computer Assisted Radiology and Surgery |
Abbreviated Journal |
IJCARS |
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM |
Approved |
no |
Call Number |
Admin @ si @ TGM2023 |
Serial |
3830 |
Permanent link to this record |
|
|
|
Author |
David Vazquez; Javier Marin; Antonio Lopez; Daniel Ponsa; David Geronimo |
Title |
Virtual and Real World Adaptation for Pedestrian Detection |
Type |
Journal Article |
Year |
2014 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
Volume |
36 |
Issue |
4 |
Pages |
797-809 |
Keywords |
Domain Adaptation; Pedestrian Detection |
Abstract |
Pedestrian detection is of paramount interest for many applications. The most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a labor-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? The conducted experiments show that virtual-world-based training can provide excellent testing accuracy in the real world, but it can also suffer from the dataset shift problem, as real-world-based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0162-8828 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ADAS; 600.057; 600.054; 600.076 |
Approved |
no |
Call Number |
ADAS @ adas @ VML2014 |
Serial |
2275 |
Permanent link to this record |
|
|
|
Author |
Razieh Rastgoo; Kourosh Kiani; Sergio Escalera |
Title |
Video-based Isolated Hand Sign Language Recognition Using a Deep Cascaded Model |
Type |
Journal Article |
Year |
2020 |
Publication |
Multimedia Tools and Applications |
Abbreviated Journal |
MTAP |
Volume |
79 |
Issue |
|
Pages |
22965–22987 |
Keywords |
|
Abstract |
In this paper, we propose an efficient cascaded model for sign language recognition that benefits from spatio-temporal hand-based information extracted from videos using deep learning approaches, in particular the Single Shot Detector (SSD), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM). Our simple yet efficient and accurate model includes two main parts: hand detection and sign recognition. Three types of spatial features, including hand features, Extra Spatial Hand Relation (ESHR) features, and Hand Pose (HP) features, are fused in the model and fed to an LSTM for temporal feature extraction. We train the SSD model for hand detection using videos collected from five online sign dictionaries. Our model is evaluated on our proposed dataset (Rastgoo et al., Expert Syst Appl 150: 113336, 2020), which includes 10,000 sign videos of 100 Persian signs performed by 10 contributors in 10 different backgrounds, and on the isoGD dataset. Using 5-fold cross-validation, our model outperforms state-of-the-art alternatives in sign language recognition. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HuPBA; no menciona |
Approved |
no |
Call Number |
Admin @ si @ RKE2020b |
Serial |
3442 |
Permanent link to this record |
|
|
|
Author |
Javier Selva; Anders S. Johansen; Sergio Escalera; Kamal Nasrollahi; Thomas B. Moeslund; Albert Clapes |
Title |
Video transformers: A survey |
Type |
Journal Article |
Year |
2023 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
Volume |
45 |
Issue |
11 |
Pages |
12922-12943 |
Keywords |
Artificial Intelligence; Computer Vision; Self-Attention; Transformers; Video Representations |
Abstract |
Transformer models have shown great success handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated when dealing with the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. First, we delve into how videos are handled at the input level. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even at lower computational complexity. |
Address |
1 Nov. 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HUPBA; no menciona |
Approved |
no |
Call Number |
Admin @ si @ SJE2023 |
Serial |
3823 |
Permanent link to this record |
|
|
|
Author |
Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez |
Title |
Video Alignment for Change Detection |
Type |
Journal Article |
Year |
2011 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
Volume |
20 |
Issue |
7 |
Pages |
1858-1869 |
Keywords |
video alignment |
Abstract |
In this work, we address the problem of aligning two video sequences. Such alignment refers to synchronization, i.e., the establishment of temporal correspondence between frames of the first and second video, followed by spatial registration of all the temporally corresponding frames. Video synchronization and alignment have been attempted before, but most often in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, restrictive assumptions have been applied, including linear time correspondence or the knowledge of the complete trajectories of corresponding scene points; to some extent, these assumptions limit the practical applicability of any solutions developed. We intend to solve the more general problem of aligning video sequences recorded by independently moving cameras that follow similar trajectories, based only on the fusion of image intensity and GPS information. The novelty of our approach is to pose the synchronization as a MAP inference problem on a Bayesian network including the observations from these two sensor types, which have proved to be complementary. Alignment results are presented in the context of videos recorded from vehicles driving along the same track at different times, for different road types. In addition, we explore two applications of the proposed video alignment method, both based on change detection between aligned videos. One is the detection of vehicles, which could be of use in ADAS. The other is online difference spotting in videos of surveillance rounds. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ADAS; IF |
Approved |
no |
Call Number |
DPS 2011; ADAS @ adas @ dps2011 |
Serial |
1705 |
Permanent link to this record |
|
|
|
Author |
Cristina Cañero; Petia Radeva |
Title |
Vesselness enhancement diffusion |
Type |
Journal Article |
Year |
2003 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
Volume |
24 |
Issue |
16 |
Pages |
3141–3151 |
Keywords |
|
Abstract |
IF: 0.809 |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ CaR2003 |
Serial |
371 |
Permanent link to this record |
|
|
|
Author |
Daniel Ponsa; Antonio Lopez |
Title |
Variance reduction techniques in particle-based visual contour Tracking |
Type |
Journal Article |
Year |
2009 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
Volume |
42 |
Issue |
11 |
Pages |
2372–2391 |
Keywords |
Contour tracking; Active shape models; Kalman filter; Particle filter; Importance sampling; Unscented particle filter; Rao-Blackwellization; Partitioned sampling |
Abstract |
This paper presents a comparative study of three different strategies to improve the performance of particle filters in the context of visual contour tracking: the unscented particle filter, the Rao-Blackwellized particle filter, and the partitioned sampling technique. The tracking problem analyzed is the joint estimation of the global and local transformation of the outline of a given target, represented following the active shape model approach. The main contributions of the paper are the novel adaptations of the considered techniques to this generic problem, and the quantitative assessment of their performance in the extensive experimental work conducted. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ADAS |
Approved |
no |
Call Number |
ADAS @ adas @ PoL2009a |
Serial |
1168 |
Permanent link to this record |
|
|
|
Author |
Fei Yang; Luis Herranz; Joost Van de Weijer; Jose Antonio Iglesias; Antonio Lopez; Mikhail Mozerov |
Title |
Variable Rate Deep Image Compression with Modulated Autoencoder |
Type |
Journal Article |
Year |
2020 |
Publication |
IEEE Signal Processing Letters |
Abbreviated Journal |
SPL |
Volume |
27 |
Issue |
|
Pages |
331-335 |
Keywords |
|
Abstract |
Variable rate is a requirement for flexible and adaptable image and video compression. However, deep image compression (DIC) methods are optimized for a single fixed rate-distortion (R-D) tradeoff. While this can be addressed by training multiple models for different tradeoffs, the memory requirements increase proportionally to the number of models. Scaling the bottleneck representation of a shared autoencoder can provide variable rate compression with a single shared autoencoder. However, the R-D performance of this simple mechanism degrades at low bitrates, and it also shrinks the effective range of bitrates. To address these limitations, we formulate the problem of variable R-D optimization for DIC, and propose modulated autoencoders (MAEs), where the representations of a shared autoencoder are adapted to the specific R-D tradeoff via a modulation network. Jointly training this modulated autoencoder and the modulation network provides an effective way to navigate the R-D operational curve. Our experiments show that the proposed method can achieve almost the same R-D performance as independent models with significantly fewer parameters. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
LAMP; ADAS; 600.141; 600.120; 600.118 |
Approved |
no |
Call Number |
Admin @ si @ YHW2020 |
Serial |
3346 |
Permanent link to this record |
|
|
|
Author |
Jaume Garcia; Debora Gil; Sandra Pujades; Francesc Carreras |
Title |
Valoracion de la Funcion del Ventriculo Izquierdo mediante Modelos Regionales Hiperparametricos |
Type |
Journal Article |
Year |
2008 |
Publication |
Revista Española de Cardiologia |
Abbreviated Journal |
|
Volume |
61 |
Issue |
3 |
Pages |
79 |
Keywords |
|
Abstract |
Most cardiovascular diseases affect the contractile properties of the helical ventricular band. This is reflected in a deviation from the normal behavior of ventricular function. Local parameters such as strains, i.e., the deformation undergone by the tissue, are indicators capable of detecting functional anomalies in specific territories. These parameters are often considered separately. In this work we present a computational framework (the Normalized Parametric Domain, NPD) that allows integrating them into functional hyperparameters and studying their normality ranges. These ranges make it possible to objectively assess the regional function of any new patient. To this end, we consider tagged magnetic resonance sequences at the basal, mid and apical levels. The hyperparameters are obtained from the intramural motion of the LV estimated with the Harmonic Phase Flow method. The NPD is defined from a parameterization of the Left Ventricle (LV) in its radial and circumferential coordinates based on anatomical criteria. Mapping the hyperparameters onto the NPD makes comparison between different patients possible. The normality ranges are defined by statistical analysis of values from healthy volunteers in 45 regions of the NPD along 9 systolic phases. A set of 19 healthy volunteers (14 male; age: 30.7±7.5) was used to create the normality patterns, which were validated using 2 healthy controls and 3 patients affected by reduced global contractility. For the controls, the regional results fell within normality, whereas for the patients abnormal values were obtained in the described areas, thus localizing and quantifying the empirical diagnosis. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM; |
Approved |
no |
Call Number |
IAM @ iam @ GRP2008 |
Serial |
1032 |
Permanent link to this record |
|
|
|
Author |
Oriol Rodriguez-Leor; J. Mauri; Eduard Fernandez-Nofrerias; Antonio Tovar; Vicente del Valle; Aura Hernandez-Sabate; Debora Gil; Petia Radeva |
Title |
Utilización de la Estructura de los Campos Vectoriales para la Detección de la Adventicia en Imágenes de Ecografía Intracoronaria |
Type |
Journal Article |
Year |
2004 |
Publication |
Revista Española de Cardiología |
Abbreviated Journal |
|
Volume |
57 |
Issue |
2 |
Pages |
100 |
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
SEC |
Notes |
IAM;MILAB |
Approved |
no |
Call Number |
IAM @ iam @ RMF2004 |
Serial |
1642 |
Permanent link to this record |
|
|
|
Author |
Jose Antonio Rodriguez; Florent Perronnin; Gemma Sanchez; Josep Llados |
Title |
Unsupervised writer adaptation of whole-word HMMs with application to word-spotting |
Type |
Journal Article |
Year |
2010 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
Volume |
31 |
Issue |
8 |
Pages |
742–749 |
Keywords |
Word-spotting; Handwriting recognition; Writer adaptation; Hidden Markov model; Document analysis |
Abstract |
In this paper we propose a novel approach for writer adaptation in a handwritten word-spotting task. The method exploits the fact that the semi-continuous hidden Markov model separates the word model parameters into (i) a codebook of shapes and (ii) a set of word-specific parameters.
Our main contribution is to employ this property to derive writer-specific word models by statistically adapting an initial universal codebook to each document. This process is unsupervised and does not even require the appearance of the keyword(s) in the searched document. Experimental results show an increase in performance when this adaptation technique is applied. To the best of our knowledge, this is the first work dealing with adaptation for word-spotting. The preliminary version of this paper obtained an IBM Best Student Paper Award at the 19th International Conference on Pattern Recognition. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG |
Approved |
no |
Call Number |
DAG @ dag @ RPS2010 |
Serial |
1290 |
Permanent link to this record |
|
|
|
Author |
Adriana Romero; Carlo Gatta; Gustavo Camps-Valls |
Title |
Unsupervised Deep Feature Extraction for Remote Sensing Image Classification |
Type |
Journal Article |
Year |
2016 |
Publication |
IEEE Transaction on Geoscience and Remote Sensing |
Abbreviated Journal |
TGRS |
Volume |
54 |
Issue |
3 |
Pages |
1349 - 1362 |
Keywords |
|
Abstract |
This paper introduces the use of single-layer and deep convolutional networks for remote sensing data analysis. Direct application of supervised (shallow or deep) convolutional networks to multi- and hyperspectral imagery is very challenging given the high input data dimensionality and the relatively small amount of available labeled data. Therefore, we propose the use of greedy layerwise unsupervised pretraining coupled with a highly efficient algorithm for unsupervised learning of sparse features. The algorithm is rooted in sparse representations and simultaneously enforces both population and lifetime sparsity of the extracted features. We successfully illustrate the expressive power of the extracted representations in several scenarios: classification of aerial scenes, as well as land-use classification in very high resolution images and land-cover classification from multi- and hyperspectral images. The proposed algorithm clearly outperforms standard principal component analysis (PCA) and its kernel counterpart (kPCA), as well as current state-of-the-art algorithms for aerial classification, while being extremely computationally efficient at learning representations of data. Results show that single-layer convolutional networks can extract powerful discriminative features only when the receptive field accounts for neighboring pixels, and are preferred when the classification requires high resolution and detailed results. However, deep architectures significantly outperform single-layer variants, capturing increasing levels of abstraction and complexity throughout the feature hierarchy. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0196-2892 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
LAMP; 600.079;MILAB |
Approved |
no |
Call Number |
Admin @ si @ RGC2016 |
Serial |
2723 |
Permanent link to this record |
|
|
|
Author |
Kaida Xiao; Chenyang Fu; D.Mylonas; Dimosthenis Karatzas; S. Wuerger |
Title |
Unique Hue Data for Colour Appearance Models. Part ii: Chromatic Adaptation Transform |
Type |
Journal Article |
Year |
2013 |
Publication |
Color Research & Application |
Abbreviated Journal |
CRA |
Volume |
38 |
Issue |
1 |
Pages |
22-29 |
Keywords |
|
Abstract |
Unique hue settings of 185 observers under three room-lighting conditions were used to evaluate the accuracy of full and mixed chromatic adaptation transform models of CIECAM02 in terms of unique hue reproduction. Perceptual hue shifts in CIECAM02 were evaluated for both models with no clear difference using the current Commission Internationale de l'Éclairage (CIE) recommendation for mixed chromatic adaptation ratio. Using our large dataset of unique hue data as a benchmark, an optimised parameter is proposed for chromatic adaptation under mixed illumination conditions that produces more accurate results in unique hue reproduction. © 2011 Wiley Periodicals, Inc. Col Res Appl, 2013 |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ XFM2013 |
Serial |
1822 |
Permanent link to this record |
|
|
|
Author |
Kaida Xiao; Sophie Wuerger; Chenyang Fu; Dimosthenis Karatzas |
Title |
Unique Hue Data for Colour Appearance Models. Part i: Loci of Unique Hues and Hue Uniformity |
Type |
Journal Article |
Year |
2011 |
Publication |
Color Research & Application |
Abbreviated Journal |
CRA |
Volume |
36 |
Issue |
5 |
Pages |
316-323 |
Keywords |
unique hues; colour appearance models; CIECAM02; hue uniformity |
Abstract |
Psychophysical experiments were conducted to assess unique hues on a CRT display for a large sample of colour-normal observers (n = 185). These data were then used to evaluate the most commonly used colour appearance model, CIECAM02, by transforming the CIE XYZ tristimulus values of the unique hues to the CIECAM02 colour appearance attributes: lightness, chroma and hue angle. We report two findings: (1) the hue angles derived from our unique hue data are inconsistent with the commonly used Natural Color System hues that are incorporated in the CIECAM02 model. We argue that our predicted unique hue angles (derived from our large dataset) provide a more reliable standard for colour management applications when the precise specification of these salient colours is important. (2) We test hue uniformity for CIECAM02 in all four unique hues and show significant disagreements for all hues, except for unique red, which seems to be invariant under lightness changes. Our dataset is useful to improve the CIECAM02 model as it provides reliable data for benchmarking. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Wiley Periodicals Inc |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ XWF2011 |
Serial |
1816 |
Permanent link to this record |