Records | |||||
---|---|---|---|---|---|
Author | Estefania Talavera; Maria Leyva-Vallina; Md. Mostafa Kamal Sarker; Domenec Puig; Nicolai Petkov; Petia Radeva | ||||
Title | Hierarchical approach to classify food scenes in egocentric photo-streams | Type | Journal Article | ||
Year | 2020 | Publication | IEEE Journal of Biomedical and Health Informatics | Abbreviated Journal | J-BHI |
Volume | 24 | Issue | 3 | Pages | 866 - 877 |
Keywords | |||||
Abstract | Recent studies have shown that the environment where people eat can affect their nutritional behaviour. In this work, we provide automatic tools for a personalised analysis of a person's health habits through the examination of daily recorded egocentric photo-streams. Specifically, we propose a new automatic approach for the classification of food-related environments that is able to classify up to 15 such scenes. In this way, people can monitor the context around their food intake in order to get an objective insight into their daily eating routine. We propose a model that classifies food-related scenes organized in a semantic hierarchy. Additionally, we present and make available a new egocentric dataset composed of more than 33,000 images recorded by a wearable camera, over which our proposed model has been tested. Our approach obtains an accuracy and F-score of 56% and 65%, respectively, clearly outperforming the baseline methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ TLM2020 | Serial | 3380 | ||
Permanent link to this record | |||||
Author | Gabriel Villalonga; Joost Van de Weijer; Antonio Lopez | ||||
Title | Recognizing new classes with synthetic data in the loop: application to traffic sign recognition | Type | Journal Article | ||
Year | 2020 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 20 | Issue | 3 | Pages | 583 |
Keywords | |||||
Abstract | On-board vision systems may need to increase the number of classes that can be recognized in a relatively short period. For instance, a traffic sign recognition system may suddenly be required to recognize new signs. Since collecting and annotating samples of such new classes may need more time than we wish, especially for uncommon signs, we propose a method to generate these samples by combining synthetic images and Generative Adversarial Network (GAN) technology. In particular, the GAN is trained on synthetic and real-world samples from known classes to perform synthetic-to-real domain adaptation, but applied to synthetic samples of the new classes. Using the Tsinghua dataset with a synthetic counterpart, SYNTHIA-TS, we have run an extensive set of experiments. The results show that the proposed method is indeed effective, provided that we use a proper Convolutional Neural Network (CNN) to perform the traffic sign recognition (classification) task as well as a proper GAN to transform the synthetic images. Here, a ResNet101-based classifier and domain adaptation based on CycleGAN performed extremely well for a ratio of ∼1/4 for new/known classes; even for more challenging ratios such as ∼4/1, the results are also very positive. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; ADAS; 600.118; 600.120 | Approved | no | ||
Call Number | Admin @ si @ VWL2020 | Serial | 3405 | ||
Permanent link to this record | |||||
Author | Mohamed Ali Souibgui; Y. Kessentini | ||||
Title | DE-GAN: A Conditional Generative Adversarial Network for Document Enhancement | Type | Journal Article | ||
Year | 2022 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 44 | Issue | 3 | Pages | 1180-1191 |
Keywords | |||||
Abstract | Documents often exhibit various forms of degradation, which make them hard to read and substantially deteriorate the performance of an OCR system. In this paper, we propose an effective end-to-end framework named Document Enhancement Generative Adversarial Networks (DE-GAN) that uses conditional GANs (cGANs) to restore severely degraded document images. To the best of our knowledge, this practice has not been studied within the context of generative adversarial deep networks. We demonstrate that, in different tasks (document clean-up, binarization, deblurring, and watermark removal), DE-GAN can produce an enhanced version of the degraded document with high quality. In addition, our approach provides consistent improvements compared to state-of-the-art methods over the widely used DIBCO 2013, DIBCO 2017, and H-DIBCO 2018 datasets, proving its ability to restore a degraded document image to its ideal condition. The results obtained on a wide variety of degradations reveal the flexibility of the proposed model to be exploited in other document enhancement problems. | ||||
Address | 1 March 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 602.230; 600.121; 600.140 | Approved | no | ||
Call Number | Admin @ si @ SoK2022 | Serial | 3454 | ||
Permanent link to this record | |||||
Author | Xim Cerda-Company; Olivier Penacchio; Xavier Otazu | ||||
Title | Chromatic Induction in Migraine | Type | Journal | ||
Year | 2021 | Publication | VISION | Abbreviated Journal | |
Volume | 5 | Issue | 3 | Pages | 37 |
Keywords | migraine; vision; colour; colour perception; chromatic induction; psychophysics | ||||
Abstract | The human visual system is not a colorimeter. The perceived colour of a region does not only depend on its colour spectrum, but also on the colour spectra and geometric arrangement of neighbouring regions, a phenomenon called chromatic induction. Chromatic induction is thought to be driven by lateral interactions: the activity of a central neuron is modified by stimuli outside its classical receptive field through excitatory–inhibitory mechanisms. As there is growing evidence of an excitation/inhibition imbalance in migraine, we compared chromatic induction in migraine and control groups. As hypothesised, we found a difference in the strength of induction between the two groups, with stronger induction effects in migraine. On the other hand, given the increased prevalence of visual phenomena in migraine with aura, we also hypothesised that the difference between migraine and control would be more important in migraine with aura than in migraine without aura. Our experiments did not support this hypothesis. Taken together, our results suggest a link between excitation/inhibition imbalance and increased induction effects. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | NEUROBIT; no proj | Approved | no | ||
Call Number | Admin @ si @ CPO2021 | Serial | 3589 | ||
Permanent link to this record | |||||
Author | Guillermo Torres; Sonia Baeza; Carles Sanchez; Ignasi Guasch; Antoni Rosell; Debora Gil | ||||
Title | An Intelligent Radiomic Approach for Lung Cancer Screening | Type | Journal Article | ||
Year | 2022 | Publication | Applied Sciences | Abbreviated Journal | APPLSCI |
Volume | 12 | Issue | 3 | Pages | 1568 |
Keywords | Lung cancer; Early diagnosis; Screening; Neural networks; Image embedding; Architecture optimization | ||||
Abstract | The efficiency of lung cancer screening for reducing mortality is hindered by the high rate of false positives. Artificial intelligence applied to radiomics could help to discard benign cases early in the analysis of CT scans. The available amount of data and the fact that benign cases are a minority constitute a main challenge for the successful use of state-of-the-art methods (like deep learning), which can be biased, over-fitted, and lack clinical reproducibility. We present a hybrid approach combining the potential of radiomic features to characterize nodules in CT scans and the generalization of feed-forward networks. In order to obtain maximal reproducibility with minimal training data, we propose an embedding of nodules based on the statistical significance of radiomic features for malignancy detection. This representation space of lesions is the input to a feed-forward network, whose architecture and hyperparameters are optimized using our own metrics of the diagnostic power of the whole system. Results of the best model on an independent set of patients achieve 100% sensitivity and 83% specificity (AUC = 0.94) for malignancy detection. | ||||
Address | Jan 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; 600.139; 600.145 | Approved | no | ||
Call Number | Admin @ si @ TBS2022 | Serial | 3699 | ||
Permanent link to this record | |||||
Author | Niki Aifanti; Angel Sappa; N. Grammalidis; Sotiris Malassiotis | ||||
Title | Advances in Tracking and Recognition of Human Motion | Type | Book Chapter | ||
Year | 2009 | Publication | Encyclopedia of Information Science and Technology | Abbreviated Journal | |
Volume | I | Issue | 2nd edition | Pages | 65–71 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ ASG2009 | Serial | 1143 | ||
Permanent link to this record | |||||
Author | Diego Velazquez; Pau Rodriguez; Josep M. Gonfaus; Xavier Roca; Jordi Gonzalez | ||||
Title | A Closer Look at Embedding Propagation for Manifold Smoothing | Type | Journal Article | ||
Year | 2022 | Publication | Journal of Machine Learning Research | Abbreviated Journal | JMLR |
Volume | 23 | Issue | 252 | Pages | 1-27 |
Keywords | Regularization; semi-supervised learning; self-supervised learning; adversarial robustness; few-shot classification | ||||
Abstract | Supervised training of neural networks requires a large amount of manually annotated data, and the resulting networks tend to be sensitive to out-of-distribution (OOD) data. Self- and semi-supervised training schemes reduce the amount of annotated data required during the training process. However, OOD generalization remains a major challenge for most methods. Strategies that promote smoother decision boundaries play an important role in out-of-distribution generalization. For example, embedding propagation (EP) for manifold smoothing has recently been shown to considerably improve the OOD performance for few-shot classification. EP achieves smoother class manifolds by building a graph from sample embeddings and propagating information through the nodes in an unsupervised manner. In this work, we extend the original EP paper, providing additional evidence and experiments showing that it attains smoother class embedding manifolds and improves results in settings beyond few-shot classification. Concretely, we show that EP improves the robustness of neural networks against multiple adversarial attacks as well as semi- and self-supervised learning performance. | ||||
Address | 9/2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ VRG2022 | Serial | 3762 | ||
Permanent link to this record | |||||
Author | T. Chauhan; E. Perales; Kaida Xiao; E. Hird; Dimosthenis Karatzas; Sophie Wuerger | ||||
Title | The achromatic locus: Effect of navigation direction in color space | Type | Journal Article | ||
Year | 2014 | Publication | Journal of Vision | Abbreviated Journal | VSS |
Volume | 14 (1) | Issue | 25 | Pages | 1-11 |
Keywords | achromatic; unique hues; color constancy; luminance; color space | ||||
Abstract | An achromatic stimulus is defined as a patch of light that is devoid of any hue. This is usually achieved by asking observers to adjust the stimulus such that it looks neither red nor green and at the same time neither yellow nor blue. Despite the theoretical and practical importance of the achromatic locus, little is known about the variability in these settings. The main purpose of the current study was to evaluate whether achromatic settings were dependent on the task of the observers, namely the navigation direction in color space. Observers could either adjust the test patch along the two chromatic axes in the CIE u*v* diagram or, alternatively, navigate along the unique-hue lines. Our main result is that the navigation method affects the reliability of these achromatic settings. Observers are able to make more reliable achromatic settings when adjusting the test patch along the directions defined by the four unique hues as opposed to navigating along the main axes in the commonly used CIE u*v* chromaticity plane. This result holds across different ambient viewing conditions (Dark, Daylight, Cool White Fluorescent) and different test luminance levels (5, 20, and 50 cd/m2). The reduced variability in the achromatic settings is consistent with the idea that internal color representations are more aligned with the unique-hue lines than the u* and v* axes. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.077 | Approved | no | ||
Call Number | Admin @ si @ CPX2014 | Serial | 2418 | ||
Permanent link to this record | |||||
Author | Qingshan Chen; Zhenzhen Quan; Yujun Li; Chao Zhai; Mikhail Mozerov | ||||
Title | An Unsupervised Domain Adaption Approach for Cross-Modality RGB-Infrared Person Re-Identification | Type | Journal Article | ||
Year | 2023 | Publication | IEEE Sensors Journal | Abbreviated Journal | IEEE-SENS |
Volume | 23 | Issue | 24 | Pages | |
Keywords | |||||
Abstract | Dual-camera systems commonly employed in surveillance serve as the foundation for RGB-infrared (IR) cross-modality person re-identification (ReID). However, significant modality differences give rise to inferior performance compared to single-modality scenarios. Furthermore, most existing studies in this area rely on supervised training with meticulously labeled datasets. Labeling RGB-IR image pairs is more complex than labeling conventional image data, and deploying pretrained models on unlabeled datasets can lead to catastrophic performance degradation. In contrast to previous solutions that focus solely on cross-modality or domain adaptation issues, this article presents an end-to-end unsupervised domain adaptation (UDA) framework for the cross-modality person ReID, which can simultaneously address both of these challenges. This model employs source domain classes, target domain clusters, and unclustered instance samples for the training, maximizing the comprehensive use of the dataset. Moreover, it addresses the problem of mismatched clustering labels between the two modalities in the target domain by incorporating a label matching module that reassigns reliable clusters with labels, ensuring correspondence between different modality labels. We construct the loss function by incorporating distinctiveness loss and multiplicity loss, both of which are determined by the similarity of neighboring features in the predicted feature space and the difference between distant features. This approach enables efficient feature clustering and cluster class assignment to occur concurrently. Eight UDA cross-modality person ReID experiments are conducted on three real datasets and six synthetic datasets. The experimental results unequivocally demonstrate that the proposed model outperforms the existing state-of-the-art algorithms to a significant degree. Notably, in RegDB → RegDB_light, the Rank-1 accuracy exhibits a remarkable improvement of 8.24%. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP | Approved | no | ||
Call Number | Admin @ si @ CQL2023 | Serial | 3884 | ||
Permanent link to this record | |||||
Author | Hannes Mueller; Andre Groeger; Jonathan Hersh; Andrea Matranga; Joan Serrat | ||||
Title | Monitoring war destruction from space using machine learning | Type | Journal Article | ||
Year | 2021 | Publication | Proceedings of the National Academy of Sciences of the United States of America | Abbreviated Journal | PNAS |
Volume | 118 | Issue | 23 | Pages | e2025400118 |
Keywords | |||||
Abstract | Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection, which makes it generally scarce, incomplete, and potentially biased. This lack of reliable data imposes severe limitations for media reporting, humanitarian relief efforts, human-rights monitoring, reconstruction initiatives, and academic studies of violent conflict. This article introduces an automated method of measuring destruction in high-resolution satellite images using deep-learning techniques combined with label augmentation and spatial and temporal smoothing, which exploit the underlying spatial and temporal structure of destruction. As a proof of concept, we apply this method to the Syrian civil war and reconstruct the evolution of damage in major cities across the country. Our approach allows generating destruction data with unprecedented scope, resolution, and frequency—and makes use of the ever-higher frequency at which satellite imagery becomes available. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ MGH2021 | Serial | 3584 | ||
Permanent link to this record | |||||
Author | Anastasios Doulamis; Nikolaos Doulamis; Marco Bertini; Jordi Gonzalez; Thomas B. Moeslund | ||||
Title | Introduction to the Special Issue on the Analysis and Retrieval of Events/Actions and Workflows in Video Streams | Type | Journal Article | ||
Year | 2016 | Publication | Multimedia Tools and Applications | Abbreviated Journal | MTAP |
Volume | 75 | Issue | 22 | Pages | 14985-14990 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; HUPBA | Approved | no | ||
Call Number | Admin @ si @ DDB2016 | Serial | 2934 | ||
Permanent link to this record | |||||
Author | Pau Rodriguez; Diego Velazquez; Guillem Cucurull; Josep M. Gonfaus; Xavier Roca; Seiichi Ozawa; Jordi Gonzalez | ||||
Title | Personality Trait Analysis in Social Networks Based on Weakly Supervised Learning of Shared Images | Type | Journal Article | ||
Year | 2020 | Publication | Applied Sciences | Abbreviated Journal | APPLSCI |
Volume | 10 | Issue | 22 | Pages | 8170 |
Keywords | sentiment analysis; personality trait analysis; weakly-supervised learning; visual classification; OCEAN model; social networks | ||||
Abstract | Social networks have attracted the attention of psychologists, as the behavior of users can be used to assess personality traits, and to detect sentiments and critical mental situations such as depression or suicidal tendencies. Recently, the increasing amount of image uploads to social networks has shifted the focus from text to image-based personality assessment. However, obtaining the ground truth requires giving personality questionnaires to the users, making the process very costly and slow, and hindering research on large populations. In this paper, we demonstrate that it is possible to predict which images are most associated with each personality trait of the OCEAN personality model, without requiring ground-truth personality labels. Namely, we present a weakly supervised framework which shows that the personality scores obtained using specific images textually associated with particular personality traits are highly correlated with scores obtained using standard text-based personality questionnaires. We trained an OCEAN trait model based on Convolutional Neural Networks (CNNs), learned from 120K pictures posted with specific textual hashtags, to infer whether the personality scores from the images uploaded by users are consistent with those scores obtained from text. In order to validate our claims, we performed a personality test on a heterogeneous group of 280 human subjects, showing that our model successfully predicts which kind of image will match a person with a given level of a trait. Looking at the results, we obtained evidence that personality is not only correlated with text, but with image content too. Interestingly, different visual patterns emerged from the images most liked by people with a particular personality trait: for instance, pictures most associated with high conscientiousness usually contained healthy food, while low-conscientiousness pictures contained injuries, guns, and alcohol. These findings could pave the way to complementing text-based personality questionnaires with image-based questions. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 600.119 | Approved | no | ||
Call Number | Admin @ si @ RVC2020b | Serial | 3553 | ||
Permanent link to this record | |||||
Author | Dorota Kaminska; Kadir Aktas; Davit Rizhinashvili; Danila Kuklyanov; Abdallah Hussein Sham; Sergio Escalera; Kamal Nasrollahi; Thomas B. Moeslund; Gholamreza Anbarjafari | ||||
Title | Two-stage Recognition and Beyond for Compound Facial Emotion Recognition | Type | Journal Article | ||
Year | 2021 | Publication | Electronics | Abbreviated Journal | ELEC |
Volume | 10 | Issue | 22 | Pages | 2847 |
Keywords | compound emotion recognition; facial expression recognition; dominant and complementary emotion recognition; deep learning | ||||
Abstract | Facial emotion recognition is an inherently complex problem due to individual diversity in facial features and racial and cultural differences. Moreover, facial expressions typically reflect the mixture of people’s emotional statuses, which can be expressed using compound emotions. Compound facial emotion recognition makes the problem even more difficult because the discrimination between dominant and complementary emotions is usually weak. We have created a database that includes 31,250 facial images with different emotions of 115 subjects whose gender distribution is almost uniform to address compound emotion recognition. In addition, we have organized a competition based on the proposed dataset, held at FG workshop 2020. This paper analyzes the winner’s approach—a two-stage recognition method (1st stage, coarse recognition; 2nd stage, fine recognition), which enhances the classification of symmetrical emotion labels. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ KAR2021 | Serial | 3642 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Jordi Gonzalez; Hugo Jair Escalante; Xavier Baro; Isabelle Guyon | ||||
Title | Looking at People Special Issue | Type | Journal Article | ||
Year | 2018 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 126 | Issue | 2-4 | Pages | 141-143 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; ISE; 600.119 | Approved | no | ||
Call Number | Admin @ si @ EGJ2018 | Serial | 3093 | ||
Permanent link to this record | |||||
Author | Arjan Gijsenij; Theo Gevers; Joost Van de Weijer | ||||
Title | Generalized Gamut Mapping using Image Derivative Structures for Color Constancy | Type | Journal Article | ||
Year | 2010 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 86 | Issue | 2-3 | Pages | 127-139 |
Keywords | |||||
Abstract | The gamut mapping algorithm is one of the most promising methods to achieve computational color constancy. However, so far, gamut mapping algorithms are restricted to the use of pixel values to estimate the illuminant. Therefore, in this paper, gamut mapping is extended to incorporate the statistical nature of images. It is analytically shown that the proposed gamut mapping framework is able to include any linear filter output. The main focus is on the local n-jet describing the derivative structure of an image. It is shown that derivatives have the advantage over pixel values of being invariant to disturbing effects (i.e. deviations of the diagonal model) such as saturated colors and diffuse light. Further, as the n-jet based gamut mapping has the ability to use more information than pixel values alone, the combination of these algorithms is more stable than the regular gamut mapping algorithm. Different methods of combining are proposed. Based on theoretical and experimental results conducted on large-scale data sets of hyperspectral, laboratory and real-world scenes, it can be derived that (1) in case of deviations of the diagonal model, the derivative-based approach outperforms the pixel-based gamut mapping, (2) state-of-the-art algorithms are outperformed by the n-jet based gamut mapping, (3) the combination of the different n-jet based gamut | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Kluwer Academic Publishers Hingham, MA, USA | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0920-5691 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | CAT @ cat @ GGW2010 | Serial | 1274 | ||
Permanent link to this record |