Author Naila Murray; Luca Marchesotti; Florent Perronnin
Title Learning to Rank Images using Semantic and Aesthetic Labels Type Conference Article
Year 2012 Publication 23rd British Machine Vision Conference Abbreviated Journal
Volume Issue Pages 110.1-110.10
Keywords
Abstract Most works on image retrieval from text queries have addressed the problem of retrieving semantically relevant images. However, the ability to assess the aesthetic quality of an image is an increasingly important differentiating factor for search engines. In this work, given a semantic query, we are interested in retrieving images which are semantically relevant and score highly in terms of aesthetics/visual quality. We use large-margin classifiers and rankers to learn statistical models capable of ordering images based on the aesthetic and semantic information. In particular, we compare two families of approaches: while the first one attempts to learn a single ranker which takes into account both semantic and aesthetic information, the second one learns separate semantic and aesthetic models. We carry out a quantitative and qualitative evaluation on a recently-published large-scale dataset and we show that the second family of techniques significantly outperforms the first one.
Address Guildford, UK
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 1-901725-46-4 Medium
Area Expedition Conference BMVC
Notes CIC Approved no
Call Number Admin @ si @ MMP2012b Serial 2027
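The record above describes large-margin rankers trained on semantic and aesthetic labels. A minimal sketch of the underlying pairwise-ranking idea follows; the feature dimensions, simulated quality scores and the use of LinearSVC are illustrative assumptions, not the authors' implementation.

```python
# Minimal pairwise-ranking sketch (RankSVM-style); NOT the authors' code.
# A pair (x_i, x_j) where x_i is preferred becomes a training example with
# feature vector (x_i - x_j) and label +1, so a linear SVM on differences
# yields a scoring function w.x that tends to rank preferred images higher.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # toy image features (e.g. Fisher vectors)
quality = X @ rng.normal(size=64)         # hidden "aesthetic" score, for simulation only

pairs = rng.integers(0, len(X), size=(500, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]            # drop self-pairs
diffs = X[pairs[:, 0]] - X[pairs[:, 1]]
labels = np.sign(quality[pairs[:, 0]] - quality[pairs[:, 1]])

ranker = LinearSVC(C=1.0).fit(diffs, labels)
scores = X @ ranker.coef_.ravel()                    # higher score = ranked higher
print("rank correlation with hidden quality:", round(np.corrcoef(scores, quality)[0, 1], 3))
```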
 

 
Author Antonio Lopez; J. Hilgenstock; A. Busse; Ramon Baldrich; Felipe Lumbreras; Joan Serrat
Title Nighttime Vehicle Detection for Intelligent Headlight Control Type Conference Article
Year 2008 Publication Advanced Concepts for Intelligent Vision Systems, 10th International Conference, Proceedings Abbreviated Journal
Volume 5259 Issue Pages 113–124
Keywords Intelligent Headlights; vehicle detection
Abstract
Address Juan-les-Pins, France
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACIVS
Notes ADAS;CIC Approved no
Call Number ADAS @ adas @ LHB2008a Serial 1098
 

 
Author Jaime Moreno; Xavier Otazu; Maria Vanrell
Title Contribution of CIWaM in JPEG2000 Quantization for Color Images Type Conference Article
Year 2010 Publication Proceedings of The CREATE 2010 Conference Abbreviated Journal
Volume Issue Pages 132–136
Keywords
Abstract The aim of this work is to explain how to apply perceptual concepts to define a perceptual pre-quantizer and to improve the JPEG2000 compressor. The approach consists of quantizing wavelet transform coefficients using some of the behavioural properties of the human visual system. Noise is fatal to image compression performance, because it is both annoying for the observer and consumes excessive bandwidth when the imagery is transmitted. Perceptual pre-quantization reduces unperceivable details and thus improves both visual impression and transmission properties. The comparison between JPEG2000 without and with perceptual pre-quantization shows that the latter does not perform favourably in terms of PSNR, but the recovered image is more compressed at the same or even better visual quality, as measured with a weighted PSNR. Perceptual criteria were taken from CIWaM (the Chromatic Induction Wavelet Model).
Address Gjovik (Norway)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CREATE
Notes CIC Approved no
Call Number CAT @ cat @ MOV2010b Serial 1308
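The record above proposes a perceptual pre-quantizer that quantizes wavelet coefficients before JPEG2000 encoding. The sketch below only illustrates the generic idea of scale-dependent quantization of wavelet detail bands; the per-scale weights are arbitrary placeholders and do not reproduce CIWaM.

```python
# Generic sketch of perceptual pre-quantization of wavelet coefficients.
# The per-scale weights below are arbitrary placeholders, NOT the CIWaM weights.
import numpy as np
import pywt

img = np.random.rand(256, 256)                       # stand-in for one colour channel
coeffs = pywt.wavedec2(img, "db4", level=4)          # [cA4, (cH4,cV4,cD4), ..., (cH1,cV1,cD1)]

weights = [1.0, 0.9, 0.7, 0.4]                       # hypothetical visibility per scale (coarse to fine)
step = 0.02                                          # base quantization step

quantized = [coeffs[0]]                              # keep the approximation band untouched
for details, w in zip(coeffs[1:], weights):
    q = step / w                                     # less visible scales get a coarser step
    quantized.append(tuple(np.round(d / q) * q for d in details))

recon = pywt.waverec2(quantized, "db4")[:img.shape[0], :img.shape[1]]
print("max absolute quantization error:", round(float(np.abs(recon - img).max()), 4))
```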
 

 
Author Jordi Roca; C. Alejandro Parraga; Maria Vanrell
Title Predicting categorical colour perception in successive colour constancy Type Abstract
Year 2012 Publication Perception Abbreviated Journal PER
Volume 41 Issue Pages 138
Keywords
Abstract Colour constancy is a perceptual mechanism that seeks to keep the colour of objects relatively stable under an illumination shift. Experiments have shown that its effects depend on the number of colours present in the scene. We studied categorical colour changes under different adaptation states, in particular, whether the colour categories seen under a chromatically neutral illuminant are the same after a shift in the chromaticity of the illumination. To do this, we developed the chromatic setting paradigm (2011, Journal of Vision, 11, 349), which is an extension of achromatic setting to colour categories. The paradigm exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. Our experiments were run on a CRT monitor (inside a dark room) under various simulated illuminants, restricting the number of colours of the Mondrian background to three and thus weakening the adaptation effect. Our results show a change in the colour categories present before (under neutral illumination) and after adaptation (under coloured illuminants), with a tendency for adapted colours to be less saturated than before adaptation. This behaviour was predicted by a simple affine matrix model, adjusted to the chromatic setting results.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0301-0066 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ RPV2012 Serial 2188
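The record above reports that the categorical shifts were predicted by a simple affine matrix model adjusted to the chromatic setting results. A minimal sketch of fitting such an affine map by least squares follows, on synthetic data; it is not the authors' model or data.

```python
# Least-squares fit of an affine map between colours set under a neutral
# illuminant and the corresponding settings after adaptation (toy data only).
import numpy as np

rng = np.random.default_rng(1)
pre = rng.uniform(-40, 40, size=(9, 3))                  # e.g. focal colours in Lab, neutral illuminant
true_M = np.eye(3) + 0.1 * rng.normal(size=(3, 3))       # simulated "adaptation" transform
true_t = np.array([2.0, -3.0, 1.5])
post = pre @ true_M.T + true_t + 0.5 * rng.normal(size=pre.shape)

A = np.hstack([pre, np.ones((len(pre), 1))])             # augment with 1 for the translation term
params, *_ = np.linalg.lstsq(A, post, rcond=None)        # rows 0-2: M transposed, row 3: translation
M_hat, t_hat = params[:3].T, params[3]

pred = pre @ M_hat.T + t_hat
print("mean prediction error:", round(float(np.linalg.norm(pred - post, axis=1).mean()), 3))
```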
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Sadiq Ali; Michael Felsberg
Title Evaluating the impact of color on texture recognition Type Conference Article
Year 2013 Publication 15th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 8047 Issue Pages 154-162
Keywords Color; Texture; image representation
Abstract State-of-the-art texture descriptors typically operate on grey scale images while ignoring color information. A common way to obtain a joint color-texture representation is to combine the two visual cues at the pixel level. However, such an approach provides sub-optimal results for the texture categorisation task. In this paper we investigate how to optimally exploit color information for texture recognition. We evaluate a variety of color descriptors, popular in image classification, for texture categorisation. In addition we analyze different fusion approaches to combine color and texture cues. Experiments are conducted on the challenging scenes and 10-class texture datasets. Our experiments clearly suggest that in all cases color names provide the best performance. Late fusion is the best strategy to combine color and texture. Selecting the best color descriptor with the optimal fusion strategy provides a gain of 5% to 8% compared to texture alone on the scenes and texture datasets.
Address York; UK; August 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-40260-9 Medium
Area Expedition Conference CAIP
Notes CIC; 600.048 Approved no
Call Number Admin @ si @ KWA2013 Serial 2263
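The record above concludes that late fusion of colour and texture cues works best. The sketch below illustrates score-level late fusion with two linear classifiers on toy features; the descriptors, classifier choice and data are assumptions, not the paper's pipeline.

```python
# Score-level late fusion sketch: separate classifiers on colour and texture
# descriptors, fused by summing their decision scores.  Toy features only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=600)                            # 10 texture classes
texture = rng.normal(size=(600, 128)) + y[:, None] * 0.5     # toy texture descriptor (e.g. a BoW histogram)
colour = rng.normal(size=(600, 11)) + y[:, None] * 0.5       # toy colour-name histogram (11 bins)

tr, te = train_test_split(np.arange(600), test_size=0.3, random_state=0)
clf_t = LinearSVC().fit(texture[tr], y[tr])
clf_c = LinearSVC().fit(colour[tr], y[tr])

fused = clf_t.decision_function(texture[te]) + clf_c.decision_function(colour[te])
pred = fused.argmax(axis=1)                                  # class indices match sorted labels 0..9
print("late-fusion accuracy:", round(float((pred == y[te]).mean()), 3))
```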
 

 
Author Jordi Roca; A.Owen; G.Jordan; Y.Ling; C. Alejandro Parraga; A.Hurlbert
Title Inter-individual Variations in Color Naming and the Structure of 3D Color Space Type Abstract
Year 2011 Publication Journal of Vision Abbreviated Journal VSS
Volume 12 Issue 2 Pages 166
Keywords
Abstract
Many everyday behavioural uses of color vision depend on color naming ability, which is neither measured nor predicted by most standardized tests of color vision, for either normal or anomalous color vision. Here we demonstrate a new method to quantify color naming ability by deriving a compact computational description of individual 3D color spaces. Methods: Individual observers underwent standardized color vision diagnostic tests (including anomaloscope testing) and a series of custom-made color naming tasks using 500 distinct color samples, either CRT stimuli (“light”-based) or Munsell chips (“surface”-based), with both forced- and free-choice color naming paradigms. For each subject, we defined his/her color solid as the set of 3D convex hulls computed for each basic color category from the relevant collection of categorised points in perceptually uniform CIELAB space. From the parameters of the convex hulls, we derived several indices to characterise the 3D structure of the color solid and its inter-individual variations. Using a reference group of 25 normal trichromats (NT), we defined the degree of normality for the shape, location and overlap of each color region, and the extent of “light”-“surface” agreement. Results: Certain features of color perception emerge from analysis of the average NT color solid, e.g.: (1) the white category is slightly shifted towards blue; and (2) the variability in category border location across NT subjects is asymmetric across color space, with least variability in the blue/green region. Comparisons between individual and average NT indices reveal specific naming “deficits”, e.g.: (1) Category volumes for white, green, brown and grey are expanded for anomalous trichromats and dichromats; and (2) the focal structure of color space is disrupted more in protanopia than other forms of anomalous color vision. The indices both capture the structure of subjective color spaces and allow us to quantify inter-individual differences in color naming ability.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1534-7362 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ ROJ2011 Serial 1758
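The record above defines each observer's colour solid as a set of 3D convex hulls of categorised points in CIELAB. The sketch below shows how such a hull, its volume and a centroid can be computed for one category with SciPy; the sample points are synthetic.

```python
# Characterising one colour category as the 3-D convex hull of its categorised
# samples in CIELAB: hull volume and centroid (synthetic sample points).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
green_samples = rng.normal(loc=[60.0, -50.0, 40.0], scale=[8.0, 6.0, 6.0], size=(80, 3))  # toy L*a*b* points

hull = ConvexHull(green_samples)
centroid = green_samples[hull.vertices].mean(axis=0)
print("category volume (Lab units^3):", round(hull.volume, 1))
print("category centroid (L*, a*, b*):", centroid.round(1))
```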
 

 
Author Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin
Title Towards Automatic Concept Transfer Type Conference Article
Year 2011 Publication Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering Abbreviated Journal
Volume Issue Pages 167-176
Keywords chromatic modeling, color concepts, color transfer, concept transfer
Abstract This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.
Address
Corporate Author Thesis
Publisher ACM Press Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-0907-3 Medium
Area Expedition Conference NPAR
Notes CIC Approved no
Call Number Admin @ si @ MSM2011 Serial 1866
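The record above uses the Earth Mover's Distance to map the chromatic model of an input image onto that of a target concept. The sketch below computes the EMD and the optimal flow between two toy cluster signatures by solving the transportation linear program; the cluster centres and weights are invented, and the paper's convex-clustering models are not reproduced.

```python
# Earth Mover's Distance between two colour-cluster "signatures" (centres +
# weights) via the transportation LP.  Cluster values are invented; the
# paper's convex-clustering chromatic models are not reproduced here.
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

src_centres = np.array([[55.0, 10.0, 20.0], [70.0, -5.0, 5.0]])                     # input-image clusters (Lab)
src_weights = np.array([0.6, 0.4])
dst_centres = np.array([[45.0, 30.0, 10.0], [65.0, 15.0, 25.0], [80.0, 0.0, 0.0]])  # concept clusters (Lab)
dst_weights = np.array([0.3, 0.5, 0.2])

C = cdist(src_centres, dst_centres)                  # ground distances between cluster centres
m, n = C.shape

# Transportation constraints: flow row sums = src_weights, column sums = dst_weights.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([src_weights, dst_weights])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
flow = res.x.reshape(m, n)                           # optimal mapping between clusters
print("EMD:", round(res.fun, 3))
print("flow matrix:\n", flow.round(3))
```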
 

 
Author Javier Vazquez; C. Alejandro Parraga; Maria Vanrell
Title Ordinal pairwise method for natural images comparison Type Journal Article
Year 2009 Publication Perception Abbreviated Journal PER
Volume 38 Issue Pages 180
Keywords
Abstract
We developed a new psychophysical method to compare different colour appearance models when applied to natural scenes. The method was as follows: two images (processed by different algorithms) were displayed on a CRT monitor and observers were asked to select the most natural of them. The original images were gathered by means of a calibrated trichromatic digital camera and presented one on top of the other on a calibrated screen. The selection was made by pressing a 6-button IR box, which allowed observers not only to choose the most natural image but also to rate their selection. The rating system allowed observers to register how much more natural their chosen image was (eg, much more, definitely more, slightly more), which gave us valuable extra information on the selection process. The results were analysed both treating the selection as a binary choice (using Thurstone's law of comparative judgement) and using the Bradley-Terry method for ordinal comparison. Our results show a significant difference in the rating scales obtained. Although this method has been used to compare colour constancy algorithms, its uses are much wider, eg to compare algorithms for image compression, rendering, recolouring, etc.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number CAT @ cat @ VPV2009b Serial 1191
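The record above analyses paired "more natural" judgements with Thurstone's law and the Bradley-Terry model. The sketch below fits Bradley-Terry strengths from a toy win-count matrix using the classic iterative (minorisation-maximisation) update; the counts are invented.

```python
# Bradley-Terry strengths from pairwise "more natural" counts, fitted with the
# classic iterative (minorisation-maximisation) update.  Win counts are toy data.
import numpy as np

# wins[i, j] = number of times algorithm i was preferred over algorithm j
wins = np.array([[0., 12., 15.],
                 [8.,  0., 11.],
                 [5.,  9.,  0.]])
n = len(wins)
p = np.ones(n)                                       # initial strengths

for _ in range(200):
    denom = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                denom[i] += (wins[i, j] + wins[j, i]) / (p[i] + p[j])
    p = wins.sum(axis=1) / denom
    p /= p.sum()                                     # fix the arbitrary scale

print("Bradley-Terry strengths:", p.round(3))
```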
 

 
Author C. Alejandro Parraga; Javier Vazquez; Maria Vanrell
Title A new cone activation-based natural images dataset Type Journal Article
Year 2009 Publication Perception Abbreviated Journal PER
Volume 36 Issue Pages 180
Keywords
Abstract We generated a new dataset of digital natural images where each colour plane corresponds to the human LMS (long-, medium-, short-wavelength) cone activations. The images were chosen to represent five different visual environments (eg forest, seaside, mountain snow, urban, motorways) and were taken under natural illumination at different times of day. At the bottom-left corner of each picture there was a matte grey ball of approximately constant spectral reflectance (across the camera's response spectrum) and nearly Lambertian reflective properties, which allows the illuminant's colour and intensity to be computed (and removed, if necessary). The camera (Sigma Foveon SD10) was calibrated by measuring its sensor's spectral responses using a set of 31 spectrally narrowband interference filters. This allowed conversion of the final camera-dependent RGB colour space into the Smith and Pokorny (1975) cone activation space by means of a polynomial transformation, optimised for a set of 1269 Munsell chip reflectances. This new method is an improvement over the usual 3 × 3 matrix transformation, which is only accurate for spectrally narrowband colours. The camera-to-LMS transformation can be recalculated to consider other non-human visual systems. The dataset is available to download from our website.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number CAT @ cat @ PVV2009 Serial 1193
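The record above converts camera RGB to Smith-Pokorny LMS cone activations through a polynomial transformation optimised over Munsell reflectances. The sketch below fits a generic second-order polynomial mapping by least squares; the training colours are random placeholders and the chosen polynomial order is an assumption.

```python
# Fitting a polynomial camera-RGB -> LMS mapping by least squares.  The training
# colours are random placeholders (not the Munsell set) and the second-order
# expansion is an assumption about the polynomial that was used.
import numpy as np

def poly_terms(rgb):
    r, g, b = np.atleast_2d(rgb).T
    return np.stack([np.ones_like(r), r, g, b, r*r, g*g, b*b, r*g, r*b, g*b], axis=1)

rng = np.random.default_rng(3)
rgb = rng.uniform(0.0, 1.0, size=(1269, 3))                            # stand-in for the calibration colours
true_map = rng.normal(size=(10, 3))
lms = poly_terms(rgb) @ true_map + 0.001 * rng.normal(size=(1269, 3))  # simulated "measured" LMS

coeffs, *_ = np.linalg.lstsq(poly_terms(rgb), lms, rcond=None)         # 10x3 polynomial coefficients

def rgb_to_lms(rgb):
    return poly_terms(rgb) @ coeffs

print(rgb_to_lms(np.array([0.2, 0.5, 0.7])).round(4))
```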
 

 
Author Jordi Roca; C. Alejandro Parraga; Maria Vanrell
Title Categorical Focal Colours are Structurally Invariant Under Illuminant Changes Type Conference Article
Year 2011 Publication European Conference on Visual Perception Abbreviated Journal
Volume Issue Pages 196
Keywords
Abstract The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical “focal” colours and their interrelations is under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of the colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003, Proceedings of the National Academy of Sciences of the USA, 100, 9085–9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adaptation state as a 3D interconnected structure (graph) in Lab space. The graph nodes were the subject's focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better than the structures of the physical “ideal” colours under illuminant changes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Perception 40 Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECVP
Notes CIC Approved no
Call Number Admin @ si @ RPV2011 Serial 1867
 

 
Author Olivier Penacchio; C. Alejandro Parraga
Title What is the best criterion for an efficient design of retinal photoreceptor mosaics? Type Journal Article
Year 2011 Publication Perception Abbreviated Journal PER
Volume 40 Issue Pages 197
Keywords
Abstract The proportions of L, M and S photoreceptors in the primate retina are arguably determined by evolutionary pressure and the statistics of the visual environment. Two information theory-based approaches have been recently proposed for explaining the asymmetrical spatial densities of photoreceptors in humans. In the first approach Garrigan et al (2010 PLoS ONE 6 e1000677), a model for computing the information transmitted by cone arrays which considers the differential blurring produced by the long-wavelength accommodation of the eye’s lens is proposed. Their results explain the sparsity of S-cones but the optimum depends weakly on the L:M cone ratio. In the second approach (Penacchio et al, 2010 Perception 39 ECVP Supplement, 101), we show that human cone arrays make the visual representation scale-invariant, allowing the total entropy of the signal to be preserved while decreasing individual neurons’ entropy in further retinotopic representations. This criterion provides a thorough description of the distribution of L:M cone ratios and does not depend on differential blurring of the signal by the lens. Here, we investigate the similarities and differences of both approaches when applied to the same database. Our results support a 2-criteria optimization in the space of cone ratios whose components are arguably important and mostly unrelated.
[This work was partially funded by projects TIN2010-21771-C02-1 and Consolider-Ingenio 2010-CSD2007-00018 from the Spanish MICINN. CAP was funded by grant RYC-2007-00484]
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ PeP2011a Serial 1719
 

 
Author Alicia Fornes; Xavier Otazu; Josep Llados
Title Show through cancellation and image enhancement by multiresolution contrast processing Type Conference Article
Year 2013 Publication 12th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 200-204
Keywords
Abstract Historical documents suffer from different types of degradation and noise, such as background variation, uneven illumination or dark spots. In the case of double-sided documents, another common problem is that the back side of the document usually interferes with the front side because of the transparency of the document or ink bleeding. This effect is called the show-through phenomenon. Many methods have been developed to solve these problems and, in the case of show-through, they typically do so by scanning and matching both the front and back sides of the document. In contrast, our approach is designed to use only one side of the scanned document. We hypothesize that show-through components have low contrast, while foreground components have high contrast. A Multiresolution Contrast (MC) decomposition is presented in order to estimate the contrast of features at different spatial scales. We cancel the show-through phenomenon by thresholding these low-contrast components. This decomposition is also able to enhance the image by removing shadowed areas through a weighting of spatial scales. Results show that the enhanced images improve the readability of the documents, allowing scholars both to recover unreadable words and to resolve ambiguities.
Address Washington; USA; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1520-5363 ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 602.006; 600.045; 600.061; 600.052;CIC Approved no
Call Number Admin @ si @ FOL2013 Serial 2241
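The record above suppresses show-through by thresholding low-contrast components of a Multiresolution Contrast decomposition. The sketch below uses ordinary wavelet detail thresholding as a rough stand-in for that idea; it is only an analogy, not the paper's MC decomposition.

```python
# Rough stand-in for suppressing low-contrast (show-through) components:
# hard-threshold small wavelet detail coefficients and reconstruct.  This is
# only an analogy, NOT the paper's Multiresolution Contrast decomposition.
import numpy as np
import pywt

page = np.random.rand(512, 512)                      # stand-in for a scanned grey-level page
coeffs = pywt.wavedec2(page, "haar", level=5)

thresh = 0.1                                         # hypothetical contrast threshold
cleaned = [coeffs[0]]                                # keep the coarse approximation
for details in coeffs[1:]:
    cleaned.append(tuple(pywt.threshold(d, thresh, mode="hard") for d in details))

restored = pywt.waverec2(cleaned, "haar")[:page.shape[0], :page.shape[1]]
kept = np.mean([np.mean(np.abs(d) > thresh) for level in coeffs[1:] for d in level])
print("fraction of detail coefficients kept:", round(float(kept), 3), "| restored shape:", restored.shape)
```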
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Antonio Lopez; Michael Felsberg
Title Coloring Action Recognition in Still Images Type Journal Article
Year 2013 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 105 Issue 3 Pages 205-221
Keywords
Abstract In this article we investigate the problem of human action recognition in static images. By action recognition we mean a class of problems which includes both action classification and action detection (i.e. simultaneous localization and classification). Bag-of-words image representations yield promising results for action classification, and deformable part models perform very well for object detection. The representations for action recognition typically use only shape cues and ignore color information. Inspired by the recent success of color in image classification and object detection, we investigate the potential of color for action classification and detection in static images. We perform a comprehensive evaluation of color descriptors and fusion approaches for action recognition. Experiments were conducted on the three datasets most used for benchmarking action recognition in still images: Willow, PASCAL VOC 2010 and Stanford-40. Our experiments demonstrate that incorporating color information considerably improves recognition performance, and that a descriptor based on color names outperforms pure color descriptors. They further demonstrate that late fusion of color and shape information outperforms other approaches to action recognition. Finally, we show that the different color–shape fusion approaches result in complementary information and combining them yields state-of-the-art performance for action classification.
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0920-5691 ISBN Medium
Area Expedition Conference
Notes CIC; ADAS; 600.057; 600.048 Approved no
Call Number Admin @ si @ KRW2013 Serial 2285
 

 
Author Olivier Penacchio; Laura Dempere-Marco; Xavier Otazu
Title Switching off brightness induction through induction-reversed images Type Abstract
Year 2012 Publication Perception Abbreviated Journal PER
Volume 41 Issue Pages 208
Keywords
Abstract Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Although V1 is traditionally regarded as an area mostly responsive to retinal information, neurophysiological evidence suggests that it may explicitly represent brightness information. In this work, we investigate possible neural mechanisms underlying brightness induction. To this end, we consider the model by Z Li (1999, Network: Computation in Neural Systems, 10, 187-212), which is constrained by neurophysiological data and focuses on the part of V1 responsible for contextual influences. This model, which has proven to account for phenomena such as contour detection and preattentive segmentation, shares with brightness induction the relevant effect of contextual influences. Importantly, the input to our network model derives from a complete multiscale and multiorientation wavelet decomposition, which makes it possible to recover an image reflecting the perceived luminance and successfully accounts for well known psychophysical effects for both static and dynamic contexts. By further considering inverse problem techniques we define induction-reversed images: given a target image, we build an image whose perceived luminance matches the actual luminance of the original stimulus, thus effectively canceling out brightness induction effects. We suggest that induction-reversed images may help remove undesired perceptual effects and can find potential applications in fields such as radiological image interpretation.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ PDO2012a Serial 2180
 

 
Author C. Alejandro Parraga; Robert Benavente; Maria Vanrell; Ramon Baldrich
Title Modelling Inter-Colour Regions of Colour Naming Space Type Conference Article
Year 2008 Publication 4th European Conference on Colour in Graphics, Imaging and Vision Proceedings Abbreviated Journal
Volume Issue Pages 218–222
Keywords
Abstract
Address Terrassa (Spain)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CGIV08
Notes CAT;CIC Approved no
Call Number CAT @ cat @ PBV2008 Serial 969
 

 
Author Ivet Rafegas; Maria Vanrell
Title Color spaces emerging from deep convolutional networks Type Conference Article
Year 2016 Publication 24th Color and Imaging Conference Abbreviated Journal
Volume Issue Pages 225-230
Keywords
Abstract Award for the best interactive session
Defining color spaces that provide a good encoding of spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Related to this, in computer vision the fusion of color with local image features has been studied and evaluated [16]. In human vision research, the cells which are selective to specific color hues along the visual pathway are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) that has been trained on more than one million natural images for object recognition. These convolutional nets achieve impressive performance in computer vision and rival the representations in the human brain. In this paper we explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers the activation of a neuron is related to a spatial filter that combines spatio-chromatic representations. We use an inverted version of these filters to explore their properties. Using a series of unsupervised methods we classify different types of neurons depending on the color axes they define, and we propose an index of color selectivity of a neuron. We estimate the main color axes that emerge from this trained net and we show that the color selectivity of neurons decreases from early to deeper layers.
Address San Diego; USA; November 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CIC
Notes CIC Approved no
Call Number Admin @ si @ RaV2016a Serial 2894
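The record above proposes an index of colour selectivity for neurons in a trained CNN. The paper's index is not reproduced here; as a purely hypothetical proxy, the sketch below measures how much of a first-layer filter's energy is chromatic rather than achromatic, on random placeholder weights.

```python
# Hypothetical proxy for a colour-selectivity index of first-layer filters: the
# fraction of filter energy NOT explained by its achromatic (channel-averaged)
# part.  Random placeholder weights; this is NOT the index defined in the paper.
import numpy as np

rng = np.random.default_rng(4)
filters = rng.normal(size=(96, 3, 11, 11))           # e.g. conv1 weights: (filters, RGB, height, width)

achromatic = filters.mean(axis=1, keepdims=True)     # best purely achromatic approximation
chromatic_residual = filters - achromatic
index = (chromatic_residual**2).sum(axis=(1, 2, 3)) / (filters**2).sum(axis=(1, 2, 3))

print("most colour-selective filters (by this proxy):", np.argsort(index)[-5:])
```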
 

 
Author C. Alejandro Parraga; Jordi Roca; Dimosthenis Karatzas; Sophie Wuerger
Title Limitations of visual gamma corrections in LCD displays Type Journal Article
Year 2014 Publication Displays Abbreviated Journal Dis
Volume 35 Issue 5 Pages 227–239
Keywords Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration
Abstract A method for estimating the non-linear gamma transfer function of liquid-crystal displays (LCDs) without the need of a photometric measurement device was described by Xiao et al. (2011) [1]. It relies on observers' judgments of visual luminance, obtained by presenting eight half-tone patterns with luminances from 1/9 to 8/9 of the maximum value of each colour channel. These half-tone patterns were distributed over the screen along both the vertical and horizontal viewing axes. We conducted a series of photometric and psychophysical measurements (consisting of the simultaneous presentation of half-tone patterns in each trial) to evaluate whether the angular dependency of the light generated by three different LCD technologies would bias the results of these gamma transfer function estimations. Our results show that there are significant differences between the gamma transfer functions measured and produced by observers at different viewing angles. We suggest appropriate modifications to the Xiao et al. paradigm to counterbalance these artefacts, which also have the advantage of shortening the time spent collecting the psychophysical measurements.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC; DAG; 600.052; 600.077; 600.074 Approved no
Call Number Admin @ si @ PRK2014 Serial 2511
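The record above evaluates visual estimation of the display gamma transfer function from half-tone patterns at 1/9 to 8/9 of the maximum level. The sketch below shows the standard photometric counterpart: fitting L(v) = Lmax * v^gamma to measured luminances by log-log least squares; the luminance values are made up.

```python
# Standard photometric gamma fit (not the Xiao et al. visual procedure):
# fit L(v) = L_max * v**gamma to measured luminances by log-log least squares.
import numpy as np

v = np.arange(1, 9) / 9.0                            # the 1/9 .. 8/9 digital levels
L_measured = np.array([0.9, 3.1, 7.0, 12.8, 20.9, 31.5, 44.7, 60.5])  # made-up luminances in cd/m^2

slope, intercept = np.polyfit(np.log(v), np.log(L_measured), 1)
gamma, L_max = slope, np.exp(intercept)
print(f"estimated gamma = {gamma:.2f}, L_max = {L_max:.1f} cd/m^2")
```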
 

 
Author Alicia Fornes; Josep Llados; Gemma Sanchez; Xavier Otazu; Horst Bunke
Title A Combination of Features for Symbol-Independent Writer Identification in Old Music Scores Type Journal Article
Year 2010 Publication International Journal on Document Analysis and Recognition Abbreviated Journal IJDAR
Volume 13 Issue 4 Pages 243-259
Keywords
Abstract The aim of writer identification is to determine the writer of a piece of handwriting from a set of writers. In this paper, we present an architecture for writer identification in old handwritten music scores. Even though a significant number of music compositions contain handwritten text, the aim of our work is to use only the music notation to determine the author. The main contribution is therefore the use of features extracted from graphical alphabets. Our proposal consists of combining the identification results of two different approaches, based on line and textural features. The steps of the ensemble architecture are the following. First of all, the music sheet is preprocessed to remove the staff lines. Then, music lines and texture images are generated for computing line features and textural features. Finally, the classification results are combined for identifying the writer. The proposed method has been tested on a database of old music scores from the seventeenth to nineteenth centuries, achieving a recognition rate of about 92% with 20 writers.
Address
Corporate Author Thesis
Publisher Springer-Verlag Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1433-2833 ISBN Medium
Area Expedition Conference
Notes DAG; CAT;CIC Approved no
Call Number FLS2010b Serial 1319