Records
Author | Olivier Penacchio; C. Alejandro Parraga | ||||
Title | What is the best criterion for an efficient design of retinal photoreceptor mosaics? | Type | Journal Article | ||
Year | 2011 | Publication | Perception | Abbreviated Journal | PER |
Volume | 40 | Issue | | Pages | 197
Keywords | |||||
Abstract | The proportions of L, M and S photoreceptors in the primate retina are arguably determined by evolutionary pressure and the statistics of the visual environment. Two information-theory-based approaches have recently been proposed to explain the asymmetrical spatial densities of photoreceptors in humans. In the first approach (Garrigan et al, 2010 PLoS ONE 6 e1000677), a model is proposed for computing the information transmitted by cone arrays while accounting for the differential blurring produced by the long-wavelength accommodation of the eye’s lens. Its results explain the sparsity of S-cones, but the optimum depends only weakly on the L:M cone ratio. In the second approach (Penacchio et al, 2010 Perception 39 ECVP Supplement, 101), we show that human cone arrays make the visual representation scale-invariant, allowing the total entropy of the signal to be preserved while decreasing individual neurons’ entropy in further retinotopic representations. This criterion provides a thorough description of the distribution of L:M cone ratios and does not depend on differential blurring of the signal by the lens. Here, we investigate the similarities and differences of both approaches when applied to the same database. Our results support a two-criteria optimization in the space of cone ratios whose components are arguably important and mostly unrelated.
[This work was partially funded by projects TIN2010-21771-C02-1 and Consolider-Ingenio 2010-CSD2007-00018 from the Spanish MICINN. CAP was funded by grant RYC-2007-00484]
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ PeP2011a | Serial | 1719 | ||
Permanent link to this record | |||||
Author | C. Alejandro Parraga; Olivier Penacchio; Maria Vanrell | ||||
Title | Retinal Filtering Matches Natural Image Statistics at Low Luminance Levels | Type | Journal Article | ||
Year | 2011 | Publication | Perception | Abbreviated Journal | PER |
Volume | 40 | Issue | | Pages | 96
Keywords | |||||
Abstract | The assumption that the retina’s main objective is to provide a minimum-entropy representation to higher visual areas (ie the efficient coding principle) makes it possible to predict retinal filtering in space–time and colour (Atick, 1992 Network 3 213–251). This is achieved by considering the power spectra of natural images (which are proportional to 1/f²) and the suppression of retinal and image noise. However, most studies consider images within a limited range of lighting conditions (eg near noon), whereas the visual system’s spatial filtering depends on light intensity and the spatiochromatic properties of natural scenes depend on the time of day. Here, we explore whether the dependence of visual spatial filtering on luminance matches the changes in the power spectrum of natural scenes at different times of the day. Using human cone-activation-based naturalistic stimuli (from the Barcelona Calibrated Images Database), we show that, for a range of luminance levels, the shape of the retinal CSF reflects the slope of the power spectrum at low spatial frequencies. Accordingly, the retina implements the filtering which best decorrelates the input signal at every luminance level. This result is in line with the body of work that places efficient coding as a guiding neural principle. | ||||
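The 1/f²-type power-spectrum falloff the abstract above relies on can be estimated directly from an image. The sketch below is a minimal NumPy illustration, not the authors' pipeline; the function name and the radial-averaging details are ours. It fits the log–log slope of the radially averaged power spectrum of a grayscale image:

```python
import numpy as np

def power_spectrum_slope(image):
    """Slope of the radially averaged power spectrum in log-log coordinates.

    Natural images typically give a slope near -2 (the 1/f^2 falloff);
    white noise gives a slope near 0.
    """
    h, w = image.shape
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = (np.abs(f) ** 2).ravel()
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2).astype(int).ravel()
    keep = r < min(h, w) // 2                  # stay inside the Nyquist disc
    sums = np.bincount(r[keep], power[keep])
    counts = np.bincount(r[keep])
    radial = sums[1:] / counts[1:]             # mean power per ring, DC dropped
    freqs = np.arange(1, len(sums))
    return np.polyfit(np.log(freqs), np.log(radial), 1)[0]

# White noise as a sanity check: its spectrum is flat, so the slope is ~0.
rng = np.random.default_rng(1)
slope = power_spectrum_slope(rng.random((128, 128)))
```

On calibrated natural scenes, the same estimate restricted to low spatial frequencies would give the spectral slope the abstract compares against the CSF.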
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ PPV2011 | Serial | 1720 | ||
Permanent link to this record | |||||
Author | Olivier Penacchio | ||||
Title | Mixed Hodge Structures and Equivariant Sheaves on the Projective Plane | Type | Journal Article | ||
Year | 2011 | Publication | Mathematische Nachrichten | Abbreviated Journal | MN |
Volume | 284 | Issue | 4 | Pages | 526-542 |
Keywords | Mixed Hodge structures, equivariant sheaves, MSC (2010) Primary: 14C30, Secondary: 14F05, 14M25 | ||||
Abstract | We describe an equivalence of categories between the category of mixed Hodge structures and a category of equivariant vector bundles on a toric model of the complex projective plane which satisfy a certain semistability condition. We then apply this correspondence to define an invariant which generalizes the notion of R-split mixed Hodge structure, and give calculations for the first cohomology group of possibly non-smooth or non-complete curves of genus 0 and 1. Finally, we describe some extension groups of mixed Hodge structures in terms of equivariant extensions of coherent sheaves. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | WILEY-VCH Verlag | Place of Publication | Editor | R. Mennicken | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1522-2616 | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ Pen2011 | Serial | 1721 | ||
Permanent link to this record | |||||
Author | Carles Fernandez; Pau Baiget; Xavier Roca; Jordi Gonzalez | ||||
Title | Determining the Best Suited Semantic Events for Cognitive Surveillance | Type | Journal Article | ||
Year | 2011 | Publication | Expert Systems with Applications | Abbreviated Journal | EXSY |
Volume | 38 | Issue | 4 | Pages | 4068–4079 |
Keywords | Cognitive surveillance; Event modeling; Content-based video retrieval; Ontologies; Advanced user interfaces | ||||
Abstract | State-of-the-art systems on cognitive surveillance identify and describe complex events in selected domains, thus providing end-users with tools to easily access the contents of massive video footage. Nevertheless, as the complexity of events increases in semantics and the types of indoor/outdoor scenarios diversify, it becomes difficult to assess which events better describe the scene, and how to model them at pixel level to fulfill natural language requests. We present an ontology-based methodology that guides the identification, step-by-step modeling, and generalization of the events most relevant to a specific domain. Our approach considers three steps: (1) end-users provide textual evidence from surveilled video sequences; (2) transcriptions are analyzed top-down to build the knowledge bases for event description; and (3) the obtained models are used to generalize event detection to different image sequences from the surveillance domain. This framework produces user-oriented knowledge that improves on existing advanced interfaces for video indexing and retrieval, by determining the best suited events for video understanding according to end-users. We have conducted experiments with outdoor and indoor scenes showing thefts, chases, and vandalism, demonstrating the feasibility and generalization of this proposal. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ FBR2011a | Serial | 1722 | ||
Permanent link to this record | |||||
Author | Carles Fernandez; Pau Baiget; Xavier Roca; Jordi Gonzalez | ||||
Title | Augmenting Video Surveillance Footage with Virtual Agents for Incremental Event Evaluation | Type | Journal Article | ||
Year | 2011 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 32 | Issue | 6 | Pages | 878–889 |
Keywords | |||||
Abstract | The fields of segmentation, tracking and behavior analysis demand challenging video resources to test, in a scalable manner, complex scenarios like crowded environments or scenes with high semantic content. Nevertheless, existing public databases cannot scale the number of agents present, which would be useful to study long-term occlusions and crowds. Moreover, creating these resources is expensive and often too particularized to specific needs. We propose an augmented reality framework to increase the complexity of image sequences in terms of occlusions and crowds, in a scalable and controllable manner. Existing datasets can be extended with augmented sequences containing virtual agents. Such sequences are automatically annotated, thus facilitating evaluation in terms of segmentation, tracking, and behavior recognition. To easily specify the desired contents, we propose a natural language interface that converts input sentences into virtual agent behaviors. Experimental tests and validation in indoor, street, and soccer environments are provided to show the feasibility of the proposed approach in terms of robustness, scalability, and semantics. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ FBR2011b | Serial | 1723 | ||
Permanent link to this record | |||||
Author | Arjan Gijsenij; Theo Gevers | ||||
Title | Color Constancy Using Natural Image Statistics and Scene Semantics | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 33 | Issue | 4 | Pages | 687-698 |
Keywords | |||||
Abstract | Existing color constancy methods are all based on specific assumptions such as the spatial and spectral characteristics of images. As a consequence, no algorithm can be considered as universal. However, with the large variety of available methods, the question is how to select the method that performs best for a specific image. To achieve selection and combining of color constancy algorithms, in this paper natural image statistics are used to identify the most important characteristics of color images. Then, based on these image characteristics, the proper color constancy algorithm (or best combination of algorithms) is selected for a specific image. To capture the image characteristics, the Weibull parameterization (e.g., grain size and contrast) is used. It is shown that the Weibull parameterization is related to the image attributes to which the used color constancy methods are sensitive. An MoG-classifier is used to learn the correlation and weighting between the Weibull-parameters and the image attributes (number of edges, amount of texture, and SNR). The output of the classifier is the selection of the best performing color constancy method for a certain image. Experimental results show a large improvement over state-of-the-art single algorithms. On a data set consisting of more than 11,000 images, an increase in color constancy performance up to 20 percent (median angular error) can be obtained compared to the best-performing single algorithm. Further, it is shown that for certain scene categories, one specific color constancy algorithm can be used instead of the classifier considering several algorithms. | ||||
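A rough sketch of the Weibull parameterization of image statistics used above: fit a two-parameter Weibull to the image's gradient-magnitude distribution, giving a contrast-like scale and a grain-size-like shape parameter. The moment-based fit below (via the Gumbel distribution of log-Weibull data) and the plain gradient-magnitude input are our simplifications, not the paper's integrated-Weibull procedure:

```python
import numpy as np

def weibull_image_stats(image):
    """Moment-based Weibull fit to an image's gradient-magnitude histogram.

    For X ~ Weibull(shape=g, scale=b), log X is Gumbel-distributed with
    Var(log X) = pi^2 / (6 g^2) and E[log X] = log b - euler / g.
    Returns (b, g): b tracks contrast, g tracks grain size.
    """
    euler = 0.5772156649  # Euler-Mascheroni constant
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy).ravel()
    logm = np.log(mag[mag > 0])            # Weibull has positive support
    g = np.pi / (np.sqrt(6.0) * logm.std())
    b = np.exp(logm.mean() + euler / g)
    return b, g

rng = np.random.default_rng(0)
b, g = weibull_image_stats(rng.random((64, 64)))
```

In the paper's setting, (b, g) computed per image would be the features fed to the MoG classifier that selects the color constancy algorithm.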
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ GiG2011 | Serial | 1724 | ||
Permanent link to this record | |||||
Author | Albert Ali Salah; Theo Gevers; Nicu Sebe; Alessandro Vinciarelli | ||||
Title | Computer Vision for Ambient Intelligence | Type | Journal Article | ||
Year | 2011 | Publication | Journal of Ambient Intelligence and Smart Environments | Abbreviated Journal | JAISE |
Volume | 3 | Issue | 3 | Pages | 187-191 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ SGS2011a | Serial | 1725 | ||
Permanent link to this record | |||||
Author | Arnau Ramisa; Alex Goldhoorn; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras | ||||
Title | Combining Invariant Features and the ALV Homing Method for Autonomous Robot Navigation Based on Panoramas | Type | Journal Article | ||
Year | 2011 | Publication | Journal of Intelligent and Robotic Systems | Abbreviated Journal | JIRC |
Volume | 64 | Issue | 3-4 | Pages | 625-649 |
Keywords | |||||
Abstract | Biologically inspired homing methods, such as the Average Landmark Vector, are an interesting solution for local navigation due to their simplicity. However, they usually require modifying the environment by placing artificial landmarks in order to work reliably. In this paper we combine the Average Landmark Vector with invariant feature points automatically detected in panoramic images to overcome this limitation. The proposed approach was evaluated first in simulation and, as the results were promising, also on two datasets of panoramas from real-world environments. | ||||
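The Average Landmark Vector idea is simple enough to sketch in a few lines: average the unit vectors pointing at each detected landmark, and take the difference between the current and home ALVs as the homing direction. This toy version uses hand-picked bearings rather than invariant features from panoramas, and assumes the shared compass reference the method requires; all names are illustrative:

```python
import numpy as np

def average_landmark_vector(bearings_rad):
    """ALV: mean of the unit vectors pointing at each landmark bearing."""
    bearings = np.asarray(bearings_rad, dtype=float)
    units = np.stack([np.cos(bearings), np.sin(bearings)], axis=1)
    return units.mean(axis=0)

def homing_vector(current_bearings, home_bearings):
    """Home direction ~ ALV(current) - ALV(home), given aligned compasses."""
    return (average_landmark_vector(current_bearings)
            - average_landmark_vector(home_bearings))

# Toy example: two landmarks seen from two nearby positions.
v = homing_vector([0.1, 1.5], [0.0, 1.6])
```

In the paper's setting, the bearings would come from invariant feature points matched across panoramic images instead of artificial landmarks.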
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Netherlands | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0921-0296 | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | RV;ADAS | Approved | no | ||
Call Number | Admin @ si @ RGA2011 | Serial | 1728 | ||
Permanent link to this record | |||||
Author | Koen E.A. van de Sande; Theo Gevers; Cees G.M. Snoek | ||||
Title | Empowering Visual Categorization with the GPU | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Multimedia | Abbreviated Journal | TMM |
Volume | 13 | Issue | 1 | Pages | 60-70 |
Keywords | |||||
Abstract | Visual categorization is important to manage large collections of digital images and video, where textual meta-data is often incomplete or simply unavailable. The bag-of-words model has become the most powerful method for visual categorization of images and video. Despite its high accuracy, a severe drawback of this model is its high computational cost. As the trend to increase computational power in newer CPU and GPU architectures is to increase their level of parallelism, exploiting this parallelism becomes an important direction to handle the computational cost of the bag-of-words approach. When optimizing a system based on the bag-of-words approach, the goal is to minimize the time it takes to process batches of images. Additionally, we also consider power usage as an evaluation metric. In this paper, we analyze the bag-of-words model for visual categorization in terms of computational cost and identify two major bottlenecks: the quantization step and the classification step. We address these two bottlenecks by proposing two efficient algorithms for quantization and classification by exploiting the GPU hardware and the CUDA parallel programming model. The algorithms are designed to (1) keep categorization accuracy intact, (2) decompose the problem and (3) give the same numerical results. In the experiments on large scale datasets it is shown that, by using a parallel implementation on the Geforce GTX260 GPU, classifying unseen images is 4.8 times faster than a quad-core CPU version on the Core i7 920, while giving the exact same numerical results. In addition, we show how the algorithms can be generalized to other applications, such as text retrieval and video retrieval. Moreover, when the obtained speedup is used to process extra video frames in a video retrieval benchmark, the accuracy of visual categorization is improved by 29%. | ||||
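The quantization bottleneck identified above — assigning every local descriptor to its nearest visual word — is exactly the kind of dense arithmetic that maps well onto GPUs. The NumPy sketch below expresses all pairwise squared distances as one matrix product, the standard formulation behind such GPU kernels; it is an illustrative CPU stand-in, not the authors' CUDA implementation, and all names are ours:

```python
import numpy as np

def quantize(descriptors, codebook):
    """Assign each descriptor to its nearest visual word.

    All pairwise squared distances are computed as one matrix product:
    ||d - c||^2 = ||d||^2 + ||c||^2 - 2 d.c -- a formulation dominated by
    a dense matrix multiply, which is what makes it GPU-friendly.
    """
    d2 = (np.sum(descriptors ** 2, axis=1)[:, None]
          + np.sum(codebook ** 2, axis=1)[None, :]
          - 2.0 * descriptors @ codebook.T)
    return np.argmin(d2, axis=1)

def bag_of_words(descriptors, codebook):
    """Histogram of visual-word assignments: the bag-of-words vector."""
    words = quantize(descriptors, codebook)
    return np.bincount(words, minlength=len(codebook))

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
desc = np.array([[0.1, -0.2], [9.8, 10.1], [0.3, 0.4]])
hist = bag_of_words(desc, codebook)  # -> [2, 1]
```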
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ SGS2011b | Serial | 1729 | ||
Permanent link to this record | |||||
Author | Jon Almazan; Ernest Valveny; Alicia Fornes | ||||
Title | Deforming the Blurred Shape Model for Shape Description and Recognition | Type | Conference Article | ||
Year | 2011 | Publication | 5th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 6669 | Issue | | Pages | 1-8
Keywords | |||||
Abstract | This paper presents a new model for the description and recognition of distorted shapes, where the image is represented by a pixel density distribution based on the Blurred Shape Model combined with a non-linear image deformation model. This leads to an adaptive structure able to capture elastic deformations in shapes. The method has been evaluated on three different datasets where deformations are present, showing the robustness and good performance of the new model. Moreover, we show that, by incorporating deformation and flexibility, the new model outperforms the BSM approach when classifying shapes with high variability of appearance. | ||||
Address | Las Palmas de Gran Canaria. Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Springer-Verlag | Place of Publication | Berlin | Editor | Jordi Vitria; Joao Miguel Raposo; Mario Hernandez |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | IbPRIA | ||
Notes | DAG; | Approved | no | ||
Call Number | Admin @ si @ AVF2011 | Serial | 1732 | ||
Permanent link to this record | |||||
Author | Michal Drozdzal; Santiago Segui; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria; Petia Radeva | ||||
Title | Interactive Labeling of WCE Images | Type | Conference Article | ||
Year | 2011 | Publication | 5th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 6669 | Issue | | Pages | 143-150
Keywords | |||||
Abstract | A high-quality labeled training set is necessary for any supervised machine learning algorithm. Labeling the data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example of such data are the videos from Wireless Capsule Endoscopy (WCE). Building a representative WCE data set means many videos have to be labeled by an expert. The problem that occurs is the diversity of the data, in feature space, across different WCE studies. This means that when new data arrive it is highly probable that they will not be represented in the training set, giving a high probability of error when applying machine learning schemes. In this paper an interactive labeling scheme that reduces the expert's effort in the labeling process is presented. It is shown that the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of a WCE video with fewer than 100 clicks. | ||||
Address | Las Palmas de Gran Canaria. Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Springer | Place of Publication | Editor | Vitria, Jordi; Sanches, João Miguel Raposo; Hernández, Mario | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | IbPRIA | ||
Notes | MILAB;OR;MV | Approved | no | ||
Call Number | Admin @ si @ DSM2011 | Serial | 1734 | ||
Permanent link to this record | |||||
Author | Antonio Hernandez; Carlo Gatta; Laura Igual; Sergio Escalera; Petia Radeva | ||||
Title | Automatic Angiography Segmentation Based on Improved Graph-cut | Type | Conference Article | ||
Year | 2011 | Publication | Jornada TIC Salut Girona | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | TICGI | ||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | Admin @ si @ HGI2011 | Serial | 1754 | ||
Permanent link to this record | |||||
Author | Laura Igual; Antonio Hernandez; Sergio Escalera; Miguel Reyes; Josep Moya; Joan Carles Soliva; Jordi Faquet; Oscar Vilarroya; Petia Radeva | ||||
Title | Automatic Techniques for Studying Attention-Deficit/Hyperactivity Disorder | Type | Conference Article | ||
Year | 2011 | Publication | Jornada TIC Salut Girona | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | TICGI | ||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | Admin @ si @ IHE2011 | Serial | 1755 | ||
Permanent link to this record | |||||
Author | David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin | ||||
Title | Cool world: domain adaptation of virtual and real worlds for human detection using active learning | Type | Conference Article | ||
Year | 2011 | Publication | NIPS Domain Adaptation Workshop: Theory and Application | Abbreviated Journal | NIPS-DA |
Volume | Issue | Pages | |||
Keywords | Pedestrian Detection; Virtual; Domain Adaptation; Active Learning | ||||
Abstract | Image-based human detection is of paramount interest for different applications. The most promising human detectors rely on discriminatively learnt classifiers, i.e., trained with labelled samples. However, labelling is a labour-intensive manual task, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, in Marin et al. we proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera and the same type of scenario. Accordingly, in Vazquez et al. we cast the problem as one of supervised domain adaptation. In doing so, we assume that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we use an active learning technique. Thus, ultimately our human model is learnt from the combination of virtual- and real-world labelled samples which, to the best of our knowledge, had not been done before. Here, we term such a combined space the cool world. In this extended abstract we summarize our proposal and include quantitative results from Vazquez et al. showing its validity. | ||||
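The active learning step described above — choosing which real-world samples are worth manual labelling — can be sketched with uncertainty sampling, one common selection criterion. The abstract does not specify the criterion actually used, so this choice, the scores, and all names below are ours:

```python
import numpy as np

def select_for_labeling(scores, k):
    """Uncertainty sampling: pick the k unlabeled real-world samples whose
    detector scores are closest to the decision boundary (score ~ 0).

    Those labels are then added to the virtual-world training set for
    a combined ("cool world") retraining round.
    """
    order = np.argsort(np.abs(np.asarray(scores, dtype=float)))
    return order[:k]

# Hypothetical detector scores on unlabeled real-world windows.
scores = [2.3, -0.1, 0.8, -1.7, 0.05]
picked = select_for_labeling(scores, 2)  # indices of the 2 most uncertain
```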
Address | Granada, Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Granada, Spain | Editor | ||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | | ISBN | | Medium | |
Area | Expedition | Conference | DA-NIPS | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ VLP2011b | Serial | 1756 | ||
Permanent link to this record | |||||
Author | Jordi Roca; A.Owen; G.Jordan; Y.Ling; C. Alejandro Parraga; A.Hurlbert | ||||
Title | Inter-individual Variations in Color Naming and the Structure of 3D Color Space | Type | Abstract | ||
Year | 2011 | Publication | Journal of Vision | Abbreviated Journal | VSS |
Volume | 12 | Issue | 2 | Pages | 166 |
Keywords | |||||
Abstract | Many everyday behavioural uses of color vision depend on color naming ability, which is neither measured nor predicted by most standardized tests of color vision, for either normal or anomalous color vision. Here we demonstrate a new method to quantify color naming ability by deriving a compact computational description of individual 3D color spaces. Methods: Individual observers underwent standardized color vision diagnostic tests (including anomaloscope testing) and a series of custom-made color naming tasks using 500 distinct color samples, either CRT stimuli (“light”-based) or Munsell chips (“surface”-based), with both forced- and free-choice color naming paradigms. For each subject, we defined his/her color solid as the set of 3D convex hulls computed for each basic color category from the relevant collection of categorised points in perceptually uniform CIELAB space. From the parameters of the convex hulls, we derived several indices to characterise the 3D structure of the color solid and its inter-individual variations. Using a reference group of 25 normal trichromats (NT), we defined the degree of normality for the shape, location and overlap of each color region, and the extent of “light”–“surface” agreement. Results: Certain features of color perception emerge from analysis of the average NT color solid, eg: (1) the white category is slightly shifted towards blue; and (2) the variability in category border location across NT subjects is asymmetric across color space, with the least variability in the blue/green region. Comparisons between individual and average NT indices reveal specific naming “deficits”, eg: (1) category volumes for white, green, brown and grey are expanded for anomalous trichromats and dichromats; and (2) the focal structure of color space is disrupted more in protanopia than in other forms of anomalous color vision. The indices both capture the structure of subjective color spaces and allow us to quantify inter-individual differences in color naming ability. | ||||
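The per-category convex hulls in CIELAB described above can be computed directly. The sketch below derives just two of the many possible indices (a volume and a centroid) for a single color category; the function name and the toy data are illustrative, not the authors' analysis code:

```python
import numpy as np
from scipy.spatial import ConvexHull

def category_hull_indices(lab_points):
    """Convex hull of one color category's samples in CIELAB space.

    Returns (volume, centroid): crude stand-ins for the per-category
    shape and location indices described in the abstract.
    """
    pts = np.asarray(lab_points, dtype=float)
    hull = ConvexHull(pts)          # 3D hull: .volume is the enclosed volume
    return hull.volume, pts.mean(axis=0)

# Toy "category": corners of a 10-unit cube in L*a*b* coordinates.
cube = [[L, a, b] for L in (0, 10) for a in (0, 10) for b in (0, 10)]
vol, cen = category_hull_indices(cube)  # vol = 1000, cen = (5, 5, 5)
```

Overlap between two categories (another index the abstract mentions) could then be estimated by intersecting such hulls, e.g. by Monte Carlo point sampling.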
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1534-7362 | ISBN | | Medium | |
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ ROJ2011 | Serial | 1758 | ||
Permanent link to this record |