Records | |||||
---|---|---|---|---|---|
Author | Sandra Jimenez; Xavier Otazu; Valero Laparra; Jesus Malo | ||||
Title | Chromatic induction and contrast masking: similar models, different goals? | Type | Conference Article | ||
Year | 2013 | Publication | Human Vision and Electronic Imaging XVIII | Abbreviated Journal | |
Volume | 8651 | Issue | Pages | ||
Keywords | |||||
Abstract | Normalization of signals coming from linear sensors is a ubiquitous mechanism of neural adaptation. Local interaction between sensors tuned to a particular feature at a certain spatial position and neighboring sensors explains a wide range of psychophysical facts, including (1) masking of spatial patterns, (2) non-linearities of motion sensors, (3) adaptation of color perception, (4) brightness and chromatic induction, and (5) image quality assessment. Although the above models have formal and qualitative similarities, this does not necessarily mean that the mechanisms involved pursue the same statistical goal. For instance, in the case of chromatic mechanisms (disregarding spatial information), different parameters in the normalization give rise to optimal discrimination or adaptation, and different non-linearities may give rise to error minimization or component independence. In the case of spatial sensors (disregarding color information), a number of studies have pointed out the benefits of masking in statistical-independence terms. However, such statistical analysis has not been performed for spatio-chromatic induction models, where chromatic perception depends on spatial configuration. In this work we investigate whether successful spatio-chromatic induction models increase component independence, as previously reported for masking models. Mutual information analysis suggests that seeking an efficient chromatic representation may explain the prevalence of induction effects in spatially simple images. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). | ||||
Address | San Francisco CA; USA; February 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | HVEI | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ JOL2013 | Serial | 2240 | ||
Permanent link to this record | |||||
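The divisive-normalization mechanism this abstract builds on can be sketched in a few lines. The sketch below is illustrative only (parameter values `sigma` and `gamma` are invented, not the paper's fitted ones); it shows the masking effect that normalization produces:

```python
import numpy as np

def divisive_normalization(responses, sigma=0.1, gamma=2.0):
    """Divisive normalization of linear sensor responses: each rectified
    response is divided by a pooled measure of the whole population's
    activity. Parameter values are illustrative, not fitted ones."""
    r = np.abs(np.asarray(responses, dtype=float)) ** gamma
    return r / (sigma ** gamma + r.sum())

# Masking: adding a strong neighboring signal suppresses the normalized
# response of a weak test signal.
weak_alone = divisive_normalization([0.2])[0]
weak_masked = divisive_normalization([0.2, 1.0])[0]
```

Because the pool grows with the masker's energy while the numerator stays fixed, the weak signal's normalized response drops in the masked condition.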
Author | Bhaskar Chakraborty; Andrew Bagdanov; Jordi Gonzalez; Xavier Roca | ||||
Title | Human Action Recognition Using an Ensemble of Body-Part Detectors | Type | Journal Article | ||
Year | 2013 | Publication | Expert Systems | Abbreviated Journal | EXSY |
Volume | 30 | Issue | 2 | Pages | 101-114 |
Keywords | Human action recognition;body-part detection;hidden Markov model | ||||
Abstract | This paper describes an approach to human action recognition based on a probabilistic optimization model of body parts using hidden Markov models (HMMs). Our method is able to distinguish between similar actions by considering only the body parts that contribute most to the actions, for example, legs for walking, jogging and running; arms for boxing, waving and clapping. We apply HMMs to model the stochastic movement of the body parts for action recognition. The HMM construction uses an ensemble of body-part detectors, followed by grouping of part detections, to perform human identification. Three example-based body-part detectors are trained to detect three components of the human body: the head, legs and arms. These detectors cope with viewpoint changes and self-occlusions through the use of ten sub-classifiers that detect body parts over a specific range of viewpoints. Each sub-classifier is a support vector machine trained on features selected for their discriminative power for each particular part/viewpoint combination. Grouping of these detections is performed using a simple geometric constraint model that yields a viewpoint-invariant human detector. We test our approach on three publicly available action datasets: the KTH, Weizmann and HumanEva datasets. Our results illustrate that with a simple and compact representation we can achieve robust recognition of human actions, comparable to the most complex state-of-the-art methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ CBG2013 | Serial | 1809 | ||
Permanent link to this record | |||||
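The per-action HMM scoring described in this abstract can be illustrated with a toy discrete HMM classified by forward-algorithm likelihood. All matrices and the symbol alphabet below are invented for illustration; they are not the paper's learned models:

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete
    observation sequence."""
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha = alpha / s
    return log_p

# Two toy 2-state action models over symbols 0 = "legs moving",
# 1 = "arms moving" (all numbers invented).
start = np.array([0.5, 0.5])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
models = {
    "walking": np.array([[0.9, 0.1], [0.8, 0.2]]),  # emits mostly leg motion
    "boxing":  np.array([[0.1, 0.9], [0.2, 0.8]]),  # emits mostly arm motion
}
obs = [0, 0, 1, 0, 0]  # a sequence dominated by leg motion
label = max(models, key=lambda k: forward_log_likelihood(obs, start, trans, models[k]))
```

Classification picks the action model under which the observed part-motion sequence is most probable, which is the core of the paper's recognition step.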
Author | David Roche; Debora Gil; Jesus Giraldo | ||||
Title | Mechanistic analysis of the function of agonists and allosteric modulators: Reconciling two-state and operational models | Type | Journal Article | ||
Year | 2013 | Publication | British Journal of Pharmacology | Abbreviated Journal | BJP |
Volume | 169 | Issue | 6 | Pages | 1189-1202
Keywords | |||||
Abstract | Two-state and operational models of both agonism and allosterism are compared to identify and characterize common pharmacological parameters. To account for the receptor-dependent basal response, constitutive receptor activity is considered in the operational models. By arranging two-state models as the fraction of active receptors and operational models as the fractional response relative to the maximum effect of the system, a one-to-one correspondence between parameters is found. The comparative analysis allows a better understanding of complex allosteric interactions. In particular, the inclusion of constitutive receptor activity in the operational model of allosterism allows the characterization of modulators able to lower the basal response of the system; that is, allosteric modulators with negative intrinsic efficacy. Theoretical simulations and the overall goodness of fit of the models to simulated data suggest that it is feasible to apply the models to experimental data, constituting one step forward in receptor theory formalism. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; 600.044; 605.203 | Approved | no | ||
Call Number | IAM @ iam @ RGG2013b | Serial | 2195 | ||
Permanent link to this record | |||||
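The operational model this paper extends (the classical Black-Leff form, without the constitutive-activity term that is the paper's contribution) can be written directly; parameter values below are illustrative:

```python
def operational_response(A, Em=100.0, KA=1e-6, tau=3.0):
    """Black-Leff operational model of agonism (n = 1):
    E = Em * tau * [A] / (KA + (1 + tau) * [A]).
    Em: system maximal effect; KA: agonist dissociation constant;
    tau: transduction efficiency. Values are illustrative."""
    return Em * tau * A / (KA + (1.0 + tau) * A)

# The response saturates at Em * tau / (1 + tau): an agonist with finite
# tau never reaches the system maximum Em (partial agonism).
E_sat = operational_response(1.0)  # [A] >> KA, so essentially the asymptote
```

With `tau = 3`, the asymptote is `100 * 3 / 4 = 75`, illustrating how `tau` caps the attainable fraction of the system maximum.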
Author | Zeynep Yucel; Albert Ali Salah; Çetin Meriçli; Tekin Meriçli; Roberto Valenti; Theo Gevers | ||||
Title | Joint Attention by Gaze Interpolation and Saliency | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Cybernetics | Abbreviated Journal | T-CIBER
Volume | 43 | Issue | 3 | Pages | 829-842 |
Keywords | |||||
Abstract | Joint attention, the ability to coordinate a common point of reference with a communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stable, high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted for interpolating the gaze direction. Then, we combine gaze interpolation with image-based saliency to improve the target point estimates, and test three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination, or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2168-2267 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ YSM2013 | Serial | 2363 | ||
Permanent link to this record | |||||
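Regression-based gaze interpolation from head pose can be illustrated with a linear least-squares stand-in on synthetic angles. The paper uses Gaussian process regression and neural networks on real imagery; everything below (the coupling matrix, noise level, angle ranges) is invented:

```python
import numpy as np

# Synthetic calibration data: head-pose angles (yaw, pitch) in degrees,
# mapped to gaze angles by a hypothetical linear coupling plus noise.
rng = np.random.default_rng(1)
head = rng.uniform(-30, 30, size=(50, 2))
true_W = np.array([[1.4, 0.1],
                   [0.0, 1.2]])          # invented head-to-gaze coupling
gaze = head @ true_W.T + rng.normal(0, 0.5, (50, 2))

# Fit the regression head @ W ~ gaze by least squares, then interpolate
# the gaze direction for an unseen head pose.
W, *_ = np.linalg.lstsq(head, gaze, rcond=None)
pred = np.array([10.0, -5.0]) @ W        # predicted gaze for a new pose
```

The point of the sketch is the pipeline shape: a low-cost, trackable signal (head pose) is regressed onto the hard-to-measure one (gaze), then queried at runtime.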
Author | Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga | ||||
Title | Low-level SpatioChromatic Grouping for Saliency Estimation | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 35 | Issue | 11 | Pages | 2810-2816 |
Keywords | |||||
Abstract | We propose a saliency model termed SIM (saliency by induction mechanisms), which is based on a low-level spatiochromatic model that has successfully predicted chromatic induction phenomena. In so doing, we hypothesize that the low-level visual mechanisms that enhance or suppress image detail are also responsible for making some image regions more salient. Moreover, SIM adds geometrical grouplets to enhance complex low-level features such as corners, and suppress relatively simpler features such as edges. Since our model has been fitted on psychophysical chromatic induction data, it is largely nonparametric. SIM outperforms state-of-the-art methods in predicting eye fixations on two datasets and using two metrics. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | CIC; 600.051; 600.052; 605.203 | Approved | no | ||
Call Number | Admin @ si @ MVO2013 | Serial | 2289 | ||
Permanent link to this record | |||||
Author | Ferran Diego; Joan Serrat; Antonio Lopez | ||||
Title | Joint spatio-temporal alignment of sequences | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Multimedia | Abbreviated Journal | TMM |
Volume | 15 | Issue | 6 | Pages | 1377-1387 |
Keywords | video alignment | ||||
Abstract | Video alignment is important in different areas of computer vision, such as wide-baseline matching, action recognition, change detection, video copy detection and frame-dropping prevention. Current video alignment methods usually deal with the relatively simple case of fixed or rigidly attached cameras, or simultaneous acquisition. Therefore, in this paper we propose a method for bringing two video sequences into joint spatio-temporal alignment. Specifically, the novelty of the paper is to fold the spatial and temporal alignment into a single framework that simultaneously satisfies frame-correspondence and frame-alignment similarity, exploiting the knowledge among neighboring frames through a standard pairwise Markov random field (MRF). This new formulation is able to handle the alignment of sequences recorded at different times by independently moving cameras that follow a similar trajectory, and also generalizes the particular cases of a fixed geometric transformation and/or a linear temporal mapping. We conduct experiments on different scenarios, such as sequences recorded simultaneously or by moving cameras, to validate the robustness of the proposed approach. The proposed method provides the highest video alignment accuracy compared to state-of-the-art methods on sequences recorded from vehicles driving along the same track at different times. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1520-9210 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ DSL2013; ADAS @ adas @ | Serial | 2228 | ||
Permanent link to this record | |||||
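The pairwise-MRF temporal term can be illustrated on a 1-D chain solved exactly by dynamic programming. This toy keeps only the temporal side (a unary matching cost plus a pairwise smoothness between consecutive assignments) and omits the paper's spatial alignment entirely; the cost matrix below is invented:

```python
def align_sequences(cost, smooth=1.0):
    """For each frame i of sequence A choose a frame t[i] of sequence B
    minimizing sum_i cost[i][t[i]] + smooth * |t[i] - t[i-1] - 1|,
    solved exactly by DP -- a chain analogue of a pairwise MRF."""
    n, m = len(cost), len(cost[0])
    dp = [list(cost[0])]
    back = []
    for i in range(1, n):
        row, brow = [], []
        for t in range(m):
            best, arg = float("inf"), 0
            for tp in range(m):
                c = dp[-1][tp] + smooth * abs(t - tp - 1)
                if c < best:
                    best, arg = c, tp
            row.append(best + cost[i][t])
            brow.append(arg)
        dp.append(row)
        back.append(brow)
    t = min(range(m), key=lambda k: dp[-1][k])   # best final assignment
    path = [t]
    for brow in reversed(back):                  # backtrack the chain
        t = brow[t]
        path.append(t)
    return path[::-1]

# Toy unary costs: frame i of A truly corresponds to frame i+1 of B.
cost = [[1] * 6 for _ in range(4)]
for i in range(4):
    cost[i][i + 1] = 0
path = align_sequences(cost)
```

The exact chain solution here stands in for the (loopier) MRF inference of the paper; the pairwise term is what lets neighboring frames share evidence.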
Author | Jose Manuel Alvarez; Theo Gevers; Ferran Diego; Antonio Lopez | ||||
Title | Road Geometry Classification by Adaptive Shape Models | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 14 | Issue | 1 | Pages | 459-468 |
Keywords | road detection | ||||
Abstract | Vision-based road detection is important for different applications in transportation, such as autonomous driving, vehicle collision warning, and pedestrian crossing detection. Common approaches to road detection are based on low-level road appearance (e.g., color or texture) and neglect the scene geometry and context. Hence, using only low-level features makes these algorithms highly dependent on structured roads, road homogeneity, and lighting conditions. Therefore, the aim of this paper is to classify road geometries for road detection through the analysis of scene composition and temporal coherence. Road geometry classification is performed by building corresponding models from training images containing prototypical road geometries. We propose adaptive shape models where spatial pyramids are steered by the inherent spatial structure of road images. To reduce the influence of lighting variations, invariant features are used. Large-scale experiments show that the proposed road geometry classifier yields a high recognition rate of 73.57% ± 13.1, clearly outperforming other state-of-the-art methods. Including road shape information improves road detection results over existing appearance-based methods. Finally, it is shown that invariant features and temporal information provide robustness against disturbing imaging conditions. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1524-9050 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS;ISE | Approved | no | ||
Call Number | Admin @ si @ AGD2013; ADAS @ adas @ | Serial | 2269 | ||
Permanent link to this record | |||||
Author | Mohammad Rouhani; Angel Sappa | ||||
Title | The Richer Representation the Better Registration | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 22 | Issue | 12 | Pages | 5036-5049 |
Keywords | |||||
Abstract | In this paper, the registration problem is formulated as a point-to-model distance minimization. Unlike most existing works, which are based on minimizing a point-wise correspondence term, this formulation avoids the time-consuming correspondence search. In the first stage, the target set is described through an implicit function by employing a linear least-squares fitting. This function can be either an implicit polynomial or an implicit B-spline, from a coarse to a fine representation. In the second stage, we show how the obtained implicit representation is used as an interface to convert a point-to-point registration into a point-to-implicit problem. Furthermore, we show that this registration distance is smooth and can be minimized through the Levenberg-Marquardt algorithm. All the formulations presented for both stages are compact and easy to implement. In addition, we show that our registration method can work with any implicit representation, though some are coarser and others provide finer representations; hence, a trade-off between speed and accuracy can be set by employing the right implicit function. Experimental results and comparisons in 2D and 3D show the robustness and the speed of convergence of the proposed approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ RoS2013 | Serial | 2665 | ||
Permanent link to this record | |||||
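A bare-bones instance of the point-to-implicit idea: registering 2-D points onto the zero level set of a circle's implicit equation. The paper uses Levenberg-Marquardt on richer implicit polynomials/B-splines and full rigid transforms; the sketch below simplifies to Gauss-Newton and translation only:

```python
import numpy as np

def register_to_circle(points, r=1.0, iters=50):
    """Toy point-to-implicit registration: find the 2-D translation t that
    brings `points` onto the zero set of f(x,y) = x^2 + y^2 - r^2 by
    minimizing sum_i f(p_i + t)^2 with Gauss-Newton. Note there is no
    correspondence search: the implicit function is the distance proxy."""
    t = np.zeros(2)
    for _ in range(iters):
        p = points + t
        f = (p ** 2).sum(axis=1) - r ** 2   # algebraic point-to-model residuals
        J = 2.0 * p                          # Jacobian of the residuals w.r.t. t
        t -= np.linalg.solve(J.T @ J, J.T @ f)
    return t

# Points sampled on the unit circle, then shifted; registration should
# recover the inverse shift.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)
t_hat = register_to_circle(circle + np.array([0.3, -0.2]))
```

The residual is the implicit function evaluated at the transformed points, which is exactly what removes the correspondence search the abstract highlights.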
Author | David Geronimo; Joan Serrat; Antonio Lopez; Ramon Baldrich | ||||
Title | Traffic sign recognition for computer vision project-based learning | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Education | Abbreviated Journal | T-EDUC |
Volume | 56 | Issue | 3 | Pages | 364-371 |
Keywords | traffic signs | ||||
Abstract | This paper presents a graduate course project on computer vision. The aim of the project is to detect and recognize traffic signs in video sequences recorded by an on-board vehicle camera. This is a demanding problem, given that traffic sign recognition is one of the most challenging problems for driving assistance systems. Equally, it is motivating for the students given that it is a real-life problem. Furthermore, it gives them the opportunity to appreciate the difficulty of real-world vision problems and to assess the extent to which this problem can be solved by modern computer vision and pattern classification techniques taught in the classroom. The learning objectives of the course are introduced, as are the constraints imposed on its design, such as the diversity of students' background and the amount of time they and their instructors dedicate to the course. The paper also describes the course contents, schedule, and how the project-based learning approach is applied. The outcomes of the course are discussed, including both the students' marks and their personal feedback. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0018-9359 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; CIC | Approved | no | ||
Call Number | Admin @ si @ GSL2013; ADAS @ adas @ | Serial | 2160 | ||
Permanent link to this record | |||||
Author | David Roche; Debora Gil; Jesus Giraldo | ||||
Title | Detecting loss of diversity for an efficient termination of EAs | Type | Conference Article | ||
Year | 2013 | Publication | 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing | Abbreviated Journal | |
Volume | Issue | Pages | 561 - 566 | ||
Keywords | EA termination; EA population diversity; EA steady state | ||||
Abstract | Termination of Evolutionary Algorithms (EAs) at their steady state, so that useless iterations are not performed, is a main point for their efficient application to black-box problems. Many EAs evolve while there is still diversity in their population and, thus, they can be terminated by analyzing the behavior of some measures of population diversity. This paper presents a numeric approximation to the steady state that can be used to detect the moment the EA population has lost its diversity, for EA termination. Our condition has been applied to three diversity-based EA paradigms and a selection of functions covering the properties most relevant for EA convergence. Experiments show that our condition works regardless of the search-space dimension and function landscape. | ||||
Address | Timisoara; Romania | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3035-7 | Medium | ||
Area | Expedition | Conference | SYNASC | ||
Notes | IAM; 600.044; 600.060; 605.203 | Approved | no | ||
Call Number | Admin @ si @ RGG2013c | Serial | 2299 | ||
Permanent link to this record | |||||
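A diversity-based termination condition can be sketched on a minimal real-valued EA. The diversity measure (mean per-gene standard deviation) and the threshold below are crude stand-ins for the paper's numeric steady-state criterion; all EA design choices here are invented for illustration:

```python
import random
import statistics

def diversity(pop):
    """Mean per-gene population standard deviation."""
    dims = len(pop[0])
    return sum(statistics.pstdev([ind[d] for ind in pop])
               for d in range(dims)) / dims

def run_ea(fitness, dim=2, pop_size=20, eps=1e-4, max_gens=1000, seed=0):
    """Minimal real-valued EA (truncation selection + Gaussian mutation)
    that terminates once population diversity falls below `eps` instead
    of running for a fixed budget."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for gen in range(max_gens):
        div = diversity(pop)
        if div < eps:                      # steady state: stop early
            return gen, min(pop, key=fitness)
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]
        # Offspring mutate around elites; mutation scale tracks diversity.
        pop = elite + [[g + rng.gauss(0, 0.3 * div) for g in rng.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max_gens, min(pop, key=fitness)

gens, best = run_ea(lambda x: sum(g * g for g in x))   # sphere function
```

On a unimodal landscape the population collapses quickly, so the diversity test fires long before the generation budget, saving the "useless iterations" the abstract refers to.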
Author | Jiaolong Xu; David Vazquez; Antonio Lopez; Javier Marin; Daniel Ponsa | ||||
Title | Learning a Multiview Part-based Model in Virtual World for Pedestrian Detection | Type | Conference Article | ||
Year | 2013 | Publication | IEEE Intelligent Vehicles Symposium | Abbreviated Journal | |
Volume | Issue | Pages | 467 - 472 | ||
Keywords | Pedestrian Detection; Virtual World; Part based | ||||
Abstract | State-of-the-art deformable part-based models based on latent SVM have shown excellent results on human detection. In this paper, we propose to train a multiview deformable part-based model with automatically generated part examples from virtual-world data. The method is efficient because: (i) the part detectors are trained with precisely extracted virtual examples, so no latent learning is needed; (ii) the multiview pedestrian detector enhances the performance of the pedestrian root model; (iii) a top-down approach is used for part detection, which reduces the search space. We evaluate our model on the Daimler and Karlsruhe Pedestrian Benchmarks with the publicly available Caltech pedestrian detection evaluation framework, and the results outperform the state-of-the-art latent SVM V4.0 on both average miss rate and speed (our detector is ten times faster). | ||||
Address | Gold Coast; Australia; June 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1931-0587 | ISBN | 978-1-4673-2754-1 | Medium | |
Area | Expedition | Conference | IV | ||
Notes | ADAS; 600.054; 600.057 | Approved | no | ||
Call Number | XVL2013; ADAS @ adas @ xvl2013a | Serial | 2214 | ||
Permanent link to this record | |||||
Author | German Ros; J. Guerrero; Angel Sappa; Antonio Lopez | ||||
Title | VSLAM pose initialization via Lie groups and Lie algebras optimization | Type | Conference Article | ||
Year | 2013 | Publication | Proceedings of IEEE International Conference on Robotics and Automation | Abbreviated Journal | |
Volume | Issue | Pages | 5740 - 5747 | ||
Keywords | SLAM | ||||
Abstract | We present a novel technique for estimating initial 3D poses in the context of localization and Visual SLAM problems. The presented approach can deal with noise, outliers and large amounts of input data and still performs in real time on a standard CPU. Our method produces solutions with an accuracy comparable to those produced by RANSAC, but can be much faster when the percentage of outliers is high or for large amounts of input data. In the current work we propose to formulate the pose estimation as an optimization problem on Lie groups, considering their manifold structure as well as their associated Lie algebras. This allows us to perform a fast and simple optimization while conserving all the constraints imposed by the Lie group SE(3). Additionally, we present several key design concepts related to the cost function and its Jacobian; these aspects are critical for the good performance of the algorithm. | ||||
Address | Karlsruhe; Germany; May 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1050-4729 | ISBN | 978-1-4673-5641-1 | Medium | |
Area | Expedition | Conference | ICRA | ||
Notes | ADAS; 600.054; 600.055; 600.057 | Approved | no | ||
Call Number | Admin @ si @ RGS2013a; ADAS @ adas @ | Serial | 2225 | ||
Permanent link to this record | |||||
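The tangent-space parameterization the abstract relies on reduces, for SE(3), to the exponential map from a 6-vector twist to a 4x4 rigid transform. A standard closed form (Rodrigues' formula) is sketched below; optimizers update the twist freely in R^6 and the map guarantees the result stays a valid SE(3) element:

```python
import numpy as np

def se3_exp(xi):
    """Exponential map se(3) -> SE(3) for a twist xi = (v, w):
    v is the translational part, w the rotation vector."""
    v, w = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])           # skew-symmetric matrix of w
    if theta < 1e-10:
        R, V = np.eye(3), np.eye(3)              # first-order limit
    else:
        A = np.sin(theta) / theta
        B = (1.0 - np.cos(theta)) / theta ** 2
        C = (1.0 - A) / theta ** 2
        R = np.eye(3) + A * W + B * (W @ W)      # Rodrigues rotation
        V = np.eye(3) + B * W + C * (W @ W)      # left Jacobian for translation
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

T_identity = se3_exp(np.zeros(6))
T_rot = se3_exp(np.array([0.0, 0.0, 0.0, 0.0, 0.0, np.pi / 2]))  # 90° about z
```

This is why optimizing in the Lie algebra needs no explicit orthogonality constraints: every twist maps to a proper rotation plus translation by construction.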
Author | H. Emrah Tasli; Cevahir Çigla; Theo Gevers; A. Aydin Alatan | ||||
Title | Super pixel extraction via convexity induced boundary adaptation | Type | Conference Article | ||
Year | 2013 | Publication | 14th IEEE International Conference on Multimedia and Expo | Abbreviated Journal | |
Volume | Issue | Pages | 1-6 | ||
Keywords | |||||
Abstract | This study presents an efficient super-pixel extraction algorithm with major contributions to the state-of-the-art in terms of accuracy and computational complexity. Segmentation accuracy is improved through convexity-constrained geodesic distance utilization, while computational efficiency is achieved by replacing complete region processing with a boundary adaptation scheme. Starting from uniformly distributed rectangular equal-sized super-pixels, region boundaries are adapted to intensity edges iteratively by assigning boundary pixels to the most similar neighboring super-pixels. At each iteration, super-pixel regions are updated and hence progressively converge to compact pixel groups. Experimental results with state-of-the-art comparisons validate the performance of the proposed technique in terms of both accuracy and speed. | ||||
Address | San Jose; USA; July 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1945-7871 | ISBN | Medium | ||
Area | Expedition | Conference | ICME | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ TÇG2013 | Serial | 2367 | ||
Permanent link to this record | |||||
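The boundary-adaptation step can be sketched on a toy grayscale image. This strips out the paper's geodesic-distance and convexity terms and keeps only the core move: pixels on label boundaries may switch to the 4-neighbor super-pixel whose mean intensity is closest:

```python
import numpy as np

def refine_superpixels(img, labels, iters=10):
    """Iterative boundary adaptation: each pixel may adopt the label of a
    4-neighbor whose super-pixel mean intensity is closer to the pixel's
    own value (toy version; no geodesic or convexity constraints)."""
    h, w = img.shape
    for _ in range(iters):
        means = {l: img[labels == l].mean() for l in np.unique(labels)}
        new = labels.copy()
        for y in range(h):
            for x in range(w):
                cands = {labels[y, x]}
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        cands.add(labels[yy, xx])
                new[y, x] = min(cands, key=lambda l: abs(img[y, x] - means[l]))
        labels = new
    return labels

# Toy image: a vertical intensity edge at column 4; the initial label
# boundary is misplaced at column 3 and should snap onto the edge.
img = np.zeros((4, 8))
img[:, 4:] = 1.0
labels0 = np.zeros((4, 8), dtype=int)
labels0[:, 3:] = 1
labels = refine_superpixels(img, labels0)
```

Because only boundary pixels can change label, each iteration touches a thin band rather than whole regions, which is the source of the efficiency claim.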
Author | Rahat Khan; Joost Van de Weijer; Dimosthenis Karatzas; Damien Muselet | ||||
Title | Towards multispectral data acquisition with hand-held devices | Type | Conference Article | ||
Year | 2013 | Publication | 20th IEEE International Conference on Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | 2053 - 2057 | ||
Keywords | Multispectral; mobile devices; color measurements | ||||
Abstract | We propose a method to acquire multispectral data with hand-held devices with front-mounted RGB cameras. We propose to use the display of the device as an illuminant while the camera captures images illuminated by the red, green and blue primaries of the display. Three illuminants and three response functions of the camera lead to nine response values, which are used for reflectance estimation. Results are promising and show that the accuracy of the spectral reconstruction improves by 30-40% over spectral reconstruction based on a single illuminant. Furthermore, we propose to compute a sensor-illuminant-aware linear basis by discarding the part of the reflectances that falls in the sensor-illuminant null-space. We show experimentally that optimizing reflectance estimation on these new basis functions decreases the RMSE significantly compared to basis functions that are independent of the sensor and illuminant. We conclude that multispectral data acquisition is potentially possible with consumer hand-held devices such as tablets, mobiles and laptops, opening up applications which are currently considered to be unrealistic. | ||||
Address | Melbourne; Australia; September 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICIP | ||
Notes | CIC; DAG; 600.048 | Approved | no | ||
Call Number | Admin @ si @ KWK2013b | Serial | 2265 | ||
Permanent link to this record | |||||
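The nine-response measurement model and the basis-constrained reflectance estimate can be simulated end to end with synthetic spectra. All curves below (display primaries, camera sensitivities, reflectance basis) are random placeholders, not measured data; the point is the shape of the linear system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 31                              # e.g. 400-700 nm in 10 nm steps

# Hypothetical display primaries (3 illuminants) and camera sensitivities
# (3 sensors); each sensor-illuminant product is one measurement row.
illums = rng.uniform(0.1, 1.0, size=(3, n_bands))
sensors = rng.uniform(0.1, 1.0, size=(3, n_bands))
M = np.array([s * e for e in illums for s in sensors])   # (9, n_bands)

def estimate_reflectance(responses, basis):
    """Least-squares reflectance estimate constrained to a low-dimensional
    basis: solve (M @ basis) c = responses, then expand r = basis @ c."""
    coef, *_ = np.linalg.lstsq(M @ basis, responses, rcond=None)
    return basis @ coef

# A random 6-dimensional reflectance basis (real systems would use
# measured reflectance statistics); a ground-truth reflectance in its span
# is exactly recoverable from the 9 responses.
basis = np.linalg.svd(rng.uniform(size=(20, n_bands)))[2][:6].T  # (n_bands, 6)
true_r = basis @ rng.uniform(size=6)
est = estimate_reflectance(M @ true_r, basis)
```

Nine equations constrain only a handful of basis coefficients, which is why a well-chosen (sensor-illuminant-aware) basis matters so much in the paper.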
Author | Shida Beigpour; Marc Serra; Joost Van de Weijer; Robert Benavente; Maria Vanrell; Olivier Penacchio; Dimitris Samaras | ||||
Title | Intrinsic Image Evaluation On Synthetic Complex Scenes | Type | Conference Article | ||
Year | 2013 | Publication | 20th IEEE International Conference on Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | 285 - 289 | ||
Keywords | |||||
Abstract | Scene decomposition into its illuminant, shading, and reflectance intrinsic images is an essential step for scene understanding. Collecting intrinsic-image ground-truth data is a laborious task. The assumptions on which the ground-truth procedures are based limit their application to simple scenes with a single object taken in the absence of indirect lighting and interreflections. We investigate synthetic data for intrinsic-image research, since the extraction of ground truth is straightforward and it allows for scenes in more realistic situations (e.g., multiple illuminants and interreflections). With this dataset we aim to motivate researchers to further explore intrinsic image decomposition in complex scenes. | ||||
Address | Melbourne; Australia; September 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICIP | ||
Notes | CIC; 600.048; 600.052; 600.051 | Approved | no | ||
Call Number | Admin @ si @ BSW2013 | Serial | 2264 | ||
Permanent link to this record |
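Evaluation against synthetic ground truth is straightforward to sketch. A common choice (used here with invented data; the dataset itself would supply the ground-truth components) is a scale-invariant MSE, since intrinsic components are recovered only up to a global scale:

```python
import numpy as np

def si_mse(est, gt):
    """Scale-invariant MSE: fit the best global scale alpha before
    comparing the estimate against the ground-truth intrinsic image."""
    alpha = float((est * gt).sum() / (est * est).sum())
    return float(((alpha * est - gt) ** 2).mean())

# With synthetic scenes the ground truth is exact by construction: the
# image is rendered from known reflectance and shading (here, a bare
# multiplicative toy model on random 4x4 "images").
rng = np.random.default_rng(2)
reflectance = rng.uniform(0.2, 1.0, size=(4, 4))
shading = rng.uniform(0.2, 1.0, size=(4, 4))
image = reflectance * shading

# An estimate that differs from the truth only by scale scores zero error:
err = si_mse(2.0 * reflectance, reflectance)
```

Having exact per-pixel targets like `reflectance` and `shading` for free is precisely the advantage of synthetic data the abstract argues for.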