|
Records |
Links |
|
Author |
David Roche; Debora Gil; Jesus Giraldo |
|
|
Title |
Assessing agonist efficacy in an uncertain Em world |
Type |
Conference Article |
|
Year |
2012 |
Publication |
40th Keystone Symposia on Molecular and Cellular Biology |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
79 |
|
|
Keywords |
|
|
|
Abstract |
The operational model of agonism has been widely used for the analysis of agonist action since its formulation in 1983. The model includes the Em parameter, which is defined as the maximum response of the system. The methods for Em estimation provide Em values not significantly higher than the maximum responses achieved by full agonists. However, it has been found that some classes of compounds, for instance superagonists and positive allosteric modulators, can increase the full agonist maximum response, implying that the upper limit for Em is higher than previously assumed and thereby casting doubt on the validity of Em estimates. Because of the correlation between Em and the operational efficacy, τ, wrong Em estimates will yield wrong τ estimates.
In this presentation, the operational model of agonism and various methods for the simulation of allosteric modulation will be analyzed. Alternatives for curve fitting will be presented and discussed. |
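For reference, a commonly cited form of the operational model (Black and Leff, 1983), here in its simplest case with a transducer slope of 1, relates the effect E to the agonist concentration [A] through Em, the operational efficacy τ, and the agonist–receptor dissociation constant KA:

$$ E = \frac{E_m \, \tau \, [A]}{K_A + (1 + \tau)\,[A]} $$

The correlation between Em and τ mentioned in the abstract is visible in this form: the fitted maximal agonist response is Em·τ/(1+τ), so an underestimated Em can be compensated by a larger fitted τ, and an error in one propagates to the other.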
|
|
Address |
Fairmont Banff Springs, Banff, Alberta, Canada |
|
|
Corporate Author |
Keystone Symposia |
Thesis |
|
|
|
Publisher |
Keystone Symposia |
Place of Publication |
|
Editor |
A. Christopoulos and M. Bouvier |
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
Keystone Symposia |
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
KSMCB |
|
|
Notes |
IAM |
Approved |
no |
|
|
Call Number |
IAM @ iam @ RGG2012 |
Serial |
1855 |
|
Permanent link to this record |
|
|
|
|
Author |
S. Grau; Ana Puig; Sergio Escalera; Maria Salamo |
|
|
Title |
Intelligent Interactive Volume Classification |
Type |
Conference Article |
|
Year |
2013 |
Publication |
Pacific Graphics |
Abbreviated Journal |
|
|
|
Volume |
32 |
Issue |
7 |
Pages |
23-28 |
|
|
Keywords |
|
|
|
Abstract |
This paper defines an intelligent and interactive framework to classify multiple regions of interest from the original data on demand, without requiring any preprocessing or previous segmentation. The proposed approach is divided into three stages: visualization, training, and testing. First, users visualize and label some samples directly on slices of the volume. Training and testing are based on a framework of Error-Correcting Output Codes and AdaBoost classifiers that learn to classify each region the user has painted. Later, at the testing stage, each classifier is applied directly to the rest of the samples, and their outputs are combined to perform multi-class labeling, which is used in the final rendering. We also parallelized the training stage using a GPU-based implementation to obtain rapid interaction and classification. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-905674-50-7 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
PG |
|
|
Notes |
HuPBA; 600.046;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ GPE2013b |
Serial |
2355 |
|
Permanent link to this record |
|
|
|
|
Author |
M. Visani; Oriol Ramos Terrades; Salvatore Tabbone |
|
|
Title |
A Protocol to Characterize the Descriptive Power and the Complementarity of Shape Descriptors |
Type |
Journal Article |
|
Year |
2011 |
Publication |
International Journal on Document Analysis and Recognition |
Abbreviated Journal |
IJDAR |
|
|
Volume |
14 |
Issue |
1 |
Pages |
87-100 |
|
|
Keywords |
Document analysis; Shape descriptors; Symbol description; Performance characterization; Complementarity analysis |
|
|
Abstract |
Most document analysis applications rely on the extraction of shape descriptors, which may be grouped into different categories, each category having its own advantages and drawbacks (O.R. Terrades et al. in Proceedings of ICDAR’07, pp. 227–231, 2007). In order to improve the richness of their description, many authors choose to combine multiple descriptors. Yet, most authors who propose a new descriptor content themselves with comparing its performance to that of a set of single state-of-the-art descriptors in a specific application context (e.g. symbol recognition, symbol spotting). This results in a proliferation of shape descriptors in the literature. In this article, we propose an innovative protocol, the originality of which is to be as independent of the final application as possible, relying on new quantitative and qualitative measures. We introduce two types of measures: the measures of the first type characterize the descriptive power (in terms of uniqueness, distinctiveness and robustness towards noise) of a descriptor, while the second type characterizes the complementarity between multiple descriptors. Characterizing the complementarity of shape descriptors upstream is an alternative to the usual approach, where the descriptors to be combined are selected by trial and error, considering the performance characteristics of the overall system. To illustrate the contribution of this protocol, we performed experimental studies using a set of descriptors and a set of symbols which are widely used by the community, namely the ART and SC descriptors and the GREC 2003 database. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; IF 1.091 |
Approved |
no |
|
|
Call Number |
Admin @ si @ VRT2011 |
Serial |
1856 |
|
Permanent link to this record |
|
|
|
|
Author |
Carles Sanchez |
|
|
Title |
Tracheal ring detection in bronchoscopy |
Type |
Report |
|
Year |
2011 |
Publication |
CVC Technical Report |
Abbreviated Journal |
|
|
|
Volume |
168 |
Issue |
|
Pages |
|
|
|
Keywords |
Bronchoscopy, tracheal ring, segmentation |
|
|
Abstract |
Endoscopy is the process in which a camera is introduced inside the human body.
Given that endoscopy provides realistic images (in contrast to other modalities) and allows minimally invasive intervention procedures (which can aid in diagnosis and surgical interventions), its use has spread during the last decades.
In this project we focus on bronchoscopic procedures, during which the camera is introduced through the trachea in order to obtain a diagnosis of the patient. The diagnostic interventions focus on the degree of stenosis (reduction in tracheal area), prostheses, and early diagnosis of tumors. In the first case, assessment of the luminal area and calculation of the diameters of the tracheal rings are required. A main limitation is that the whole process is done by hand,
which means that the doctor takes all the measurements and decisions just by looking at the screen. As far as we know, there is no computational framework for helping doctors in the diagnosis.
This project consists of analysing bronchoscopic videos in order to extract information useful for diagnosing the degree of stenosis. In particular, we focus on the segmentation of tracheal rings. As a result of this project, several strategies for detecting tracheal rings have been implemented in order to compare their performance. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
Master's thesis |
|
|
Publisher |
|
Place of Publication |
|
Editor |
Debora Gil; F. Javier Sanchez |
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM;MV |
Approved |
no |
|
|
Call Number |
IAM @ iam @ San2011 |
Serial |
1841 |
|
Permanent link to this record |
|
|
|
|
Author |
Marcel P. Lucassen; Theo Gevers; Arjan Gijsenij |
|
|
Title |
Texture Affects Color Emotion |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Color Research & Applications |
Abbreviated Journal |
CRA |
|
|
Volume |
36 |
Issue |
6 |
Pages |
426–436 |
|
|
Keywords |
color; texture; color emotion; observer variability; ranking |
|
|
Abstract |
Several studies have recorded color emotions in subjects viewing uniform color (UC) samples. We conduct an experiment to measure and model how these color emotions change when texture is added to the color samples. Using a computer monitor, our subjects arrange samples along four scales: warm–cool, masculine–feminine, hard–soft, and heavy–light. Three sample types of increasing visual complexity are used: UC, grayscale textures, and color textures (CTs). To assess the intraobserver variability, the experiment is repeated after 1 week. Our results show that texture fully determines the responses on the hard–soft scale, and plays a role of decreasing weight for the masculine–feminine, heavy–light, and warm–cool scales. Using some 25,000 observer responses, we derive color emotion functions that predict the group-averaged scale responses from the samples' color and texture parameters. For UC samples, the accuracy of our functions is significantly higher (average R2 = 0.88) than that of previously reported functions applied to our data. The functions derived for CT samples have an accuracy of R2 = 0.80. We conclude that when textured samples are used in color emotion studies, the psychological responses may be strongly affected by texture. © 2010 Wiley Periodicals, Inc. Col Res Appl, 2010 |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ LGG2011 |
Serial |
1844 |
|
Permanent link to this record |
|
|
|
|
Author |
Albert Ali Salah; E. Pauwels; R. Tavenard; Theo Gevers |
|
|
Title |
T-Patterns Revisited: Mining for Temporal Patterns in Sensor Data |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Sensors |
Abbreviated Journal |
SENS |
|
|
Volume |
10 |
Issue |
8 |
Pages |
7496-7513 |
|
|
Keywords |
sensor networks; temporal pattern extraction; T-patterns; Lempel-Ziv; Gaussian mixture model; MERL motion data |
|
|
Abstract |
The trend to use large amounts of simple sensors as opposed to a few complex sensors to monitor places and systems creates a need for temporal pattern mining algorithms to work on such data. The methods that try to discover re-usable and interpretable patterns in temporal event data have several shortcomings. We contrast several recent approaches to the problem, and extend the T-Pattern algorithm, which was previously applied for detection of sequential patterns in behavioural sciences. The temporal complexity of the T-pattern approach is prohibitive in the scenarios we consider. We remedy this with a statistical model to obtain a fast and robust algorithm to find patterns in temporal data. We test our algorithm on a recent database collected with passive infrared sensors with millions of events. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ SPT2010 |
Serial |
1845 |
|
Permanent link to this record |
|
|
|
|
Author |
Francesc Tanarro Marquez; Pau Gratacos Marti; F. Javier Sanchez; Joan Ramon Jimenez Minguell; Coen Antens; Enric Sala i Esteva |
|
|
Title |
A device for monitoring condition of a railway supply |
Type |
Patent |
|
Year |
2012 |
Publication |
EP 2 404 777 A1 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
The device is intended to monitor the condition of a railway supply line when the supply line is in contact with a head of a pantograph of a vehicle in order to power said vehicle. The device includes a camera for monitoring parameters indicative of the operating capability of said supply line. The device also includes a reflective element comprising a pattern, intended to be arranged onto the pantograph head. The camera is intended to be arranged on the vehicle so as to register the pattern position regarding a vertical direction. |
|
|
Address |
|
|
|
Corporate Author |
ALSTOM Transport SA |
Thesis |
|
|
|
Publisher |
European Patent Office |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MV |
Approved |
no |
|
|
Call Number |
IAM @ iam @ MMS2012 |
Serial |
1854 |
|
Permanent link to this record |
|
|
|
|
Author |
Oriol Ramos Terrades; Alejandro Hector Toselli; Nicolas Serrano; Veronica Romero; Enrique Vidal; Alfons Juan |
|
|
Title |
Interactive layout analysis and transcription systems for historic handwritten documents |
Type |
Conference Article |
|
Year |
2010 |
Publication |
10th ACM Symposium on Document Engineering |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
219–222 |
|
|
Keywords |
Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis |
|
|
Abstract |
The amount of digitized legacy documents has been rising dramatically over the last years, due mainly to the increasing number of on-line digital libraries publishing this kind of documents, waiting to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct the results of such systems. In contrast, multimodal interactive-predictive approaches may allow the users to participate in the process, helping the system to improve the overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection and two multimodal interactive handwritten text transcription systems which use active learning and interactive-predictive technologies in the recognition process. |
|
|
Address |
Manchester, United Kingdom |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACM |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ RTS2010 |
Serial |
1857 |
|
Permanent link to this record |
|
|
|
|
Author |
Antonio Hernandez; Miguel Angel Bautista; Xavier Perez Sala; Victor Ponce; Sergio Escalera; Xavier Baro; Oriol Pujol; Cecilio Angulo |
|
|
Title |
Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
50 |
Issue |
1 |
Pages |
112-121 |
|
|
Keywords |
RGB-D; Bag-of-Words; Dynamic Time Warping; Human Gesture Recognition |
|
|
Abstract |
We present a methodology to address the problem of human gesture segmentation and recognition in video and depth image sequences. A Bag-of-Visual-and-Depth-Words (BoVDW) model is introduced as an extension of the Bag-of-Visual-Words (BoVW) model. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late fusion form. The method is integrated into a Human Gesture Recognition pipeline, together with a novel probability-based Dynamic Time Warping (PDTW) algorithm which is used to perform prior segmentation of idle gestures. The proposed DTW variant uses samples of the same gesture category to build a Gaussian-Mixture-Model-driven probabilistic model of that gesture class. Results of the whole Human Gesture Recognition pipeline on a public data set show better performance in comparison to both the standard BoVW model and the standard DTW approach. |
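For context, the classic dynamic time warping recurrence that a probability-based variant such as PDTW builds on can be sketched as follows (the 1-D sequences and absolute-difference cost are illustrative simplifications; the paper works with gesture feature vectors):

```python
# Classic dynamic time warping (DTW) between two 1-D sequences.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal accumulated cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local cost of pairing samples
            cost[i][j] = d + min(cost[i - 1][j],      # skip a sample of a
                                 cost[i][j - 1],      # skip a sample of b
                                 cost[i - 1][j - 1])  # pair them
    return cost[n][m]

print(dtw_distance([0, 1, 2], [0, 1, 1, 2]))  # warping absorbs the repeat -> 0.0
```

The probabilistic variant replaces the fixed local cost with a class-conditional model learned from samples of each gesture category; that extension is not reproduced here.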
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MV; 605.203 |
Approved |
no |
|
|
Call Number |
Admin @ si @ HBP2014 |
Serial |
2353 |
|
Permanent link to this record |
|
|
|
|
Author |
G.D. Evangelidis; Ferran Diego; Joan Serrat; Antonio Lopez |
|
|
Title |
Slice Matching for Accurate Spatio-Temporal Alignment |
Type |
Conference Article |
|
Year |
2011 |
Publication |
ICCV Workshop on Visual Surveillance |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
video alignment |
|
|
Abstract |
Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly- or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras, whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise and thus allows their differences to be highlighted automatically. This primarily aims at visual surveillance, but the method can be adopted as is by other related video applications, like object transfer (augmented reality) or high dynamic range video. We build upon a slice matching scheme to first synchronize the sequences, while we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare with related previous works. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VS |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ EDS2011; ADAS @ adas @ eds2011a |
Serial |
1861 |
|
Permanent link to this record |
|
|
|
|
Author |
G. Roig; Xavier Boix; F. de la Torre; Joan Serrat; C. Vilella |
|
|
Title |
Hierarchical CRF with product label spaces for parts-based Models |
Type |
Conference Article |
|
Year |
2011 |
Publication |
IEEE Conference on Automatic Face and Gesture Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction or image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRFs) have been successfully embedded into a discriminative parts-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex parts interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
FG |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RBT2011 |
Serial |
1862 |
|
Permanent link to this record |
|
|
|
|
Author |
Albert Andaluz |
|
|
Title |
Harmonic Phase Flow: User's guide |
Type |
Manual |
|
Year |
2012 |
Publication |
CVC |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
HPF is a plugin for the computation of clinical scores under OsiriX.
This manual provides a basic guide for experienced clinical staff. Chapter 1 provides the theoretical background on which this plugin is based.
Next, Chapter 2 gives basic instructions for installing and uninstalling the plugin. Chapter 3 shows a step-by-step scenario for computing clinical scores from tagged MRI images with HPF. Finally, Chapter 4 provides a quick guide for plugin developers. |
|
|
Address |
Bellaterra, Barcelona (Spain) |
|
|
Corporate Author |
Computer Vision Center |
Thesis |
|
|
|
Publisher |
CVC |
Place of Publication |
Barcelona |
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM |
Approved |
no |
|
|
Call Number |
IAM @ iam @ And2012 |
Serial |
1863 |
|
Permanent link to this record |
|
|
|
|
Author |
Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell |
|
|
Title |
Portmanteau Vocabularies for Multi-Cue Image Representation |
Type |
Conference Article |
|
Year |
2011 |
Publication |
25th Annual Conference on Neural Information Processing Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation. |
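As a toy illustration of the product vocabulary from which such compound words are drawn (all names and sizes here are invented; the information-theoretic compression step that follows in the paper is not shown), each local feature assigned a word in each primitive cue vocabulary maps to one entry of the joint vocabulary:

```python
# Toy sketch: a joint vocabulary entry is indexed by the pair of
# independently assigned cue words (e.g. a shape word and a color word).
def compound_word(shape_word, color_word, n_color_words):
    # index into the product vocabulary of size n_shape * n_color
    return shape_word * n_color_words + color_word

N_SHAPE, N_COLOR = 3, 4           # sizes of the two primitive vocabularies
w = compound_word(2, 3, N_COLOR)  # feature with shape-word 2, color-word 3
print(w, N_SHAPE * N_COLOR)       # joint index and product-vocabulary size
```

The product space grows multiplicatively, which is why the paper compresses it into a compact portmanteau vocabulary.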
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
NIPS |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ KWB2011 |
Serial |
1865 |
|
Permanent link to this record |
|
|
|
|
Author |
Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin |
|
|
Title |
Towards Automatic Concept Transfer |
Type |
Conference Article |
|
Year |
2011 |
Publication |
Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
167-176 |
|
|
Keywords |
chromatic modeling, color concepts, color transfer, concept transfer |
|
|
Abstract |
This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study. |
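As a minimal illustration of the Earth Mover's Distance used for the mapping (reduced here to normalized 1-D histograms on shared bins, a deliberate simplification of the paper's chromatic models), the 1-D case has a closed form via cumulative sums:

```python
# 1-D Earth Mover's Distance between two normalized histograms defined
# on the same bins, via the cumulative-distribution formulation.
def emd_1d(p, q):
    assert len(p) == len(q)
    work, surplus = 0.0, 0.0
    for pi, qi in zip(p, q):
        surplus += pi - qi    # mass that still has to be moved rightwards
        work += abs(surplus)  # moving it one more bin costs its amount
    return work

print(emd_1d([1.0, 0.0], [0.0, 1.0]))  # all mass moves one bin -> 1.0
```

Between full multi-dimensional chromatic models, as in the paper, EMD is instead computed by solving a small transportation problem.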
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
ACM Press |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-1-4503-0907-3 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
NPAR |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ MSM2011 |
Serial |
1866 |
|
Permanent link to this record |
|
|
|
|
Author |
Jordi Roca; C. Alejandro Parraga; Maria Vanrell |
|
|
Title |
Categorical Focal Colours are Structurally Invariant Under Illuminant Changes |
Type |
Conference Article |
|
Year |
2011 |
Publication |
European Conference on Visual Perception |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
196 |
|
|
Keywords |
|
|
|
Abstract |
The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical “focal” colours and their interrelations is under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of the colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003 Proceedings of the National Academy of Sciences of the USA 100 9085–9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adapted state as a 3D interconnected structure (graph) in Lab space. The graph nodes were the subject's focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better than the structures of the physical “ideal” colours under illuminant changes. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
Perception 40 |
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECVP |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ RPV2011 |
Serial |
1867 |
|
Permanent link to this record |