Francesc Tanarro Marquez, Pau Gratacos Marti, F. Javier Sanchez, Joan Ramon Jimenez Minguell, Coen Antens, & Enric Sala i Esteva. (2012). A device for monitoring condition of a railway supply. European Patent Office.
Abstract: The device is intended to monitor the condition of a railway supply line when the supply line is in contact with a head of a pantograph of a vehicle in order to power said vehicle. The device includes a camera for monitoring parameters indicative of the operating capability of said supply line. The device also includes a reflective element, comprising a pattern, intended to be arranged onto the pantograph head. The camera is intended to be arranged on the vehicle so as to register the pattern position with respect to the vertical direction.
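The patent abstract leaves the image-processing details unspecified. Purely as an illustration of the core idea, registering the vertical position of a known reflective pattern in camera frames, a minimal OpenCV sketch might look as follows; the function name, the template-matching approach and the confidence threshold are assumptions, not the patented method.

```python
import cv2

def pattern_vertical_position(frame_gray, pattern_gray):
    """Locate a known reflective pattern in a camera frame and
    return the vertical (row) coordinate of its best match.

    Illustrative sketch only; the patent does not specify this."""
    # Normalized cross-correlation is robust to global brightness changes.
    scores = cv2.matchTemplate(frame_gray, pattern_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    if max_val < 0.6:          # assumed confidence threshold
        return None            # pattern not visible in this frame
    x, y = max_loc             # top-left corner of the best match
    return y + pattern_gray.shape[0] // 2  # centre row of the match
```

Tracking this row coordinate over consecutive frames would expose vertical displacement of the pantograph head relative to the supply line, which is the quantity the device monitors.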
|
David Roche, Debora Gil, & Jesus Giraldo. (2012). Assessing agonist efficacy in an uncertain Em world. In A. Christopoulos & M. Bouvier (Eds.), 40th Keystone Symposia on Molecular and Cellular Biology (p. 79). Keystone Symposia.
Abstract: The operational model of agonism has been widely used for the analysis of agonist action since its formulation in 1983. The model includes the Em parameter, which is defined as the maximum response of the system. The methods for Em estimation provide Em values not significantly higher than the maximum responses achieved by full agonists. However, it has been found that some classes of compounds, for instance superagonists and positive allosteric modulators, can increase the maximum response of a full agonist, implying that the true Em lies above these observed maxima and thereby casting doubt on the validity of such Em estimates. Because of the correlation between Em and the operational efficacy, τ, wrong Em estimates will yield wrong τ estimates.
In this presentation, the operational model of agonism and various methods for the simulation of allosteric modulation will be analyzed. Alternatives for curve fitting will be presented and discussed.
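For reference, the operational model of agonism (Black and Leff, 1983) on which this abstract builds is commonly written as below; since the observable asymptote of a concentration-response curve is Em·τ/(1+τ) rather than Em itself, Em and τ are strongly correlated, which is why a biased Em estimate propagates into τ.

```latex
% Operational model of agonism (Black & Leff, 1983)
E = \frac{E_m \, \tau \, [A]}{K_A + (1 + \tau)[A]},
\qquad
E_{\mathrm{max,obs}} = \frac{E_m \, \tau}{1 + \tau}
% [A]: agonist concentration; K_A: agonist-receptor dissociation constant;
% \tau: operational efficacy; E_m: maximal response of the system.
```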
|
M. Visani, Oriol Ramos Terrades, & Salvatore Tabbone. (2011). A Protocol to Characterize the Descriptive Power and the Complementarity of Shape Descriptors. IJDAR - International Journal on Document Analysis and Recognition, 14(1), 87–100.
Abstract: Most document analysis applications rely on the extraction of shape descriptors, which may be grouped into different categories, each category having its own advantages and drawbacks (O.R. Terrades et al. in Proceedings of ICDAR’07, pp. 227–231, 2007). In order to improve the richness of their description, many authors choose to combine multiple descriptors. Yet, most authors who propose a new descriptor content themselves with comparing its performance to that of a set of single state-of-the-art descriptors in a specific application context (e.g. symbol recognition, symbol spotting). This results in a proliferation of the shape descriptors proposed in the literature. In this article, we propose an innovative protocol, the originality of which is to be as independent of the final application as possible, and which relies on new quantitative and qualitative measures. We introduce two types of measures: the measures of the first type characterize the descriptive power of a descriptor (in terms of uniqueness, distinctiveness and robustness towards noise), while the second type characterizes the complementarity between multiple descriptors. Characterizing the complementarity of shape descriptors upstream is an alternative to the usual approach, where the descriptors to be combined are selected by trial and error based on the performance of the overall system. To illustrate the contribution of this protocol, we performed experimental studies using a set of descriptors and a set of symbols widely used by the community, namely the ART and SC descriptors and the GREC 2003 database.
Keywords: Document analysis; Shape descriptors; Symbol description; Performance characterization; Complementarity analysis
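The paper defines its own quantitative measures; as a loose, application-independent illustration of the complementarity idea, one crude proxy is the rank correlation between the pairwise-distance structures that two descriptors induce on the same symbol set. The sketch below is an assumption-laden stand-in, not the protocol's measures.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def complementarity(desc_a, desc_b):
    """Crude complementarity proxy between two shape descriptors.

    desc_a, desc_b: (n_symbols, dim) arrays of descriptor vectors
    computed on the same symbols. Low rank-correlation between the
    induced distance structures suggests the descriptors capture
    different aspects of shape, i.e. may combine well."""
    d_a = pdist(desc_a)      # condensed pairwise distances, descriptor A
    d_b = pdist(desc_b)      # same symbol pairs, descriptor B
    rho, _ = spearmanr(d_a, d_b)
    return 1.0 - rho         # higher = more complementary
```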
|
Oriol Ramos Terrades, Alejandro Hector Toselli, Nicolas Serrano, Veronica Romero, Enrique Vidal, & Alfons Juan. (2010). Interactive layout analysis and transcription systems for historic handwritten documents. In 10th ACM Symposium on Document Engineering (pp. 219–222).
Abstract: The amount of digitized legacy documents has risen dramatically over recent years, due mainly to the increasing number of on-line digital libraries publishing this kind of document, waiting to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct their results. In contrast, multimodal interactive-predictive approaches may allow users to participate in the process, helping the system to improve its overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection, and two multimodal interactive handwritten text transcription systems which use active learning and interactive-predictive technologies in the recognition process.
Keywords: Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis
|
Alberto Hidalgo, Ferran Poveda, Enric Marti, Debora Gil, Albert Andaluz, Francesc Carreras, et al. (2012). Evidence of continuous helical structure of the cardiac ventricular anatomy assessed by diffusion tensor imaging magnetic resonance multiresolution tractography. ECR - European Radiology, 3(1), 361–362.
Abstract: A deep understanding of the myocardial structure linking the morphology and function of the heart would unravel crucial knowledge for medical and surgical clinical procedures and studies. Diffusion tensor MRI provides a discrete measurement of the 3D arrangement of myocardial fibres by the observation of local anisotropic diffusion of water molecules in biological tissues. In this work, we present a multiscale visualisation technique based on DT-MRI streamlining capable of uncovering additional properties of the architectural organisation of the heart. Methods and Materials: We selected the Johns Hopkins University (JHU) Canine Heart Dataset, where the long-axis cardiac plane is aligned with the scanner's Z-axis. The equipment included a 4-element phased-array coil on a 1.5 T scanner. For DTI acquisition, a 3D-FSE sequence was applied. We used 200 seeds for full-scale tractography, while we applied a MIP mapping technique for the simplified tractographic reconstruction. In this case, we reduced the dimensions of each 3D DTI volume by two orders of magnitude before streamlining.
Our simplified tractographic reconstruction method keeps the main geometric features of fibres, allowing for an easier identification of their global morphological disposition, including the ventricular basal ring. Moreover, we noticed a clearly visible helical disposition of the myocardial fibres, in line with the helical myocardial band ventricular structure described by Torrent-Guasp. Finally, our simplified visualisation with single tracts identifies the main segments of the helical ventricular architecture.
DT-MRI makes possible the identification of a continuous helical architecture of the myocardial fibres, which validates Torrent-Guasp’s helical myocardial band ventricular anatomical model.
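The abstract reports reducing each DTI volume by two orders of magnitude before streamlining. A minimal block-averaging downsampler in NumPy could look like the following; the array layout and factor are assumptions, and naive coefficient averaging ignores the log-Euclidean resampling usually preferred for tensor fields.

```python
import numpy as np

def downsample_tensor_volume(dti, factor=5):
    """Block-average a DTI volume of shape (X, Y, Z, 6), holding the
    six unique tensor coefficients, by `factor` along each spatial axis.

    Note: naive averaging of tensor coefficients is only a rough
    approximation; log-Euclidean averaging is usually preferred."""
    x, y, z, c = dti.shape
    # Crop to a multiple of the factor on each spatial axis.
    x, y, z = (x // factor) * factor, (y // factor) * factor, (z // factor) * factor
    v = dti[:x, :y, :z]
    v = v.reshape(x // factor, factor, y // factor, factor, z // factor, factor, c)
    return v.mean(axis=(1, 3, 5))
```

A factor of about 5 per axis cuts the voxel count by roughly two orders of magnitude (125x), in the spirit of the reduction described above.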
|
Jürgen Brauer, Wenjuan Gong, Jordi Gonzalez, & Michael Arens. (2011). On the Effect of Temporal Information on Monocular 3D Human Pose Estimation. In 2nd IEEE International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams (pp. 906–913).
Abstract: We address the task of estimating 3D human poses from monocular camera sequences. Many works make use of multiple consecutive frames for the estimation of a 3D pose in a frame. Although such an approach should ease the pose estimation task substantially, since multiple consecutive frames in principle allow 2D projection ambiguities to be resolved, it has not yet been investigated systematically how much 3D pose estimates improve when using multiple consecutive frames as opposed to single-frame information. In this paper we analyze the difference in quality of 3D pose estimates based on different numbers of consecutive frames from which 2D pose estimates are available. We validate the use of temporal information on the two major families of approaches to human pose estimation: modeling and learning approaches. The results of our experiments show that both learning and modeling approaches benefit from using multiple frames as opposed to single-frame input, but that the benefit is small when the 2D pose estimates are of high quality in terms of precision.
|
G.D. Evangelidis, Ferran Diego, Joan Serrat, & Antonio Lopez. (2011). Slice Matching for Accurate Spatio-Temporal Alignment. In ICCV Workshop on Visual Surveillance.
Abstract: Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly- or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise and thus allows their differences to be highlighted automatically. This primarily aims at visual surveillance, but the method can be adopted as-is by other related video applications, like object transfer (augmented reality) or high dynamic range video. We build upon a slice matching scheme to first synchronize the sequences, while we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare with related previous works.
Keywords: video alignment
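The slice-matching scheme itself is defined in the paper; for intuition only, temporal synchronization from per-frame descriptors can be sketched with a standard dynamic-time-warping alignment like the one below (an illustrative stand-in, not the authors' algorithm).

```python
import numpy as np

def synchronize(desc_a, desc_b):
    """DTW-style temporal mapping between two sequences.

    desc_a: (m, d), desc_b: (n, d) per-frame descriptors.
    Returns a list of matched frame index pairs (i, j)."""
    m, n = len(desc_a), len(desc_b)
    # Pairwise descriptor distances between all frame pairs.
    cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    acc = np.full((m + 1, n + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # skip in a
                                                 acc[i, j - 1],      # skip in b
                                                 acc[i - 1, j - 1])  # match
    # Backtrack the optimal warping path.
    path, i, j = [], m, n
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```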
|
Gemma Roig, Xavier Boix, F. de la Torre, Joan Serrat, & C. Vilella. (2011). Hierarchical CRF with product label spaces for parts-based models. In IEEE Conference on Automatic Face and Gesture Recognition (pp. 657–664).
Abstract: Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction and image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRF) have been successfully embedded into a discriminative parts-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex part interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset.
Keywords: Shape; Computational modeling; Principal component analysis; Random variables; Color; Upper bound; Facial features
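As a toy illustration of the product-label-space idea (not the paper's actual construction), composite labels for two parts can be formed as a Cartesian product and pruned to keep inference tractable:

```python
import itertools
import numpy as np

def product_labels(scores_u, scores_v, keep=50):
    """Build a pruned product label space for two parts.

    scores_u, scores_v: 1D arrays of unary scores over each part's
    candidate labels (e.g. candidate image locations).
    Returns the `keep` best (label_u, label_v) composite labels,
    scored here by the sum of unaries (pairwise terms omitted)."""
    pairs = list(itertools.product(range(len(scores_u)), range(len(scores_v))))
    joint = np.array([scores_u[u] + scores_v[v] for u, v in pairs])
    best = np.argsort(joint)[::-1][:keep]   # highest joint scores first
    return [pairs[k] for k in best]
```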
|
Albert Andaluz. (2012). Harmonic Phase Flow: User's guide. Barcelona: CVC.
Abstract: HPF is a plugin for the computation of clinical scores under OsiriX.
This manual provides a basic guide for experienced clinical staff. Chapter 1 provides the theoretical background on which this plugin is based.
Next, in Chapter 2 we provide basic instructions for installing and uninstalling the plugin. Chapter 3 shows a step-by-step scenario for computing clinical scores from tagged-MRI images with HPF. Finally, in Chapter 4 we provide a quick guide for plugin developers.
|
Fahad Shahbaz Khan, Joost Van de Weijer, & Maria Vanrell. (2012). Modulating Shape Features by Color Attention for Object Recognition. IJCV - International Journal of Computer Vision, 98(1), 49–64.
Abstract: Bag-of-words based image representation is a successful approach for object recognition. Generally, the subsequent stages of the process: feature detection, feature description, vocabulary construction and image representation, are performed independently of the intended object classes to be detected. In such a framework, it was found that the combination of different image cues, such as shape and color, often obtains below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separately processing the shape and color cues and combining them by modulating the shape features with category-specific color attention. Color is used to compute bottom-up and top-down attention maps. Subsequently, these color attention maps are used to modulate the weights of the shape features. In regions with higher attention, shape features are given more weight than in regions with low attention. We compare our approach with existing methods that combine color and shape cues on five data sets containing varied importance of both cues, namely, Soccer (color predominance), Flower (color and shape parity), PASCAL VOC 2007 and 2009 (shape predominance) and Caltech-101 (color co-interference). The experiments clearly demonstrate that on all five data sets our proposed framework significantly outperforms existing methods for combining color and shape information.
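In bag-of-words terms, the modulation described above amounts to weighting each sampled shape feature by the color attention value at its image location before histogramming. A minimal sketch, with all names assumed:

```python
import numpy as np

def attention_weighted_bow(shape_words, positions, attention_map, vocab_size):
    """Build a color-attention-modulated bag-of-words histogram.

    shape_words:   (n,) visual-word index of each sampled shape feature
    positions:     (n, 2) integer (row, col) of each sample
    attention_map: (H, W) per-pixel color attention in [0, 1]"""
    hist = np.zeros(vocab_size)
    # Attention value at each feature location becomes its vote weight.
    weights = attention_map[positions[:, 0], positions[:, 1]]
    np.add.at(hist, shape_words, weights)
    s = hist.sum()
    return hist / s if s > 0 else hist
```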
|
Fahad Shahbaz Khan, Joost Van de Weijer, Andrew Bagdanov, & Maria Vanrell. (2011). Portmanteau Vocabularies for Multi-Cue Image Representation. In 25th Annual Conference on Neural Information Processing Systems.
Abstract: We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.
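The cue-binding property mentioned above comes from indexing the joint (shape, color) word rather than histogramming each cue separately. Before the information-theoretic compression step, the raw product vocabulary can be formed as in this illustrative sketch (not the paper's pipeline):

```python
import numpy as np

def product_vocabulary_hist(shape_words, color_words, n_shape, n_color):
    """Histogram over the product (portmanteau) vocabulary.

    Each feature votes for the compound word (s, c), encoded as a
    single index s * n_color + c, so shape and color stay bound
    at the feature level. Size is n_shape * n_color before compression."""
    compound = shape_words * n_color + color_words
    hist = np.bincount(compound, minlength=n_shape * n_color)
    return hist / max(hist.sum(), 1)
```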
|
Naila Murray, Sandra Skaff, Luca Marchesotti, & Florent Perronnin. (2011). Towards Automatic Concept Transfer. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering (pp. 167–176). ACM Press.
Abstract: This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.
Keywords: chromatic modeling, color concepts, color transfer, concept transfer
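The paper maps chromatic models via the Earth-Mover's Distance; as a much-simplified stand-in, a one-to-one assignment between the input image's color clusters and the concept palette can be computed with the Hungarian algorithm. Everything below is an assumption for illustration, not the authors' EMD formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def palette_mapping(image_clusters, concept_clusters):
    """Map each input color cluster (n, 3) in Lab space to a
    concept cluster (m, 3) by minimum total color displacement.

    This one-to-one assignment is a crude stand-in for the
    many-to-many flows an Earth-Mover's Distance solution yields."""
    diff = image_clusters[:, None, :] - concept_clusters[None, :, :]
    cost = np.linalg.norm(diff, axis=2)       # pairwise Lab distances
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))
```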
|
Jordi Roca, C. Alejandro Parraga, & Maria Vanrell. (2011). Categorical Focal Colours are Structurally Invariant Under Illuminant Changes. In European Conference on Visual Perception (p. 196). Perception, 40.
Abstract: The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical “focal” colours and their interrelations is under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of the colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003 Proceedings of the National Academy of Sciences of the USA 100 9085–9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adaptation state as a 3D interconnected structure (graph) in Lab space. The graph nodes were the subject's focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better than the structures of the physical “ideal” colours under illuminant changes.
|
N. Serrano, L. Tarazon, D. Perez, Oriol Ramos Terrades, & S. Juan. (2010). The GIDOC Prototype. In 10th International Workshop on Pattern Recognition in Information Systems (pp. 82–89).
Abstract: Transcription of handwritten text in (old) documents is an important, time-consuming task for digital libraries. It might be carried out by first processing all document images off-line, and then manually supervising system transcriptions to edit incorrect parts. However, current techniques for automatic page layout analysis, text line detection and handwriting recognition are still far from perfect, and thus post-editing system output is not clearly better than simply ignoring it.
A more effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the user and the user is assisted by the system to complete the transcription task as efficiently as possible. Following this approach, a system prototype called GIDOC (Gimp-based Interactive transcription of old text DOCuments) has been developed to provide user-friendly, integrated support for interactive-predictive layout analysis, line detection and handwriting transcription.
GIDOC is designed to work with (large) collections of homogeneous documents, that is, of similar structure and writing styles. They are annotated sequentially, by (partially) supervising hypotheses drawn from statistical models that are constantly updated with an increasing number of available annotated documents. And this is done at different annotation levels. For instance, at the level of page layout analysis, GIDOC uses a novel text block detection method in which conventional, memoryless techniques are improved with a “history” model of text block positions. Similarly, at the level of text line image transcription, GIDOC includes a handwriting recognizer which is steadily improved with a growing number of (partially) supervised transcriptions.
|
S. Chanda, Umapada Pal, & Oriol Ramos Terrades. (2009). Word-Wise Thai and Roman Script Identification. TALIP - ACM Transactions on Asian Language Information Processing, 1–21.
Abstract: In some Thai documents, a single text line of a printed document page may contain words of both Thai and Roman scripts. For the Optical Character Recognition (OCR) of such a document page, it is better first to identify the Thai and Roman script portions and then to use the individual OCR systems of the respective scripts on these identified portions. In this article, an SVM-based method is proposed for the identification of word-wise printed Roman and Thai scripts from a single line of a document page. Here, the document is first segmented into lines, and then the lines are segmented into character groups (words). In the proposed scheme, we identify the script of a character group by combining different character features obtained from structural shape, profile behavior, component overlapping information, topological properties, the water reservoir concept, etc. Based on an experiment on 10,000 words, we obtained 99.62% script identification accuracy with the proposed scheme.
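Feature extraction aside, the word-level decision reduces to a two-class SVM over per-word feature vectors. An illustrative scikit-learn sketch, with the feature matrix assumed given and the kernel choice a guess rather than the paper's setting:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_script_classifier(X, y):
    """Train a word-wise script classifier.

    X: (n_words, n_features) structural/profile/topological features
       extracted per segmented word; y: 0 = Thai, 1 = Roman."""
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, y)
    return clf

# Usage: script = train_script_classifier(X_train, y_train).predict(X_word)
```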
|