|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2012). Towards Automatic Polyp Detection with a Polyp Appearance Model. PR - Pattern Recognition, 45(9), 3166–3182.
Abstract: This work aims at automatic polyp detection using a model of polyp appearance in the context of the analysis of colonoscopy videos. Our method consists of three stages: region segmentation, region description and region classification. The performance of our region segmentation method guarantees that if a polyp is present in the image, it will be exclusively and totally contained in a single region. The output of the algorithm also defines which regions can be considered non-informative. We define as our region descriptor the novel Sector Accumulation-Depth of Valleys Accumulation (SA-DOVA), which provides a necessary but not sufficient condition for polyp presence. Finally, we classify our segmented regions according to the maximal values of the SA-DOVA descriptor. Our preliminary classification results are promising, especially when classifying those parts of the image that do not contain a polyp.
Keywords: Colonoscopy, Polyp Detection, Region Segmentation, SA-DOVA descriptor
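The classification stage described above reduces to thresholding the maximal value of the region descriptor. A minimal sketch, assuming a precomputed SA-DOVA vector per region; the function name and threshold value are hypothetical, not from the paper:

```python
import numpy as np

def classify_region(sa_dova, threshold=0.5):
    """Classify a segmented region as polyp candidate or not.

    The paper classifies regions by the maximal value of the SA-DOVA
    descriptor (a vector of depth-of-valleys accumulations over angular
    sectors); `threshold` here is a hypothetical tuning parameter.
    """
    return bool(np.max(sa_dova) >= threshold)

# Toy descriptors: strong accumulation in one sector vs. uniformly low.
polyp_like = np.array([0.1, 0.2, 0.9, 0.3])
background = np.array([0.10, 0.05, 0.12, 0.08])
```

Because the descriptor is only a necessary condition, a real system would follow this test with further verification rather than accept every above-threshold region.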
|
|
|
Naila Murray, Sandra Skaff, Luca Marchesotti, & Florent Perronnin. (2012). Towards automatic and flexible concept transfer. CG - Computers and Graphics, 36(6), 622–634.
Abstract: This paper introduces a novel approach to automatic, yet flexible, image concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The presented method modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This method is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. Our framework is flexible for two reasons. First, the user may select one of two modalities to map input image chromaticities to target concept chromaticities depending on the level of photo-realism required. Second, the user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed method uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. Results show that our approach yields transferred images which effectively represent concepts as confirmed by a user study.
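The single-parameter intensity control can be illustrated with a simple linear blend between input and target concept chromaticities. This is an illustrative simplification of the paper's mapping, and all names and values below are hypothetical:

```python
import numpy as np

def transfer_chromaticity(input_chroma, target_chroma, intensity=1.0):
    """Blend input chromaticities toward a concept's target chromaticities.

    `intensity` in [0, 1] plays the role of the single user parameter
    described in the paper: 0 keeps the input unchanged, 1 applies the
    full mapping. The linear blend itself is a simplification.
    """
    input_chroma = np.asarray(input_chroma, dtype=float)
    target_chroma = np.asarray(target_chroma, dtype=float)
    return (1.0 - intensity) * input_chroma + intensity * target_chroma

pixel = [0.40, 0.35]       # toy (x, y) chromaticity of an input pixel
romantic = [0.30, 0.25]    # hypothetical target chromaticity for a concept
half = transfer_chromaticity(pixel, romantic, intensity=0.5)
```

A design point worth noting: because the blend is per-pixel and monotone in `intensity`, the user can explore the transfer strength interactively without re-running the clustering step.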
|
|
|
Mohammad Ali Bagheri, Qigang Gao, & Sergio Escalera. (2012). Three-Dimensional Design of Error Correcting Output Codes. In 8th International Conference on Machine Learning and Data Mining (pp. 29–).
|
|
|
Marçal Rusiñol, & Josep Llados. (2012). The Role of the Users in Handwritten Word Spotting Applications: Query Fusion and Relevance Feedback. In 13th International Conference on Frontiers in Handwriting Recognition (pp. 55–60).
Abstract: In this paper we present the importance of including the user in the loop in a handwritten word spotting framework. Several off-the-shelf query fusion and relevance feedback strategies have been tested in the handwritten word spotting context. The increase in terms of precision when the user is included in the loop is assessed using two datasets of historical handwritten documents and a baseline word spotting approach based on a bag-of-visual-words model.
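One off-the-shelf relevance-feedback strategy of the kind evaluated here is the classical Rocchio update on bag-of-visual-words histograms. A minimal sketch; the parameter values are conventional defaults, not taken from the paper:

```python
import numpy as np

def rocchio_feedback(query, relevant, nonrelevant,
                     alpha=1.0, beta=0.75, gamma=0.15):
    """Refine a bag-of-visual-words query vector from user feedback.

    Standard Rocchio update: move the query toward the centroid of the
    word images the user marked relevant, and away from the centroid of
    those marked non-relevant.
    """
    query = np.asarray(query, dtype=float)
    rel = np.mean(relevant, axis=0) if len(relevant) else 0.0
    nonrel = np.mean(nonrelevant, axis=0) if len(nonrelevant) else 0.0
    return alpha * query + beta * rel - gamma * nonrel

# Toy two-word vocabulary: the user marks one retrieved item relevant.
refined = rocchio_feedback([1.0, 0.0], relevant=[[0.0, 1.0]], nonrelevant=[])
```

In a spotting loop, the refined vector simply replaces the original query for the next ranking pass.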
|
|
|
David Masip, Alexander Todorov, & Jordi Vitria. (2012). The Role of Facial Regions in Evaluating Social Dimensions. In Andrea Fusiello, Vittorio Murino, & Rita Cucchiara (Eds.), 12th European Conference on Computer Vision – Workshops and Demonstrations (Vol. 7584, pp. 210–219). LNCS. Springer Berlin Heidelberg.
Abstract: Facial trait judgments are an important information cue for people. Recent works in the Psychology field have stated the basis of face evaluation, defining a set of traits that we evaluate from faces (e.g. dominance, trustworthiness, aggressiveness, attractiveness, threat or intelligence, among others). We rapidly infer information from others' faces; usually after a short period of time (< 1000 ms) we perceive a certain degree of dominance or trustworthiness of another person from the face. Although these perceptions are not necessarily accurate, they influence many important social outcomes (such as election results or court decisions). This topic has also attracted the attention of Computer Vision scientists, and recently a computational model to automatically predict trait evaluations from faces has been proposed. These systems try to mimic human perception by applying machine learning classifiers to a set of labeled data. In this paper we perform an experimental study on the specific facial features that trigger the social inferences. Using previous results from the literature, we propose to use simple similarity maps to evaluate which regions of the face most influence the trait inferences. The correlation analysis is performed using appearance only, and the results from the experiments suggest that each trait is correlated with specific facial characteristics.
Keywords: Workshops and Demonstrations
|
|
|
Aura Hernandez-Sabate, & Debora Gil. (2012). The Benefits of IVUS Dynamics for Retrieving Stable Models of Arteries. In Yasuhiro Honda (Ed.), Intravascular Ultrasound (pp. 185–206). Intech.
|
|
|
Susana Alvarez, & Maria Vanrell. (2012). Texton theory revisited: a bag-of-words approach to combine textons. PR - Pattern Recognition, 45(12), 4312–4325.
Abstract: The aim of this paper is to revisit an old theory of texture perception and update its computational implementation by extending it to colour. With this in mind we try to capture the optimality of perceptual systems. This is achieved in the proposed approach by sharing well-known early stages of the visual processes and extracting low-dimensional features that perfectly encode adequate properties for a large variety of textures without needing further learning stages. We propose several descriptors in a bag-of-words framework that are derived from different quantisation models onto the feature spaces. Our perceptual features are directly given by the shape and colour attributes of image blobs, which are the textons. In this way we avoid learning visual words and directly build the vocabularies on these low-dimensional texton spaces. The main differences between the proposed descriptors rely on how co-occurrence of blob attributes is represented in the vocabularies. Our approach overcomes the current state of the art in colour texture description, as proved in several experiments on large texture datasets.
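Building a vocabulary directly on a low-dimensional texton space, without learned visual words, can be sketched as a joint histogram over quantised blob attributes. The bin count and the attribute names are hypothetical, not the paper's:

```python
import numpy as np

def texton_histogram(blob_attrs, bins=4):
    """Joint histogram over quantised blob attributes.

    Each attribute axis (e.g. blob scale and hue, scaled to [0, 1]) is
    quantised into `bins` equal cells; the vocabulary is the fixed bin
    grid, so no clustering or word learning is needed. Co-occurrences
    of attribute values are counted jointly, then normalised.
    """
    blob_attrs = np.asarray(blob_attrs, dtype=float)
    hist, _ = np.histogramdd(blob_attrs, bins=bins,
                             range=[(0.0, 1.0)] * blob_attrs.shape[1])
    return hist.ravel() / hist.sum()

# Toy blobs with two attributes already normalised to [0, 1].
blobs = [[0.10, 0.90], [0.15, 0.85], [0.80, 0.20]]
descriptor = texton_histogram(blobs)
```

Different vocabularies in the paper's sense would correspond to different choices of joint versus marginal quantisation over these attribute axes.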
|
|
|
Thanh Ha Do, Salvatore Tabbone, & Oriol Ramos Terrades. (2012). Text/graphic separation using a sparse representation with multi-learned dictionaries. In 21st International Conference on Pattern Recognition.
Abstract: In this paper, we propose a new approach to extract text regions from graphical documents. In our method, we first empirically construct two sequences of learned dictionaries for the text and graphical parts respectively. Then, we compute the sparse representations of non-overlapping document patches of different sizes in these learned dictionaries. Based on these representations, each patch can be classified into the text or graphic category by comparing its reconstruction errors. Same-sized patches in one category are then merged together to define the corresponding text or graphic layers, which are combined to create the final text/graphic layer. Finally, in a post-processing step, text regions are further filtered out by using some learned thresholds.
Keywords: Graphics Recognition; Layout Analysis; Document Understanding
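The compare-the-reconstruction-errors step can be sketched as follows, with ordinary least squares standing in for the paper's sparse coder (only the classification rule is being illustrated; dictionary contents and names are hypothetical):

```python
import numpy as np

def classify_patch(patch, dict_text, dict_graphic):
    """Assign a patch to the text or graphic class by reconstruction error.

    The paper computes sparse representations in two learned
    dictionaries; here least squares stands in for the sparse coder,
    since only the error-comparison rule is being shown.
    """
    def error(D):
        coef, *_ = np.linalg.lstsq(D, patch, rcond=None)
        return float(np.linalg.norm(patch - D @ coef))
    return "text" if error(dict_text) <= error(dict_graphic) else "graphic"

# Toy dictionaries: 8-dimensional patches, 3 atoms each (random stand-ins).
rng = np.random.default_rng(0)
dict_text = rng.standard_normal((8, 3))
dict_graphic = rng.standard_normal((8, 3))
patch = dict_text @ np.array([1.0, -0.5, 2.0])  # lies in the text dictionary's span
```

With a genuine sparse coder (e.g. orthogonal matching pursuit under a fixed sparsity level), the rule is identical; only the coefficient computation changes.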
|
|
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2012). Text line extraction in graphical documents using background and foreground. IJDAR - International Journal on Document Analysis and Recognition, 15(3), 227–241.
Abstract: In graphical documents (e.g., maps, engineering drawings) and artistic documents, text lines are annotated in multiple orientations or in a curvilinear way to illustrate different locations or symbols. For the optical character recognition of such documents, individual text lines need to be extracted. In this paper, we propose a novel method to segment such text lines, based on the foreground and background information of the text components. To effectively utilize the background information, a water reservoir concept is used. In the proposed scheme, individual components are first detected and grouped into character clusters in a hierarchical way using size and positional information. Next, the clusters are extended on both extreme sides to determine potential candidate regions. Finally, with the help of these candidate regions, individual lines are extracted. Experimental results are presented on different datasets of graphical documents, camera-based warped documents, noisy images containing seals, etc. The results demonstrate that our approach is robust and invariant to the size and orientation of the text lines present in the document.
|
|
|
Michal Drozdzal, Petia Radeva, Santiago Segui, Laura Igual, Carolina Malagelada, Fernando Azpiroz, et al. (2012). System and Method for Improving a Discriminative Model.
|
|
|
Michal Drozdzal, Petia Radeva, Santiago Segui, Laura Igual, Carolina Malagelada, Fernando Azpiroz, et al. (2012). System and method for automatic detection of in vivo contraction video sequences.
Publication date: 2012/3/8.
|
|
|
Cesar Isaza, Joaquin Salas, & Bogdan Raducanu. (2012). Synthetic ground truth dataset to detect shadow cast by static objects in outdoor. In 1st International Workshop on Visual Interfaces for Ground Truth Collection in Computer Vision Applications (art. 11). ACM.
Abstract: In this paper, we propose a precise synthetic ground truth dataset to study the problem of detecting shadows cast by static objects in outdoor environments over extended periods of time (days). For our dataset, we have created a virtual scenario using rendering software. To increase the realism of the simulated environment, we have defined the scenario at a precise geographical location. In our dataset the sun is by far the main illumination source. The sun position during the simulation takes into consideration factors related to the geographical location, such as the latitude, longitude, elevation above sea level, and the precise day and time of image capture. In our simulation the camera remains fixed. The dataset consists of seven days of simulation, from 10:00am to 5:00pm, with images captured every 10 seconds. The shadows' ground truth is automatically computed by the rendering software.
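The dependence of the sun position on geographic location, day, and time that the dataset models can be illustrated with a textbook solar-elevation approximation (the renderer itself uses a more precise ephemeris; this sketch ignores longitude/equation-of-time corrections and works in local solar time):

```python
import math

def solar_elevation(latitude_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle, in degrees.

    Declination is approximated from the day of year, the hour angle
    from local solar time (12.0 = solar noon); elevation then follows
    from the standard spherical relation
    sin(el) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(hour_angle).
    """
    decl = math.radians(-23.44) * math.cos(
        math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(latitude_deg)
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_el))
```

Shadow length and direction for a static object follow directly from this angle, which is why the capture window (10:00am to 5:00pm) sweeps a wide range of shadow geometries each day.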
|
|
|
Olivier Penacchio, Laura Dempere-Marco, & Xavier Otazu. (2012). Switching off brightness induction through induction-reversed images. In Perception (Vol. 41, 208).
Abstract: Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Although V1 is traditionally regarded as an area mostly responsive to retinal information, neurophysiological evidence suggests that it may explicitly represent brightness information. In this work, we investigate possible neural mechanisms underlying brightness induction. To this end, we consider the model by Z. Li (1999, Network: Computation in Neural Systems, 10, 187–212), which is constrained by neurophysiological data and focuses on the part of V1 responsible for contextual influences. This model, which has proven to account for phenomena such as contour detection and preattentive segmentation, shares with brightness induction the relevant effect of contextual influences. Importantly, the input to our network model derives from a complete multiscale and multiorientation wavelet decomposition, which makes it possible to recover an image reflecting the perceived luminance and successfully accounts for well-known psychophysical effects in both static and dynamic contexts. By further considering inverse problem techniques we define induction-reversed images: given a target image, we build an image whose perceived luminance matches the actual luminance of the original stimulus, thus effectively canceling out brightness induction effects. We suggest that induction-reversed images may help remove undesired perceptual effects and can find potential applications in fields such as radiological image interpretation.
|
|
|
Laura Igual, Joan Carles Soliva, Antonio Hernandez, Sergio Escalera, Oscar Vilarroya, & Petia Radeva. (2012). Supervised Brain Segmentation and Classification in Diagnostic of Attention-Deficit/Hyperactivity Disorder. In High Performance Computing and Simulation, International Conference on (pp. 182–187). IEEE Xplore.
Abstract: This paper presents an automatic method for external and internal segmentation of the caudate nucleus in Magnetic Resonance Images (MRI) based on statistical and structural machine learning approaches. This method is applied to Attention-Deficit/Hyperactivity Disorder (ADHD) diagnosis. The external segmentation method adapts the Graph Cut energy-minimization model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus. In particular, new energy function data and boundary potentials are defined and a supervised energy term based on contextual brain structures is added. Furthermore, the internal segmentation method learns a classifier based on shape features of the Region of Interest (ROI) in MRI slices. The results show accurate external and internal caudate segmentation on a real data set, and performance of the ADHD diagnostic test similar to that obtained with manual annotation.
|
|
|
Rui Hua, Oriol Pujol, Francesco Ciompi, Marina Alberti, Simone Balocco, J. Mauri, et al. (2012). Stent Strut Detection by Classifying a Wide Set of IVUS Features. In Computer Assisted Stenting Workshop.
|
|