H. Emrah Tasli, Cevahir Çigla, Theo Gevers, & A. Aydin Alatan. (2013). Super pixel extraction via convexity induced boundary adaptation. In 14th IEEE International Conference on Multimedia and Expo (pp. 1–6).
Abstract: This study presents an efficient super-pixel extraction algorithm with major contributions to the state of the art in terms of accuracy and computational complexity. Segmentation accuracy is improved through convexity-constrained geodesic distance utilization, while computational efficiency is achieved by replacing complete region processing with a boundary adaptation idea. Starting from uniformly distributed, equal-sized rectangular super-pixels, region boundaries are iteratively adapted to intensity edges by assigning boundary pixels to the most similar neighboring super-pixels. At each iteration, super-pixel regions are updated and hence progressively converge to compact pixel groups. Experimental results with state-of-the-art comparisons validate the performance of the proposed technique in terms of both accuracy and speed.
|
H. Emrah Tasli, Jan van Gemert, & Theo Gevers. (2013). Spot the differences: from a photograph burst to the single best picture. In 21st ACM International Conference on Multimedia (pp. 729–732).
Abstract: With the rise of the digital camera, people nowadays typically take several near-identical photos of the same scene to maximize the chances of a good shot. This paper proposes a user-friendly tool for exploring a personal photo gallery and selecting, or even creating, the best shot of a scene from its multiple alternatives. This functionality is realized through a graphical user interface where the best viewpoint can be selected from a generated panorama of the scene. Once the viewpoint is selected, the user can explore possible alternatives coming from the other images. Using this tool, one can browse a photo gallery efficiently. Moreover, additional compositions from other images are also possible. With such compositions, one can go from a burst of photographs to the single best one. Even playful compositions, in which a person is duplicated within the same image, are possible with the proposed tool.
|
Sezer Karaoglu, Jan van Gemert, & Theo Gevers. (2013). Con-text: text detection using background connectivity for fine-grained object classification. In 21st ACM International Conference on Multimedia (pp. 757–760).
|
Debora Gil, Agnes Borras, Manuel Ballester, Francesc Carreras, Ruth Aris, Manuel Vazquez, et al. (2011). MIOCARDIA: Integrating cardiac function and muscular architecture for a better diagnosis. In Association for Computing Machinery (Ed.), 14th International Symposium on Applied Sciences in Biomedical and Communication Technologies. Barcelona, Spain.
Abstract: A deep understanding of the myocardial structure of the heart would unravel crucial knowledge for clinical and medical procedures. The MIOCARDIA project is a multidisciplinary project in cooperation with l'Hospital de la Santa Creu i de Sant Pau, Clinica la Creu Blanca and the Barcelona Supercomputing Center. The ultimate goal of this project is to define a computational model of the myocardium. The model takes into account the deep interrelation between the anatomy and the mechanics of the heart. The paper explains the workflow of the MIOCARDIA project. It also introduces a multiresolution reconstruction technique based on DT-MRI streamlining for simplified global myocardial model generation. Our reconstructions can restore the most complex myocardial structures and provide evidence of a global helical organization.
|
Ivo Everts, Jan van Gemert, & Theo Gevers. (2013). Evaluation of Color STIPs for Human Action Recognition. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 2850–2857).
Abstract: This paper is concerned with recognizing realistic human actions in videos based on spatio-temporal interest points (STIPs). Existing STIP-based action recognition approaches operate on intensity representations of the image data. Because of this, these approaches are sensitive to disturbing photometric phenomena such as highlights and shadows. Moreover, valuable information is neglected by discarding chromaticity from the photometric representation. These issues are addressed by Color STIPs. Color STIPs are multi-channel reformulations of existing intensity-based STIP detectors and descriptors, for which we consider a number of chromatic representations derived from the opponent color space. This enhanced modeling of appearance improves the quality of subsequent STIP detection and description. Color STIPs are shown to substantially outperform their intensity-based counterparts on the challenging UCF Sports, UCF11 and UCF50 action recognition benchmarks. Moreover, the results show that color STIPs are currently the single best low-level feature choice for STIP-based approaches to human action recognition.
|
Fares Alnajar, Theo Gevers, Roberto Valenti, & Sennay Ghebreab. (2013). Calibration-free Gaze Estimation using Human Gaze Patterns. In 15th IEEE International Conference on Computer Vision (pp. 137–144).
Abstract: We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look [12]. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering the fact that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators.
|
Hamdi Dibeklioglu, Albert Ali Salah, & Theo Gevers. (2013). Like Father, Like Son: Facial Expression Dynamics for Kinship Verification. In 15th IEEE International Conference on Computer Vision (pp. 1497–1504).
Abstract: Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics in this problem. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and verify that it is indeed possible to recognize kinship by resemblance of facial expressions. The proposed method is tested on different kin relationships. On average, 72.89% verification accuracy is achieved on spontaneous smiles.
|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2011). Current Challenges on Polyp Detection in Colonoscopy Videos: From Region Segmentation to Region Classification. A Pattern Recognition-based Approach. In K. Djemal (Ed.), 2nd International Workshop on Medical Image Analysis and Description for Diagnosis Systems (pp. 62–71). SciTePress.
Abstract: In this paper we present our approach to real-time polyp detection in colonoscopy videos. Our method consists of three stages: Image Segmentation, Region Description and Image Classification. Taking into account the constraints of our project, we introduce our segmentation system, which is based on a model of polyp appearance that we have defined after observing real colonoscopy videos. The output of this stage will ideally be a low number of regions, one of which should cover the whole polyp region (if there is one in the image). These regions will then be described in terms of features and, via a machine learning scheme, classified according to the values of the features used in their description. Although we are still in the early stages of the project, we present some preliminary segmentation results that indicate we are heading in a good direction.
Keywords: Medical Imaging, Colonoscopy, Pattern Recognition, Segmentation, Polyp Detection, Region Description, Machine Learning, Real-time.
|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2011). A Region Segmentation Method for Colonoscopy Images Using a Model of Polyp Appearance. In Jordi Vitria, Joao Miguel Raposo, & Mario Hernandez (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 134–143). LNCS.
Abstract: This work aims at the segmentation of colonoscopy images into a minimum number of informative regions. Our method performs in such a way that, if a polyp is present in the image, it will be exclusively and totally contained in a single region. This result can be used in later stages to classify regions as polyp-containing candidates. The output of the algorithm also defines which regions can be considered as non-informative. The algorithm starts with a high number of initial regions and merges them taking into account the model of polyp appearance obtained from available data. The results show that our segmentations of polyp regions are more accurate than those of state-of-the-art methods.
Keywords: Colonoscopy, Polyp Detection, Region Merging, Region Segmentation.
|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2011). Integration of Valley Orientation Distribution for Polyp Region Identification in Colonoscopy. In MICCAI 2011 Workshop on Computational and Clinical Applications in Abdominal Imaging (Vol. 6668, pp. 76–83). Lecture Notes in Computer Science. Springer.
Abstract: This work presents a region descriptor based on the integration of the information provided by the depth of valleys image. The depth of valleys image builds on the presence of intensity valleys around polyps due to the image acquisition process. Our proposed method consists of defining, for each point, a series of radial sectors around it and then accumulating the maxima of the depth of valleys image only if the orientation of the intensity valley coincides with the orientation of the corresponding sector. We apply our descriptor to a prior segmentation of the images and we present promising results on polyp detection, outperforming other approaches that also integrate depth of valleys information.
|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2011). Depth of Valleys Accumulation Algorithm for Object Detection. In 14th Congrès Català en Intel·ligencia Artificial (Vol. 1, pp. 71–80).
Abstract: This work aims at detecting in which regions the objects in an image lie, by using information about the intensity valleys that appear to surround objects in images where the light source is in the same direction as the camera. We present our depth of valleys accumulation method, which consists of two stages: first, the definition of the depth of valleys image, which combines the output of a ridges and valleys detector with the morphological gradient to measure how deep a point lies inside a valley; and second, an algorithm that marks as interior to objects those points which fall inside complete or incomplete boundaries in the depth of valleys image. To evaluate the performance of our method we have tested it on several application domains. Our results on object region identification are promising, especially in the field of polyp detection in colonoscopy videos, and we also show its applicability in different areas.
Keywords: Object Recognition, Object Region Identification, Image Analysis, Image Processing
|
Victor Ponce, Sergio Escalera, & Xavier Baro. (2013). Multi-modal Social Signal Analysis for Predicting Agreement in Conversation Settings. In 15th ACM International Conference on Multimodal Interaction (pp. 495–502).
Abstract: In this paper we present a non-invasive ambient intelligence framework for the analysis of non-verbal communication applied to conversational settings. In particular, we apply feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues drawn from the fields of psychology and observational methodology. We test our methodology on data captured in victim-offender mediation scenarios. Using different state-of-the-art classification approaches, our system achieves over 75% accuracy in predicting agreement among the parties involved in the conversations, using the experts' opinions as ground truth.
|
Farhan Riaz, Fernando Vilariño, Mario Dinis-Ribeiro, & Miguel Coimbra. (2011). Identifying Potentially Cancerous Tissues in Chromoendoscopy Images. In Jordi Vitria, Joao Miguel Raposo, & Mario Hernandez (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 709–716). LNCS. Berlin: Springer.
Abstract: The dynamics of image acquisition conditions in gastroenterology imaging scenarios pose novel challenges for automatic computer-assisted decision systems. Such systems should have the ability to mimic the tissue characterization of physicians. In this paper, our objective is to compare several feature extraction methods for classifying a chromoendoscopy image into two different classes: normal and potentially cancerous. Results show that LoG filters generally give the best classification accuracy among the feature extraction methods considered.
Keywords: Endoscopy, Computer Assisted Diagnosis, Gradient.
|
Mario Rojas, David Masip, & Jordi Vitria. (2011). Automatic Detection of Facial Feature Points via HOGs and Geometric Prior Models. In 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 371–378). Springer Berlin Heidelberg.
Abstract: Most applications dealing with problems involving the face require a robust estimation of the facial salient points. Nevertheless, this estimation is not usually an automated preprocessing step in applications dealing with facial expression recognition. In this paper we present a simple method to detect facial salient points in the face. It is based on a prior Point Distribution Model and a robust object descriptor. The model learns the distribution of the points from the training data, as well as the amount of variation in location each point exhibits. Using this model, we reduce the search areas to look for each point. In addition, we also exploit the global consistency of the points constellation, increasing the detection accuracy. The method was tested on two separate data sets and the results, in some cases, outperform the state of the art.
|
Jon Almazan, Ernest Valveny, & Alicia Fornes. (2011). Deforming the Blurred Shape Model for Shape Description and Recognition. In Jordi Vitria, Joao Miguel Raposo, & Mario Hernandez (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 1–8). LNCS. Berlin: Springer-Verlag.
Abstract: This paper presents a new model for the description and recognition of distorted shapes, where the image is represented by a pixel density distribution based on the Blurred Shape Model (BSM) combined with a non-linear image deformation model. This leads to an adaptive structure able to capture elastic deformations in shapes. The method has been evaluated on three different datasets where deformations are present, showing the robustness and good performance of the new model. Moreover, we show that by incorporating deformation and flexibility, the new model outperforms the original BSM approach when classifying shapes with high variability of appearance.
|