Misael Rosales, Petia Radeva, Oriol Rodriguez, & Debora Gil. (2005). Suppression of IVUS Image Rotation: A Kinematic Approach. In A. F. Frangi, P. Radeva, A. Santos, & M. Hernandez (Eds.), Functional Imaging and Modeling of the Heart (LNCS Vol. 3504, pp. 889–892). Springer Berlin / Heidelberg.
Abstract: IntraVascular Ultrasound (IVUS) is an exploratory technique used in interventional procedures that shows cross-section images of arteries and provides qualitative information about the causes and severity of arterial lumen narrowing. The technique's main advantages are cross-section analysis as well as visualization of plaque extension in a vessel segment during the catheter imaging pullback. However, IVUS sequences exhibit a periodic rotation artifact that makes longitudinal lesion inspection difficult and hinders any segmentation algorithm. In this paper we propose a new kinematic method to estimate and remove the image rotation of IVUS image sequences. Experiments on several IVUS sequences show good results and suggest clinical applications to the study of vessel dynamics and its relation to vessel pathology.
Angel Sappa, P. Carvajal, Cristhian A. Aguilera-Carrasco, Miguel Oliveira, Dennis Romero, & Boris X. Vintimilla. (2016). Wavelet-based visible and infrared image fusion: a comparative study. Sensors, 16(6), 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).
Keywords: Image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
Olivier Penacchio, Xavier Otazu, & Laura Dempere-Marco. (2013). A Neurodynamical Model of Brightness Induction in V1. PLoS ONE, 8(5), e64086.
Abstract: Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Recent neurophysiological evidence suggests that brightness information might be explicitly represented in V1, in contrast to the more common assumption that the striate cortex is an area mostly responsive to sensory information. Here we investigate possible neural mechanisms that offer a plausible explanation for this phenomenon. To this end, a neurodynamical model which is based on neurophysiological evidence and focuses on the part of V1 responsible for contextual influences is presented. The proposed computational model successfully accounts for well-known psychophysical effects for static contexts and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. This work suggests that intra-cortical interactions in V1 could, at least partially, explain brightness induction effects, and reveals how a common general architecture may account for several different fundamental processes, such as visual saliency and brightness induction, which emerge early in the visual processing pathway.
Pedro Martins, Paulo Carvalho, & Carlo Gatta. (2016). On the completeness of feature-driven maximally stable extremal regions. Pattern Recognition Letters, 74, 9–16.
Abstract: By definition, local image features provide a compact representation of the image in which most of the image information is preserved. This capability offered by local features has been overlooked, despite being relevant in many application scenarios. In this paper, we analyze and discuss the performance of feature-driven Maximally Stable Extremal Regions (MSER) in terms of the coverage of informative image parts (completeness). This type of feature results from an MSER extraction on saliency maps in which features related to object boundaries or even symmetry axes are highlighted. These maps are intended to be suitable domains for MSER detection, allowing the detector to provide a better coverage of informative image parts. Our experimental results, which were based on a large-scale evaluation, show that feature-driven MSER have relatively high completeness values and provide more complete sets than traditional MSER detection, even when sets of similar cardinality are considered.
Keywords: Local features; Completeness; Maximally Stable Extremal Regions
David Sanchez-Mendoza, David Masip, & Agata Lapedriza. (2015). Emotion recognition from mid-level features. Pattern Recognition Letters, 67(Part 1), 66–74.
Abstract: In this paper we present a study on the use of Action Units as mid-level features for automatically recognizing basic and subtle emotions. We propose a representation model based on mid-level facial muscular movement features. We encode these movements dynamically using the Facial Action Coding System, and propose to use these intermediate features based on Action Units (AUs) to classify emotions. AU activations are detected by fusing a set of spatiotemporal geometric and appearance features. The algorithm is validated in two applications: (i) the recognition of 7 basic emotions using the publicly available Cohn-Kanade database, and (ii) the inference of subtle emotional cues in the Newscast database. In this second scenario, we consider emotions that are perceived cumulatively over longer periods of time. In particular, we automatically classify whether video shots from public news TV channels refer to Good or Bad news. To deal with the different video lengths we propose a Histogram of Action Units, computed using a sliding window strategy on the frame sequences. Our approach achieves accuracies close to human perception.
Keywords: Facial expression; Emotion recognition; Action units; Computer vision
Fernando Vilariño, Ludmila I. Kuncheva, & Petia Radeva. (2006). ROC curves and video analysis optimization in intestinal capsule endoscopy. Pattern Recognition Letters, 27(8), 875–881.
Abstract: Wireless capsule endoscopy involves inspection of hours of video material by a highly qualified professional. Time episodes corresponding to intestinal contractions, which are of interest to the physician, constitute about 1% of the video. The problem is to automatically label time episodes containing contractions so that only a fraction of the video needs inspection. As the classes of contraction and non-contraction images in the video are largely imbalanced, ROC curves are used to optimize the trade-off between false positive and false negative rates. Classifier ensemble methods and simple classifiers were examined. Our results reinforce the claims from recent literature that classifier ensemble methods specifically designed for imbalanced problems have substantial advantages over simple classifiers and standard classifier ensembles. By using ROC curves with the bagging ensemble method, the inspection time can be drastically reduced at the expense of a small fraction of missed contractions.
Keywords: ROC curves; Classification; Classifier ensembles; Detection of intestinal contractions; Imbalanced classes; Wireless capsule endoscopy
Svebor Karaman, Giuseppe Lisanti, Andrew Bagdanov, & Alberto del Bimbo. (2014). Leveraging local neighborhood topology for large scale person re-identification. Pattern Recognition, 47(12), 3767–3778.
Abstract: In this paper we describe a semi-supervised approach to person re-identification that combines discriminative models of person identity with a Conditional Random Field (CRF) to exploit the local manifold approximation induced by the nearest neighbor graph in feature space. The linear discriminative models learned on a few gallery images provide coarse separation of probe images into identities, while a graph topology defined by distances between all person images in feature space leverages local support for label propagation in the CRF. We evaluate our approach using multiple scenarios on several publicly available datasets, where the number of identities varies from 28 to 191 and the number of images ranges between 1,003 and 36,171. We demonstrate that the discriminative model and the CRF are complementary and that the combination of both leads to significant improvement over state-of-the-art approaches. We further demonstrate how the performance of our approach improves with increasing test data and also with increasing amounts of additional unlabeled data.
Keywords: Re-identification; Conditional random field; Semi-supervised; ETHZ; CAVIAR; 3DPeS; CMV100
V. Kober, Mikhail Mozerov, J. Alvarez-Borrego, & I.A. Ovseyevich. (2006). Adaptive Correlation Filters for Pattern Recognition. Pattern Recognition and Image Analysis, 425–431.
Abstract: Adaptive correlation filters based on synthetic discriminant functions (SDFs) for reliable pattern recognition are proposed. A given value of discrimination capability can be achieved by adapting an SDF filter to the input scene. This can be done by iterative training. Computer simulation results obtained with the proposed filters are compared with those of various correlation filters in terms of recognition performance.
Keywords: Pattern recognition; Correlation filters; Adaptive filters
Marçal Rusiñol, Dimosthenis Karatzas, & Josep Llados. (2015). Automatic Verification of Properly Signed Multi-page Document Images. In Proceedings of the Eleventh International Symposium on Visual Computing (LNCS Vol. 9475, pp. 327–336).
Abstract: In this paper we present an industrial application for the automatic screening of incoming multi-page documents in a banking workflow, aimed at determining whether these documents are properly signed or not. The proposed method is divided into three main steps. First, individual pages are classified in order to identify the pages that should contain a signature. In a second step, we segment within those key pages the locations where the signatures should appear. The last step checks whether the signatures are present or not. Our method is tested in a real large-scale environment and we report the results when checking two different types of real multi-page contracts, comprising in total more than 14,500 pages.
Keywords: Document Image; Manual Inspection; Signature Verification; Rejection Criterion; Document Flow
Rain Eric Haamer, Eka Rusadze, Iiris Lusi, Tauseef Ahmed, Sergio Escalera, & Gholamreza Anbarjafari. (2018). Review on Emotion Recognition Databases. In Human-Robot Interaction: Theory and Application.
Abstract: Over the past few decades human-computer interaction has become more important in our daily lives, and research has developed in many directions: memory research, depression detection, behavioural deficiency detection, lie detection, (hidden) emotion recognition, etc. Because of that, the number of generic emotion and face databases, as well as those tailored to specific needs, has grown immensely. Thus, a comprehensive yet compact guide is needed to help researchers find the most suitable database and understand what types of databases already exist. In this paper, different elicitation methods are discussed and the databases are primarily organized into neat and informative tables based on their format.
Keywords: emotion; computer vision; databases
Cristhian Aguilera, M. Ramos, & Angel Sappa. (2012). Simulated Annealing: A Novel Application of Image Processing in the Wood Area. In Marcos de Sales Guerra Tsuzuki (Ed.), Simulated Annealing – Advances, Applications and Hybridizations (pp. 91–104).
Xinhang Song, Luis Herranz, & Shuqiang Jiang. (2017). Depth CNNs for RGB-D Scene Recognition: Learning from Scratch Better than Transferring from RGB-CNNs. In 31st AAAI Conference on Artificial Intelligence.
Abstract: Scene recognition with RGB images has been extensively studied and has reached remarkable recognition levels, thanks to convolutional neural networks (CNNs) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so approaches often leverage large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching the bottom layers, which are key to learning modality-specific features. In contrast, we focus on the bottom layers and propose an alternative strategy to learn depth features, combining local weakly supervised training from patches followed by global fine-tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model to the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D with both depth-only and combined RGB-D data.
Keywords: RGB-D scene recognition; weakly supervised; fine-tuning; CNN
Laura Lopez-Fuentes, Joost Van de Weijer, Marc Bolaños, & Harald Skinnemoen. (2017). Multi-modal Deep Learning Approach for Flood Detection. In MediaEval Benchmarking Initiative for Multimedia Evaluation.
Abstract: In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts normally contain some metadata and/or visual information, which we use to detect floods. The model is based on a Convolutional Neural Network, which extracts the visual features, and a bidirectional Long Short-Term Memory network, which extracts the semantic features from the textual metadata. We validate the method on images extracted from Flickr that contain both visual information and metadata, and compare the results when using both, visual information only, or metadata only. This work has been done in the context of the MediaEval Multimedia Satellite Task.
Daniel Hernandez, Lukas Schneider, Antonio Espinosa, David Vazquez, Antonio Lopez, Uwe Franke, et al. (2017). Slanted Stixels: Representing San Francisco's Steepest Streets. In 28th British Machine Vision Conference.
Abstract: In this work we present a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach overcomes the previous rather restrictive geometric assumptions for Stixels by introducing a novel depth model to account for non-flat roads and slanted objects. Both semantic and depth cues are used jointly to infer the scene representation in a sound global energy minimization formulation. Furthermore, a novel approximation scheme is introduced that uses an extremely efficient over-segmentation. In doing so, the computational complexity of the Stixel inference algorithm is reduced significantly, achieving real-time computation capabilities with only a slight drop in accuracy. We evaluate the proposed approach in terms of semantic and geometric accuracy as well as run-time on four publicly available benchmark datasets. Our approach maintains accuracy on flat road scene datasets while improving substantially on a novel non-flat road dataset.
Arturo Fuentes, F. Javier Sanchez, Thomas Voncina, & Jorge Bernal. (2021). LAMV: Learning to Predict Where Spectators Look in Live Music Performances. In 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 5, pp. 500–507).
Abstract: The advent of artificial intelligence has brought about an evolution in how different daily work tasks are performed. The analysis of cultural content has seen a huge boost from the development of computer-assisted methods that allow easy and transparent data access. In our case, we deal with the automation of the production of live shows, such as music concerts, aiming to develop a system that can indicate to the producer which camera to show based on what each of them is displaying. In this context, we consider it essential to understand where spectators look and what they are interested in, so the computational method can learn from this information. The work that we present here shows the results of a first preliminary study in which we compare areas of interest defined by human beings with those indicated by an automatic system. Our system is based on the extraction of motion textures from dynamic Spatio-Temporal Volumes (STV) and the subsequent analysis of these patterns by means of texture analysis techniques. We validate our approach on several video sequences that have been labeled by 16 different experts. Our method is able to match the relevant areas identified by the experts, achieving recall scores higher than 80% when a distance of 80 pixels between the method's output and the ground truth is considered. Current performance shows promise for detecting abnormal peaks and movement trends.