|
Arjan Gijsenij, Theo Gevers, & Joost Van de Weijer. (2009). Physics-based Edge Evaluation for Improved Color Constancy. In 22nd IEEE Conference on Computer Vision and Pattern Recognition (pp. 581–588).
Abstract: Edge-based color constancy makes use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images such as shadow, geometry, material and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation.
|
|
|
Jose Manuel Alvarez, Ferran Diego, Joan Serrat, & Antonio Lopez. (2009). Automatic Ground-truthing using video registration for on-board detection algorithms. In 16th IEEE International Conference on Image Processing (pp. 4389–4392).
Abstract: Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim their method is robust, but they support it with experiments that are not quantitatively assessed against any ground-truth. This is one of the main obstacles to properly evaluating and comparing such methods. One of the main reasons is that creating an extensive and representative ground-truth is very time consuming, especially in the case of video sequences, where thousands of frames have to be labelled. Could such a ground-truth be generated, at least in part, automatically? Though it may seem a contradictory question, we show that this is possible for the case of video sequences recorded from a moving camera. The key idea is transferring existing frame segmentations from a reference sequence to another video sequence recorded at a different time on the same track, possibly under different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transferred ground-truth, which proves that our approach is not only feasible but also quite accurate.
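The label-transfer idea in this abstract can be sketched in a toy form. Here a known integer translation stands in for the full video registration the paper performs; the function name and mask representation are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of ground-truth transfer: once a new frame is registered to a
# labelled reference frame, the reference segmentation mask is warped into
# the new frame. A fixed (dy, dx) shift stands in for the registration step.
import numpy as np

def transfer_mask(ref_mask, shift):
    # warp a binary ground-truth mask by an integer (dy, dx) offset
    out = np.zeros_like(ref_mask)
    dy, dx = shift
    h, w = ref_mask.shape
    ys, xs = np.nonzero(ref_mask)
    ys, xs = ys + dy, xs + dx
    keep = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    out[ys[keep], xs[keep]] = 1
    return out

ref = np.zeros((5, 5), int)
ref[1:3, 1:3] = 1                     # labelled object in the reference frame
new = transfer_mask(ref, (1, 2))      # same object, shifted in the new frame
print(int(new.sum()))  # 4
```

In the paper the warp comes from an estimated inter-sequence registration rather than a fixed shift, but the transfer of per-frame labels proceeds in this spirit.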
|
|
|
Francesco Ciompi, Oriol Pujol, Oriol Rodriguez-Leor, Angel Serrano, J. Mauri, & Petia Radeva. (2009). On in-vitro and in-vivo IVUS data fusion. In 12th International Conference of the Catalan Association for Artificial Intelligence (Vol. 202, pp. 147–156).
Abstract: The design and validation of an automatic plaque characterization technique based on Intravascular Ultrasound (IVUS) usually requires a data ground-truth. The histological analysis of post-mortem coronary arteries is commonly assumed to be the state-of-the-art process for the extraction of a reliable data-set of atherosclerotic plaques. Unfortunately, the amount of data provided by this technique is usually small, due to the difficulties in collecting post-mortem cases and to tissue degradation during histological analysis. In this paper we tackle the process of fusing in-vivo and in-vitro IVUS data, starting with the analysis of recently proposed approaches for the creation of an enhanced IVUS data-set; furthermore, we propose a new approach, named pLDS, based on semi-supervised learning with a data selection criterion. The enhanced data-set obtained by each of the analyzed approaches is used to train a classifier for tissue characterization purposes. Finally, the discriminative power of each classifier is quantitatively assessed and compared by classifying a data-set of validated in-vitro IVUS data.
|
|
|
Nicola Bellotto, Eric Sommerlade, Ben Benfold, Charles Bibby, I. Reid, Daniel Roth, et al. (2009). A Distributed Camera System for Multi-Resolution Surveillance. In 3rd ACM/IEEE International Conference on Distributed Smart Cameras.
Abstract: We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines if active zoom cameras should be dispatched to observe a particular target, and this message is effected via writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its feasibility to multi-camera systems for intelligent surveillance.
DOI: 10.1109/ICDSC.2009.5289413
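The repository-based architecture described in this entry, with trackers writing into SQL tables and a supervisor reading them to dispatch PTZ cameras, can be sketched minimally. Table and column names below are illustrative assumptions, not the paper's schema, and an in-memory SQLite database stands in for the central SQL server.

```python
# Sketch of the central-repository pattern: a static-camera tracker inserts
# observations into one table, and a supervisor process issues PTZ demands
# by writing into another table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tracks (target_id INTEGER, x REAL, y REAL, frame INTEGER)")
db.execute("CREATE TABLE demands (camera TEXT, target_id INTEGER)")

# A tracker client records a detection from the static overview camera.
db.execute("INSERT INTO tracks VALUES (7, 120.5, 88.0, 42)")

# The supervisor polls recent tracks and dispatches the PTZ camera.
for target_id, x, y, frame in db.execute("SELECT * FROM tracks WHERE frame >= 40").fetchall():
    db.execute("INSERT INTO demands VALUES (?, ?)", ("ptz1", target_id))

camera, target = db.execute("SELECT * FROM demands").fetchone()
print(camera, target)  # ptz1 7
```

The appeal of this design, as the abstract notes, is that asynchronous interprocess communication and archiving come for free from the database, with no bespoke messaging layer.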
|
|
|
Pierluigi Casale, Oriol Pujol, & Petia Radeva. (2009). Face-to-face social activity detection using data collected with a wearable device. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524, pp. 56–63). LNCS. Springer Berlin Heidelberg.
Abstract: In this work the feasibility of building a socially aware badge that learns from user activities is explored. A wearable multisensor device has been prototyped for collecting data about user movements and photos of the environment in which the user acts. Using motion data, speaking and other activities have been classified. Images have been analysed to complement the motion data and support the detection of social behaviours. A face detector and an activity classifier are both used to detect whether users engage in social activity while wearing the device. Good results encourage the improvement of the system at both the hardware and software level.
|
|
|
Mikhail Mozerov, Ariel Amato, & Xavier Roca. (2009). Occlusion Handling in Trinocular Stereo using Composite Disparity Space Image. In 19th International Conference on Computer Graphics and Vision (pp. 69–73).
Abstract: In this paper we propose a method that improves occlusion handling in stereo matching using trinocular stereo. The main idea is based on the assumption that a region occluded in one matched stereo pair (middle-left images) is in general not occluded in the opposite matched pair (middle-right images). The two disparity space images (DSI) can then be merged into one composite DSI. The proposed integration differs from the known approach that uses a cumulative cost. A dense disparity map is obtained with a global optimization algorithm using the proposed composite DSI. The experimental results are evaluated on the Middlebury data set, showing the high performance of the proposed algorithm, especially in occluded regions. One of the top positions in the Middlebury ranking confirms that our method is competitive with the best stereo matching algorithms.
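The core intuition, falling back on the middle-right cost volume where the middle-left pair is occluded, can be illustrated on toy cost volumes. The paper's composite-DSI construction is more involved (it explicitly differs from a cumulative cost), so the per-pixel rule below is only one simplified reading.

```python
# Toy merge of two disparity space images (DSI): where the middle-left pair is
# occluded, use the middle-right costs; elsewhere take the cheaper of the two.
# This elementwise rule is a simplification for illustration only.
import numpy as np

h, w, d = 4, 6, 8                       # rows, cols, disparity levels
rng = np.random.default_rng(0)
dsi_ml = rng.random((h, w, d))          # middle-left matching costs
dsi_mr = rng.random((h, w, d))          # middle-right matching costs

occluded_ml = np.zeros((h, w), bool)
occluded_ml[1, 2] = True                # pretend this pixel is occluded on the left

composite = np.where(occluded_ml[..., None], dsi_mr, np.minimum(dsi_ml, dsi_mr))
disparity = composite.argmin(axis=2)    # winner-take-all disparity map
print(disparity.shape)  # (4, 6)
```

In the paper the composite DSI then feeds a global optimization rather than the winner-take-all shown here.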
|
|
|
Ivan Huerta, Michael Holte, Thomas B. Moeslund, & Jordi Gonzalez. (2009). Detection and Removal of Chromatic Moving Shadows in Surveillance Scenarios. In 12th International Conference on Computer Vision (pp. 1499–1506).
Abstract: Segmentation in the surveillance domain has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows. Therefore, such techniques cannot cope well with umbra shadows. Consequently, umbra shadows are usually detected as part of moving objects. In this paper we present a novel technique based on gradient and colour models for separating chromatic moving cast shadows from detected moving objects. Firstly, both a chromatic invariant colour cone model and an invariant gradient model are built to perform automatic segmentation while detecting potential shadows. In a second step, regions corresponding to potential shadows are grouped by considering “a bluish effect” and an edge partitioning. Lastly, (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows. Unlike other approaches, our method does not make any a priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, and background. Experimental results show the performance and accuracy of our approach on different shadowed materials and under varying illumination conditions.
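The brightness and chromaticity distortions mentioned in the last step can be illustrated with the classic formulation of Horprasert et al., which is in the spirit of, though not necessarily identical to, the colour cone model this paper builds.

```python
# Illustrative brightness/chromaticity distortion between an observed RGB
# pixel and its background model: a shadow pixel keeps the background's
# chromaticity (low chromaticity distortion) but is darker (brightness < 1).
import numpy as np

def distortions(pixel, bg):
    pixel, bg = np.asarray(pixel, float), np.asarray(bg, float)
    alpha = pixel.dot(bg) / bg.dot(bg)       # brightness distortion
    cd = np.linalg.norm(pixel - alpha * bg)  # chromaticity distortion
    return alpha, cd

# darker pixel, same colour direction: shadow-like observation
a, c = distortions([60, 60, 60], [120, 120, 120])
print(round(a, 2), round(c, 2))  # 0.5 0.0
```

A pixel whose chromaticity distortion is near zero while its brightness distortion is below one is a shadow candidate; the paper then refines such candidates with gradient, texture, and "bluish effect" cues.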
|
|
|
Marco Pedersoli, Jordi Gonzalez, & Juan J. Villanueva. (2009). High-Speed Human Detection Using a Multiresolution Cascade of Histograms of Oriented Gradients. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents a new method for human detection based on a multiresolution cascade of Histograms of Oriented Gradients (HOG) that can greatly reduce the computational cost of the detection search without affecting accuracy. The method consists of a cascade of sliding-window detectors. Each detector is a Support Vector Machine (SVM) composed of features at a different resolution, from coarse at the first level to fine at the last one.
Since the spatial stride of the sliding-window search is tied to the HOG feature size, we can, unlike previous methods based on AdaBoost cascades, adopt a spatial stride inversely proportional to the feature resolution. As a result, the speed-up of the cascade is due not only to the small number of features that need to be computed in the first levels, but also to the lower number of detection windows that need to be evaluated.
Experimental results show that our method achieves a detection rate comparable with the state of the art while speeding up the detection search by a factor of 10-20, depending on the cascade configuration.
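The stride argument in this abstract can be made concrete with a counting sketch: a coarse first level scans few windows, and each finer level re-scans only a neighbourhood around survivors at its smaller stride. The scoring function below is a placeholder, not the paper's trained multiresolution HOG SVMs.

```python
# Sketch of a multiresolution cascade search: stride shrinks level by level,
# and only windows surviving the previous level are refined and re-scored.
def dummy_score(x, y):
    # stand-in for an SVM response on HOG features; peaks around one "person"
    return 1.0 if abs(x - 34) <= 4 and abs(y - 34) <= 4 else -1.0

def cascade_search(W, H, win=64, strides=(16, 8, 4), thresh=0.0):
    evaluated = 0
    s0 = strides[0]
    candidates = {(x, y) for y in range(0, H - win + 1, s0)
                         for x in range(0, W - win + 1, s0)}
    for level, s in enumerate(strides):
        if level > 0:                       # refine around survivors
            prev = strides[level - 1]
            candidates = {(x + dx, y + dy) for x, y in candidates
                          for dy in range(0, prev, s) for dx in range(0, prev, s)
                          if x + dx + win <= W and y + dy + win <= H}
        survivors = set()
        for x, y in candidates:
            evaluated += 1
            if dummy_score(x, y) > thresh:
                survivors.add((x, y))
        candidates = survivors
    return candidates, evaluated

dets, n = cascade_search(128, 128)
full_scan = len(range(0, 128 - 64 + 1, 4)) ** 2   # single fine-stride pass
print(len(dets), n, full_scan)  # 4 33 289
```

Even in this toy setting the cascade scores 33 windows against 289 for a single fine-stride scan, which is the source of the speed-up the abstract reports.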
|
|
|
Bhaskar Chakraborty, Andrew Bagdanov, & Jordi Gonzalez. (2009). Towards Real-Time Human Action Recognition. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: This work presents a novel approach to real-time action recognition based on human detection. To realize this goal, our method first detects humans in different poses using a correlation-based approach. Actions are then recognized from changes in the angles subtended by various body parts. Real-time human detection and action recognition are very challenging, and most state-of-the-art approaches employ complex feature extraction and classification techniques, which ultimately become a handicap for real-time recognition. Our correlation-based method, on the other hand, is computationally efficient and uses very simple gradient-based features. For action recognition, angular features of body parts are extracted using a skeleton technique. Results for action recognition are comparable with the present state of the art.
|
|
|
Murad Al Haj, Andrew Bagdanov, Jordi Gonzalez, & Xavier Roca. (2009). Robust and Efficient Multipose Face Detection Using Skin Color Segmentation. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we describe an efficient technique for detecting faces in arbitrary images and video sequences. The approach is based on segmentation of images or video frames into skin-colored blobs using a pixel-based heuristic. Scale and translation invariant features are then computed from these segmented blobs which are used to perform statistical discrimination between face and non-face classes. We train and evaluate our method on a standard, publicly available database of face images and analyze its performance over a range of statistical pattern classifiers. The generalization of our approach is illustrated by testing on an independent sequence of frames containing many faces and non-faces. These experiments indicate that our proposed approach obtains false positive rates comparable to more complex, state-of-the-art techniques, and that it generalizes better to new data. Furthermore, the use of skin blobs and invariant features requires fewer training samples since significantly fewer non-face candidate regions must be considered when compared to AdaBoost-based approaches.
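The pixel-based skin heuristic in the first stage can be illustrated with a classic RGB rule of this kind (the one below follows Peer and Kovač; the paper's exact heuristic may differ).

```python
# A classic pixel-based skin-colour heuristic in RGB space: skin pixels are
# reddish, reasonably bright, and not too saturated toward green or blue.
def is_skin(r, g, b):
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

print(is_skin(220, 160, 130), is_skin(80, 120, 200))  # True False
```

Connected skin-coloured pixels are then grouped into blobs, from which the paper computes scale- and translation-invariant features for the face/non-face classifier.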
|
|
|
D. Jayagopi, Bogdan Raducanu, & D. Gatica-Perez. (2009). Characterizing conversational group dynamics using nonverbal behaviour. In 10th IEEE International Conference on Multimedia and Expo (pp. 370–373).
Abstract: This paper addresses the novel problem of characterizing conversational group dynamics. It is well documented in social psychology that the dynamics differ depending on the objectives of a group. For example, a competitive meeting has a different objective from that of a collaborative meeting. We propose a method to characterize group dynamics based on a joint description of the group members' aggregated acoustic nonverbal behaviour, and use it to classify two meeting datasets (one cooperative and the other competitive). We use 4.5 hours of real behavioural multi-party data and show that our methodology can achieve a classification rate of up to 100%.
|
|
|
Miquel Ferrer, Ernest Valveny, F. Serratosa, I. Bardaji, & Horst Bunke. (2009). Graph-based k-means clustering: A comparison of the set versus the generalized median graph. In 13th International Conference on Computer Analysis of Images and Patterns (Vol. 5702, pp. 342–350). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we propose the application of the generalized median graph in a graph-based k-means clustering algorithm. In the graph-based k-means algorithm, the centers of the clusters have traditionally been represented using the set median graph. We propose an approximate method for computing the generalized median graph that allows it to be used to represent the centers of the clusters. Experiments on three databases show that using the generalized median graph as the cluster representative yields better results than the set median graph.
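The baseline cluster centre, the set median graph, is easy to sketch: it is the member graph minimising the summed distance to all other members. Graphs are represented here as frozensets of edges and the symmetric-difference size is a toy stand-in for graph edit distance; the generalized median, which need not be a member of the set, requires the approximation the paper proposes and is not shown.

```python
# Set median of a cluster of graphs: the member with minimum total distance
# to the rest. Edge-set symmetric difference stands in for graph edit distance.
def graph_dist(g1, g2):
    return len(g1 ^ g2)                 # edges present in exactly one graph

def set_median(graphs):
    return min(graphs, key=lambda g: sum(graph_dist(g, h) for h in graphs))

cluster = [frozenset({(1, 2), (2, 3)}),
           frozenset({(1, 2), (2, 3), (3, 4)}),
           frozenset({(1, 2)})]
print(sorted(set_median(cluster)))  # [(1, 2), (2, 3)]
```

In graph-based k-means, this computation replaces the centroid update of the vector-space algorithm; the paper's contribution is swapping it for an approximate generalized median.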
|
|
|
Ricard Coll, Alicia Fornes, & Josep Llados. (2009). Graphological Analysis of Handwritten Text Documents for Human Resources Recruitment. In 10th International Conference on Document Analysis and Recognition (pp. 1081–1085).
Abstract: The use of graphology in recruitment processes has become a popular tool in many human resources companies. This paper presents a model that links features of handwritten images to a number of personality characteristics used to measure applicant aptitude for the job in a particular hiring scenario. In particular, we propose a model for measuring the writer's active personality and leadership. The graphological features that define such a profile are measured in terms of document and script attributes such as layout configuration, letter size, shape, slant and skew angle of lines, etc. After extraction, the data is classified using a neural network. An experimental framework with real samples has been constructed to illustrate the performance of the approach.
|
|
|
Alicia Fornes, Josep Llados, Gemma Sanchez, & Horst Bunke. (2009). Symbol-independent writer identification in old handwritten music scores. In 8th IAPR International Workshop on Graphics Recognition (pp. 186–197). Springer Berlin Heidelberg.
|
|
|
Alicia Fornes, Josep Llados, Gemma Sanchez, & Horst Bunke. (2009). On the use of textural features for writer identification in old handwritten music scores. In 10th International Conference on Document Analysis and Recognition (pp. 996–1000).
Abstract: Writer identification consists of determining the writer of a piece of handwriting from a set of writers. In this paper we present a system for writer identification in old handwritten music scores which uses only the music notation to determine the author. The steps of the proposed system are the following. First of all, the music sheet is preprocessed to obtain a music score without the staff lines. Afterwards, four different methods for generating texture images from the music symbols are applied. Each approach uses a different spatial variation when combining the music symbols to generate the textures. Finally, Gabor filters and grey-scale co-occurrence matrices are used to obtain the features. Classification is performed using a k-NN classifier based on Euclidean distance. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving encouraging identification rates.
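The texture-feature stage of this pipeline can be sketched minimally: a grey-level co-occurrence matrix (GLCM) over horizontally adjacent pixels, two classic features derived from it, and a 1-NN decision. Real texture images, the Gabor filter bank, and the paper's full feature set are omitted; the "writer" labels are placeholders.

```python
# GLCM contrast/energy features plus a 1-NN classifier, as a stand-in for the
# co-occurrence-based branch of the writer-identification pipeline.
import numpy as np

def glcm_features(img, levels=4):
    glcm = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1                 # count horizontal grey-level pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return np.array([contrast, energy])

def knn_1(query, refs, labels):
    d = [np.linalg.norm(query - r) for r in refs]
    return labels[int(np.argmin(d))]

smooth = np.zeros((8, 8), int)                  # uniform "texture"
noisy = np.arange(64).reshape(8, 8) % 4         # rapidly varying "texture"
refs = [glcm_features(smooth), glcm_features(noisy)]
print(knn_1(glcm_features(noisy), refs, ["writer A", "writer B"]))  # writer B
```

The paper computes such features per texture image (one per spatial-combination method) and classifies with k-NN under Euclidean distance, as sketched here in the k=1 case.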
|
|