|
Joel Barajas, Karla Lizbeth Caballero, & Petia Radeva. (2007). Cardiac Phase Extraction in IVUS Sequences Using 1-D Gabor Filters. In Engineering in Medicine and Biology Society, 29th Annual International Conference of the IEEE (pp. 343–346).
|
|
|
Karla Lizbeth Caballero, Joel Barajas, & Petia Radeva. (2007). Using Reconstructed IVUS Images for Coronary Plaque Classification. In Engineering in Medicine and Biology Society, 29th Annual International Conference of the IEEE (pp. 2167–2170).
|
|
|
Oriol Pujol, Misael Rosales, Petia Radeva, & E Fernandez-Nofrerias. (2003). Intravascular Ultrasound Images Vessel Characterization using AdaBoost.
|
|
|
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2008). Loss-Weighted Decoding for Error-Correcting Output Coding. In 3rd International Conference on Computer Vision Theory and Applications (Vol. 2, pp. 117–122).
|
|
|
David Masip, Agata Lapedriza, & Jordi Vitria. (2008). Multitask Learning: An Application to Incremental Face Recognition. In 3rd International Conference on Computer Vision Theory and Applications (Vol. 1, pp. 585–590).
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2008). Subject Recognition Using a New Approach for Feature Extraction. In 3rd International Conference on Computer Vision Theory and Applications (Vol. 2, pp. 61–66).
|
|
|
Jose Carlos Rubio, Joan Serrat, Antonio Lopez, & Daniel Ponsa. (2010). Multiple-target tracking for the intelligent headlights control. In 13th Annual International Conference on Intelligent Transportation Systems (pp. 903–910).
Abstract: Intelligent vehicle lighting systems aim to automatically regulate the headlights' beam so as to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In a previous work, we devised a set of specialized supervised classifiers to make such decisions based on blob features related to intensity and shape. Despite the overall good performance, challenging cases remain that have yet to be solved: notably, faint and tiny blobs corresponding to quite distant vehicles. For such distant blobs, classification decisions can only be taken after observing them over a few frames. Hence, incorporating tracking could improve the overall lighting system performance by enforcing the temporal consistency of the classifier decisions. Accordingly, this paper focuses on the problem of constructing blob tracks, which is actually one of multiple-target tracking (MTT), but under two special conditions: frequent occlusions, and blob splits and merges. We approach it in a novel way by formulating the problem as maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, the classification performance on the problematic blobs improves thanks to the proposed MTT algorithm.
Keywords: Intelligent Headlights
|
|
|
Ferran Diego, Daniel Ponsa, Joan Serrat, & Antonio Lopez. (2010). Vehicle geolocalization based on video synchronization. In 13th Annual International Conference on Intelligent Transportation Systems (pp. 1511–1516).
Abstract: This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an on-line video synchronization which finds, in the georeferenced video sequence, the frame corresponding to the one recorded at each time instant by the camera on a second drive through the same track. Once the corresponding frame in the georeferenced video sequence is found, its geospatial information is transferred to the current frame. The key advantages of this method are: 1) an increase in the update rate and the geospatial accuracy with regard to a standard low-cost GPS, and 2) the ability to localize a vehicle even when a GPS is not available or not reliable enough, as in certain urban areas. Experimental results in an urban environment are presented, showing an average relative accuracy of 1.5 meters.
Keywords: video alignment
|
|
|
Ferran Diego, Jose Manuel Alvarez, Joan Serrat, & Antonio Lopez. (2010). Vision-based road detection via on-line video registration. In 13th Annual International Conference on Intelligent Transportation Systems (pp. 1135–1140).
Abstract: Road segmentation is an essential functionality for supporting advanced driver assistance systems (ADAS) such as road following and vehicle and pedestrian detection. Significant efforts have been made to solve this task using vision-based techniques, the major challenge being lighting variations and the presence of objects on the road surface. In this paper, we propose a new road detection method to infer the areas of the image depicting road surfaces without performing any image segmentation. The idea is to first segment, manually or semi-automatically, the road region in a traffic-free reference video recorded on a first drive, and then to transfer these regions, in an on-line manner, to the frames of a second video sequence acquired later on a second drive through the same road. This is possible because we are able to automatically align the two videos in time and space, that is, to synchronize them and warp each frame of the first video to its corresponding frame in the second one. The geometric transform can thus transfer the road region to the present frame on-line. To cope with the varying lighting conditions of outdoor scenarios, our approach represents each image in a shadowless, illuminant-invariant feature space. Furthermore, we propose a dynamic background subtraction algorithm which removes the regions containing vehicles that fall within the transferred road region of the observed frames.
Keywords: video alignment; road detection
|
|
|
Diego Alejandro Cheda, Daniel Ponsa, & Antonio Lopez. (2010). Camera Egomotion Estimation in the ADAS Context. In 13th Annual International Conference on Intelligent Transportation Systems (pp. 1415–1420).
Abstract: Camera-based Advanced Driver Assistance Systems (ADAS) have been the focus of many research efforts in the last decades. Proposals based on monocular cameras require knowledge of the camera pose with respect to the environment in order to reach an efficient and robust performance. A common assumption in such systems is to consider the road as planar, and the camera pose with respect to it as approximately known. However, in real situations, the camera pose varies over time due to the vehicle movement, the road slope, and irregularities on the road surface. Thus, the changes in the camera position and orientation (i.e., the egomotion) are critical information that must be estimated at every frame to avoid poor performance. This work focuses on egomotion estimation from a monocular camera in the ADAS context. We review and compare egomotion methods on simulated and real ADAS-like sequences. Based on the results of our experiments, we show which of the considered nonlinear and linear algorithms have the best performance in this domain.
|
|
|
A. Martinez, & Jordi Vitria. (1995). Designing and Implementing Real Walking Agents using Virtual Environments.
|
|
|
Francesco Ciompi, Rui Hua, Simone Balocco, Marina Alberti, Oriol Pujol, Carles Caus, et al. (2013). Learning to Detect Stent Struts in Intravascular Ultrasound. In 6th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 7887, pp. 575–583). Springer Berlin Heidelberg.
Abstract: In this paper we tackle the automatic detection of strut elements (metallic braces of a stent device) in Intravascular Ultrasound (IVUS) sequences. The proposed method is based on context-aware classification of IVUS images using Multi-Class Multi-Scale Stacked Sequential Learning (M2SSL). Additionally, we introduce a novel technique to reduce the amount of required contextual features. A comparison with binary and multi-class learning is also performed, using a dataset of IVUS images with struts manually annotated by an expert. The best performing configuration reaches an F-measure of F = 63.97%.
|
|
|
Francisco Alvaro, Francisco Cruz, Joan Andreu Sanchez, Oriol Ramos Terrades, & Jose Miguel Benedi. (2013). Page Segmentation of Structured Documents Using 2D Stochastic Context-Free Grammars. In 6th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 7887, pp. 133–140). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we define a bidimensional extension of Stochastic Context-Free Grammars for page segmentation of structured documents. Two sets of text classification features are used to perform an initial classification of each zone of the page. Then, the page segmentation is obtained as the most likely hypothesis according to a grammar. This approach is compared to Conditional Random Fields and results show significant improvements in several cases. Furthermore, grammars provide a detailed segmentation that allowed a semantic evaluation which also validates this model.
|
|
|
Adriana Romero, & Carlo Gatta. (2013). Do We Really Need All These Neurons? In 6th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 7887, pp. 460–467). LNCS. Springer Berlin Heidelberg.
Abstract: Restricted Boltzmann Machines (RBMs) are generative neural networks that have received much attention recently. In particular, choosing the appropriate number of hidden units is important, as a poor choice might hinder their representational power. According to the literature, RBMs require numerous hidden units to approximate any distribution properly. In this paper, we present an experiment to determine whether such an amount of hidden units is required in a classification context. We then propose an incremental algorithm that trains RBMs reusing the previously trained parameters, with a trade-off measure to determine the appropriate number of hidden units. Results on the MNIST and OCR letters databases show that a number of hidden units one order of magnitude smaller than the literature estimate suffices to achieve similar performance. Moreover, the proposed algorithm makes it possible to estimate the required number of hidden units without training many RBMs from scratch.
Keywords: Restricted Boltzmann Machine; hidden units; unsupervised learning; classification
|
|
|
Antonio Clavelli, Dimosthenis Karatzas, Josep Llados, Mario Ferraro, & Giuseppe Boccignone. (2013). Towards Modelling an Attention-Based Text Localization Process. In 6th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 7887, pp. 296–303). LNCS. Springer Berlin Heidelberg.
Abstract: This note introduces a visual attention model of text localization in real-world scenes. The core of the model, built upon the proto-object concept, is discussed. It is shown how such a dynamic mid-level representation of the scene can be derived in the framework of an action-perception loop engaging salience, text information value computation, and eye guidance mechanisms. Preliminary results that compare model-generated scanpaths with those eye-tracked from human subjects are presented.
Keywords: text localization; visual attention; eye guidance
|
|