A. Richichi, O. Fors, M.T. Merino, Xavier Otazu, J. Núñez, A. Prades, et al. (2006). The Calar Alto lunar occultation program: update and new results. Astronomy and Astrophysics (Section "Stellar structure and evolution"), 445:1081–1088.
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2006). Face Verification using External Features.
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2006). On the Use of External Face Features for Identity Verification. Journal of Multimedia, 1(4):11–20.
Abstract: In general, automatic face classification applications capture images in natural environments. In these cases, performance is affected by variations in facial images due to illumination, pose, occlusion or expression. Most existing face classification systems use only the internal features (eyes, nose and mouth), since these are more difficult to imitate. Nevertheless, many applications unrelated to security are now being developed, and in these cases the information located in the head, chin or ear zones (external features) can be useful to improve current accuracies. However, the lack of a natural alignment in these areas makes it difficult to extract such features with classic bottom-up methods. In this paper, we propose a complete scheme based on a top-down reconstruction algorithm to extract the external features of face images. To test our system we have performed face verification experiments on public databases, given that identity verification is a general task with many real-life applications. We have considered uniformly illuminated images, images with occlusions, and images with strong local changes in illumination. The results show that the information contributed by the external features can be useful for verification purposes, and is especially significant when faces are partially occluded.
Keywords: Face Verification, Computer Vision, Machine Learning
|
Alicia Fornes, Josep Llados, & Gemma Sanchez. (2006). Primitive Segmentation in Old Handwritten Music Scores. In Graphics Recognition: Ten Years Review and Future Perspectives, W. Liu, J. Llados (Eds.), LNCS 3926: 288–299.
|
Angel Sappa. (2006). Unsupervised Contour Closure Algorithm for Range Image Edge-Based Segmentation. IEEE Transactions on Image Processing, 15(2):377–384.
|
Angel Sappa. (2006). Splitting up Panoramic Range Images into Compact 2½D Representations. International Journal of Imaging Systems and Technology, 16(3): 85–91.
|
Angel Sappa, & Boris X. Vintimilla. (2006). Edge Point Linking by Means of Global and Local Schemes. In IEEE Int. Conf. on Signal-Image Technology and Internet-Based Systems, Hammamet, Tunisia, December 2006, pp. 551–560.
|
Angel Sappa, David Geronimo, Fadi Dornaika, & Antonio Lopez. (2006). On-board camera extrinsic parameter estimation. Electronics Letters, 42(13):745–746.
Abstract: An efficient technique for real-time estimation of camera extrinsic parameters is presented. It is intended to be used on on-board vision systems for driving assistance applications. The proposed technique is based on the use of a commercial stereo vision system that does not need any visual feature extraction.
|
Angel Sappa, David Geronimo, Fadi Dornaika, & Antonio Lopez. (2006). Real Time Vehicle Pose Using On-Board Stereo Vision System. In International Conference on Image Analysis and Recognition (pp. 205–216).
Abstract: This paper presents a robust technique for real-time estimation of both the camera's position and orientation, referred to as its pose. A commercial stereo vision system is used. Unlike previous approaches, it can be used in either urban or highway scenarios. The proposed technique consists of two stages. First, a compact 2D representation of the original 3D data points is computed. Then, a RANSAC-based least-squares approach is used to fit a plane to the road; at the same time, the camera's relative position and orientation are computed. The proposed technique is intended for use in a driving assistance scheme for applications such as obstacle or pedestrian detection. Experimental results on urban environments with different road geometries are presented.
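The core step the abstract describes, fitting a road plane to 3D stereo points with RANSAC followed by a least-squares refit, and reading the camera height and pitch off the fitted plane, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; all function names, thresholds, and iteration counts are assumptions.

```python
import numpy as np

def fit_plane_lstsq(pts):
    """Least-squares fit of the plane z = a*x + b*y + c to an (N, 3) point array."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef  # (a, b, c)

def ransac_road_plane(pts, n_iter=200, thresh=0.05, seed=None):
    """RANSAC plane fit; returns plane coefficients, camera height and pitch."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        a, b, c = fit_plane_lstsq(sample)
        # Point-to-plane distance for the plane a*x + b*y - z + c = 0
        d = np.abs(a * pts[:, 0] + b * pts[:, 1] + c - pts[:, 2]) \
            / np.sqrt(a * a + b * b + 1.0)
        inliers = d < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit on the consensus set, as in the two-stage scheme
    a, b, c = fit_plane_lstsq(pts[best_inliers])
    height = abs(c) / np.sqrt(a * a + b * b + 1.0)  # camera height above the road
    pitch = np.arctan(np.hypot(a, b))               # angle between road normal and camera z-axis
    return (a, b, c), height, pitch
```

The RANSAC stage makes the fit robust to off-road points (vehicles, pedestrians, kerbs), while the closing least-squares refit over the consensus set recovers the accuracy a minimal 3-point sample lacks.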
|
Angel Sappa, & Fadi Dornaika. (2006). An Edge-Based Approach to Motion Detection. In 6th International Conference on Computational Science (ICCS'06), LNCS 3991:563–570.
|
Anonymous. (2006). A Low Computational-Cost Method to Fuse IKONOS Images Using the Spectral Response Function of Its Sensors. IEEE Transactions on Geoscience and Remote Sensing, 44(6): 1683–1691.
|
Anton Cervantes, Gemma Sanchez, Josep Llados, Agnes Borras, & Ana Rodriguez. (2006). Biometric Recognition Based on Line Shape Descriptors. In Lecture Notes in Computer Science (Vol. 3926, pp. 346–357). Springer.
Abstract: In this paper we propose biometric descriptors inspired by shape signatures traditionally used in graphics recognition approaches. In particular, several methods based on line shape descriptors are developed to identify newborns from the biometric information of the ears. The process steps are the following: image acquisition, ear segmentation, ear normalization, feature extraction and identification. Several shape signatures are defined from contour images. These are formulated in terms of zoning and contour-crossing descriptors. Experimental results are presented to demonstrate the effectiveness of the techniques used.
|
Aura Hernandez-Sabate, Debora Gil, J. Mauri, & Petia Radeva. (2006). Reducing cardiac motion in IVUS sequences. In Proceedings of Computers in Cardiology (Vol. 33, pp. 685–688).
Abstract: Cardiac vessel displacement is a main artifact in IVUS sequences. It hinders visualization of the main structures in an appropriate orientation and alignment, and affects the extraction of vessel measurements. In this paper, we present a novel approach to image sequence alignment based on spectral analysis, which removes rigid dynamics while preserving the vessel geometry. First, we suppress the translation by taking, for each frame, the center of mass of the image as the origin of coordinates. In polar coordinates with this point as origin, the rotation appears as a horizontal displacement. This displacement induces a phase shift in the Fourier coefficients of two consecutive polar images. We estimate the phase by fitting a regression plane to the phases of the principal frequencies. Experiments show that the presented strategy suppresses cardiac motion regardless of the acquisition device.
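The spectral step the abstract relies on (a cyclic displacement induces a linear phase ramp in the Fourier coefficients, and the displacement is recovered by regressing the phase of the cross-power spectrum against the principal frequencies) can be sketched in one dimension. This is a minimal illustration of the Fourier shift theorem, not the paper's 2D implementation; the function name and the choice of eight principal frequencies are assumptions.

```python
import numpy as np

def estimate_shift_fourier(sig_ref, sig_shifted, n_freq=8):
    """Estimate the cyclic shift between two 1D signals from the phase
    of their cross-power spectrum at the first n_freq frequencies."""
    N = len(sig_ref)
    F_ref = np.fft.fft(sig_ref)
    F_shf = np.fft.fft(sig_shifted)
    # For g(n) = f(n - s): G(k) = F(k) * exp(-2*pi*i*k*s/N), so the
    # cross spectrum G * conj(F) has phase -2*pi*k*s/N, linear in k.
    cross = F_shf * np.conj(F_ref)
    k = np.arange(1, n_freq + 1)                      # principal frequencies, DC skipped
    phase = np.unwrap(np.angle(cross[1:n_freq + 1]))
    # Least-squares regression of phase against frequency index (line through origin)
    slope = np.dot(k, phase) / np.dot(k, k)
    return -slope * N / (2.0 * np.pi)
```

In the paper's setting the same idea is applied to polar-resampled frames, where the rotation of the vessel appears as exactly this kind of horizontal displacement, and the regression is over a plane rather than a line.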
|
Aura Hernandez-Sabate, Petia Radeva, Antonio Tovar, & Debora Gil. (2006). Vessel structures alignment by spectral analysis of IVUS sequences. In Proc. of CVII, MICCAI Workshop (pp. 39–36). 1st International Workshop on Computer Vision for Intravascular and Intracardiac Imaging (CVII'06), Copenhagen, Denmark.
Abstract: Three-dimensional intravascular ultrasound (IVUS) allows visualization and volumetric measurement of coronary lesions through an exploration of the cross sections and longitudinal views of arteries. However, the visualization and subsequent morpho-geometric measurements in IVUS longitudinal cuts are subject to distortion caused by periodic image/vessel motion around the IVUS catheter. To overcome the image motion artifact, ECG-gating and image-gated approaches are usually proposed, which slow the pullback acquisition or disregard part of the IVUS data. In this paper, we argue that the image motion is due to 3-D vessel geometry as well as cardiac dynamics, and propose a dynamic model based on the tracking of an elliptical vessel approximation to recover the rigid transformation and align IVUS images without losing any IVUS data. We report an extensive validation with synthetic simulated data and in vivo IVUS sequences of 30 patients, achieving an average reduction of the image artifact of 97% in synthetic data and 79% in real data. Our study shows that IVUS alignment improves longitudinal analysis of the IVUS data and is a necessary step towards accurate reconstruction and volumetric measurements of 3-D IVUS.
|
Bogdan Raducanu, & Jordi Vitria. (2006). Aprendiendo a Aprender: de Maquinas Listas a Maquinas Inteligentes [Learning to Learn: from Smart Machines to Intelligent Machines].
|