Angel Sappa (Ed.). (2010). Computer Graphics and Imaging.
|
Angel Sappa, & Mohammad Rouhani. (2009). Efficient Distance Estimation for Fitting Implicit Quadric Surfaces. In 16th IEEE International Conference on Image Processing (3521–3524).
Abstract: This paper presents a novel approach for estimating the shortest Euclidean distance from a given point to the corresponding implicit quadric fitting surface. It first estimates the orthogonal orientation to the surface from the given point; then the shortest distance is directly estimated by intersecting the implicit surface with a line passing through the given point along the estimated orthogonal orientation. The proposed orthogonal distance estimation is easily obtained without increasing computational complexity; hence it can be used in error-minimization surface fitting frameworks. Comparisons of the proposed metric with previous approaches are provided to show improvements both in CPU time and in the accuracy of the obtained results. Surfaces fitted by using the proposed geometric distance estimation and state-of-the-art metrics are presented to show the viability of the proposed approach.
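The line-intersection idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the orthogonal orientation is estimated from the gradient of the quadric, and all function and variable names are invented for the example.

```python
import numpy as np

def quadric_distance(A, b, c, p):
    """Approximate distance from point p to the quadric
    f(x) = x^T A x + b^T x + c = 0.  Sketch of the abstract's idea:
    estimate the orthogonal direction at p (here via the gradient of f),
    then intersect the surface with the line through p along it."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    p = np.asarray(p, float)
    # Gradient of f at p as an estimate of the orthogonal direction.
    d = 2.0 * A @ p + b
    d /= np.linalg.norm(d)
    # Substituting x = p + t d into f(x) = 0 yields a quadratic in t.
    qa = d @ A @ d
    qb = 2.0 * p @ A @ d + b @ d
    qc = p @ A @ p + b @ p + c
    roots = np.roots([qa, qb, qc])
    real = roots[np.isreal(roots)].real
    if real.size == 0:
        return None  # the line does not intersect the surface
    return float(np.min(np.abs(real)))  # nearest intersection along the line
```

For a unit sphere (A = I, b = 0, c = -1) and a point at distance 2 from the centre, the quadratic in t gives intersections at t = -1 and t = -3, so the estimated distance is 1, which here coincides with the true orthogonal distance.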
|
Joan Serrat, J. Argemi, & Juan J. Villanueva. (1991). Automatization of TW2 method using a knowledge-based image analysis system. In VIth International Congress of Auxology.
|
Angel Sappa, & Boris X. Vintimilla. (2006). Edge Point Linking by Means of Global and Local Schemes. In IEEE International Conference on Signal-Image Technology and Internet-Based Systems (551–560). Hammamet, Tunisia.
|
Angel Sappa, & Boris X. Vintimilla. (2007). Cost-Based Closed Contour Representations. Journal of Electronic Imaging, 16(2), 023009 (9 pages).
|
Angel Sappa, & Boris X. Vintimilla. (2008). Edge Point Linking by Means of Global and Local Schemes. In E. Damiani (Ed.), Signal Processing for Image Enhancement and Multimedia Processing (Vol. 11, 115–125). Springer.
|
Sergio Silva, Victor Campmany, Laura Sellart, Juan Carlos Moure, Antoni Espinosa, David Vazquez, et al. (2015). Autonomous GPU-based Driving. In Programming and Tuning Massive Parallel Systems.
Abstract: Human factors cause most driving accidents, which is why autonomous driving is nowadays commonly proposed as an alternative. Autonomous driving will not only increase safety but will also enable a system of cooperative self-driving cars that reduces pollution and congestion. Furthermore, it will provide more freedom to people with disabilities, the elderly, and children.
Autonomous driving requires perceiving and understanding the vehicle environment (e.g., road, traffic signs, pedestrians, vehicles) using sensors (e.g., cameras, lidars, sonars, and radars), self-localization (requiring GPS, inertial sensors, and visual localization in precise maps), controlling the vehicle, and planning routes. These algorithms require high computation capability, which is starting to become feasible thanks to NVIDIA GPU acceleration.
NVIDIA® is developing a new platform for boosting autonomous driving capabilities that is able to manage the vehicle via CAN-Bus: the Drive™ PX. It has 8 ARM cores with dual accelerated Tegra® X1 chips, 12 synchronized camera inputs for 360º vehicle perception, 4G and Wi-Fi connectivity for vehicle communications, and GPS and inertial sensor inputs for self-localization.
Our research group has been selected for testing Drive™ PX. Accordingly, we are developing a Drive™ PX based autonomous car. Currently, we are porting our previous CPU-based algorithms (e.g., Lane Departure Warning, Collision Warning, Automatic Cruise Control, Pedestrian Protection, and Semantic Segmentation) to run on the GPU.
Keywords: Autonomous Driving; ADAS; CUDA
|
Joan Serrat, Ferran Diego, Jose Manuel Alvarez, & Felipe Lumbreras. (2007). Alignment of Videos Recorded from Moving Vehicles. In 14th International Conference on Image Analysis and Processing (512–517).
|
Angel Sappa, Fadi Dornaika, David Geronimo, & Antonio Lopez. (2007). Efficient On-Board Stereo Vision Pose Estimation. In Computer Aided Systems Theory, selected papers (Vol. 4739, 1183–1190). LNCS.
Abstract: This paper presents an efficient technique for real-time estimation of the pose of an on-board stereo vision system. The whole process is performed in the Euclidean space and consists of two stages. Initially, a compact representation of the original 3D data points is computed. Then, a RANSAC-based least-squares approach is used for fitting a plane to the 3D road points. Fast RANSAC fitting is obtained by selecting points according to a probability distribution function that takes into account the density of points at a given depth. Finally, the stereo camera position and orientation (pose) is computed relative to the road plane. The proposed technique is intended to be used in driver assistance systems for applications such as obstacle or pedestrian detection. Real-time performance is achieved. Experimental results on several environments and comparisons with a previous work are presented.
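The RANSAC-plus-least-squares plane fitting that this abstract (and the related TITS journal paper below) builds on can be sketched as follows. This is a minimal generic sketch, not the paper's implementation: the iteration count and inlier tolerance are arbitrary assumptions, and the paper's density-aware point-selection probability function is omitted.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.05, seed=None):
    """Fit a plane to 3-D points: RANSAC consensus search followed by a
    least-squares refinement on the inlier set.
    Returns (n, d) such that the plane is n . x + d = 0."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    best_inliers = None
    for _ in range(n_iters):
        # Hypothesize a plane from a random minimal sample of 3 points.
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n @ sample[0]
        # Score the hypothesis by its number of inliers.
        inliers = np.abs(pts @ n + d) < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refinement: SVD of the centred consensus set.
    inl = pts[best_inliers]
    centroid = inl.mean(axis=0)
    _, _, vt = np.linalg.svd(inl - centroid)
    n = vt[-1]  # direction of least variance = plane normal
    return n, float(-n @ centroid)
```

Given a point cloud dominated by road points at z ≈ 0 plus some off-plane obstacle points, the consensus step rejects the obstacles and the refinement recovers a normal close to (0, 0, ±1).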
|
Angel Sappa, Fadi Dornaika, David Geronimo, & Antonio Lopez. (2008). Registration-based Moving Object Detection from a Moving Camera. In IROS2008 2nd Workshop on Perception, Planning and Navigation for Intelligent Vehicles (65–69).
Abstract: This paper presents a robust approach for detecting moving objects from on-board stereo vision systems. It relies on a feature-point quaternion-based registration, which avoids common problems that appear when computationally expensive iterative algorithms are used in dynamic environments. The proposed approach consists of three stages. Initially, feature points are extracted and tracked through consecutive frames. Then, a RANSAC-based approach is used for registering two 3D point sets with known correspondences by means of the quaternion method. Finally, the computed 3D rigid displacement is used to map two consecutive frames into the same coordinate system. Moving objects correspond to those areas with large registration errors. Experimental results, in different scenarios, show the viability of the proposed approach.
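The closed-form quaternion registration of two point sets with known correspondences that the abstract refers to is the classical method of Horn (1987). A minimal sketch, without the RANSAC layer the paper wraps around it:

```python
import numpy as np

def quaternion_register(P, Q):
    """Closed-form rigid registration of corresponding 3-D point sets
    (Horn's quaternion method).  Returns (R, t) such that Q ~ R @ P + t."""
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (P - cp).T @ (Q - cq)
    # 4x4 symmetric matrix whose dominant eigenvector is the optimal
    # unit quaternion (w, x, y, z).
    delta = np.array([H[1, 2] - H[2, 1],
                      H[2, 0] - H[0, 2],
                      H[0, 1] - H[1, 0]])
    K = np.zeros((4, 4))
    K[0, 0] = np.trace(H)
    K[0, 1:] = delta
    K[1:, 0] = delta
    K[1:, 1:] = H + H.T - np.trace(H) * np.eye(3)
    _, v = np.linalg.eigh(K)          # eigenvalues in ascending order
    w, x, y, z = v[:, -1]             # eigenvector of largest eigenvalue
    # Quaternion -> rotation matrix.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    t = cq - R @ cp
    return R, t
```

Because the solution is closed-form (one 4x4 eigendecomposition), it avoids the convergence issues of iterative registration in dynamic scenes, which is the point made in the abstract.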
|
Joan Serrat, Ferran Diego, Felipe Lumbreras, & Jose Manuel Alvarez. (2007). Synchronization of Video Sequences from Free-moving Cameras. In J. Marti et al. (Eds.), 3rd Iberian Conference on Pattern Recognition and Image Analysis (Vol. 4477, 620–627). LNCS.
|
Joan Serrat, Ferran Diego, Felipe Lumbreras, Jose Manuel Alvarez, Antonio Lopez, & C. Elvira. (2008). Dynamic Comparison of Headlights. Journal of Automobile Engineering, 222(5), 643–656.
Keywords: video alignment
|
Joan Serrat, Ferran Diego, & Felipe Lumbreras. (2008). Los faros delanteros a través del objetivo [Headlights through the lens]. UAB Divulga, Revista de divulgación científica.
|
Angel Sappa, Fadi Dornaika, Daniel Ponsa, David Geronimo, & Antonio Lopez. (2008). An Efficient Approach to Onboard Stereo Vision System Pose Estimation. IEEE Transactions on Intelligent Transportation Systems, 9(3), 476–490.
Abstract: This paper presents an efficient technique for estimating the pose of an onboard stereo vision system relative to the environment’s dominant surface area, which is supposed to be the road surface. Unlike previous approaches, it can be used either for urban or highway scenarios since it is not based on a specific visual traffic feature extraction but on 3-D raw data points. The whole process is performed in the Euclidean space and consists of two stages. Initially, a compact 2-D representation of the original 3-D data points is computed. Then, a RANdom SAmple Consensus (RANSAC) based least-squares approach is used to fit a plane to the road. Fast RANSAC fitting is obtained by selecting points according to a probability function that takes into account the density of points at a given depth. Finally, stereo camera height and pitch angle are computed relative to the fitted road plane. The proposed technique is intended to be used in driver-assistance systems for applications such as vehicle or pedestrian detection. Experimental results on urban environments, which are the most challenging scenarios (i.e., flat/uphill/downhill driving, speed bumps, and car accelerations), are presented. These results are validated with manually annotated ground truth. Additionally, comparisons with previous works are presented to show the improvements in the central processing unit processing time, as well as in the accuracy of the obtained results.
Keywords: Camera extrinsic parameter estimation, ground plane estimation, onboard stereo vision system
|
Joan Serrat, & Antonio Lopez. (2006). Una experiencia de Enginyeria del Software amb ABP [A software engineering teaching experience with problem-based learning (ABP)].
|