Pau Rodriguez, Guillem Cucurull, Josep M. Gonfaus, Xavier Roca, & Jordi Gonzalez. (2017). Age and gender recognition in the wild with deep attention. Pattern Recognition, 72, 563–571.
Abstract: Face analysis in images in the wild still poses a challenge for automatic age and gender recognition tasks, mainly due to their high variability in resolution, deformation, and occlusion. Although performance has increased greatly thanks to Convolutional Neural Networks (CNNs), it is still far from optimal when compared to other image recognition tasks, mainly because of the high sensitivity of CNNs to facial variations. In this paper, inspired by biology and the recent success of attention mechanisms on visual question answering and fine-grained recognition, we propose a novel feedforward attention mechanism that is able to discover the most informative and reliable parts of a given face for improving age and gender classification. In particular, given a downsampled facial image, the proposed model is trained based on a novel end-to-end learning framework to extract the most discriminative patches from the original high-resolution image. Experimental validation on the standard Adience, Images of Groups, and MORPH II benchmarks shows that including attention mechanisms enhances the performance of CNNs in terms of robustness and accuracy.
Keywords: Age recognition; Gender recognition; Deep neural networks; Attention mechanisms
|
Maria Alberich-Carramiñana, Guillem Alenya, Juan Andrade, E. Martinez, & Carme Torras. (2006). Affine Epipolar Direction from Two Views of a Planar Contour. In Proceedings of the Advanced Concepts for Intelligent Vision Systems Conference, LNCS 4179: 944–955.
|
Aitor Alvarez-Gila, Joost Van de Weijer, & Estibaliz Garrote. (2017). Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB. In 1st International Workshop on Physics Based Vision meets Deep Learning.
Abstract: Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer.
Given the heavily underconstrained, non-linear nature of the problem, traditional techniques leverage different statistical properties of the spectral signal in order to build informative priors from real world object reflectances for constructing such RGB to spectral signal mapping. However,
most of them treat each sample independently, and thus do not benefit from the contextual information that the spatial dimensions can provide. We pose hyperspectral natural image reconstruction as an image-to-image mapping learning problem, and apply a conditional generative adversarial framework to help capture spatial semantics. This is the first time Convolutional Neural Networks (and, in particular, Generative Adversarial Networks) are used to solve this task. Quantitative evaluation shows a Root Mean Squared Error (RMSE) drop of 44.7% and a Relative RMSE drop of 47.0% on the ICVL natural hyperspectral image dataset.
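The error metrics quoted in this abstract (RMSE and Relative RMSE) can be sketched in a few lines. This is a generic illustration of the metrics, not the authors' evaluation code, and the sample values are hypothetical:

```python
import math

def rmse(pred, target):
    # Root Mean Squared Error over flattened signal values
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target))

def relative_rmse(pred, target):
    # Same error, but each residual is normalized by the reference value,
    # so bright and dark spectral bands are weighted comparably
    return math.sqrt(sum(((p - t) / t) ** 2 for p, t in zip(pred, target)) / len(target))

# Toy reconstructed vs. ground-truth spectral samples (hypothetical values)
target = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
print(round(rmse(pred, target), 4))           # 0.1581
print(round(relative_rmse(pred, target), 4))  # 0.0697
```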
|
Yi Xiao. (2023). Advancing Vision-based End-to-End Autonomous Driving (Antonio Lopez, Ed.). Ph.D. thesis, IMPRIMA.
Abstract: In autonomous driving, artificial intelligence (AI) processes the traffic environment to drive the vehicle to a desired destination. Currently, there are different paradigms that address the development of AI-enabled drivers. On the one hand, we find modular pipelines, which divide the driving task into sub-tasks such as perception, maneuver planning, and control. On the other hand, we find end-to-end driving approaches that attempt to learn the direct mapping of raw data from input sensors to vehicle control signals. The latter are relatively less studied but are gaining popularity as they are less demanding in terms of data labeling. Therefore, in this thesis, our goal is to investigate end-to-end autonomous driving.
We propose to evaluate three approaches to tackle the challenge of end-to-end autonomous driving. First, we focus on the input, considering adding depth information as complementary to RGB data, in order to mimic the human ability to estimate the distance to obstacles. Notice that, in the real world, these depth maps can be obtained either from a LiDAR sensor or from a trained monocular depth estimation module, where human labeling is not needed. Then, based on the intuition that the latent space of end-to-end driving models encodes relevant information for driving, we use it as prior knowledge for training an affordance-based driving model. In this case, the trained affordance-based model can achieve good performance while requiring less human-labeled data, and it can provide interpretability regarding driving actions. Finally, we present a new pure vision-based end-to-end driving model termed CIL++, which is trained by imitation learning. CIL++ leverages modern best practices, such as a large horizontal field of view and a self-attention mechanism, which contribute to the agent's understanding of the driving scene and bring about a better imitation of human drivers. Using training data without any human labeling, our model yields almost expert performance in the CARLA NoCrash benchmark and could rival SOTA models that require large amounts of human-labeled data.
|
J. Kuhn, A. Nussbaumer, J. Pirker, Dimosthenis Karatzas, A. Pagani, O. Conlan, et al. (2015). Advancing Physics Learning Through Traversing a Multi-Modal Experimentation Space. In Workshop Proceedings on the 11th International Conference on Intelligent Environments (Vol. 19, pp. 373–380).
Abstract: Translating conceptual knowledge into real world experiences presents a significant educational challenge. This position paper presents an approach that supports learners in moving seamlessly between conceptual learning and their application in the real world by bringing physical and virtual experiments into everyday settings. Learners are empowered in conducting these situated experiments in a variety of physical settings by leveraging state of the art mobile, augmented reality, and virtual reality technology. A blend of mobile-based multi-sensory physical experiments, augmented reality and enabling virtual environments can allow learners to bridge their conceptual learning with tangible experiences in a completely novel manner. This approach focuses on the learner by applying self-regulated personalised learning techniques, underpinned by innovative pedagogical approaches and adaptation techniques, to ensure that the needs and preferences of each learner are catered for individually.
|
Angel Sappa, Niki Aifanti, N. Grammalidis, & Sotiris Malassiotis. (2004). Advances in Vision-Based Human Body Modeling. In N. Sarris & M. Strintzis (Eds.), 3D Modeling & Animation: Synthesis and Analysis Techniques for the Human Body (pp. 1–26).
|
Niki Aifanti, Angel Sappa, N. Grammalidis, & Sotiris Malassiotis. (2009). Advances in Tracking and Recognition of Human Motion. In Encyclopedia of Information Science and Technology (Vol. I, 65–71).
|
Josep Llados. (2007). Advances in Graphics Recognition. In B. B. Chaudhuri (Ed.), Digital Document Processing: Major Directions and Recent Advances (Advances in Pattern Recognition, pp. 281–304).
|
Jun Wan, Guodong Guo, Sergio Escalera, Hugo Jair Escalante, & Stan Z Li. (2023). Advances in Face Presentation Attack Detection.
|
Debora Gil, & Antoni Rosell. (2019). Advances in Artificial Intelligence – How Lung Cancer CT Screening Will Progress? In World Lung Cancer Conference.
Abstract: Invited speaker
|
David Rotger, Cristina Cañero, Petia Radeva, J. Mauri, E. Fernandez, A. Tovar, et al. (2001). Advanced Visualization of 3D data of Intravascular Ultrasound Images.
|
Maya Dimitrova, Petia Radeva, David Rotger, D. Boyadjiev, & Juan J. Villanueva. (2004). Advanced Cardiological Diagnosis via Intelligent Image Analysis.
|
Miguel Reyes, Jose Ramirez Moreno, Juan R Revilla, Petia Radeva, & Sergio Escalera. (2011). ADiBAS: Sistema Multisensor de Adquisicion Automatica de Datos Corporales Objetivos, Robustos y Fiables para el Analisis de la Postura y el Movimiento. In 6th Congreso Iberoamericano de Tecnologia de Apoyo a la Discapacidad (pp. 939–944).
Abstract: The analysis of posture and range of motion is fundamental for understanding gesture optimization and thus for improving performance and detecting possible injuries. This quantification is especially interesting for athletes or for patients with a neurological or musculoskeletal injury, since it makes it possible to follow the evolution of these patients, to evaluate the efficacy of the applied therapy and, if necessary, to propose a modification of the treatment protocol.
In this work we present an automatic system that, by means of a non-invasive technology, automatically captures LED markers placed on the patient and subsequently analyzes them in order to provide the specialist with objective data for better diagnostic support. We also describe a markerless analytical system for body posture, whose operation during dynamic sequences gives the patient a high degree of naturalness when performing the functional exercises.
|
J. R. Serra, & J. B. Subirana. (1997). Adaptive non-cartesian networks for vision.
|
David Geronimo, Angel Sappa, Antonio Lopez, & Daniel Ponsa. (2007). Adaptive Image Sampling and Windows Classification for On-board Pedestrian Detection. In Proceedings of the 5th International Conference on Computer Vision Systems.
Abstract: On-board pedestrian detection is at the frontier of the state of the art, since it implies processing outdoor scenarios from a mobile platform and searching for aspect-changing objects in cluttered urban environments. The most promising approaches include the development of classifiers based on feature selection and machine learning. However, they use a large number of features, which compromises real-time performance. Thus, methods for running the classifiers in only a few image windows must be provided. In this paper we contribute to both aspects, proposing a camera pose estimation method for adaptive sparse image sampling, as well as a classifier for pedestrian detection based on Haar wavelets and edge orientation histograms as features and AdaBoost as the learning machine. Both proposals are compared with relevant approaches in the literature, showing comparable results while reducing processing time by a factor of four for the sampling task and by a factor of ten for the classification one.
Keywords: Pedestrian Detection
|