Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, & Yoshua Bengio. (2015). FitNets: Hints for Thin Deep Nets. In 3rd International Conference on Learning Representations ICLR2015.
Abstract: While depth tends to improve network performance, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
Keywords: Computer Science - Learning; Computer Science - Neural and Evolutionary Computing
Carles Sanchez, Debora Gil, R. Tazi, Jorge Bernal, Y. Ruiz, L. Planas, et al. (2015). Quasi-real time digital assessment of Central Airway Obstruction. In 3rd European Congress for Bronchology and Interventional Pulmonology ECBIP2015.
Eduardo Tusa, Arash Akbarinia, Raquel Gil Rodriguez, & Corina Barbalata. (2015). Real-Time Face Detection and Tracking Utilising OpenMP and ROS. In 3rd Asia-Pacific Conference on Computer Aided System Engineering (pp. 179–184).
Abstract: The first requisite for a robot to succeed in social interactions is accurate human localisation, i.e. subject detection and tracking. Later, it is estimated whether an interaction partner seeks attention, for example by interpreting the position and orientation of the body. In computer vision, these cues are usually obtained from colour images, whose quality is degraded in poorly illuminated social scenes. In these scenarios depth sensors offer a richer representation; therefore, it is important to combine colour and depth information. The second aspect that plays a fundamental role in the acceptance of social robots is their real-time ability. Processing colour and depth images is computationally demanding. To overcome this, we propose a parallelisation strategy for face detection and tracking based on two different architectures: message passing and shared memory. Our results demonstrate high accuracy at low computational cost, processing nine times more frames in the parallel implementation, which enables real-time social robot interaction.
Keywords: RGB-D; Kinect; Human Detection and Tracking; ROS; OpenMP
Isabelle Guyon, Kristin Bennett, Gavin Cawley, Hugo Jair Escalante, Sergio Escalera, Tin Kam Ho, et al. (2015). AutoML Challenge 2015: Design and First Results. In 32nd International Conference on Machine Learning ICML15 workshop, JMLR proceedings (pp. 1–8).
Abstract: ChaLearn is organizing the Automatic Machine Learning (AutoML) contest 2015, which challenges participants to solve classification and regression problems without any human intervention. Participants' code is automatically run on the contest servers to train and test learning machines. However, there is no obligation to submit code; half of the prizes can be won by submitting prediction results only. Datasets of progressively increasing difficulty are introduced throughout the six rounds of the challenge. (Participants can enter the competition in any round.) The rounds alternate phases in which learners are tested on datasets participants have not seen (AutoML), and phases in which participants have limited time to tweak their algorithms on those datasets to improve performance (Tweakathon). This challenge will push the state of the art in fully automatic machine learning on a wide range of real-world problems. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML.
Keywords: AutoML Challenge; machine learning; model selection; meta-learning; representation learning; active learning
Onur Ferhat, Arcadi Llanza, & Fernando Vilariño. (2015). Gaze interaction for multi-display systems using natural light eye-tracker. In 2nd International Workshop on Solutions for Automatic Gaze Data Analysis.
M. Cruz, Cristhian A. Aguilera-Carrasco, Boris X. Vintimilla, Ricardo Toledo, & Angel Sappa. (2015). Cross-spectral image registration and fusion: an evaluation study. In 2nd International Conference on Machine Vision and Machine Learning.
Abstract: This paper presents a preliminary study on the registration and fusion of cross-spectral imaging. The objective is to evaluate the validity of widely used computer vision approaches when they are applied at different spectral bands. In particular, we are interested in merging images from the infrared (both long wave infrared: LWIR and near infrared: NIR) and visible spectrum (VS). Experimental results with different data sets are presented.
Keywords: multispectral imaging; image registration; data fusion; infrared and visible spectra
Miguel Oliveira, Victor Santos, Angel Sappa, & P. Dias. (2015). Scene Representations for Autonomous Driving: an approach based on polygonal primitives. In 2nd Iberian Robotics Conference ROBOT2015 (Vol. 417, pp. 503–515).
Abstract: In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro-scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
Keywords: Scene reconstruction; Point cloud; Autonomous vehicles
J. Poujol, Cristhian A. Aguilera-Carrasco, E. Danos, Boris X. Vintimilla, Ricardo Toledo, & Angel Sappa. (2015). Visible-Thermal Fusion based Monocular Visual Odometry. In 2nd Iberian Robotics Conference ROBOT2015 (Vol. 417, pp. 517–528). Springer International Publishing.
Abstract: The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, in the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectrum are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both monocular-visible spectrum and monocular-infrared spectrum are also provided, showing the validity of the proposed approach.
Keywords: Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion
Sergio Vera, Miguel Angel Gonzalez Ballester, & Debora Gil. (2015). A Novel Cochlear Reference Frame Based On The Laplace Equation. In 29th International Congress and Exhibition on Computer Assisted Radiology and Surgery (Vol. 10, pp. 1–312).
Victor Ponce, Hugo Jair Escalante, Sergio Escalera, & Xavier Baro. (2015). Gesture and Action Recognition by Evolved Dynamic Subgestures. In 26th British Machine Vision Conference (pp. 129.1–129.13).
Abstract: This paper introduces a framework for gesture and action recognition based on the evolution of temporal gesture primitives, or subgestures. Our work is inspired by the principle of producing genetic variations within a population of gesture subsequences, with the goal of obtaining a set of gesture units that enhance the generalization capability of standard gesture recognition approaches. In our context, gesture primitives are evolved over time using dynamic programming and generative models in order to recognize complex actions. In a few generations, the proposed subgesture-based representation of actions and gestures outperforms state-of-the-art results on the MSRDaily3D and MSRAction3D datasets.
Huamin Ren, Weifeng Liu, Soren Ingvor Olsen, Sergio Escalera, & Thomas B. Moeslund. (2015). Unsupervised Behavior-Specific Dictionary Learning for Abnormal Event Detection. In 26th British Machine Vision Conference.
Cristhian A. Aguilera-Carrasco, Angel Sappa, & Ricardo Toledo. (2015). LGHD: a Feature Descriptor for Matching Across Non-Linear Intensity Variations. In 22nd IEEE International Conference on Image Processing (pp. 178–181).
Fernando Vilariño, Dan Norton, & Onur Ferhat. (2015). Memory Fields: DJs in the Library. In 21st Symposium of Electronic Arts.
Aniol Lidon, Xavier Giro, Marc Bolaños, Petia Radeva, Markus Seidl, & Matthias Zeppelzauer. (2015). UPC-UB-STP @ MediaEval 2015 diversity task: iterative reranking of relevant images. In 2015 MediaEval Retrieving Diverse Images Task.
Abstract: This paper presents the results of the UPC-UB-STP team in the 2015 MediaEval Retrieving Diverse Images Task. The goal of the challenge is to provide a ranked list of Flickr photos for a predefined set of queries. Our approach first generates a ranking of images based on a query-independent estimation of their relevance. Only the top results are kept and iteratively re-ranked based on their intra-similarity to introduce diversity.
Xavier Baro, Jordi Gonzalez, Junior Fabian, Miguel Angel Bautista, Marc Oliu, Hugo Jair Escalante, et al. (2015). ChaLearn Looking at People 2015 challenges: action spotting and cultural event recognition. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 1–9).
Abstract: Following previous series on Looking at People (LAP) challenges [6, 5, 4], ChaLearn ran two competitions to be presented at CVPR 2015: action/interaction spotting and cultural event recognition in RGB data. We ran a second round on human activity recognition on RGB data sequences. In terms of cultural event recognition, tens of categories have to be recognized. This involves scene understanding and human analysis. This paper summarizes the two performed challenges and obtained results. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/.