TY - CONF
AU - Ciprian Corneanu
AU - Meysam Madadi
AU - Sergio Escalera
AU - Aleix M. Martinez
A2 - CVPR
PY - 2019//
TI - What does it mean to learn in deep networks? And, how does one detect adversarial attacks?
BT - 32nd IEEE Conference on Computer Vision and Pattern Recognition
SP - 4752
EP - 4761
N2 - The flexibility and high accuracy of Deep Neural Networks (DNNs) have transformed computer vision. But the fact that we do not know when a specific DNN will work and when it will fail has resulted in a lack of trust. A clear example is self-driving cars; people are uncomfortable sitting in a car driven by algorithms that may fail under unknown, unpredictable conditions. Interpretability and explainability approaches attempt to address this by uncovering what a DNN models, i.e., what each node (cell) in the network represents and what images are most likely to activate it. This can be used to generate, for example, adversarial attacks. But these approaches do not generally allow us to determine where a DNN will succeed or fail and why, i.e., whether the learned representation generalizes to unseen samples. Here, we derive a novel approach that defines what it means to learn in deep networks, and we show how to use this knowledge to detect adversarial attacks. We show how this defines the ability of a network to generalize to unseen testing samples and, most importantly, why this is the case.
UR - https://ieeexplore.ieee.org/document/8953424
L1 - http://refbase.cvc.uab.es/files/CME2019.pdf
UR - http://dx.doi.org/10.1109/CVPR.2019.00489
N1 - HuPBA; no proj
ID - Ciprian Corneanu2019
ER -