|
Tomas Sixta, Julio C. S. Jacques Junior, Pau Buch Cardona, Eduard Vazquez, & Sergio Escalera. (2020). FairFace Challenge at ECCV 2020: Analyzing Bias in Face Recognition. In ECCV Workshops (Vol. 12540, pp. 463–481). LNCS.
Abstract: This work summarizes the 2020 ChaLearn Looking at People Fair Face Recognition and Analysis Challenge and provides a description of the top-winning solutions and an analysis of the results. The aim of the challenge was to evaluate accuracy and bias with respect to gender and skin colour of submitted algorithms on the task of 1:1 face verification in the presence of other confounding attributes. Participants were evaluated using an in-the-wild dataset based on a reannotated version of IJB-C, further enriched with 12.5K new images and additional labels. The dataset is not balanced, which simulates a real-world scenario where AI-based models that are supposed to produce fair outcomes are trained and evaluated on imbalanced data. The challenge attracted 151 participants, who made more than 1.8K submissions in total. The final phase of the challenge attracted 36 active teams, out of which 10 exceeded 0.999 AUC-ROC while achieving very low scores on the proposed bias metrics. Common strategies among the participants were face pre-processing, homogenization of data distributions, the use of bias-aware loss functions and ensemble models. The analysis of the top-10 teams shows higher false positive rates (and lower false negative rates) for females with dark skin tone, as well as the potential of eyeglasses and young age to increase false positive rates too.
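The bias analysis described above amounts to comparing verification error rates across demographic groups. A minimal sketch of such a per-group breakdown is given below; it assumes pairwise similarity scores, ground-truth same-identity labels, a group label per pair, and a fixed decision threshold. This is an illustrative simplification, not the challenge's official bias metrics.

```python
import numpy as np

def group_error_rates(scores, same_identity, groups, threshold=0.5):
    # Per-group false positive / false negative rates for 1:1 verification pairs.
    scores = np.asarray(scores, dtype=float)
    same = np.asarray(same_identity, dtype=bool)
    groups = np.asarray(groups)
    accept = scores >= threshold
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        neg, pos = m & ~same, m & same
        fpr = accept[neg].mean() if neg.any() else float("nan")   # impostor pairs accepted
        fnr = (~accept[pos]).mean() if pos.any() else float("nan")  # genuine pairs rejected
        rates[str(g)] = {"FPR": float(fpr), "FNR": float(fnr)}
    return rates
```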
|
|
|
Martin Menchon, Estefania Talavera, Jose M. Massa, & Petia Radeva. (2020). Behavioural Pattern Discovery from Collections of Egocentric Photo-Streams. In ECCV Workshops (Vol. 12538, pp. 469–484). LNCS.
Abstract: The automatic discovery of behaviour is of high importance when aiming to assess and improve the quality of life of people. Egocentric images offer a rich and objective description of the daily life of the camera wearer. This work proposes a new method to identify a person’s patterns of behaviour from collected egocentric photo-streams. Our model characterizes time-frames based on the context (place, activities and environment objects) that defines the composition of the images. Based on the similarity among the time-frames that describe the collected days for a user, we propose a new unsupervised greedy method to discover the behavioural pattern set based on a novel semantic clustering approach. Moreover, we present a new score metric to evaluate the performance of the proposed algorithm. We validate our method on 104 days and more than 100k images extracted from 7 users. Results show that behavioural patterns can be discovered to characterize the routine of individuals and consequently their lifestyle.
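A minimal sketch of the greedy grouping idea, assuming each day has already been summarized into a descriptor of detected places, activities and objects; the paper's semantic clustering is more elaborate than this cosine-similarity version.

```python
import numpy as np

def greedy_pattern_discovery(day_descriptors, sim_threshold=0.8):
    # day_descriptors: one vector per day summarizing detected places/activities/objects.
    X = np.asarray(day_descriptors, dtype=float)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    unassigned = list(range(len(X)))
    patterns = []
    while unassigned:
        seed = unassigned.pop(0)
        cluster = [seed]
        for i in list(unassigned):
            if float(X[seed] @ X[i]) >= sim_threshold:   # cosine similarity between days
                cluster.append(i)
                unassigned.remove(i)
        patterns.append(cluster)
    return patterns   # each cluster of days is a candidate behavioural pattern
```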
|
|
|
Parichehr Behjati Ardakani, Diego Velazquez, Josep M. Gonfaus, Pau Rodriguez, Xavier Roca, & Jordi Gonzalez. (2019). Catastrophic interference in Disguised Face Recognition. In 9th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 11868, pp. 64–75). LNCS.
Abstract: The natural tendency of artificial neural networks to completely and abruptly forget previously learned information when learning new information is well known. We explore this behaviour in the context of Face Verification on the recently proposed Disguised Faces in the Wild (DFW) dataset. We empirically evaluate several DCNN architectures commonly used for Face Recognition and distill some insights about the effect of sequential learning on distinct identities from different datasets, showing that the catastrophic forgetting phenomenon is present even in feature embeddings fine-tuned on tasks different from the original domain.
Keywords: Neural network forgetting; Face recognition; Disguised Faces
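A minimal sketch of the sequential-learning protocol the paper studies: measure performance on the original data before and after fine-tuning on a new dataset, and report the drop. train_fn and eval_fn are hypothetical user-supplied helpers, and any face-recognition backbone can be plugged in.

```python
import copy

def forgetting_after_finetune(model, train_fn, eval_fn, loader_a, loader_b):
    # eval_fn(model, loader) -> accuracy; train_fn(model, loader) fine-tunes in place.
    acc_a_before = eval_fn(model, loader_a)
    finetuned = copy.deepcopy(model)
    train_fn(finetuned, loader_b)                 # sequential learning on the new identities
    acc_a_after = eval_fn(finetuned, loader_a)
    acc_b_after = eval_fn(finetuned, loader_b)
    return {"A_before": acc_a_before,
            "A_after": acc_a_after,
            "forgetting": acc_a_before - acc_a_after,   # drop on the original task
            "B_after": acc_b_after}
```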
|
|
|
Eduardo Aguilar, & Petia Radeva. (2019). Food Recognition by Integrating Local and Flat Classifiers. In 9th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 11867, pp. 65–74). LNCS.
Abstract: The recognition of food images is an interesting research topic, whose applicability stands out in the creation of nutritional diaries aimed at improving the quality of life of people with a chronic disease (e.g. diabetes, heart disease) or prone to acquiring one (e.g. people who are overweight or obese). For a food recognition system to be useful in real applications, it is necessary to recognize a huge number of different foods. We argue that for very large-scale classification, a traditional flat classifier is not enough to achieve an acceptable result. To address this, we propose a method that performs prediction either with local classifiers, based on a class hierarchy, or with a flat classifier. We decide which approach to use depending on the analysis of both the epistemic uncertainty obtained for the image in the child classifiers and the prediction of the parent classifier. When our criterion is met, the final prediction is obtained with the respective local classifier; otherwise, with the flat classifier. From the results, we can see that the proposed method improves the classification performance compared to the use of a single flat classifier.
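A minimal sketch of the routing idea, under the assumption that epistemic uncertainty is estimated with MC-dropout; the classifier interfaces and the threshold are hypothetical placeholders, and the paper's actual criterion combines the parent prediction and child uncertainty in its own way.

```python
import numpy as np

def predict_with_hierarchy(x, parent_clf, child_clfs, flat_clf,
                           uncertainty_threshold=0.5, mc_samples=20):
    # Route the sample to a local (per-superclass) classifier when its epistemic
    # uncertainty is low; otherwise fall back to the flat classifier.
    parent_probs = parent_clf(x)                       # superclass probabilities
    parent = int(np.argmax(parent_probs))
    # MC-dropout estimate of epistemic uncertainty for the selected child classifier
    samples = np.stack([child_clfs[parent](x, dropout=True) for _ in range(mc_samples)])
    epistemic = samples.var(axis=0).mean()
    if epistemic < uncertainty_threshold:
        return samples.mean(axis=0)                    # local prediction
    return flat_clf(x)                                 # flat fallback
```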
|
|
|
Gemma Rotger, Francesc Moreno-Noguer, Felipe Lumbreras, & Antonio Agudo. (2019). Single view facial hair 3D reconstruction. In 9th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 11867, pp. 423–436). LNCS.
Abstract: In this work, we introduce a novel energy-based framework that addresses the challenging problem of 3D reconstruction of facial hair from a single RGB image. To this end, we identify hair pixels over the image via texture analysis and then determine individual hair fibers that are modeled by means of a parametric hair model based on 3D helixes. We propose to minimize an energy composed of several terms in order to adapt the hair parameters that better fit the image detections. The final hairs correspond to the resulting fibers after a post-processing step where we encourage further realism. The resulting approach generates realistic facial hair fibers from solely an RGB image without assuming any training data nor user interaction. We provide an experimental evaluation on real-world pictures where several facial hair styles and image conditions are observed, showing consistent results and establishing a comparison with respect to competing approaches.
Keywords: 3D Vision; Shape Reconstruction; Facial Hair Modeling
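A minimal sketch of the parametric fiber model and a toy data term, assuming hair pixels have already been detected and the fiber has been projected to the image plane; the paper's energy contains several additional terms and fiber parameters.

```python
import numpy as np

def helix_fiber(radius, pitch, length, n_points=50):
    # Canonical 3D helix used as a parametric model for one facial-hair fiber.
    # The fiber's position/orientation would be extra parameters fitted by the energy.
    t = np.linspace(0.0, length, n_points)
    return np.stack([radius * np.cos(t), radius * np.sin(t), pitch * t], axis=1)

def image_fit_energy(fiber_2d, detected_pixels):
    # Toy data term: mean distance from projected fiber points to detected hair pixels.
    d = np.linalg.norm(fiber_2d[:, None, :] - detected_pixels[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```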
|
|
|
Debora Gil, Antonio Esteban Lansaque, Sebastian Stefaniga, Mihail Gaianu, & Carles Sanchez. (2019). Data Augmentation from Sketch. In International Workshop on Uncertainty for Safe Utilization of Machine Learning in Medical Imaging (Vol. 11840, pp. 155–162). LNCS.
Abstract: State-of-the-art machine learning methods need huge amounts of data with unambiguous annotations for their training. In the context of medical imaging this is, in general, a very difficult task due to limited access to clinical data, the time required for manual annotations and variability across experts. Simulated data could serve for data augmentation provided that its appearance is comparable to the actual appearance of intra-operative acquisitions. Generative Adversarial Networks (GANs) are a powerful tool for artistic style transfer, but lack a criterion for selecting epochs that also ensures preservation of intra-operative content.
We propose a multi-objective optimization strategy for a selection of cycleGAN epochs ensuring a mapping between virtual images and the intra-operative domain preserving anatomical content. Our approach has been applied to simulate intra-operative bronchoscopic videos and chest CT scans from virtual sketches generated using simple graphical primitives.
Keywords: Data augmentation; cycleGANs; Multi-objective optimization
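A minimal sketch of the epoch-selection step, assuming a per-epoch appearance score and a per-epoch content-preservation score have already been computed (the metric choices are up to the user); epochs on the Pareto front of the two objectives are kept.

```python
import numpy as np

def pareto_optimal_epochs(style_score, content_score):
    # Keep epochs that are not dominated in both objectives (higher is better):
    # similarity to intra-operative appearance and preservation of anatomical content.
    s = np.asarray(style_score, dtype=float)
    c = np.asarray(content_score, dtype=float)
    front = []
    for i in range(len(s)):
        dominated = np.any((s >= s[i]) & (c >= c[i]) & ((s > s[i]) | (c > c[i])))
        if not dominated:
            front.append(i)
    return front   # indices of non-dominated cycleGAN epochs
```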
|
|
|
Eduardo Aguilar, & Petia Radeva. (2019). Class-Conditional Data Augmentation Applied to Image Classification. In 18th International Conference on Computer Analysis of Images and Patterns (Vol. 11679, pp. 182–192). LNCS.
Abstract: Image classification is widely researched in the literature, where models based on Convolutional Neural Networks (CNNs) have provided the best results. When data is scarce, CNN models tend to overfit. To deal with this, traditional data augmentation techniques are often applied, such as affine transformations or adjustments of the colour balance, among others. However, we argue that some data augmentation techniques may be more appropriate for some of the classes. In order to select the techniques that work best for a particular class, we propose to explore the epistemic uncertainty of the samples within each class. From our experiments, we observe that when data augmentation is applied class-conditionally, we improve the results in terms of accuracy and also reduce the overall epistemic uncertainty. To summarize, in this paper we propose a class-conditional data augmentation procedure that allows us to obtain better results and improve the robustness of the classification in the face of model uncertainty.
Keywords: CNNs; Data augmentation; Deep learning; Epistemic uncertainty; Image classification; Food recognition
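A minimal sketch of the class-conditional selection idea: for each class, keep the augmentation that yields the lowest epistemic uncertainty on held-out data. eval_uncertainty is a hypothetical user-supplied routine (e.g. MC-dropout variance on a validation split); the paper's procedure may differ in detail.

```python
def select_augmentations_per_class(eval_uncertainty, classes, augmentations):
    # eval_uncertainty(aug, cls) -> mean epistemic uncertainty of class `cls` samples
    # when the model is trained with augmentation `aug` applied to that class.
    policy = {}
    for cls in classes:
        scores = {aug: eval_uncertainty(aug, cls) for aug in augmentations}
        policy[cls] = min(scores, key=scores.get)
    return policy   # class -> augmentation to apply when training on that class
```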
|
|
|
Estefania Talavera, Nicolai Petkov, & Petia Radeva. (2019). Unsupervised Routine Discovery in Egocentric Photo-Streams. In 18th International Conference on Computer Analysis of Images and Patterns (Vol. 11678, pp. 576–588). LNCS.
Abstract: The routine of a person is defined by the occurrence of activities throughout different days, and can directly affect the person’s health. In this work, we address the recognition of routine-related days. To do so, we rely on egocentric images, which are recorded by a wearable camera and make it possible to monitor the life of the user from a first-person perspective. We propose an unsupervised model that identifies routine-related days, following an outlier detection approach. We test the proposed framework over a total of 72 days in the form of photo-streams, covering around 2 weeks of the life of 5 different camera wearers. Our model achieves an average of 76% Accuracy and 68% Weighted F-Score across all the users. Thus, we show that our framework is able to recognise routine-related days and opens the door to understanding the behaviour of people.
Keywords: Routine discovery; Lifestyle; Egocentric vision; Behaviour analysis
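A minimal sketch of the outlier-detection formulation, assuming each day is summarized by a fixed-length descriptor (e.g. an activity-per-time-slot histogram); IsolationForest is used here only as a stand-in detector, not necessarily the one from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def routine_vs_nonroutine(day_descriptors, contamination=0.25):
    # Days whose descriptor deviates from the bulk are flagged as non-routine.
    X = np.asarray(day_descriptors, dtype=float)
    labels = IsolationForest(contamination=contamination, random_state=0).fit_predict(X)
    return labels == 1          # True -> routine-related day (inlier)
```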
|
|
|
Victoria Ruiz, Angel Sanchez, Jose F. Velez, & Bogdan Raducanu. (2019). Automatic Image-Based Waste Classification. In International Work-Conference on the Interplay Between Natural and Artificial Computation. From Bioinspired Systems and Biomedical Applications to Machine Learning (Vol. 11487, pp. 422–431). LNCS.
Abstract: The management of solid waste in large urban environments has become a complex problem due to the increasing amount of waste generated every day by citizens and companies. Current Computer Vision and Deep Learning techniques can help in the automatic detection and classification of waste types for further recycling tasks. In this work, we use the TrashNet dataset to train and compare different deep learning architectures for automatic classification of garbage types. In particular, several Convolutional Neural Network (CNN) architectures were compared: VGG, Inception and ResNet. The best classification results were obtained using a combined Inception-ResNet model, which achieved an accuracy of 88.6%. These are the best results reported on the considered dataset.
Keywords: Computer Vision; Deep learning; Convolutional neural networks; Waste classification
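A minimal transfer-learning sketch in Keras for the same task, assuming TrashNet images are arranged in per-class folders; the classification head, hyperparameters and data splits are illustrative and may differ from the paper's setup.

```python
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(include_top=False, weights="imagenet",
                                               input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                                   # train only the new head at first
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(6, activation="softmax"),       # TrashNet has 6 garbage classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

train = tf.keras.utils.image_dataset_from_directory("trashnet/train", image_size=(224, 224))
val = tf.keras.utils.image_dataset_from_directory("trashnet/val", image_size=(224, 224))
model.fit(train, validation_data=val, epochs=10)
```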
|
|
|
Anguelos Nicolaou, Sounak Dey, V. Christlein, A. Maier, & Dimosthenis Karatzas. (2018). Non-deterministic Behavior of Ranking-based Metrics when Evaluating Embeddings. In International Workshop on Reproducible Research in Pattern Recognition (Vol. 11455, pp. 71–82). LNCS.
Abstract: Embedding data into vector spaces is a very popular strategy of pattern recognition methods. When distances between embeddings are quantized, performance metrics become ambiguous. In this paper, we present an analysis of the ambiguity that quantized distances introduce and provide bounds on its effect. We demonstrate that it can have a measurable effect on empirical data from state-of-the-art systems. We also approach the phenomenon from a computer security perspective and demonstrate how someone being evaluated by a third party can exploit this ambiguity and greatly outperform a random predictor without even having access to the input data. We also suggest a simple solution that makes ranking-based performance metrics fully deterministic and impervious to such exploits.
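A small worked example of the ambiguity: with quantized (tied) distances, average precision depends entirely on how the ties are broken, so the same retrieval output can yield very different scores. The arrays below are made up purely for illustration.

```python
import numpy as np

def average_precision(ranked_relevance):
    rel = np.asarray(ranked_relevance, dtype=float)
    hits = np.cumsum(rel)
    precisions = hits / (np.arange(len(rel)) + 1)
    return (precisions * rel).sum() / max(rel.sum(), 1)

distances = np.array([1, 1, 1, 2, 2])          # heavily quantized distances to the query
relevance = np.array([0, 0, 1, 1, 0])          # ground-truth relevance of each item

worst = relevance[np.lexsort((relevance, distances))]     # relevant items last within ties
best = relevance[np.lexsort((-relevance, distances))]     # relevant items first within ties
print(average_precision(worst), average_precision(best))  # same distances, different AP
```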
|
|
|
Santi Puch, Irina Sanchez, Aura Hernandez-Sabate, Gemma Piella, & Vesna Prckovska. (2018). Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation. In International MICCAI Brainlesion Workshop (Vol. 11384, pp. 393–405). LNCS.
Abstract: In this work, we introduce the Global Planar Convolution module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on these two architectures, ContextNet, that includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parametrized and to exhibit slow convergence rates. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. We finally participate in the 2018 edition of the BraTS challenge with our best performing models, which are based on ContextNet, and report the evaluation scores on the validation and the test sets of the challenge.
Keywords: Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution
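A plausible sketch of a GPC-style block in PyTorch: large in-plane kernels applied along the three orthogonal planes of a 3D feature map and summed, so every voxel aggregates near-global planar context. The kernel size and the fusion by summation are assumptions, not the paper's exact design.

```python
import torch.nn as nn

class GlobalPlanarConvolution(nn.Module):
    def __init__(self, in_ch, out_ch, k=15):
        super().__init__()
        p = k // 2
        self.axial = nn.Conv3d(in_ch, out_ch, (1, k, k), padding=(0, p, p))
        self.coronal = nn.Conv3d(in_ch, out_ch, (k, 1, k), padding=(p, 0, p))
        self.sagittal = nn.Conv3d(in_ch, out_ch, (k, k, 1), padding=(p, p, 0))

    def forward(self, x):                    # x: (batch, channels, D, H, W)
        # Each branch convolves with a large kernel restricted to one plane;
        # summing the branches gives every voxel wide planar context.
        return self.axial(x) + self.coronal(x) + self.sagittal(x)
```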
|
|
|
Felipe Codevilla, Antonio Lopez, Vladlen Koltun, & Alexey Dosovitskiy. (2018). On Offline Evaluation of Vision-based Driving Models. In 15th European Conference on Computer Vision (Vol. 11219, pp. 246–262). LNCS.
Abstract: Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics.
Keywords: Autonomous driving; deep learning
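A minimal sketch of the kind of correlation analysis the paper performs, using hypothetical per-model numbers: offline prediction error on a validation set versus an online driving-quality measure obtained in simulation, compared across a set of models.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-model results, for illustration only.
offline_mse = np.array([0.12, 0.15, 0.18, 0.20, 0.25])      # offline prediction error
driving_success = np.array([0.80, 0.55, 0.70, 0.40, 0.45])  # online driving quality

# The paper's point: these need not agree; low offline error does not guarantee good driving.
print("Pearson:", pearsonr(offline_mse, driving_success)[0])
print("Spearman:", spearmanr(offline_mse, driving_success)[0])
```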
|
|
|
Lluis Gomez, Andres Mafla, Marçal Rusiñol, & Dimosthenis Karatzas. (2018). Single Shot Scene Text Retrieval. In 15th European Conference on Computer Vision (Vol. 11218, pp. 728–744). LNCS.
Abstract: Textual information found in scene images provides high-level semantic information about the image and its context, and it can be leveraged for better scene understanding. In this paper we address the problem of scene text retrieval: given a text query, the system must return all images containing the queried text. The novelty of the proposed model consists in the usage of a single-shot CNN architecture that predicts bounding boxes and a compact text representation of the words within them at the same time. In this way, the text-based image retrieval task can be cast as a simple nearest-neighbor search of the query text representation over the outputs of the CNN for the entire image database. Our experiments demonstrate that the proposed architecture outperforms the previous state of the art while offering a significant increase in processing speed.
Keywords: Image retrieval; Scene text; Word spotting; Convolutional Neural Networks; Region Proposals Networks; PHOC
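A minimal sketch of the retrieval step, assuming the detector has already produced a text descriptor per detected word in every database image; a simple character histogram stands in for the PHOC representation here, purely for illustration.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def char_histogram(word):
    # Stand-in for a PHOC-style text descriptor (illustrative only).
    h = np.zeros(len(ALPHABET))
    for c in word.lower():
        if c in ALPHABET:
            h[ALPHABET.index(c)] += 1
    n = np.linalg.norm(h)
    return h / n if n > 0 else h

def retrieve(query, detections):
    # detections: list of (image_id, descriptor) pairs produced by the detector.
    q = char_histogram(query)
    best = {}
    for image_id, d in detections:
        dist = float(np.linalg.norm(q - d))
        best[image_id] = min(dist, best.get(image_id, np.inf))
    return sorted(best, key=best.get)   # image ids ranked by their closest matching word
```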
|
|
|
Marc Oliu, Javier Selva, & Sergio Escalera. (2018). Folded Recurrent Neural Networks for Future Video Prediction. In 15th European Conference on Computer Vision (Vol. 11218, pp. 745–761). LNCS.
Abstract: Future video prediction is an ill-posed Computer Vision problem that has recently received much attention. Its main challenges are the high variability in video content, the propagation of errors through time, and the non-specificity of the future frames: given a sequence of past frames there is a continuous distribution of possible futures. This work introduces bijective Gated Recurrent Units, a double mapping between the input and output of a GRU layer. This allows for recurrent auto-encoders with state sharing between encoder and decoder, stratifying the sequence representation and helping to prevent capacity problems. We show how with this topology only the encoder or decoder needs to be applied for input encoding and prediction, respectively. This reduces the computational cost and avoids re-encoding the predictions when generating a sequence of frames, mitigating the propagation of errors. Furthermore, it is possible to remove layers from an already trained model, giving an insight into the role performed by each layer and making the model more explainable. We evaluate our approach on three video datasets, outperforming state-of-the-art prediction results on MMNIST and UCF101, and obtaining competitive results on KTH with 2 and 3 times less memory usage and computational cost than the best-scoring approach.
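A loose sketch of the "double mapping" idea, assuming fully connected GRU cells: each layer owns one cell for the upward (encoding) mapping and one for the downward (decoding) mapping, so encoder and decoder share the same stack of states. The paper's bijective GRUs are convolutional and differ in the details.

```python
import torch.nn as nn

class BijectiveGRULayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.up = nn.GRUCell(dim, dim)     # input from the layer below -> this layer's state
        self.down = nn.GRUCell(dim, dim)   # input from this layer -> the layer below's state

    def encode(self, x_below, h):
        # Encoding direction: update this layer's state from the representation below.
        return self.up(x_below, h)

    def decode(self, h_above, h_below):
        # Decoding direction: update the state below from the representation above.
        return self.down(h_above, h_below)
```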
|
|
|
Ciprian Corneanu, Meysam Madadi, & Sergio Escalera. (2018). Deep Structure Inference Network for Facial Action Unit Recognition. In 15th European Conference on Computer Vision (Vol. 11216, pp. 309–324). LNCS.
Abstract: Facial expressions are combinations of basic components called Action Units (AU). Recognizing AUs is key for general facial expression analysis. Recently, efforts in automatic AU recognition have been dedicated to learning combinations of local features and to exploiting correlations between AUs. We propose a deep neural architecture that tackles both problems by combining learned local and global features in its initial stages and replicating a message passing algorithm between classes, similar to a graphical model inference approach, in later stages. We show that by training the model end-to-end with increased supervision we improve the state of the art by 5.3% and 8.2% on the BP4D and DISFA datasets, respectively.
Keywords: Computer Vision; Machine Learning; Deep Learning; Facial Expression Analysis; Facial Action Units; Structure Inference
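A minimal sketch of the message-passing stage, assuming per-AU logits have already been produced from the combined local and global features; the learned AU-to-AU influence and gated update below only loosely follow the structure-inference idea.

```python
import torch
import torch.nn as nn

class AUMessagePassing(nn.Module):
    def __init__(self, n_aus, n_iters=3):
        super().__init__()
        self.n_iters = n_iters
        self.msg = nn.Linear(n_aus, n_aus, bias=False)   # learned AU-to-AU influence
        self.gate = nn.Linear(2 * n_aus, n_aus)

    def forward(self, logits):                 # logits: (batch, n_aus)
        state = logits
        for _ in range(self.n_iters):
            messages = self.msg(torch.sigmoid(state))            # influence from other AUs
            g = torch.sigmoid(self.gate(torch.cat([state, messages], dim=-1)))
            state = g * state + (1 - g) * messages               # gated iterative refinement
        return state
```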
|
|