Debora Gil, & Antoni Rosell. (2019). Advances in Artificial Intelligence – How Lung Cancer CT Screening Will Progress? In World Lung Cancer Conference.
Note: Invited speaker.
|
Debora Gil, & Guillermo Torres. (2020). A multi-shape loss function with adaptive class balancing for the segmentation of lung structures. In 34th International Congress and Exhibition on Computer Assisted Radiology & Surgery.
|
B. Gotschy, Matthias S. Keil, H. Klos, & I. Rystau. (1994). Transition from static to dynamic Jahn-Teller distortion in (P(C6H5)4)2C60I. Solid State Communications, 92(12), 935–938.
|
Sergi Garcia Bordils, Dimosthenis Karatzas, & Marçal Rusiñol. (2023). Accelerating Transformer-Based Scene Text Detection and Recognition via Token Pruning. In 17th International Conference on Document Analysis and Recognition (Vol. 14192, pp. 106–121). LNCS.
Abstract: Scene text detection and recognition is a crucial task in computer vision with numerous real-world applications. Transformer-based approaches are behind all current state-of-the-art models and have achieved excellent performance. However, the computational requirements of the transformer architecture make training these methods slow and resource heavy. In this paper, we introduce a new token pruning strategy that significantly decreases training and inference times without sacrificing performance, striking a balance between accuracy and speed. We have applied this pruning technique to our own end-to-end transformer-based scene text understanding architecture. Our method uses a separate detection branch to guide the pruning of uninformative image features, which significantly reduces the number of tokens at the input of the transformer. Experimental results show how our network is able to obtain competitive results on multiple public benchmarks while running at significantly higher speeds.
Keywords: Scene Text Detection; Scene Text Recognition; Transformer Acceleration
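Code sketch: a minimal NumPy illustration of detection-guided token pruning as described above; the shapes, the fixed keep ratio, and the function name are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def prune_tokens(tokens, det_scores, keep_ratio=0.3):
        """Keep only the highest-scoring image tokens before the transformer.

        tokens:     (N, D) flattened image features for one image
        det_scores: (N,) per-token text-likelihood from the detection branch
        """
        k = max(1, int(tokens.shape[0] * keep_ratio))
        keep = np.argsort(det_scores)[-k:]   # indices of the k best tokens
        return tokens[np.sort(keep)]         # preserve original spatial order

    # Toy usage: 1024 tokens of width 256; 30% (307 tokens) survive.
    rng = np.random.default_rng(0)
    tokens = rng.standard_normal((1024, 256))
    scores = rng.random(1024)
    print(prune_tokens(tokens, scores).shape)   # (307, 256)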
|
Sergi Garcia Bordils, Dimosthenis Karatzas, & Marçal Rusiñol. (2024). STEP – Towards Structured Scene-Text Spotting. In Winter Conference on Applications of Computer Vision (pp. 883–892).
Abstract: We introduce the structured scene-text spotting task, which requires a scene-text OCR system to spot text in the wild according to a query regular expression. Contrary to generic scene text OCR, structured scene-text spotting seeks to dynamically condition both scene text detection and recognition on user-provided regular expressions. To tackle this task, we propose the Structured TExt sPotter (STEP), a model that exploits the provided text structure to guide the OCR process. STEP is able to deal with regular expressions that contain spaces and it is not bound to detection at the word-level granularity. Our approach enables accurate zero-shot structured text spotting in a wide variety of real-world reading scenarios and is solely trained on publicly available data. To demonstrate the effectiveness of our approach, we introduce a new challenging test dataset that contains several types of out-of-vocabulary structured text, reflecting important reading applications with fields such as prices, dates, serial numbers, and license plates. We demonstrate that STEP can provide specialised OCR performance on demand in all tested scenarios.
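Code sketch: STEP conditions detection and recognition on the query inside the model; as a rough illustration of the task definition only, this post-hoc filter shows what spotting text against a query regular expression means (the OCR output format here is an assumption).

    import re

    def structured_spot(ocr_results, pattern):
        """Return detections whose transcription matches the query regex.

        ocr_results: list of (transcription, (x1, y1, x2, y2)) tuples
        pattern:     query regular expression, e.g. a date or a price
        """
        query = re.compile(pattern)
        return [(text, box) for text, box in ocr_results
                if query.fullmatch(text)]

    detections = [("24.99", (10, 10, 80, 30)),
                  ("OPEN", (5, 40, 60, 60)),
                  ("12/05/2024", (100, 10, 220, 30))]
    print(structured_spot(detections, r"\d{2}/\d{2}/\d{4}"))
    # [('12/05/2024', (100, 10, 220, 30))]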
|
Yunchao Gong, Svetlana Lazebnik, Albert Gordo, & Florent Perronnin. (2013). Iterative Quantization: A Procrustean Approach to Learning Binary Codes for Large-Scale Image Retrieval. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12), 2916–2929.
Abstract: This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or “classemes” on the ImageNet dataset.
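Code sketch: a compact NumPy version of the ITQ alternating minimization described above (the toy data and iteration count are illustrative; in practice V is the PCA- or CCA-projected data).

    import numpy as np

    def itq(V, n_iter=50, seed=0):
        """Learn a rotation R minimizing the quantization error ||B - V R||_F.

        V: (n, c) zero-centered, dimensionality-reduced data.
        Returns R (c, c) and binary codes B in {-1, +1}.
        """
        rng = np.random.default_rng(seed)
        c = V.shape[1]
        R, _ = np.linalg.qr(rng.standard_normal((c, c)))  # random orthogonal init
        for _ in range(n_iter):
            B = np.sign(V @ R)                   # fix R, update the codes
            U, _, Wt = np.linalg.svd(V.T @ B)    # fix B, solve orthogonal Procrustes
            R = U @ Wt
        return R, np.sign(V @ R)

    # Toy usage: 1000 points, 16-bit codes.
    X = np.random.default_rng(1).standard_normal((1000, 16))
    X -= X.mean(axis=0)          # zero-center (after PCA in a real pipeline)
    R, B = itq(X)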
|
Arjan Gijsenij, R. Lu, Theo Gevers, & De Xu. (2012). Color Constancy for Multiple Light Sources. TIP - IEEE Transactions on Image Processing, 21(2), 697–707.
Abstract: Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across scenes. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we will address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with error reduction up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to global color constancy methods.
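Code sketch: the local strategy in miniature, using gray-world as the per-patch estimator (the paper plugs existing algorithms into this role) and a diagonal von Kries correction; the patch size and the toy estimator are assumptions.

    import numpy as np

    def local_color_constancy(img, patch=64, eps=1e-6):
        """Patch-wise illuminant estimation + diagonal-model correction.

        img: (H, W, 3) float array in [0, 1].
        """
        out = img.copy()
        H, W, _ = img.shape
        for y in range(0, H, patch):
            for x in range(0, W, patch):
                block = img[y:y+patch, x:x+patch]
                e = block.reshape(-1, 3).mean(axis=0) + eps   # gray-world estimate
                e /= np.linalg.norm(e)
                # Diagonal model: rescale channels toward a neutral illuminant.
                out[y:y+patch, x:x+patch] = block / (np.sqrt(3) * e)
        return np.clip(out, 0, 1)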
|
Raul Gomez, Yahui Liu, Marco de Nadai, Dimosthenis Karatzas, Bruno Lepri, & Nicu Sebe. (2020). Retrieval Guided Unsupervised Multi-domain Image to Image Translation. In 28th ACM International Conference on Multimedia.
Abstract: Image to image translation aims to learn a mapping that transforms an image from one visual domain to another. Recent works assume that image descriptors can be disentangled into a domain-invariant content representation and a domain-specific style representation. Thus, translation models seek to preserve the content of source images while changing the style to a target visual domain. However, synthesizing new images is extremely challenging especially in multi-domain translations, as the network has to compose content and style to generate reliable and diverse images in multiple domains. In this paper we propose the use of an image retrieval system to assist the image-to-image translation task. First, we train an image-to-image translation model to map images to multiple domains. Then, we train an image retrieval model using real and generated images to find images similar to a query one in content but in a different domain. Finally, we exploit the image retrieval system to fine-tune the image-to-image translation model and generate higher quality images. Our experiments show the effectiveness of the proposed solution and highlight the contribution of the retrieval network, which can benefit from additional unlabeled data and help image-to-image translation models in the presence of scarce data.
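Code sketch: the cross-domain retrieval step in isolation; the embeddings would come from the retrieval network's content encoder, which is an assumption here.

    import numpy as np

    def cross_domain_retrieve(query_emb, gallery_embs, gallery_domains, query_domain):
        """Index of the most content-similar gallery image from a *different* domain."""
        sims = gallery_embs @ query_emb                   # cosine if rows are L2-normalized
        sims[gallery_domains == query_domain] = -np.inf   # exclude same-domain images
        return int(np.argmax(sims))

    rng = np.random.default_rng(0)
    embs = rng.standard_normal((4, 8))
    embs /= np.linalg.norm(embs, axis=1, keepdims=True)
    domains = np.array(["summer", "summer", "winter", "winter"])
    print(cross_domain_retrieve(embs[0], embs, domains, "summer"))  # a winter image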
|
Adrien Gaidon, Antonio Lopez, & Florent Perronnin. (2018). The Reasonable Effectiveness of Synthetic Visual Data. IJCV - International Journal of Computer Vision, 126(9), 899–901.
|
Dipam Goswami, Yuyang Liu, Bartlomiej Twardowski, & Joost Van de Weijer. (2023). FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning. In 37th Annual Conference on Neural Information Processing Systems.
|
Jianzhu Guo, Zhen Lei, Jun Wan, Egils Avots, Noushin Hajarolasvadi, Boris Knyazev, et al. (2018). Dominant and Complementary Emotion Recognition from Still Images of Faces. ACCESS - IEEE Access, 6, 26391–26403.
Abstract: Emotion recognition has a key role in affective computing. Recently, fine-grained emotion analysis, such as compound facial expression of emotions, has attracted high interest of researchers working on affective computing. A compound facial emotion includes dominant and complementary emotions (e.g., happily-disgusted and sadly-fearful), which is more detailed than the seven classical facial emotions (e.g., happy, disgust, and so on). Current studies on compound emotions are limited to data sets with a limited number of categories and unbalanced data distributions, with labels obtained automatically by machine learning-based algorithms, which could lead to inaccuracies. To address these problems, we released the iCV-MEFED data set, which includes 50 classes of compound emotions and labels assessed by psychologists. The task is challenging due to high similarities of compound facial emotions from different categories. In addition, we have organized a challenge based on the proposed iCV-MEFED data set, held at the FG 2017 workshop. In this paper, we analyze the top three winner methods and perform further detailed experiments on the proposed data set. Experiments indicate that pairs of compound emotions (e.g., surprisingly-happy vs. happily-surprised) are harder to recognize than the seven basic emotions. However, we hope the proposed data set can help to pave the way for further research on compound facial emotion recognition.
|
Sergi Garcia Bordils, Andres Mafla, Ali Furkan Biten, Oren Nuriel, Aviad Aberdam, Shai Mazor, et al. (2022). Out-of-Vocabulary Challenge Report. In Proceedings European Conference on Computer Vision Workshops (Vol. 13804, pp. 359–375). LNCS.
Abstract: This paper presents the final results of the Out-Of-Vocabulary 2022 (OOV) challenge. The OOV contest introduces an important aspect that is not commonly studied by Optical Character Recognition (OCR) models, namely, the recognition of scene text instances unseen at training time. The competition compiles a collection of public scene text datasets comprising 326,385 images with 4,864,405 scene text instances, thus covering a wide range of data distributions. A new and independent validation and test set is formed with scene text instances that are out of vocabulary at training time. The competition was structured around two tasks: end-to-end and cropped scene text recognition. A thorough analysis of results from baselines and different participants is presented. Interestingly, current state-of-the-art models show a significant performance gap under the newly studied setting. We conclude that the OOV dataset proposed in this challenge will be an essential area to be explored in order to develop scene text models that achieve more robust and generalized predictions.
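Code sketch: the core of an out-of-vocabulary split as described above, keeping only test instances whose transcription never occurs in the training vocabulary (the case-insensitive matching is an assumption, not the challenge's exact protocol).

    def out_of_vocab_split(test_instances, train_words):
        """Filter test scene-text instances down to unseen transcriptions."""
        vocab = {w.lower() for w in train_words}
        return [inst for inst in test_instances
                if inst["text"].lower() not in vocab]

    train = ["STOP", "exit", "Sale"]
    test = [{"text": "Sale", "img": "a.jpg"}, {"text": "ZX-9041", "img": "b.jpg"}]
    print(out_of_vocab_split(test, train))   # only the unseen 'ZX-9041' survives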
|
Abel Gonzalez-Garcia, Davide Modolo, & Vittorio Ferrari. (2018). Objects as context for detecting their semantic parts. In 31st IEEE Conference on Computer Vision and Pattern Recognition (pp. 6907–6916).
Abstract: We present a semantic part detection approach that effectively leverages object information. We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare to other part detection methods on both the PASCAL-Part and CUB200-2011 datasets.
Keywords: Proposals; Semantics; Wheels; Automobiles; Context modeling; Task analysis; Object detection
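Code sketch: the geometric half of the idea, mapping predicted object-relative offsets to absolute part locations; the real OffsetNet also decides how many parts to emit, which is omitted here, and all names are illustrative.

    import numpy as np

    def offsets_to_part_centers(obj_box, offsets):
        """obj_box: (x1, y1, x2, y2); offsets: (K, 2) object-normalized (dx, dy)."""
        x1, y1, x2, y2 = obj_box
        w, h = x2 - x1, y2 - y1
        cx, cy = x1 + w / 2, y1 + h / 2
        return np.column_stack((cx + offsets[:, 0] * w,
                                cy + offsets[:, 1] * h))

    car = (100, 50, 300, 150)
    wheels = np.array([[-0.3, 0.4], [0.3, 0.4]])    # two wheels near the bottom
    print(offsets_to_part_centers(car, wheels))     # [[140. 140.] [260. 140.]]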
|
Lluis Gomez, Andres Mafla, Marçal Rusiñol, & Dimosthenis Karatzas. (2018). Single Shot Scene Text Retrieval. In 15th European Conference on Computer Vision (Vol. 11218, pp. 728–744). LNCS.
Abstract: Textual information found in scene images provides high level semantic information about the image and its context, and it can be leveraged for better scene understanding. In this paper we address the problem of scene text retrieval: given a text query, the system must return all images containing the queried text. The novelty of the proposed model consists in the usage of a single shot CNN architecture that predicts at the same time bounding boxes and a compact text representation of the words in them. In this way, the text based image retrieval task can be cast as a simple nearest neighbor search of the query text representation over the outputs of the CNN over the entire image database. Our experiments demonstrate that the proposed architecture outperforms previous state-of-the-art while it offers a significant increase in processing speed.
Keywords: Image retrieval; Scene text; Word spotting; Convolutional Neural Networks; Region Proposals Networks; PHOC
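Code sketch: the retrieval side of the pipeline with a simplified PHOC embedding; in the paper the per-word descriptors come from the CNN, whereas here they are faked with exact PHOCs of known words.

    import numpy as np

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

    def phoc(word, levels=(1, 2, 3)):
        """Simplified PHOC: per pyramid region, which characters fall inside it."""
        word = word.lower()
        n = len(word)
        regions = []
        for L in levels:
            for r in range(L):
                lo, hi = r / L, (r + 1) / L
                v = np.zeros(len(ALPHABET))
                for i, ch in enumerate(word):
                    if ch in ALPHABET and lo <= (i + 0.5) / n < hi:
                        v[ALPHABET.index(ch)] = 1.0
                regions.append(v)
        v = np.concatenate(regions)
        return v / (np.linalg.norm(v) + 1e-12)

    def rank_images(query, image_word_descriptors):
        """Rank images by their best-matching word descriptor (cosine similarity)."""
        q = phoc(query)
        scored = [(img_id, float((descs @ q).max()))
                  for img_id, descs in image_word_descriptors]
        return sorted(scored, key=lambda t: -t[1])

    gallery = [("a.jpg", np.stack([phoc(w) for w in ("hotel", "bar")])),
               ("b.jpg", np.stack([phoc(w) for w in ("pizza", "exit")]))]
    print(rank_images("pizza", gallery))   # b.jpg ranks first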
|
Andreea Glavan, Alina Matei, Petia Radeva, & Estefania Talavera. (2021). Does our social life influence our nutritional behaviour? Understanding nutritional habits from egocentric photo-streams. ESWA - Expert Systems with Applications, 171, 114506.
Abstract: Nutrition and social interactions are both key aspects of the daily lives of humans. In this work, we propose a system to evaluate the influence of social interaction on the nutritional habits of a person from a first-person perspective. In order to detect the routine of an individual, we construct a nutritional behaviour pattern discovery model, which outputs routines over a number of days. Our method evaluates similarity of routines with respect to visited food-related scenes over the collected days, making use of Dynamic Time Warping, as well as considering social engagement and its correlation with food-related activities. The nutritional and social descriptors of the collected days are evaluated and encoded using an LSTM Autoencoder. Later, the obtained latent space is clustered to find similar days unaffected by outliers using the Isolation Forest method. Moreover, we introduce a new score metric to evaluate the performance of the proposed algorithm. We validate our method on 104 days and more than 100k egocentric images gathered by 7 users. Several different visualizations are evaluated for the understanding of the findings. Our results demonstrate good performance and applicability of our proposed model for social-related nutritional behaviour understanding. Finally, relevant applications of the model are discussed by analysing the discovered routines of particular individuals.
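Code sketch: the Dynamic Time Warping distance used to compare day routines, on toy per-hour descriptors (the feature design is an assumption; the paper's days are described by food-related scenes and social engagement).

    import numpy as np

    def dtw(a, b):
        """DTW distance between two day-sequences a (n, D) and b (m, D)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    day1 = np.array([[0.], [1.], [1.], [0.]])   # e.g. hourly food-scene flags
    day2 = np.array([[0.], [0.], [1.], [1.]])
    print(dtw(day1, day2))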
|