|
Ramin Irani, Kamal Nasrollahi, Chris Bahnsen, D.H. Lundtoft, Thomas B. Moeslund, Marc O. Simon, et al. (2015). Spatio-temporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 88–95).
Abstract: Pain is a vital sign of human health, and its automatic detection can be of crucial importance in many different contexts, including medical scenarios. While most available computer vision techniques are based on RGB, in this paper we investigate the effect of combining RGB, depth, and thermal facial images for pain detection and pain intensity level recognition. For this purpose, we extract the energies released by facial pixels using a spatiotemporal filter. Experiments applying the multimodal approach to a group of 12 elderly people show that the proposed method successfully detects pain and distinguishes between three intensity levels in 82% of the analyzed frames, an improvement of more than 6% over RGB-only analysis under similar conditions.
|
|
|
Aniol Lidon, Xavier Giro, Marc Bolaños, Petia Radeva, Markus Seidl, & Matthias Zeppelzauer. (2015). UPC-UB-STP @ MediaEval 2015 diversity task: iterative reranking of relevant images. In 2015 MediaEval Retrieving Diverse Images Task.
Abstract: This paper presents the results of the UPC-UB-STP team in the 2015 MediaEval Retrieving Diverse Images Task. The goal of the challenge is to provide a ranked list of Flickr photos for a predefined set of queries. Our approach first generates a ranking of images based on a query-independent estimation of their relevance. Only the top results are kept; these are then iteratively re-ranked based on their intra-similarity to introduce diversity.
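The iterative re-ranking step can be sketched as a greedy procedure that starts from the most relevant image and repeatedly selects the candidate least similar to everything already chosen. This is a minimal sketch with toy 2-D features; the function name, distance measure, and data are illustrative assumptions, not the team's actual pipeline:

```python
import numpy as np

def diversify(features, relevance, k):
    """Greedy re-ranking: seed with the most relevant item, then repeatedly
    pick the candidate farthest (in feature space) from everything selected."""
    order = [int(np.argmax(relevance))]
    candidates = set(range(len(features))) - set(order)
    while candidates and len(order) < k:
        # Distance of each remaining candidate to its nearest selected item.
        dists = {i: min(np.linalg.norm(features[i] - features[j]) for j in order)
                 for i in candidates}
        best = max(dists, key=dists.get)
        order.append(best)
        candidates.remove(best)
    return order

# Toy example: items 0 and 1 are near-duplicates; 2 and 3 are far away.
feats = np.array([[0., 0.], [0.1, 0.], [5., 5.], [0., 5.]])
rel = np.array([0.9, 0.8, 0.5, 0.4])
print(diversify(feats, rel, 3))  # → [0, 2, 3]
```

The near-duplicate item 1 is skipped despite its high relevance, which is the diversity effect the abstract describes.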
|
|
|
Mickael Cormier, Andreas Specker, Julio C. S. Jacques, Lucas Florin, Jurgen Metzler, Thomas B. Moeslund, et al. (2023). UPAR Challenge: Pedestrian Attribute Recognition and Attribute-based Person Retrieval – Dataset, Design, and Results. In 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (pp. 166–175).
Abstract: In civilian video security monitoring, retrieving and tracking a person of interest often relies on witness testimony and a description of their appearance. Deployed systems rely on a large amount of annotated training data and are expected to show consistent performance in diverse areas and generalize well between diverse settings w.r.t. different viewpoints, illumination, resolution, occlusions, and poses for indoor and outdoor scenes. However, for such generalization, the system would require a large amount of varied annotated data for training and evaluation. The WACV 2023 Pedestrian Attribute Recognition and Attribute-based Person Retrieval Challenge (UPAR-Challenge) aimed to spotlight the problem of domain gaps in a real-world surveillance context and highlight the challenges and limitations of existing methods. The UPAR dataset, composed of 40 important binary attributes over 12 attribute categories across four datasets, was extended with data captured from a low-flying UAV from the P-DESTRE dataset. To this aim, 0.6M additional annotations were manually labeled and validated. Each track evaluated the robustness of the competing methods to domain shifts by training on limited data from a specific domain and evaluating using data from unseen domains. The challenge attracted 41 registered participants, but only one team managed to outperform the baseline on one track, emphasizing the task's difficulty. This work describes the challenge design, the adopted dataset, the obtained results, as well as future directions on the topic.
|
|
|
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zielinski, & Bartlomiej Twardowski. (2023). ICICLE: Interpretable Class Incremental Continual Learning. In 20th IEEE International Conference on Computer Vision (pp. 1887–1898).
Abstract: Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free approach that adopts a prototypical part-based approach. It consists of three crucial novelties: interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; proximity-based prototype initialization strategy dedicated to the fine-grained setting; and task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces the interpretability concept drift and outperforms the existing exemplar-free methods of common class-incremental learning when applied to concept-based models.
|
|
|
Jordy Van Landeghem, Ruben Tito, Lukasz Borchmann, Michal Pietruszka, Pawel Joziak, Rafal Powalski, et al. (2023). Document Understanding Dataset and Evaluation (DUDE). In 20th IEEE International Conference on Computer Vision (pp. 19528–19540).
Abstract: We call on the Document AI (DocAI) community to re-evaluate current methodologies and embrace the challenge of creating more practically-oriented benchmarks. Document Understanding Dataset and Evaluation (DUDE) seeks to remediate the halted research progress in understanding visually-rich documents (VRDs). We present a new dataset with novelties related to types of questions, answers, and document layouts based on multi-industry, multi-domain, and multi-page VRDs of various origins and dates. Moreover, we are pushing the boundaries of current methods by creating multi-task and multi-domain evaluation setups that more accurately simulate real-world situations where powerful generalization and adaptation under low-resource settings are desired. DUDE aims to set a new standard as a more practical, long-standing benchmark for the community, and we hope that it will lead to future extensions and contributions that address real-world challenges. Finally, our work illustrates the importance of finding more efficient ways to model language, images, and layout in DocAI.
|
|
|
Yuyang Liu, Yang Cong, Dipam Goswami, Xialei Liu, & Joost Van de Weijer. (2023). Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection. In 20th IEEE International Conference on Computer Vision (pp. 11367–11377).
Abstract: In incremental learning, replaying stored samples from previous tasks together with current task samples is one of the most efficient approaches to address catastrophic forgetting. However, unlike incremental classification, image replay has not been successfully applied to incremental object detection (IOD). In this paper, we identify the overlooked problem of foreground shift as the main reason for this. Foreground shift only occurs when replaying images of previous tasks and refers to the fact that their background might contain foreground objects of the current task. To overcome this problem, a novel and efficient Augmented Box Replay (ABR) method is developed that only stores and replays foreground objects and thereby circumvents the foreground shift problem. In addition, we propose an innovative Attentive RoI Distillation loss that uses spatial attention from region-of-interest (RoI) features to constrain the current model to focus on the most important information from the old model. ABR significantly reduces forgetting of previous classes while maintaining high plasticity in current classes. Moreover, it considerably reduces the storage requirements when compared to standard image replay. Comprehensive experiments on Pascal-VOC and COCO datasets support the state-of-the-art performance of our model.
|
|
|
Rahat Khan, Joost Van de Weijer, Dimosthenis Karatzas, & Damien Muselet. (2013). Towards multispectral data acquisition with hand-held devices. In 20th IEEE International Conference on Image Processing (pp. 2053–2057).
Abstract: We propose a method to acquire multispectral data with hand-held devices with front-mounted RGB cameras. We propose to use the display of the device as an illuminant while the camera captures images illuminated by the red, green, and blue primaries of the display. Three illuminants and three response functions of the camera lead to nine response values, which are used for reflectance estimation. Results are promising and show that the accuracy of the spectral reconstruction improves by 30–40% over spectral reconstruction based on a single illuminant. Furthermore, we propose to compute a sensor-illuminant-aware linear basis by discarding the part of the reflectances that falls in the sensor-illuminant null-space. We show experimentally that optimizing reflectance estimation on these new basis functions decreases the RMSE significantly compared to basis functions that are independent of the sensor-illuminant. We conclude that multispectral data acquisition is potentially possible with consumer hand-held devices such as tablets, mobiles, and laptops, opening up applications which are currently considered to be unrealistic.
Keywords: Multispectral; mobile devices; color measurements
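The measurement model above (three display illuminants times three camera channels, giving nine responses per pixel, projected onto a low-dimensional reflectance basis) can be sketched as a linear least-squares problem. This is a minimal sketch with synthetic sensitivities, primaries, and basis; none of these arrays come from the paper's measured data:

```python
import numpy as np

rng = np.random.default_rng(0)
W = 31  # wavelength samples, e.g. 400-700 nm in 10 nm steps

# Synthetic stand-ins for the three camera sensitivities and the three
# display primaries (the paper measures these; here they are random).
sens = np.abs(rng.normal(size=(3, W)))
illum = np.abs(rng.normal(size=(3, W)))

# Each (illuminant, channel) pair contributes one row: a 9 x W system.
M = np.stack([sens[c] * illum[i] for i in range(3) for c in range(3)])

# Assumed 8-dimensional linear basis for reflectance spectra.
B = np.linalg.qr(rng.normal(size=(W, 8)))[0]

# Simulate capturing a reflectance that lies in the basis, then recover
# its coefficients from the nine camera responses by least squares.
r_true = B @ rng.normal(size=8)
responses = M @ r_true
coeffs, *_ = np.linalg.lstsq(M @ B, responses, rcond=None)
r_est = B @ coeffs
print(np.allclose(r_est, r_true, atol=1e-5))
```

With nine equations and an 8-dimensional basis the system is overdetermined, which is why the nine responses suffice for reflectance estimation in this model.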
|
|
|
Shida Beigpour, Marc Serra, Joost Van de Weijer, Robert Benavente, Maria Vanrell, Olivier Penacchio, et al. (2013). Intrinsic Image Evaluation On Synthetic Complex Scenes. In 20th IEEE International Conference on Image Processing (pp. 285–289).
Abstract: Scene decomposition into its illuminant, shading, and reflectance intrinsic images is an essential step for scene understanding. Collecting intrinsic image ground-truth data is a laborious task. The assumptions on which the ground-truth procedures are based limit their application to simple scenes with a single object taken in the absence of indirect lighting and interreflections. We investigate synthetic data for intrinsic image research, since the extraction of ground truth is straightforward and it allows for scenes in more realistic situations (e.g., multiple illuminants and interreflections). With this dataset we aim to motivate researchers to further explore intrinsic image decomposition in complex scenes.
|
|
|
Katerine Diaz, Francesc J. Ferri, & W. Diaz. (2013). Fast Approximated Discriminative Common Vectors using rank-one SVD updates. In 20th International Conference on Neural Information Processing (LNCS Vol. 8228, pp. 368–375). Springer Berlin Heidelberg.
Abstract: An efficient incremental approach to the discriminative common vector (DCV) method for dimensionality reduction and classification is presented. The proposal consists of a rank-one update along with an adaptive restriction on the rank of the null space, which leads to an approximate but convenient solution. The algorithm can be implemented very efficiently in terms of matrix operations and space complexity, which enables its use in large-scale dynamic application domains. Extensive comparative experimentation using publicly available high-dimensional image datasets has been carried out in order to properly assess the proposed algorithm against several recent incremental formulations.
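The rank-one update idea can be illustrated with a Brand-style incremental SVD that appends one column by diagonalizing a small core matrix. This is a generic textbook sketch (the function name and toy shapes are illustrative), not the paper's DCV-specific algorithm with its null-space rank restriction:

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Update the thin SVD X = U @ diag(s) @ Vt after appending column c.

    Only a small (r+1)x(r+1) SVD is computed, instead of re-factorizing
    the full enlarged matrix.
    """
    m = U.T @ c                  # projection of c onto the current left space
    p = c - U @ m                # residual orthogonal to that space
    pn = np.linalg.norm(p)
    P = p[:, None] / pn
    # Core matrix whose SVD carries the whole update.
    K = np.block([[np.diag(s), m[:, None]],
                  [np.zeros((1, len(s))), np.array([[pn]])]])
    Uk, sk, Vtk = np.linalg.svd(K)
    U_new = np.hstack([U, P]) @ Uk
    Vt_new = Vtk @ np.block([[Vt, np.zeros((Vt.shape[0], 1))],
                             [np.zeros((1, Vt.shape[1])), np.ones((1, 1))]])
    return U_new, sk, Vt_new

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
c = rng.normal(size=6)
U2, s2, Vt2 = svd_append_column(U, s, Vt, c)
X2 = np.hstack([X, c[:, None]])
print(np.allclose(U2 @ np.diag(s2) @ Vt2, X2))  # update matches the full SVD
```

The approximation in the paper comes from additionally truncating the rank of the retained null space after each such update; the sketch above keeps the full rank.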
|
|
|
Jaume Amores. (2010). Vocabulary-based Approaches for Multiple-Instance Data: a Comparative Study. In 20th International Conference on Pattern Recognition (pp. 4246–4250).
Abstract: Multiple Instance Learning (MIL) has become a hot topic, and many different algorithms have been proposed in recent years. Despite this, there is a lack of comparative studies that shed light on the characteristics of the different methods and their behavior in different scenarios. In this paper we provide such an analysis. We include methods from different families, and pay special attention to vocabulary-based approaches, a new family of methods that has not received much attention in the MIL literature. The empirical comparison includes seven databases from four heterogeneous domains, implementations of eight popular MIL methods, and a study of the behavior under synthetic conditions. Based on this analysis, we show that, with an appropriate implementation, vocabulary-based approaches outperform other MIL methods in most of the cases, showing in general a more consistent performance.
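The core of a vocabulary-based MIL approach is mapping a variable-size bag of instances to a single fixed-length vector via a vocabulary, after which any standard classifier applies. A minimal sketch with a toy two-word vocabulary (the data and function name are illustrative, not from the paper's implementations):

```python
import numpy as np

def embed_bag(bag, vocab):
    """Map a bag of instance vectors to a normalized histogram over
    vocabulary words, assigning each instance to its nearest word."""
    assign = [int(np.argmin(np.linalg.norm(vocab - inst, axis=1)))
              for inst in bag]
    hist = np.bincount(assign, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

vocab = np.array([[0., 0.], [10., 10.]])   # toy 2-word vocabulary
bag = np.array([[0.1, 0.2], [9.5, 10.1], [10.2, 9.8]])
print(embed_bag(bag, vocab))               # one instance near word 0, two near word 1
```

In practice the vocabulary is learned (e.g., by clustering instances across training bags), and the resulting fixed-length vectors make bag-level classification a standard supervised problem.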
|
|
|
Fadi Dornaika, & Bogdan Raducanu. (2010). Person-specific face shape estimation under varying head pose from single snapshots. In 20th International Conference on Pattern Recognition (pp. 3496–3499).
Abstract: This paper presents a new method for person-specific face shape estimation under varying head pose of a previously unseen person from a single image. We describe a featureless approach based on a deformable 3D model and a learned face subspace. The proposed approach is based on maximizing a likelihood measure associated with a learned face subspace, which is carried out by a stochastic and genetic optimizer. We conducted experiments on a subset of the Honda Video Database, showing the feasibility and robustness of the proposed approach. Thanks to this robustness, our approach could lend itself nicely to complex frameworks involving 3D face tracking and face gesture recognition in monocular videos.
|
|
|
Francesco Ciompi, Oriol Pujol, & Petia Radeva. (2010). A meta-learning approach to Conditional Random Fields using Error-Correcting Output Codes. In 20th International Conference on Pattern Recognition (pp. 710–713).
Abstract: We present a meta-learning framework for the design of potential functions for Conditional Random Fields. The design of both node potential and edge potential is formulated as a classification problem where margin classifiers are used. The set of state transitions for the edge potential is treated as a set of different classes, thus defining a multi-class learning problem. The Error-Correcting Output Codes (ECOC) technique is used to deal with the multi-class problem. Furthermore, the point defined by the combination of margin classifiers in the ECOC space is interpreted in a probabilistic manner, and the obtained distance values are then converted into potential values. The proposed model exhibits very promising results when applied to two real detection problems.
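The ECOC step the abstract describes reduces the multi-class state-transition problem to a set of binary margin classifiers, then decodes by comparing the vector of classifier outputs against each class's codeword. A minimal decoding sketch with a made-up 4-class, 6-dichotomy code matrix (the matrix and margins are toy values, not the paper's learned potentials):

```python
import numpy as np

# Toy ECOC code matrix: 4 classes x 6 binary dichotomies.
# Entry +1/-1 says on which side of dichotomy j class k lies.
code = np.array([
    [+1, +1, +1, -1, -1, -1],
    [+1, -1, -1, +1, +1, -1],
    [-1, +1, -1, +1, -1, +1],
    [-1, -1, +1, -1, +1, +1],
])

def decode(margins, code):
    """Pick the class whose codeword is closest to the signed margin vector.

    The per-class distances computed here are what gets mapped to
    probabilities (and then to potential values) in the paper's framework.
    """
    dists = np.linalg.norm(code - np.sign(margins), axis=1)
    return int(np.argmin(dists))

# Margins from 6 hypothetical binary classifiers: agrees with class 2's
# codeword on 5 of the 6 dichotomies.
margins = np.array([-0.9, +0.7, -0.2, +0.8, +0.1, +0.6])
print(decode(margins, code))  # → 2
```

The redundancy in the code matrix is what gives ECOC its error-correcting behavior: one flipped binary decision still decodes to the correct class.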
|
|
|
David Augusto Rojas, Fahad Shahbaz Khan, & Joost Van de Weijer. (2010). The Impact of Color on Bag-of-Words based Object Recognition. In 20th International Conference on Pattern Recognition (pp. 1549–1553).
Abstract: In recent years several works have aimed at exploiting color information in order to improve the bag-of-words based image representation. There are two stages in which color information can be applied in the bag-of-words framework. Firstly, feature detection can be improved by choosing highly informative color-based regions. Secondly, feature description, typically focusing on shape, can be improved with a color description of the local patches. Although both approaches have been shown to improve results, their combined merits have not yet been analyzed. Therefore, in this paper we investigate the combined contribution of color to both the feature detection and extraction stages. Experiments performed on two challenging data sets, namely Flower and Pascal VOC 2009, clearly demonstrate that incorporating color in both feature detection and extraction significantly improves the overall performance.
|
|
|
Murad Al Haj, Andrew Bagdanov, Jordi Gonzalez, & Xavier Roca. (2010). Reactive object tracking with a single PTZ camera. In 20th International Conference on Pattern Recognition (pp. 1690–1693).
Abstract: In this paper we describe a novel approach to reactive tracking of moving targets with a pan-tilt-zoom camera. The approach uses an extended Kalman filter to jointly track the object position in the real world, its velocity in 3D and the camera intrinsics, in addition to the rate of change of these parameters. The filter outputs are used as inputs to PID controllers which continuously adjust the camera motion in order to reactively track the object at a constant image velocity while simultaneously maintaining a desirable target scale in the image plane. We provide experimental results on simulated and real tracking sequences to show how our tracker is able to accurately estimate both 3D object position and camera intrinsics with very high precision over a wide range of focal lengths.
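The PID controllers fed by the filter outputs can be sketched with a textbook PID loop driving a simulated pan angle; the gains, target, and the simple integrator "plant" below are illustrative assumptions, not the paper's tuned controllers:

```python
class PID:
    """Textbook PID controller; the gains used below are illustrative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a simulated pan angle toward a fixed target (arbitrary units);
# the camera "plant" simply integrates the commanded velocity.
pan, target, dt = 0.0, 10.0, 0.1
ctrl = PID(kp=2.0, ki=0.1, kd=0.05)
for _ in range(400):
    pan += ctrl.step(target - pan, dt) * dt
```

In the paper's setting the error signals come from the extended Kalman filter state (image position and scale) rather than a fixed target, with one such loop per camera degree of freedom.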
|
|
|
Anjan Dutta, Umapada Pal, Alicia Fornes, & Josep Llados. (2010). An Efficient Staff Removal Technique from Printed Musical Documents. In 20th International Conference on Pattern Recognition (pp. 1965–1968).
Abstract: Staff removal is an important preprocessing step in Optical Music Recognition (OMR). The process aims to remove the stafflines from a musical document and retain only the musical symbols; these symbols are later used to identify the music information. This paper proposes a simple but robust method to remove stafflines from printed musical scores. In the proposed methodology we consider a staffline segment as a horizontal linkage of vertical black runs of uniform height, and we use the neighbouring properties of a staffline segment to validate it as a true segment. We have considered the dataset along with the deformations described in for evaluation purposes. Experiments show encouraging results.
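The run-based idea can be sketched on a toy binary image: estimate the staffline height as the most common vertical black-run length, then drop runs no taller than that. This is a minimal sketch of the run-length step only; the paper's full method additionally validates each run via horizontal linkage with neighbouring runs of similar height, which is omitted here:

```python
import numpy as np
from collections import Counter

def vertical_runs(col):
    """Return (start, length) of each black (1) run in one image column."""
    runs, start = [], None
    for y, v in enumerate(col):
        if v and start is None:
            start = y
        elif not v and start is not None:
            runs.append((start, y - start))
            start = None
    if start is not None:
        runs.append((start, len(col) - start))
    return runs

def remove_staff(img):
    """Estimate staffline height, then erase thin vertical black runs."""
    heights = Counter(L for x in range(img.shape[1])
                      for _, L in vertical_runs(img[:, x]))
    h = heights.most_common(1)[0][0]   # most frequent run height = staffline
    out = img.copy()
    for x in range(img.shape[1]):
        for y0, L in vertical_runs(img[:, x]):
            if L <= h:                 # thin run: candidate staffline pixel
                out[y0:y0 + L, x] = 0
    return out

# Toy score: a 1-pixel staffline at row 3 plus a taller symbol at column 4.
img = np.zeros((8, 10), dtype=int)
img[3, :] = 1        # staffline
img[1:6, 4] = 1      # symbol crossing the staff
clean = remove_staff(img)
```

Note that where the symbol crosses the staff (column 4), the tall run is kept whole, so symbols are not broken by the removal.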
|
|