Mickael Cormier, Andreas Specker, Julio C. S. Jacques, Lucas Florin, Jurgen Metzler, Thomas B. Moeslund, et al. (2023). UPAR Challenge: Pedestrian Attribute Recognition and Attribute-based Person Retrieval – Dataset, Design, and Results. In 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (pp. 166–175).
Abstract: In civilian video security monitoring, retrieving and tracking a person of interest often relies on witness testimony and a description of the person's appearance. Deployed systems rely on large amounts of annotated training data and are expected to show consistent performance across diverse areas and to generalize well between settings with different viewpoints, illumination, resolution, occlusions, and poses, for indoor and outdoor scenes. However, such generalization requires a large amount of varied annotated data for training and evaluation. The WACV 2023 Pedestrian Attribute Recognition and Attribute-based Person Retrieval Challenge (UPAR-Challenge) aimed to spotlight the problem of domain gaps in a real-world surveillance context and to highlight the challenges and limitations of existing methods. The UPAR dataset, composed of 40 important binary attributes over 12 attribute categories across four datasets, was extended with data captured from a low-flying UAV from the P-DESTRE dataset. To this aim, 0.6M additional annotations were manually labeled and validated. Each track evaluated the robustness of the competing methods to domain shifts by training on limited data from a specific domain and evaluating on data from unseen domains. The challenge attracted 41 registered participants, but only one team managed to outperform the baseline on one track, emphasizing the task's difficulty. This work describes the challenge design, the adopted dataset, the obtained results, and future directions on the topic.
Mikhail Mozerov, Ariel Amato, Xavier Roca, & Jordi Gonzalez. (2009). Solving the Multi Object Occlusion Problem in a Multiple Camera Tracking System. Pattern Recognition and Image Analysis, 165–171.
Abstract: An efficient method to overcome adverse effects of occlusion upon object tracking is presented. The method is based on matching paths of objects in time and solves a complex occlusion-caused problem of merging separate segments of the same path.
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2010). An Iterative Multiresolution Scheme for SFM with Missing Data: single and multiple object scenes. IMAVIS - Image and Vision Computing, 28(1), 164–176.
Abstract: Most of the techniques proposed for tackling the Structure from Motion (SFM) problem cannot deal with high percentages of missing data in the matrix of trajectories. Furthermore, an additional problem must be faced when working with multiple-object scenes: the rank of the matrix of trajectories has to be estimated. This paper presents an iterative multiresolution scheme for SFM with missing data, to be used in both the single- and multiple-object cases. The proposed scheme aims at recovering missing entries in the original input matrix. The objective is to improve the results by applying a factorization technique to the partially or totally filled-in matrix instead of the original one. Experimental results obtained with synthetic and real data sequences, containing single and multiple objects, are presented to show the viability of the proposed approach.
Josep Llados, Horst Bunke, & Enric Marti. (1997). Using Cyclic String Matching to Find Rotational and Reflectional Symmetries in Shapes. In Intelligent Robots: Sensing, Modeling and Planning (pp. 164–179). World Scientific Press.
Note: Dagstuhl Workshop
Arka Ujjal Dey, Suman Ghosh, Ernest Valveny, & Gaurav Harit. (2021). Beyond Visual Semantics: Exploring the Role of Scene Text in Image Understanding. PRL - Pattern Recognition Letters, 149, 164–171.
Abstract: Images with visual and scene text content are ubiquitous in everyday life. However, current image interpretation systems are mostly limited to using only visual features, neglecting to leverage the scene text content. In this paper, we propose to jointly use scene text and visual channels for robust semantic interpretation of images. We not only extract and encode visual and scene text cues but also model their interplay to generate a contextual joint embedding with richer semantics. The contextual embedding thus generated is applied to retrieval and classification tasks on multimedia images with scene text content to demonstrate its effectiveness. In the retrieval framework, we augment our learned text-visual semantic representation with scene text cues to mitigate vocabulary misses that may have occurred during the semantic embedding. To deal with irrelevant or erroneous recognition of scene text, we also apply query-based attention to our text channel. We show how the multi-channel approach, involving visual semantics and scene text, improves upon the state of the art.
Santiago Segui, Michal Drozdzal, Guillem Pascual, Petia Radeva, Carolina Malagelada, Fernando Azpiroz, et al. (2016). Generic Feature Learning for Wireless Capsule Endoscopy Analysis. CBM - Computers in Biology and Medicine, 79, 163–172.
Abstract: The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, finally, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch, which makes the design of new CAD systems very time-consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on deep convolutional neural networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, the system reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase).
Keywords: Wireless capsule endoscopy; Deep learning; Feature learning; Motility analysis
Antoni Gurgui, Debora Gil, Enric Marti, & Vicente Grau. (2016). Left-Ventricle Basal Region Constrained Parametric Mapping to Unitary Domain. In 7th International Workshop on Statistical Atlases & Computational Modelling of the Heart (Vol. 10124, pp. 163–171). LNCS.
Abstract: Due to its complex geometry, the basal ring is often omitted when putting different heart geometries into correspondence. In this paper, we present the first results of a new mapping of left-ventricle basal rings onto a normalized coordinate system using a fold-over-free solution to the Laplacian. To guarantee correspondences between different basal rings, we imposed internal constrained positions at anatomical landmarks in the normalized coordinate system. To prevent internal fold-overs, constraints are handled by cutting the volume into regions defined by anatomical features and mapping each piece of the volume separately. Initial results presented in this paper indicate that our method is able to handle internal constraints without introducing fold-overs and thus guarantees one-to-one mappings between different basal ring geometries.
Keywords: Laplacian; Constrained maps; Parameterization; Basal ring
Carola Figueroa Flores, Bogdan Raducanu, David Berga, & Joost Van de Weijer. (2021). Hallucinating Saliency Maps for Fine-Grained Image Classification for Limited Data Domains. In 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 4, pp. 163–171).
Abstract: arXiv:2007.12562
Most saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline, such as image classification. In the current paper, we propose an approach which does not require explicit saliency maps to improve image classification; instead, they are learned implicitly during the training of an end-to-end image classification task. We show that our approach obtains similar results to the case when the saliency maps are provided explicitly. Combining RGB data with saliency maps represents a significant advantage for object recognition, especially when training data is limited. We validate our method on several datasets for fine-grained classification tasks (Flowers, Birds and Cars). In addition, we show that our saliency estimation method, which is trained without any saliency ground-truth data, obtains competitive results on a real image saliency benchmark (Toronto) and outperforms deep saliency models on synthetic images (SID4VAM).
Mireia Sole, Joan Blanco, Debora Gil, Oliver Valero, Alvaro Pascual, B. Cardenas, et al. (2021). Chromosomal positioning in spermatogenic cells is influenced by chromosomal factors associated with gene activity, bouquet formation, and meiotic sex-chromosome inactivation. Chromosoma, 130, 163–175.
Abstract: Chromosome territoriality is not random along the cell cycle and is mainly governed by intrinsic chromosome factors and gene expression patterns. However, very few studies have explored the factors that determine chromosome territoriality during meiosis. In this study, we analysed chromosome positioning in murine spermatogenic cells using a three-dimensional fluorescence in situ hybridization-based methodology, which allows the analysis of the entire karyotype. The main objective of the study was to decipher chromosome positioning along a radial axis (all analysed germ-cell nuclei) and a longitudinal axis (only spermatozoa) and to identify the chromosomal factors that regulate such an arrangement. Results demonstrated that the radial positioning of chromosomes during spermatogenesis was cell-type specific and influenced by chromosomal factors associated with gene activity. Chromosomes with specific features that enhance transcription (high GC content, high gene density and high numbers of predicted expressed genes) were preferentially observed in the inner part of the nucleus in virtually all cell types. Moreover, the position of the sex chromosomes was influenced by their transcriptional status, moving from the periphery of the nucleus when their activity was repressed (pachytene) to a more internal position when it was partially activated (spermatid). At pachytene, chromosome positioning was also influenced by chromosome size due to bouquet formation. Longitudinal chromosome positioning in the sperm nucleus was not random either, suggesting the importance of ordered longitudinal positioning for the release and activation of the paternal genome after fertilisation.
Joan M. Nuñez, Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2013). Blood Vessel Characterization in Colonoscopy Images to Improve Polyp Localization. In Proceedings of the International Conference on Computer Vision Theory and Applications (Vol. 1, pp. 162–171). SciTePress.
Abstract: This paper presents an approach to mitigate the contribution of blood vessels to the energy image used in different tasks of automatic colonoscopy image analysis. This goal is achieved by introducing a characterization of endoluminal scene objects which allows us to differentiate between the trace of 2-dimensional visual objects, such as vessels, and shades from 3-dimensional visual objects, such as folds. The proposed characterization is based on the influence that object shape has on the resulting visual feature, and it leads to the development of a blood vessel attenuation algorithm. A database of manually labelled masks was built in order to test the performance of our method, which shows encouraging success in blood vessel mitigation while keeping other structures intact. Moreover, by extending our method to the only available polyp localization algorithm tested on a public database, blood vessel mitigation proved to have a positive influence on the overall performance.
Keywords: Colonoscopy; Blood vessel; Linear features; Valley detection
Panagiota Spyridonos, Fernando Vilariño, Jordi Vitria, Fernando Azpiroz, & Petia Radeva. (2006). Anisotropic Feature Extraction from Endoluminal Images for Detection of Intestinal Contractions. In R. Larsen, M. Nielsen, & J. Sporring (Eds.), 9th International Conference on Medical Image Computing and Computer-Assisted Intervention (Vol. 4191, pp. 161–168). LNCS. Berlin Heidelberg: Springer Verlag.
Abstract: Wireless endoscopy is a very recent and at the same time unique technique allowing one to visualize and study the occurrence of contractions and to analyze intestine motility. Feature extraction is essential for obtaining efficient patterns to detect contractions in wireless video endoscopy of the small intestine. We propose a novel method based on anisotropic image filtering and efficient statistical classification of contraction features. In particular, we apply the image gradient tensor for mining informative skeletons from the original image and a sequence of descriptors for capturing the characteristic pattern of contractions. Features extracted from the endoluminal images were evaluated in terms of their discriminatory ability in correctly classifying images as either belonging to contractions or not. Classification was performed by means of a support vector machine classifier with a radial basis function kernel. Our classification rates gave a sensitivity of 90.84% and a specificity of 94.43%, respectively. These preliminary results highlight the high efficiency of the selected descriptors and support the feasibility of the proposed method in assisting the automatic detection and analysis of contractions.
Thanh Ha Do, Salvatore Tabbone, & Oriol Ramos Terrades. (2013). Document noise removal using sparse representations over learned dictionary. In Symposium on Document Engineering (pp. 161–168).
Note: best paper award
Abstract: In this paper, we propose an algorithm for denoising document images using sparse representations. Given a training set, this algorithm is able to learn the main document characteristics and also the kind of noise present in the documents. In this perspective, we propose to model the noise energy based on the normalized cross-correlation between pairs of noisy and non-noisy documents. Experimental results on several datasets demonstrate the robustness of our method compared with the state of the art.
Maria Elena Meza-de-Luna, Juan Ramon Terven Salinas, Bogdan Raducanu, & Joaquin Salas. (2016). Assessing the Influence of Mirroring on the Perception of Professional Competence using Wearable Technology. TAC - IEEE Transactions on Affective Computing, 9(2), 161–175.
Abstract: Nonverbal communication is an intrinsic part of daily face-to-face meetings. A frequently observed behavior during social interactions is mirroring, in which one person tends to mimic the attitude of the counterpart. This paper shows that a computer vision system could be used to predict the perception of competence in dyadic interactions through the automatic detection of mirroring events. To prove our hypothesis, we developed: (1) a social assistant for mirroring detection, using a wearable device which includes a video camera, and (2) an automatic classifier for the perception of competence, using the number of nodding gestures and mirroring events as predictors. For our study, we used a mixed-method approach in an experimental design where 48 participants acting as customers interacted with a confederate psychologist. We found that the number of nods or mirroring events has a significant influence on the perception of competence. Our results suggest that: (1) customer mirroring is a better predictor than psychologist mirroring; (2) the number of the psychologist's nods is a better predictor than the number of the customer's nods; (3) except for psychologist mirroring, the computer vision algorithm we used worked about equally well whether it was acquiring images from wearable smartglasses or from fixed cameras.
Keywords: Mirroring; Nodding; Competence; Perception; Wearable Technology
Antonio Lopez, Atsushi Imiya, Tomas Pajdla, & Jose Manuel Alvarez. (2017). Computer Vision in Vehicle Technology: Land, Sea & Air. John Wiley & Sons, Ltd.
Abstract: This chapter examines different vision-based commercial solutions for real-life problems related to vehicles. It is worth mentioning the recent astonishing performance of deep convolutional neural networks (DCNNs) in difficult visual tasks such as image classification, object recognition/localization/detection, and semantic segmentation. In fact, different DCNN architectures are already being explored for low-level tasks such as optical flow and disparity computation, and for higher-level ones such as place recognition.
Marçal Rusiñol, & Lluis Gomez. (2018). Avances en clasificación de imágenes en los últimos diez años. Perspectivas y limitaciones en el ámbito de archivos fotográficos históricos [Advances in image classification over the last ten years: perspectives and limitations in the field of historical photographic archives]. Revista anual de la Asociación de Archiveros de Castilla y León, 161–174.