|
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2010). An Iterative Multiresolution Scheme for SFM with Missing Data: single and multiple object scenes. IMAVIS - Image and Vision Computing, 28(1), 164–176.
Abstract: Most of the techniques proposed for tackling the Structure from Motion (SFM) problem cannot deal with high percentages of missing data in the matrix of trajectories. Furthermore, an additional problem must be faced when working with multiple object scenes: the rank of the matrix of trajectories has to be estimated. This paper presents an iterative multiresolution scheme for SFM with missing data, to be used in both the single and multiple object cases. The proposed scheme aims at recovering missing entries in the original input matrix. The objective is to improve the results by applying a factorization technique to the partially or totally filled-in matrix instead of to the original input one. Experimental results obtained with synthetic and real data sequences, containing single and multiple objects, are presented to show the viability of the proposed approach.
|
|
|
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2011). Rank Estimation in Missing Data Matrix Problems. JMIV - Journal of Mathematical Imaging and Vision, 39(2), 140–160.
Abstract: A novel technique for missing data matrix rank estimation is presented. It is focused on matrices of trajectories, where every element of the matrix corresponds to an image coordinate from a feature point of a rigid moving object at a given frame; missing data are represented as empty entries. The objective of the proposed approach is to estimate the rank of a missing data matrix in order to fill in empty entries with some matrix completion method, without using or assuming either the number of objects contained in the scene or the kind of their motion. The key point of the proposed technique consists in studying the frequency behaviour of the individual trajectories, which are seen as 1D signals. The main assumption is that, due to the rigidity of the moving objects, the frequency content of the trajectories will be similar after filling in their missing entries. The proposed rank estimation approach can be used in different computer vision problems where the rank of a missing data matrix needs to be estimated. Experimental results with synthetic and real data are provided in order to empirically show the good performance of the proposed approach.
|
|
|
Carme Julia, Felipe Lumbreras, & Angel Sappa. (2011). A Factorization-based Approach to Photometric Stereo. IJIST - International Journal of Imaging Systems and Technology, 21(1), 115–119.
Abstract: This article presents an adaptation of a factorization technique to tackle the photometric stereo problem; that is, to recover the surface normals and reflectance of an object from a set of images obtained under different lighting conditions. The main contribution of the proposed approach is to consider pixels in shadow and saturated regions as missing data, in order to reduce their influence on the result. Concretely, an adapted Alternation technique is used to deal with missing data. Experimental results considering both synthetic and real images show the viability of the proposed factorization-based strategy. © 2011 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 21, 115–119, 2011.
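The Alternation idea referenced in the abstract above can be illustrated with a minimal sketch (the function name, synthetic setup, and all parameters are illustrative assumptions, not the authors' implementation): two low-rank factors are solved for in turn by least squares over the observed entries only, and their product then fills in the missing ones.

```python
import numpy as np

def alternation_fill(W, mask, rank=3, iters=100, seed=0):
    """Alternately solve for factors A and B so that W ~ A @ B on the
    observed entries (mask == True), then use A @ B to fill the holes."""
    rng = np.random.default_rng(seed)
    m, n = W.shape
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((rank, n))
    for _ in range(iters):
        # Fix A, solve each column of B by least squares on its observed rows.
        for j in range(n):
            rows = mask[:, j]
            if rows.any():
                B[:, j], *_ = np.linalg.lstsq(A[rows], W[rows, j], rcond=None)
        # Fix B, solve each row of A on its observed columns.
        for i in range(m):
            cols = mask[i, :]
            if cols.any():
                A[i, :], *_ = np.linalg.lstsq(B[:, cols].T, W[i, cols], rcond=None)
    filled = W.copy()
    filled[~mask] = (A @ B)[~mask]
    return filled
```

On noiseless synthetic data of known low rank, a few hundred alternations typically recover the held-out entries closely; the paper's contribution lies in how shadowed and saturated pixels are declared missing before such a step, not in the alternation itself.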
|
|
|
Carola Figueroa Flores, Abel Gonzalez-Garcia, Joost Van de Weijer, & Bogdan Raducanu. (2019). Saliency for fine-grained object recognition in domains with scarce training data. PR - Pattern Recognition, 94, 62–73.
Abstract: This paper investigates the role of saliency to improve the classification accuracy of a Convolutional Neural Network (CNN) for the case when scarce training data is available. Our approach consists of adding a saliency branch to an existing CNN architecture which is used to modulate the standard bottom-up visual features from the original image input, acting as an attentional mechanism that guides the feature extraction process. The main aim of the proposed approach is to enable the effective training of a fine-grained recognition model with limited training samples and to improve the performance on the task, thereby alleviating the need to annotate a large dataset. The vast majority of saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline. Our proposed pipeline makes it possible to evaluate saliency methods for the high-level task of object recognition. We perform extensive experiments on various fine-grained datasets (Flowers, Birds, Cars, and Dogs) under different conditions and show that saliency can considerably improve the network’s performance, especially for the case of scarce training data. Furthermore, our experiments show that saliency methods that obtain improved saliency maps (as measured by traditional saliency benchmarks) also translate to saliency methods that yield improved performance gains when applied in an object recognition pipeline.
|
|
|
Carola Figueroa Flores, David Berga, Joost Van de Weijer, & Bogdan Raducanu. (2021). Saliency for free: Saliency prediction as a side-effect of object recognition. PRL - Pattern Recognition Letters, 150, 1–7.
Abstract: Saliency is the perceptual capacity of our visual system to focus our attention (i.e. gaze) on relevant objects instead of the background. So far, computational methods for saliency estimation have required the explicit generation of a saliency map, a process usually achieved via eye-tracking experiments on still images. This is a tedious process that needs to be repeated for each new dataset. In the current paper, we demonstrate that it is possible to automatically generate saliency maps without ground truth. In our approach, saliency maps are learned as a side effect of object recognition. Extensive experiments carried out on both real and synthetic datasets demonstrate that our approach is able to generate accurate saliency maps, achieving competitive results when compared with supervised methods.
Keywords: Saliency maps; Unsupervised learning; Object recognition
|
|
|
Carolina Malagelada, F. De Lorio, Santiago Segui, S. Mendez, Michal Drozdzal, Jordi Vitria, et al. (2012). Functional gut disorders or disordered gut function? Small bowel dysmotility evidenced by an original technique. NEUMOT - Neurogastroenterology & Motility, 24(3), 223–230.
Abstract:
Background: This study aimed to determine the proportion of cases with abnormal intestinal motility among patients with functional bowel disorders. To this end, we applied an original method, previously developed in our laboratory, for analysis of endoluminal images obtained by capsule endoscopy. This novel technology is based on computer vision and machine learning techniques.
Methods: The endoscopic capsule (Pillcam SB1; Given Imaging, Yokneam, Israel) was administered to 80 patients with functional bowel disorders and 70 healthy subjects. Endoluminal image analysis was performed with a computer vision program developed for the evaluation of contractile events (luminal occlusions and radial wrinkles), non-contractile patterns (open tunnel and smooth wall patterns), type of content (secretions, chyme) and motion of wall and contents. The normality range and discrimination of abnormal cases were established by a machine learning technique. Specifically, an iterative classifier (one-class support vector machine) was applied with a random population of 50 healthy subjects as a training set and the remaining subjects (20 healthy subjects and 80 patients) as a test set.
Key Results: The classifier identified as abnormal 29% of patients with functional diseases of the bowel (23 of 80), and as normal 97% of healthy subjects (68 of 70) (P < 0.05 by chi-squared test). Patients identified as abnormal clustered in two groups, which exhibited either a hyper- or a hypodynamic motility pattern. The motor behavior was unrelated to clinical features.
Conclusions & Inferences: With appropriate methodology, abnormal intestinal motility can be demonstrated in a significant proportion of patients with functional bowel disorders, implying a pathologic disturbance of gut physiology.
Keywords: capsule endoscopy; computer vision analysis; machine learning technique; small bowel motility
|
|
|
Carolina Malagelada, Michal Drozdzal, Santiago Segui, Sara Mendez, Jordi Vitria, Petia Radeva, et al. (2015). Classification of functional bowel disorders by objective physiological criteria based on endoluminal image analysis. AJPGI - American Journal of Physiology-Gastrointestinal and Liver Physiology, 309(6), G413–G419.
Abstract: We have previously developed an original method to evaluate small bowel motor function based on computer vision analysis of endoluminal images obtained by capsule endoscopy. Our aim was to demonstrate intestinal motor abnormalities in patients with functional bowel disorders by endoluminal vision analysis. Patients with functional bowel disorders (n = 205) and healthy subjects (n = 136) ingested the endoscopic capsule (Pillcam-SB2, Given-Imaging) after an overnight fast, and 45 min after gastric exit of the capsule a liquid meal (300 ml, 1 kcal/ml) was administered. Endoluminal image analysis was performed by computer vision and machine learning techniques to define the normal range and to identify clusters of abnormal function. After training the algorithm, we used 196 patients and 48 healthy subjects, completely naive, as the test set. In the test set, 51 patients (26%) were detected outside the normal range (P < 0.001 vs. 3 healthy subjects) and clustered into hypo- and hyperdynamic subgroups compared with healthy subjects. Patients with hypodynamic behavior (n = 38) exhibited fewer luminal closure sequences (41 ± 2% of the recording time vs. 61 ± 2%; P < 0.001) and more static sequences (38 ± 3 vs. 20 ± 2%; P < 0.001); in contrast, patients with hyperdynamic behavior (n = 13) had an increased proportion of luminal closure sequences (73 ± 4 vs. 61 ± 2%; P = 0.029) and more high-motion sequences (3 ± 1 vs. 0.5 ± 0.1%; P < 0.001). Applying an original methodology, we have developed a novel classification of functional gut disorders based on objective, physiological criteria of small bowel function.
Keywords: capsule endoscopy; computer vision analysis; functional bowel disorders; intestinal motility; machine learning
|
|
|
Cesar de Souza, Adrien Gaidon, Yohann Cabon, Naila Murray, & Antonio Lopez. (2020). Generating Human Action Videos by Coupling 3D Game Engines and Probabilistic Graphical Models. IJCV - International Journal of Computer Vision, 128, 1505–1536.
Abstract: Deep video action recognition models have been highly successful in recent years but require large quantities of manually-annotated data, which are expensive and laborious to obtain. In this work, we investigate the generation of synthetic training data for video action recognition, as synthetic data have been successfully used to supervise models for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation, physics models and other components of modern game engines. With this model we generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for “Procedural Human Action Videos”. PHAV contains a total of 39,982 videos, with more than 1000 examples for each of 35 action categories. Our video generation approach is not limited to existing motion capture sequences: 14 of these 35 categories are procedurally-defined synthetic actions. In addition, each video is represented with 6 different data modalities, including RGB, optical flow and pixel-level semantic labels. These modalities are generated almost simultaneously using the Multiple Render Targets feature of modern GPUs. In order to leverage PHAV, we introduce a deep multi-task (i.e. that considers action classes from multiple datasets) representation learning architecture that is able to simultaneously learn from synthetic and real video datasets, even when their action categories differ. Our experiments on the UCF-101 and HMDB-51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance. Our approach also significantly outperforms video representations produced by fine-tuning state-of-the-art unsupervised generative models of videos.
Keywords: Procedural generation; Human action recognition; Synthetic data; Physics
|
|
|
Cesar Isaza, Joaquin Salas, & Bogdan Raducanu. (2014). Rendering ground truth data sets to detect shadows cast by static objects in outdoors. MTAP - Multimedia Tools and Applications, 70(1), 557–571.
Abstract: In our work, we are particularly interested in studying the shadows cast by static objects in outdoor environments, during daytime. To assess the accuracy of a shadow detection algorithm, we need ground truth information. The collection of such information is a very tedious task because it is a process that requires manual annotation. To overcome this severe limitation, we propose in this paper a methodology to automatically render ground truth using a virtual environment. To increase the degree of realism and usefulness of the simulated environment, we incorporate in the scenario the precise longitude, latitude and elevation of the actual location of the object, as well as the sun’s position for a given time and day. To evaluate our method, we consider a qualitative and a quantitative comparison. In the quantitative one, we analyze the shadow cast by a real object in a particular geographical location and its corresponding rendered model. To evaluate qualitatively the methodology, we use some ground truth images obtained both manually and automatically.
Keywords: Synthetic ground truth data set; Sun position; Shadow detection; Static objects shadow detection
|
|
|
Cesar Isaza, Joaquin Salas, & Bogdan Raducanu. (2012). Evaluation of Intrinsic Image Algorithms to Detect the Shadows Cast by Static Objects Outdoors. SENS - Sensors, 12(10), 13333–13348.
Abstract: In some automatic scene analysis applications, the presence of shadows becomes a nuisance that it is necessary to deal with. As a consequence, a preliminary stage in many computer vision algorithms is to attenuate their effect. In this paper, we focus our attention on the detection of shadows cast by static objects outdoors, as the scene is viewed for extended periods of time (days, weeks) from a fixed camera and considering daylight intervals where the main source of light is the sun. In this context, we report two contributions. First, we introduce the use of synthetic images for which ground truth can be generated automatically, avoiding the tedious effort of manual annotation. Secondly, we report a novel application of the intrinsic image concept to the automatic detection of shadows cast by static objects outdoors. We make both a quantitative and a qualitative evaluation of several algorithms based on this image representation. For the quantitative evaluation, we used the synthetic data set, while for the qualitative evaluation we used both data sets. Our experimental results show that the evaluated methods can partially solve the problem of shadow detection.
|
|
|
Chengyi Zou, Shuai Wan, Tiannan Ji, Marc Gorriz Blanch, Marta Mrak, & Luis Herranz. (2023). Chroma Intra Prediction with Lightweight Attention-Based Neural Networks. TCSVT - IEEE Transactions on Circuits and Systems for Video Technology, 34(1), 549–560.
Abstract: Neural networks can be successfully used for cross-component prediction in video coding. In particular, attention-based architectures are suitable for chroma intra prediction using luma information because of their capability to model relations between different channels. However, the complexity of such methods is still very high and should be further reduced, especially for decoding. In this paper, a cost-effective attention-based neural network is designed for chroma intra prediction. Moreover, with the goal of further improving coding performance, a novel approach is introduced to utilize more boundary information effectively. In addition to improving prediction, a simplification methodology is also proposed to reduce inference complexity by simplifying convolutions. The proposed schemes are integrated into the H.266/Versatile Video Coding (VVC) pipeline, and only one additional binary block-level syntax flag is introduced to indicate whether a given block makes use of the proposed method. Experimental results demonstrate that the proposed scheme achieves up to −0.46%/−2.29%/−2.17% BD-rate reduction on the Y/Cb/Cr components, respectively, compared with the H.266/VVC anchor. Reductions in the encoding and decoding complexity of up to 22% and 61%, respectively, are achieved by the proposed scheme with respect to the previous attention-based chroma intra prediction method while maintaining coding performance.
|
|
|
Chenyang Fu, Kaida Xiao, Dimosthenis Karatzas, & Sophie Wuerger. (2011). Investigation of Unique Hue Setting Changes with Ageing. COL - Chinese Optics Letters, 9(5), 053301-5.
Abstract: Chromatic sensitivity along the protan, deutan, and tritan lines and the loci of the unique hues (red, green, yellow, blue) for a very large sample (n = 185) of colour-normal observers ranging from 18 to 75 years of age are assessed. Visual judgments are obtained under normal viewing conditions using colour patches on a self-luminous display under controlled adaptation conditions. Trivector discrimination thresholds show an increase as a function of age along the protan, deutan, and tritan axes, with the largest increase present along the tritan line. Less pronounced shifts in unique hue settings are also observed. Based on the chromatic (protan, deutan, tritan) thresholds and using scaled cone signals, we predict the unique hue changes with ageing. A dependency on age for unique red and unique yellow for predicted hue angle is found. We conclude that chromatic sensitivity deteriorates significantly with age, whereas the appearance of unique hues is much less affected, remaining almost constant despite the known changes in the ocular media.
|
|
|
Christophe Rigaud, Clement Guerin, Dimosthenis Karatzas, Jean-Christophe Burie, & Jean-Marc Ogier. (2015). Knowledge-driven understanding of images in comic books. IJDAR - International Journal on Document Analysis and Recognition, 18(3), 199–221.
Abstract: Document analysis is an active field of research, which aims at attaining a complete understanding of the semantics of a given document. One example of the document understanding process is enabling a computer to identify the key elements of a comic book story and arrange them according to a predefined domain knowledge. In this study, we propose a knowledge-driven system that can interact with bottom-up and top-down information to progressively understand the content of a document. We model the knowledge of the comic book and image processing domains for information consistency analysis. In addition, different image processing methods are improved or developed to extract panels, balloons, tails, texts, comic characters and their semantic relations in an unsupervised way.
Keywords: Document understanding; Comics analysis; Expert system
|
|
|
Ciprian Corneanu, Marc Oliu, Jeffrey F. Cohn, & Sergio Escalera. (2016). Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(8), 1548–1568.
Abstract: Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.
Keywords: Facial expression; affect; emotion recognition; RGB; 3D; thermal; multimodal
|
|
|
Clementine Decamps, Alexis Arnaud, Florent Petitprez, Mira Ayadi, Aurelia Baures, Lucile Armenoult, et al. (2021). DECONbench: a benchmarking platform dedicated to deconvolution methods for tumor heterogeneity quantification. BMC Bioinformatics, 22, 473.
Abstract: Quantification of tumor heterogeneity is essential to better understand cancer progression and to adapt therapeutic treatments to patient specificities. Bioinformatic tools to assess the different cell populations from single-omic datasets, such as bulk transcriptome or methylome samples, have recently been developed, including reference-based and reference-free methods. Improved methods using multi-omic datasets are yet to be developed, and the community needs systematic tools to perform a comparative evaluation of these algorithms on controlled data.
|
|