|
Marina Alberti, Carlo Gatta, Simone Balocco, Francesco Ciompi, Oriol Pujol, Joana Silva, et al. (2011). Automatic Branching Detection in IVUS Sequences. In Jordi Vitria, Joao Miguel Raposo, & Mario Hernandez (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 126–133). LNCS. Berlin: Springer Berlin Heidelberg.
Abstract: Atherosclerosis is a vascular pathology affecting the arterial walls, generally located in specific vessel sites, such as bifurcations. In this paper, for the first time, a fully automatic approach for the detection of bifurcations in IVUS pullback sequences is presented. The method identifies the frames and the angular sectors in which a bifurcation is visible. This goal is achieved by applying a classifier to a set of textural features extracted from each image of an IVUS pullback. A comparison between two state-of-the-art classifiers is performed: AdaBoost and Random Forest. A cross-validation scheme is applied to evaluate the performance of the two approaches. The obtained results are encouraging, showing a sensitivity of 75% and an accuracy of 94% using the AdaBoost algorithm.
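The evaluation protocol described in this abstract (two ensemble classifiers compared under cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' code: synthetic features generated with scikit-learn stand in for the textural descriptors extracted from IVUS frames, and the class imbalance is an assumed placeholder.

```python
# Hedged sketch of the classifier comparison: AdaBoost vs. Random Forest
# under a cross-validation scheme (scikit-learn assumed available).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 500 "frames", 20 textural features, binary label
# (bifurcation visible or not); the 80/20 imbalance is illustrative only.
X, y = make_classification(n_samples=500, n_features=20,
                           weights=[0.8, 0.2], random_state=0)

scores = {}
for name, clf in [("AdaBoost", AdaBoostClassifier(random_state=0)),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    # 5-fold cross-validated accuracy, averaged over folds
    scores[name] = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()

for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

In the paper's setting, accuracy alone can be misleading under class imbalance, which is why the abstract also reports sensitivity; `scoring="recall"` would recover that metric here.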
|
|
|
Laura Igual, Joan Carles Soliva, Sergio Escalera, Roger Gimeno, Oscar Vilarroya, & Petia Radeva. (2012). Automatic Brain Caudate Nuclei Segmentation and Classification in Diagnostic of Attention-Deficit/Hyperactivity Disorder. CMIG - Computerized Medical Imaging and Graphics, 36(8), 591–600.
Abstract: We present a fully automatic diagnostic imaging test for Attention-Deficit/Hyperactivity Disorder diagnosis assistance based on previously reported evidence of caudate nucleus volumetric abnormalities. The proposed method consists of two main steps: a new automatic method for external and internal segmentation of the caudate based on Machine Learning methodologies, and the definition of a set of new volume relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We separately validate the contributions using real data from a pediatric population and show precise internal caudate segmentation and the discrimination power of the diagnostic test, with significant performance improvements over other state-of-the-art methods.
Keywords: Automatic caudate segmentation; Attention-Deficit/Hyperactivity Disorder; Diagnostic test; Machine learning; Decision stumps; Dissociated dipoles
|
|
|
Marina Alberti, Simone Balocco, Carlo Gatta, Francesco Ciompi, Oriol Pujol, Joana Silva, et al. (2012). Automatic Bifurcation Detection in Coronary IVUS Sequences. TBME - IEEE Transactions on Biomedical Engineering, 59(4), 1022–1031.
Abstract: In this paper, we present a fully automatic method which identifies every bifurcation in an intravascular ultrasound (IVUS) sequence, the corresponding frames, the angular orientation with respect to the IVUS acquisition, and the extension. This goal is reached using a two-level classification scheme: first, a classifier is applied to a set of textural features extracted from each image of a sequence. A comparison among three state-of-the-art discriminative classifiers (AdaBoost, random forest, and support vector machine) is performed to identify the most suitable method for the branching detection task. Second, the results are improved by exploiting contextual information using a multiscale stacked sequential learning scheme. The results are then further refined using a priori information about branching dimensions and geometry. The proposed approach provides a robust tool for the quick review of pullback sequences, facilitating the evaluation of the lesion at bifurcation sites. The proposed method reaches an F-measure score of 86.35%, while the F-measure scores for inter- and intraobserver variability are 71.63% and 76.18%, respectively. The obtained results are positive, especially considering that the branching detection task is very challenging due to the high variability in bifurcation dimensions and appearance.
|
|
|
Antonio Hernandez, Carlo Gatta, Laura Igual, Sergio Escalera, & Petia Radeva. (2011). Automatic Angiography Segmentation Based on Improved Graph-cut. In Jornada TIC Salut Girona.
|
|
|
Aura Hernandez-Sabate. (2005). Automatic adventitia segmentation in IntraVascular UltraSound images. Master's thesis, 08193 Bellaterra, Barcelona (Spain).
Abstract: A usual tool in cardiac disease diagnosis is vessel plaque assessment by analysis of IVUS sequences. Manual detection of lumen-intima, intima-media and media-adventitia vessel borders is the main activity of physicians in the process of plaque quantification. Large variety in vessel border descriptors, as well as shades, artifacts and blurred response due to ultrasound physical properties, hinders automated media-adventitia segmentation. This experimental work presents a solution to such a complex problem. The process blends advanced anisotropic filtering operators and statistical classification techniques, achieving an efficient vessel border modelling strategy. First of all, we introduce the theoretical basis of the method. After that, we describe the steps of the algorithm, validating the method with statistics showing that the media-adventitia border detection achieves an accuracy in the range of inter-observer variability regardless of plaque nature, vessel geometry and incomplete vessel borders. Finally, we present a small Matlab application for automatic media-adventitia border detection.
|
|
|
Joan Mas, B. Lamiroy, Gemma Sanchez, & Josep Llados. (2006). Automatic Adjacency Grammar Generation from User Drawn Sketches.
|
|
|
Mohammad N. S. Jahromi, Morten Bojesen Bonderup, Maryam Asadi-Aghbolaghi, Egils Avots, Kamal Nasrollahi, Sergio Escalera, et al. (2018). Automatic Access Control Based on Face and Hand Biometrics in a Non-cooperative Context. In IEEE Winter Applications of Computer Vision Workshops (pp. 28–36).
Abstract: Automatic access control systems (ACS) based on human biometrics or physical tokens are widely employed in public and private areas. Yet these systems, in their conventional forms, require active interaction from the users. In scenarios where users are not cooperating with the system, these systems are challenged. Failure to cooperate with the biometric system might be intentional, or occur because users are incapable of handling the interaction procedure or simply forget to cooperate with it, due to, for example, an illness such as dementia. This work introduces a challenging bimodal database, including face and hand information of users as they approach a door to open it by its handle in a non-cooperative context. We have defined two protocols (an easy and a challenging one) on how to use the database. We report results for many baseline methods, including deep learning techniques as well as conventional methods, on the database. The obtained results show the merit of the proposed database and the challenging nature of access control with non-cooperative users.
|
|
|
David Masip, Michael S. North, Alexander Todorov, & Daniel N. Osherson. (2014). Automated Prediction of Preferences Using Facial Expressions. Plos - PloS one, 9(2), e87434.
Abstract: We introduce a computer vision problem from social cognition, namely, the automated detection of attitudes from a person's spontaneous facial expressions. To illustrate the challenges, we introduce two simple algorithms designed to predict observers' preferences between images (e.g., of celebrities) based on covert videos of the observers' faces. The two algorithms are almost as accurate as human judges performing the same task but nonetheless far from perfect. Our approach is to locate facial landmarks, then predict preference on the basis of their temporal dynamics. The database contains 768 videos involving four different kinds of preferences. We make it publicly available.
|
|
|
Jose A. Garcia, David Masip, Valerio Sbragaglia, & Jacopo Aguzzi. (2016). Automated Identification and Tracking of Nephrops norvegicus (L.) Using Infrared and Monochromatic Blue Light. In 19th International Conference of the Catalan Association for Artificial Intelligence.
Abstract: Automated video and image analysis can be a very efficient tool to analyze animal behavior based on sociality, especially in environments that are hard for researchers to access. Understanding this social behavior can play a key role in the sustainable design of capture policies for many species. This paper proposes the use of computer vision algorithms to identify and track a specific species, the Norway lobster, Nephrops norvegicus, a burrowing decapod with relevant commercial value which is captured by trawling. These animals can only be captured when they are engaged in seabed excursions, which are strongly related to their social behavior. This emergent behavior is modulated by the day-night cycle, but their social interactions remain unknown to the scientific community. The paper introduces an identification scheme made of four distinguishable black and white tags (geometric shapes). The project has recorded 15-day experiments in laboratory pools, under monochromatic blue light (472 nm) and darkness conditions (recorded using infrared light). Using this massive image set, we propose a comparison of state-of-the-art computer vision algorithms to distinguish and track the movements of the different animals. We evaluate the robustness to the high noise present in the infrared video signals and to free out-of-plane rotations due to animal movement. The experiments show promising accuracies under a cross-validation protocol and are adaptable to the automation and analysis of large-scale data. As a second contribution, we created an extensive dataset of shapes (46,027 different shapes) from four daily experimental video recordings, which will be made available to the community.
Keywords: computer vision; video analysis; object recognition; tracking; behaviour; social; decapod; Nephrops norvegicus
|
|
|
Iban Berganzo-Besga, Hector A. Orengo, Felipe Lumbreras, Paloma Aliende, & Monica N. Ramsey. (2022). Automated detection and classification of multi-cell Phytoliths using Deep Learning-Based Algorithms. JArchSci - Journal of Archaeological Science, 148, 105654.
Abstract: This paper presents an algorithm for automated detection and classification of multi-cell phytoliths, one of the major components of many archaeological and paleoenvironmental deposits. The identification, based on the phytolith wave pattern, is performed using a pretrained VGG19 deep learning model. This approach has been tested on three key phytolith genera for the study of agricultural origins in Near East archaeology: Avena, Hordeum and Triticum. The classification has also been validated at species level using Triticum boeoticum and dicoccoides images. Because the diversity of microscopes, cameras and chemical treatments can influence images of phytolith slides, three types of data augmentation techniques have been implemented: rotation of the images at 45-degree angles, random colour and brightness jittering, and random blur/sharpen. The implemented workflow has resulted in an overall accuracy of 93.68% for phytolith genera, improving on previous attempts. The algorithm has also demonstrated its potential to automate the classification of phytolith species, with an overall accuracy of 100%. The open code and platforms employed to develop the algorithm ensure the method's accessibility, reproducibility and reusability.
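The three augmentation families named in the abstract (45-degree rotations, colour/brightness jittering, blur/sharpen) can be sketched as below. This is an illustrative stand-in using Pillow, not the paper's implementation; the function name `augment` and the jitter ranges are assumptions.

```python
# Hedged sketch of the three augmentation techniques described above,
# applied to a (here synthetic) phytolith micrograph. Pillow assumed.
import random
from PIL import Image, ImageEnhance, ImageFilter

def augment(img, rng=None):
    rng = rng or random.Random(0)
    out = []
    # 1) rotations at 45-degree angles (45, 90, ..., 315)
    for angle in range(45, 360, 45):
        out.append(img.rotate(angle, expand=True))
    # 2) random colour and brightness jittering (ranges are illustrative)
    jittered = ImageEnhance.Color(img).enhance(rng.uniform(0.7, 1.3))
    jittered = ImageEnhance.Brightness(jittered).enhance(rng.uniform(0.7, 1.3))
    out.append(jittered)
    # 3) random blur or sharpen
    out.append(img.filter(rng.choice([ImageFilter.GaussianBlur(2),
                                      ImageFilter.SHARPEN])))
    return out

# Synthetic placeholder image standing in for a slide micrograph
demo = Image.new("RGB", (64, 64), (120, 100, 80))
augmented = augment(demo)
print(len(augmented))  # 7 rotations + 1 jitter + 1 blur/sharpen
```

In a real pipeline these would typically be applied on the fly during VGG19 training rather than materialized as a fixed list.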
|
|
|
D. Seron, F. Moreso, C. Gratin, Jordi Vitria, & E. Condom. (1996). Automated classification of renal interstitium and tubules by local texture analysis and a neural network. Analytical and Quantitative Cytology and Histology, 18(5), 410–9, PMID: 8908314.
|
|
|
Zhengying Liu, Isabelle Guyon, Julio C. S. Jacques Junior, Meysam Madadi, Sergio Escalera, Adrien Pavao, et al. (2019). AutoCV Challenge Design and Baseline Results. In La Conference sur l’Apprentissage Automatique.
Abstract: We present the design and beta tests of a new machine learning challenge called AutoCV (for Automated Computer Vision), which is the first event in a series of challenges we are planning on the theme of Automated Deep Learning. We target applications for which Deep Learning methods have had great success in the past few years, with the aim of pushing the state of the art in fully automated methods to design the architecture of neural networks and train them without any human intervention. The tasks are restricted to multi-label image classification problems, from domains including medical, aerial, people, object, and handwriting imaging. Thus the images vary widely in scale, texture, and structure. Raw data are provided (no features extracted), but all datasets are formatted in a uniform tensor manner (although images may have fixed or variable sizes within a dataset). The participants' code will be blind tested on a challenge platform in a controlled manner, with restrictions on training and test time and memory limitations. The challenge is part of the official selection of IJCNN 2019.
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2011). Augmenting Video Surveillance Footage with Virtual Agents for Incremental Event Evaluation. PRL - Pattern Recognition Letters, 32(6), 878–889.
Abstract: The fields of segmentation, tracking and behavior analysis demand challenging video resources to test, in a scalable manner, complex scenarios such as crowded environments or scenes with high semantics. Nevertheless, existing public databases cannot scale the number of appearing agents, which would be useful to study long-term occlusions and crowds. Moreover, creating these resources is expensive and often too particularized to specific needs. We propose an augmented reality framework to increase the complexity of image sequences in terms of occlusions and crowds, in a scalable and controllable manner. Existing datasets can be extended with augmented sequences containing virtual agents. Such sequences are automatically annotated, thus facilitating evaluation in terms of segmentation, tracking, and behavior recognition. In order to easily specify the desired contents, we propose a natural language interface to convert input sentences into virtual agent behaviors. Experimental tests and validation in indoor, street, and soccer environments are provided to show the feasibility of the proposed approach in terms of robustness, scalability, and semantics.
|
|
|
Marçal Rusiñol, J. Chazalon, & Katerine Diaz. (2018). Augmented Songbook: an Augmented Reality Educational Application for Raising Music Awareness. MTAP - Multimedia Tools and Applications, 77(11), 13773–13798.
Abstract: This paper presents the development of an Augmented Reality mobile application which aims to familiarize young children with abstract concepts of music, such as musical notation or the idea of rhythm. Recent studies in Augmented Reality for education suggest that such technologies have multiple benefits for students, including younger ones. As mobile document image acquisition and processing gain maturity on mobile platforms, we explore how to build a markerless and real-time application that augments physical documents with didactic animations and interactive virtual content. Given a standard image processing pipeline, we compare the performance of different local descriptors at two key stages of the process. Results suggest alternatives to the SIFT local descriptor, regarding both result quality and computational efficiency, for document model identification as well as perspective transform estimation. All experiments are performed on an original and public dataset we introduce here.
Keywords: Augmented reality; Document image matching; Educational applications
|
|
|
Yuyang Liu, Yang Cong, Dipam Goswami, Xialei Liu, & Joost Van de Weijer. (2023). Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection. In 20th IEEE International Conference on Computer Vision (pp. 11367–11377).
Abstract: In incremental learning, replaying stored samples from previous tasks together with current task samples is one of the most efficient approaches to address catastrophic forgetting. However, unlike incremental classification, image replay has not been successfully applied to incremental object detection (IOD). In this paper, we identify the overlooked problem of foreground shift as the main reason for this. Foreground shift only occurs when replaying images of previous tasks and refers to the fact that their background might contain foreground objects of the current task. To overcome this problem, a novel and efficient Augmented Box Replay (ABR) method is developed that only stores and replays foreground objects and thereby circumvents the foreground shift problem. In addition, we propose an innovative Attentive RoI Distillation loss that uses spatial attention from region-of-interest (RoI) features to constrain the current model to focus on the most important information from the old model. ABR significantly reduces forgetting of previous classes while maintaining high plasticity on current classes. Moreover, it considerably reduces the storage requirements when compared to standard image replay. Comprehensive experiments on the Pascal-VOC and COCO datasets support the state-of-the-art performance of our model.
|
|