|
Clement Guerin, Christophe Rigaud, Karell Bertet, Jean-Christophe Burie, Arnaud Revel, & Jean-Marc Ogier. (2014). Réduction de l'espace de recherche pour les personnages de bandes dessinées. In 19th National Congress Reconnaissance des Formes et Intelligence Artificielle (RFIA).
Abstract: Comics represent an important cultural heritage in many countries, and their large-scale digitization opens up the possibility of searching within the image content. To date, mainly page structure and textual content have been studied; little work addresses the graphic content. We propose to build on elements that have already been studied, such as the positions of panels and speech balloons, to reduce the search space and localize characters based on the balloon tails. Evaluation of our contributions on the eBDtheque dataset shows a balloon-tail detection rate of 81.2%, character localization rates of up to 85%, and a search-space reduction of more than 50%.
Keywords: contextual search; document analysis; comics characters
|
|
|
Clementine Decamps, Alexis Arnaud, Florent Petitprez, Mira Ayadi, Aurelia Baures, Lucile Armenoult, et al. (2021). DECONbench: a benchmarking platform dedicated to deconvolution methods for tumor heterogeneity quantification. BMC Bioinformatics, 22, 473.
Abstract: Quantification of tumor heterogeneity is essential to better understand cancer progression and to adapt therapeutic treatments to patient specificities. Bioinformatic tools to assess the different cell populations from single-omic datasets such as bulk transcriptome or methylome samples have recently been developed, including reference-based and reference-free methods. Improved methods using multi-omic datasets have yet to be developed, and the community will need systematic tools to perform a comparative evaluation of these algorithms on controlled data.
|
|
|
Corina Krauter, Ursula Reiter, Albrecht Schmidt, Marc Masana, Rudolf Stollberger, Michael Fuchsjager, et al. (2019). Objective extraction of the temporal evolution of the mitral valve vortex ring from 4D flow MRI. In 27th Annual Meeting & Exhibition of the International Society for Magnetic Resonance in Medicine.
Abstract: The mitral valve vortex ring is a promising flow structure for analysis of diastolic function; however, methods for objective extraction of its formation to dissolution are lacking. We present a novel algorithm for objective extraction of the temporal evolution of the mitral valve vortex ring from magnetic resonance 4D flow data and validate the method against visual analysis. The algorithm successfully extracted mitral valve vortex rings during both early- and late-diastolic filling and agreed substantially with visual assessment. Early-diastolic mitral valve vortex ring properties differed between healthy subjects and patients with ischemic heart disease.
|
|
|
Craig Von Land, Ricardo Toledo, & Juan J. Villanueva. (1997). TeleRegions: Application of Telematics in Cardiac Care. In Computers In Cardiology (pp. 195–198).
|
|
|
Craig Von Land, Ricardo Toledo, & Juan J. Villanueva. (1996). TeleRegion: Tele-Applications for European Regions.
|
|
|
Craig Von Land, Ricardo Toledo, & Juan J. Villanueva. (1996). Object Oriented Design of the DICOM standard. In International Symposium on Cardiovascular Imaging.
|
|
|
Craig Von Land, Ricardo Toledo, & Juan J. Villanueva. (1996). CARE: Computer Assisted Radiology Environment. In Tecnologia de Imagenes Medicas, Convencion Iberoamericana sobre la Salud en la Sociedad Global de la Informacion.
|
|
|
Craig Von Land, V. Lashin, A. Oriol, & Juan J. Villanueva. (1997). Object-oriented Design of the DICOM Standard and its Application to Cardiovascular Imaging. In Computers In Cardiology (pp. 645–648).
|
|
|
Cristhian A. Aguilera-Carrasco. (2014). Evaluation of feature detectors and descriptors in VISIBLE-LWIR cross-spectral imaging (Vol. 177). Master's thesis.
Abstract: This thesis evaluates the performance of different state-of-the-art feature detector and descriptor algorithms in the visible-LWIR cross-spectral scenario. The focus is to determine whether current detector and descriptor algorithms can be used to match features between the LWIR spectrum and the visible spectrum in applications such as visual odometry, object recognition, image registration, and stereo vision. An outdoor cross-spectral dataset was created to evaluate the suitability of the different algorithms. The results show that the tested algorithms are not suited to the task of matching features across different spectra. The repeatability ratio was smaller than 30 percent in the best case, and in general matched features were not accurately located. These results also suggest that it is necessary to create new algorithms that take into account the nature of the different spectra, describing characteristics that exist in both spectra, such as discontinuities.
Keywords: Multi-spectral; Cross-spectral; Visible-LWIR imaging; Multimodal.
|
|
|
Cristhian A. Aguilera-Carrasco, Angel Sappa, Cristhian Aguilera, & Ricardo Toledo. (2017). Cross-Spectral Local Descriptors via Quadruplet Network. SENS - Sensors, 17(4), 873.
Abstract: This paper presents a novel CNN-based architecture, referred to as Q-Net, to learn local feature descriptors that are useful for matching image patches from two different spectral bands. Given correctly matched and non-matching cross-spectral image pairs, a quadruplet network is trained to map input image patches to a common Euclidean space, regardless of the input spectral band. Our approach is inspired by the recent success of triplet networks in the visible spectrum, but adapted for cross-spectral scenarios, where, for each matching pair, there are always two possible non-matching patches: one for each spectrum. Experimental evaluations on a public cross-spectral VIS-NIR dataset show that the proposed approach improves on the state of the art. Moreover, the proposed technique can also be used in mono-spectral settings, obtaining a performance similar to triplet network descriptors but requiring less training data.
|
|
|
Cristhian A. Aguilera-Carrasco, Angel Sappa, & Ricardo Toledo. (2015). LGHD: a Feature Descriptor for Matching Across Non-Linear Intensity Variations. In 22nd IEEE International Conference on Image Processing (pp. 178–181).
|
|
|
Cristhian A. Aguilera-Carrasco, C. Aguilera, & Angel Sappa. (2018). Melamine Faced Panels Defect Classification beyond the Visible Spectrum. SENS - Sensors, 18(11), 1–10.
Abstract: In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels, which can appear during the production process. We experimentally evaluate the use of images from the visible (VS), near-infrared (NIR), and long wavelength infrared (LWIR) spectra to classify the defects using a feature descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated: Extended Local Binary Patterns (E-LBP) and SURF with a Bag of Words (BoW) representation. The evaluation was carried out on an image set obtained during this work, which contains five different defect categories that currently occur in the industry. Results show that using images from beyond the visible spectrum helps to improve classification performance compared with a single visible-spectrum solution.
Keywords: industrial application; infrared; machine learning
|
|
|
Cristhian A. Aguilera-Carrasco, Cristhian Aguilera, Cristobal A. Navarro, & Angel Sappa. (2020). Fast CNN Stereo Depth Estimation through Embedded GPU Devices. SENS - Sensors, 20(11), 3249.
Abstract: Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
Keywords: stereo matching; deep learning; embedded GPU
|
|
|
Cristhian A. Aguilera-Carrasco, F. Aguilera, Angel Sappa, C. Aguilera, & Ricardo Toledo. (2016). Learning cross-spectral similarity measures with deep convolutional neural networks. In 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene that cannot be obtained with images from one spectral band alone. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result on two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
|
|
|
Cristhian A. Aguilera-Carrasco, Luis Felipe Gonzalez-Böhme, Francisco Valdes, Francisco Javier Quitral Zapata, & Bogdan Raducanu. (2023). A Hand-Drawn Language for Human–Robot Collaboration in Wood Stereotomy. ACCESS - IEEE Access, 11, 100975–100985.
Abstract: This study introduces a novel hand-drawn language designed to foster human-robot collaboration in wood stereotomy, central to the carpentry and joinery professions. Based on skilled carpenters' line and symbol etchings on timber, this language encodes the location and geometry of woodworking joints and the placement of timber within a framework. A proof-of-concept prototype has been developed, integrating object detectors, keypoint regression, and traditional computer vision techniques to interpret this language and enable an extensive repertoire of actions. Empirical data attests to the language's efficacy, with the successful identification of a specific set of symbols on the sawn surfaces of various wood species, achieving a mean average precision (mAP) exceeding 90%. Concurrently, the system can accurately pinpoint critical positions that facilitate robotic comprehension of carpenter-indicated woodworking joint geometry. The positioning error, approximately 3 pixels, meets industry standards.
|
|