Cristhian A. Aguilera-Carrasco, Luis Felipe Gonzalez-Böhme, Francisco Valdes, Francisco Javier Quitral Zapata, & Bogdan Raducanu. (2023). A Hand-Drawn Language for Human–Robot Collaboration in Wood Stereotomy. IEEE Access, 11, 100975–100985.
Abstract: This study introduces a novel hand-drawn language designed to foster human–robot collaboration in wood stereotomy, a practice central to the carpentry and joinery professions. Based on the line and symbol etchings that skilled carpenters make on timber, this language signifies the location and geometry of woodworking joints, as well as timber placement within a framework. A proof-of-concept prototype has been developed, integrating object detectors, keypoint regression, and traditional computer vision techniques to interpret this language and enable an extensive repertoire of actions. Empirical data attests to the language's efficacy: a specific set of symbols is successfully identified on the sawn surfaces of various wood species, achieving a mean average precision (mAP) exceeding 90%. Concurrently, the system can accurately pinpoint the critical positions that allow the robot to understand carpenter-indicated woodworking joint geometry. The positioning error, approximately 3 pixels, meets industry standards.
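The reported mAP aggregates per-class average precision over the symbol set. A minimal sketch of how such a score can be computed from confidence-ranked detections (the helper names and the toy detection outcomes below are illustrative, not the authors' evaluation code):

```python
def average_precision(ranked_hits, num_positives):
    """AP for one class: mean of precision at each true-positive rank.

    ranked_hits: detections sorted by descending confidence,
                 True where a detection matched a ground-truth symbol.
    num_positives: total ground-truth instances of this class.
    """
    tp = 0
    precisions = []
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / num_positives if num_positives else 0.0

def mean_average_precision(per_class):
    """mAP: unweighted mean of per-class APs.

    per_class maps class name -> (ranked_hits, num_positives).
    """
    aps = [average_precision(hits, n) for hits, n in per_class.values()]
    return sum(aps) / len(aps)

# Toy example with two hypothetical symbol classes.
detections = {
    "joint_mark": ([True, False, True, True], 3),
    "placement_mark": ([True, True, False], 2),
}
print(round(mean_average_precision(detections), 3))  # -> 0.903
```

In this toy run the first class scores AP = (1 + 2/3 + 3/4)/3 and the second AP = 1.0, so the mAP just clears the 90% mark the abstract cites as its threshold.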
Xiangyang Li, Luis Herranz, & Shuqiang Jiang. (2020). Multifaceted Analysis of Fine-Tuning in Deep Model for Visual Recognition. ACM Transactions on Data Science.
Abstract: In recent years, convolutional neural networks (CNNs) have achieved impressive performance in various visual recognition scenarios. CNNs trained on large labeled datasets not only obtain significant performance on the most challenging benchmarks but also provide powerful representations that can be used for a wide range of other tasks. However, the requirement of massive amounts of data to train deep neural networks is a major drawback of these models, as the available data is usually limited or imbalanced. Fine-tuning (FT) is an effective way to transfer knowledge learned on a source dataset to a target task. In this paper, we introduce and systematically investigate several factors that influence the performance of fine-tuning for visual recognition. These factors include parameters of the retraining procedure (e.g., the initial learning rate of fine-tuning) and the distribution of the source and target data (e.g., the number of categories in the source dataset, the distance between the source and target datasets), among others. We quantitatively and qualitatively analyze these factors, evaluate their influence, and present many empirical observations. The results reveal insights into how fine-tuning changes CNN parameters and provide useful, evidence-backed intuitions about how to implement fine-tuning for computer vision tasks.
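One retraining-procedure factor the abstract singles out is the initial learning rate. A common fine-tuning recipe updates pretrained layers with a smaller step than the newly added classifier head; the toy gradient-descent sketch below illustrates the idea (all names, values, and the 10x ratio are illustrative assumptions, not settings from the paper):

```python
# Toy illustration of discriminative learning rates during fine-tuning:
# the pretrained "backbone" weight takes a small step, while the freshly
# initialized "head" weight takes a larger one, so the head adapts faster
# and the transferred representation is only gently perturbed.

def sgd_step(weight, grad, lr):
    """One plain SGD update."""
    return weight - lr * grad

backbone_w = 1.0        # stand-in for a pretrained parameter
head_w = 1.0            # stand-in for a new, randomly initialized parameter
grad = 0.5              # identical gradient, to isolate the effect of lr

backbone_lr = 1e-3      # small lr: preserve transferred knowledge
head_lr = 1e-2          # larger lr: let the new head learn quickly

backbone_w = sgd_step(backbone_w, grad, backbone_lr)
head_w = sgd_step(head_w, grad, head_lr)

print(backbone_w, head_w)  # backbone barely moves; head moves 10x further
```

In deep-learning frameworks this is typically expressed as per-layer parameter groups, each with its own learning rate, rather than hand-written updates.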
Simone Balocco, Carlo Gatta, Francesco Ciompi, A. Wahle, Petia Radeva, S. Carlier, et al. (2014). Standardized evaluation methodology and reference database for evaluating IVUS image segmentation. Computerized Medical Imaging and Graphics, 38(2), 70–90.
Abstract: This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. The framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, where the results of the eight participating teams were compared. We describe the available database, comprising multi-center, multi-vendor, and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard, and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
Keywords: IVUS (intravascular ultrasound); Evaluation framework; Algorithm comparison; Image segmentation
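Comparing automatic segmentations against a manual reference standard, as this framework does, typically relies on region-overlap measures such as the Jaccard index and Dice coefficient (alongside border-distance measures). A minimal sketch of the overlap measures, assuming binary masks represented as sets of pixel coordinates (the masks and function names are illustrative, not the workshop's exact measures):

```python
def jaccard(mask_a, mask_b):
    """Jaccard index: intersection size over union size of two pixel sets."""
    union = len(mask_a | mask_b)
    return len(mask_a & mask_b) / union if union else 1.0

def dice(mask_a, mask_b):
    """Dice coefficient: twice the intersection over the sum of sizes."""
    total = len(mask_a) + len(mask_b)
    return 2 * len(mask_a & mask_b) / total if total else 1.0

# Toy lumen masks: automatic result vs. manual annotation,
# as sets of (row, col) pixel coordinates.
auto_mask = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual_mask = {(0, 1), (1, 0), (1, 1), (2, 1)}

print(jaccard(auto_mask, manual_mask))  # -> 0.6
print(dice(auto_mask, manual_mask))     # -> 0.75
```

Both measures range from 0 (no overlap) to 1 (perfect agreement), which makes them convenient for ranking algorithms against a common reference standard.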
Simeon Petkov, Xavier Carrillo, Petia Radeva, & Carlo Gatta. (2014). Diaphragm border detection in coronary X-ray angiographies: New method and applications. Computerized Medical Imaging and Graphics, 38(4), 296–305.
Abstract: X-ray angiography is widely used in cardiac disease diagnosis during or prior to intravascular interventions. The diaphragm motion and the heart beating induce gray-level changes, which are one of the main obstacles in quantitative analysis of myocardial perfusion. In this paper we focus on detecting the diaphragm border in both single images and whole X-ray angiography sequences. We show that the proposed method outperforms state-of-the-art approaches. We extend a previous publicly available data set, adding new ground truth data. We also compose another set of more challenging images, thus obtaining two separate data sets of increasing difficulty. Finally, we show three applications of our method: (1) a strategy to reduce false positives in vessel-enhanced images; (2) a digital diaphragm removal algorithm; (3) an improvement in semi-automatic Myocardial Blush Grade estimation.
Juan Ramon Terven Salinas, Joaquin Salas, & Bogdan Raducanu. (2014). New Opportunities for Computer Vision-Based Assistive Technology Systems for the Visually Impaired. Computer, 47(4), 52–58.
Abstract: Computing advances and increased smartphone use give technology system designers greater flexibility in exploiting computer vision to support visually impaired users. Understanding these users' needs will provide insight for developing computing devices with improved usability.