|
Records |
Links |
|
Author |
Cristhian A. Aguilera-Carrasco; Luis Felipe Gonzalez-Böhme; Francisco Valdes; Francisco Javier Quitral Zapata; Bogdan Raducanu |
|
|
Title |
A Hand-Drawn Language for Human–Robot Collaboration in Wood Stereotomy |
Type |
Journal Article |
|
Year |
2023 |
Publication |
IEEE Access |
Abbreviated Journal |
ACCESS |
|
|
Volume |
11 |
Issue |
|
Pages |
100975 - 100985 |
|
|
Keywords |
|
|
|
Abstract |
This study introduces a novel, hand-drawn language designed to foster human–robot collaboration in wood stereotomy, which is central to the carpentry and joinery professions. Based on the lines and symbols that skilled carpenters etch on timber, the language signifies the location and geometry of woodworking joints as well as the placement of each timber within a framework. A proof-of-concept prototype has been developed, integrating object detectors, keypoint regression, and traditional computer vision techniques to interpret this language and enable an extensive repertoire of actions. Empirical data attest to the language's efficacy: a specific set of symbols was successfully identified on the sawn surfaces of various wood species, with a mean average precision (mAP) exceeding 90%. Concurrently, the system can accurately pinpoint the critical positions that allow the robot to recover the joint geometry indicated by the carpenter. The positioning error, approximately 3 pixels, meets industry standards. |
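The abstract describes a two-stage recognition pipeline: an object detector localizes carpenter-drawn symbols, then a keypoint regressor recovers joint geometry inside each detection. The sketch below is a minimal, hypothetical illustration of how such a pipeline could be wired together; `detect_symbols` and `regress_keypoints` are stand-in stubs, not the authors' models.

```python
# Hypothetical glue for a detect-then-regress pipeline; both model
# functions below are stubs standing in for trained networks.
from dataclasses import dataclass
import numpy as np

@dataclass
class Symbol:
    label: str              # e.g. "lap_joint" (illustrative label)
    box: tuple              # (x1, y1, x2, y2) in image pixels
    keypoints: np.ndarray   # (K, 2) joint-geometry landmarks

def detect_symbols(image):
    """Stand-in for a trained symbol detector returning (label, box, score)."""
    h, w = image.shape[:2]
    return [("lap_joint", (w // 4, h // 4, w // 2, h // 2), 0.97)]

def regress_keypoints(crop):
    """Stand-in for a keypoint-regression head run on a symbol crop."""
    h, w = crop.shape[:2]
    return np.array([[0.1 * w, 0.5 * h], [0.9 * w, 0.5 * h]])

def interpret(image, score_thr=0.9):
    symbols = []
    for label, (x1, y1, x2, y2), score in detect_symbols(image):
        if score < score_thr:
            continue
        crop = image[y1:y2, x1:x2]
        # Shift crop-relative keypoints back into image coordinates.
        kps = regress_keypoints(crop) + np.array([x1, y1])
        symbols.append(Symbol(label, (x1, y1, x2, y2), kps))
    return symbols

if __name__ == "__main__":
    fake_scan = np.zeros((480, 640, 3), dtype=np.uint8)
    for s in interpret(fake_scan):
        print(s.label, s.box, s.keypoints.round(1))
```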
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ AGV2023 |
Serial |
3969 |
|
|
|
|
|
Author |
Qingshan Chen; Zhenzhen Quan; Yujun Li; Chao Zhai; Mikhail Mozerov |
|
|
Title |
An Unsupervised Domain Adaption Approach for Cross-Modality RGB-Infrared Person Re-Identification |
Type |
Journal Article |
|
Year |
2023 |
Publication |
IEEE Sensors Journal |
Abbreviated Journal |
IEEE-SENS |
|
|
Volume |
23 |
Issue |
24 |
Pages |
|
|
|
Keywords |
|
|
Abstract |
Dual-camera systems commonly employed in surveillance serve as the foundation for RGB-infrared (IR) cross-modality person re-identification (ReID). However, significant modality differences give rise to inferior performance compared to single-modality scenarios. Furthermore, most existing studies in this area rely on supervised training with meticulously labeled datasets. Labeling RGB-IR image pairs is more complex than labeling conventional image data, and deploying pretrained models on unlabeled datasets can lead to catastrophic performance degradation. In contrast to previous solutions that focus solely on cross-modality or domain-adaptation issues, this article presents an end-to-end unsupervised domain adaptation (UDA) framework for cross-modality person ReID that can address both of these challenges simultaneously. The model employs source-domain classes, target-domain clusters, and unclustered instance samples for training, making comprehensive use of the dataset. Moreover, it addresses the problem of mismatched clustering labels between the two modalities in the target domain by incorporating a label matching module that reassigns reliable clusters with labels, ensuring correspondence between the labels of the two modalities. The loss function combines a distinctiveness loss and a multiplicity loss, both determined by the similarity of neighboring features in the predicted feature space and the difference between distant features. This approach enables feature clustering and cluster-class assignment to proceed efficiently and concurrently. Eight UDA cross-modality person ReID experiments are conducted on three real datasets and six synthetic datasets. The experimental results demonstrate that the proposed model outperforms the existing state-of-the-art algorithms by a significant margin. Notably, on RegDB → RegDB_light, the Rank-1 accuracy improves by a remarkable 8.24%. |
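The label matching module described above reassigns cluster labels so that clusters from the two modalities correspond. A common way to implement such a matching, shown here as a hedged sketch rather than the authors' code, is the Hungarian algorithm applied to a cost matrix of negative cosine similarities between cluster centroids; `rgb_centroids` and `ir_centroids` are assumed to be `(num_clusters, feat_dim)` arrays produced by any clustering step.

```python
# Sketch: align IR cluster IDs to RGB cluster IDs via optimal assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cluster_labels(rgb_centroids, ir_centroids):
    # L2-normalize so the dot product is cosine similarity.
    rgb = rgb_centroids / np.linalg.norm(rgb_centroids, axis=1, keepdims=True)
    ir = ir_centroids / np.linalg.norm(ir_centroids, axis=1, keepdims=True)
    cost = -rgb @ ir.T                       # minimize negative similarity
    rgb_idx, ir_idx = linear_sum_assignment(cost)
    # mapping[k] = RGB cluster that IR cluster k should be relabeled to.
    return {int(i): int(r) for r, i in zip(rgb_idx, ir_idx)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rgb_c = rng.normal(size=(5, 128))
    # IR centroids: a permutation of the RGB ones plus small noise.
    ir_c = rgb_c[[2, 0, 4, 1, 3]] + 0.05 * rng.normal(size=(5, 128))
    print(match_cluster_labels(rgb_c, ir_c))  # recovers {0: 2, 1: 0, ...}
```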
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ CQL2023 |
Serial |
3884 |
|
|
|
|
|
Author |
Qingshan Chen; Zhenzhen Quan; Yifan Hu; Yujun Li; Zhi Liu; Mikhail Mozerov |
|
|
Title |
MSIF: multi-spectrum image fusion method for cross-modality person re-identification |
Type |
Journal Article |
|
Year |
2023 |
Publication |
International Journal of Machine Learning and Cybernetics |
Abbreviated Journal |
IJMLC |
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Sketch-RGB cross-modality person re-identification (ReID) is a challenging task that aims to match a sketch portrait drawn by a professional artist with a full-body photo taken by surveillance equipment, addressing situations where the monitoring equipment at an accident scene is damaged. However, sketch portraits provide only highly abstract frontal body-contour information and lack other important features such as color, pose, and behavior. The difference in saliency between the two modalities brings new challenges to cross-modality person ReID. To overcome this problem, this paper proposes a novel dual-stream model for cross-modality person ReID that mines modality-invariant features to reduce the discrepancy between sketch and camera images end-to-end. More specifically, we propose a multi-spectrum image fusion (MSIF) method, which exploits the image appearance changes brought by multiple spectrums and guides the network to mine modality-invariant commonalities during training. It only processes the spectrum of the input images, adding no extra computation or model complexity, and can easily be integrated into other models. Moreover, we introduce a joint structure via a generalized mean pooling (GMP) layer and a self-attention (SA) mechanism to balance background and texture information and obtain information-rich regional features. To further shrink the intra-class distance, a weighted regularized triplet (WRT) loss is developed without introducing additional hyperparameters. The model was first evaluated on the PKU Sketch ReID dataset, where extensive experiments show that our method reaches a Rank-1/mAP accuracy of 87.00%/91.12%, the current state-of-the-art performance. To further validate the effectiveness of our approach in handling cross-modality person ReID, we conducted experiments on two commonly used IR-RGB datasets (SYSU-MM01 and RegDB), on which our method achieves competitive performance. These results confirm the ability of our method to effectively process images from different modalities. |
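The generalized mean pooling (GMP) layer named in the abstract is, in its textbook form, a learnable power mean over the spatial dimensions of a feature map. The sketch below shows that standard formulation (the GeM layer of the retrieval literature); the authors' exact layer may differ.

```python
# Textbook generalized-mean (GeM) pooling: ((1/HW) * sum x^p)^(1/p).
# p -> 1 recovers average pooling; p -> inf approaches max pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeMPool(nn.Module):
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))  # learnable exponent
        self.eps = eps

    def forward(self, x):                        # x: (B, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)    # element-wise power
        x = F.avg_pool2d(x, x.shape[-2:])        # spatial mean -> (B, C, 1, 1)
        return x.pow(1.0 / self.p).flatten(1)    # (B, C) descriptor

if __name__ == "__main__":
    feats = torch.rand(2, 256, 12, 4)            # mock backbone feature map
    print(GeMPool()(feats).shape)                # torch.Size([2, 256])
```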
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ CQH2023 |
Serial |
3885 |
|
|
|
|
|
Author |
Tao Wu; Kai Wang; Chuanming Tang; Jianlin Zhang |
|
|
Title |
Diffusion-based network for unsupervised landmark detection |
Type |
Journal Article |
|
Year |
2024 |
Publication |
Knowledge-Based Systems |
Abbreviated Journal |
|
|
|
Volume |
292 |
Issue |
|
Pages |
111627 |
|
|
Keywords |
|
|
|
Abstract |
Landmark detection is a fundamental task that aims to identify specific landmarks serving as representations of distinct object features within an image. However, current landmark detection algorithms often adopt complex architectures and are trained in a supervised manner on large datasets to achieve satisfactory performance; when faced with limited data, they tend to suffer a notable decline in accuracy. To address these drawbacks, we propose a novel diffusion-based network (DBN) for unsupervised landmark detection, which leverages the generative ability of diffusion models to detect landmark locations. In particular, we introduce a dual-branch encoder (DualE) for extracting visual features and predicting landmarks. Additionally, we lighten the decoder structure for faster inference, referring to it as LightD. By these means, we avoid relying on extensive data comparison and the need to design complex architectures, as in previous methods. Experiments on the CelebA, AFLW, 300W, and DeepFashion benchmarks show that DBN achieves state-of-the-art performance compared to existing methods. Furthermore, DBN remains robust even in limited-data cases. |
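The abstract does not spell out how the decoder output is turned into coordinates. Assuming a heatmap-style landmark head (a common choice, not something the paper confirms), a standard differentiable conversion is a soft-argmax over per-landmark heatmaps, sketched below purely as an illustration.

```python
# Soft-argmax: expected (x, y) under a softmax over each heatmap.
import torch

def soft_argmax(heatmaps):                      # heatmaps: (B, K, H, W)
    b, k, h, w = heatmaps.shape
    probs = heatmaps.flatten(2).softmax(dim=-1).view(b, k, h, w)
    ys = torch.linspace(0, h - 1, h)
    xs = torch.linspace(0, w - 1, w)
    x = (probs.sum(dim=2) * xs).sum(dim=-1)     # expectation over columns
    y = (probs.sum(dim=3) * ys).sum(dim=-1)     # expectation over rows
    return torch.stack([x, y], dim=-1)          # (B, K, 2) coordinates

if __name__ == "__main__":
    hm = torch.zeros(1, 3, 64, 64)
    hm[0, :, 20, 40] = 50.0                     # sharp peak at (x=40, y=20)
    print(soft_argmax(hm))                      # ~[[40., 20.]] per landmark
```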
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ WWT2024 |
Serial |
4024 |
|
|
|
|
|
Author |
Mikhail Mozerov; Joost Van de Weijer |
|
|
Title |
Accurate stereo matching by two step global optimization |
Type |
Journal Article |
|
Year |
2015 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
|
|
Volume |
24 |
Issue |
3 |
Pages |
1153-1163 |
|
|
Keywords |
|
|
|
Abstract |
In stereo matching, cost filtering methods and energy minimization algorithms are considered two different techniques. Due to their global extent, energy minimization methods obtain good stereo matching results. However, they tend to fail in occluded regions, in which cost filtering approaches obtain better results. In this paper, we combine both approaches with the aim of improving overall stereo matching results. We show that a global optimization with a fully connected model can be solved by cost filtering methods. Based on this observation, we propose to perform stereo matching as a two-step energy minimization algorithm. We consider two MRF models: a fully connected model defined on the complete set of pixels in an image and a conventional locally connected model. We solve the energy minimization problem for the fully connected model, after which the marginal function of the solution is used as the unary potential in the locally connected MRF model. Experiments on the Middlebury stereo datasets show that the proposed method achieves state-of-the-art results. |
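For reference, the cost-filtering side of the contrast drawn above can be illustrated with a minimal baseline: absolute-difference matching costs, box-filter aggregation per disparity, and a winner-take-all pass. This is only the filtering building block, not the paper's two-step MRF solver.

```python
# Minimal cost-volume filtering baseline (AD cost + box filter + WTA).
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1)^2 window via a summed-area table."""
    h, w = img.shape
    pad = np.pad(img, r + 1, mode="edge").cumsum(0).cumsum(1)
    s = (pad[2*r+1:-1, 2*r+1:-1] - pad[:h, 2*r+1:-1]
         - pad[2*r+1:-1, :w] + pad[:h, :w])
    return s / (2 * r + 1) ** 2

def stereo_wta(left, right, max_disp=16, r=4):
    h, w = left.shape
    volume = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        cost = np.abs(left[:, d:] - right[:, :w - d])  # AD matching cost
        volume[d, :, d:] = box_filter(cost, r)         # local aggregation
    return volume.argmin(axis=0)                       # winner-take-all

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    right = rng.random((60, 80))
    left = np.roll(right, 5, axis=1)    # synthetic pair, true disparity = 5
    disp = stereo_wta(left, right)
    print(np.median(disp[:, 10:]))      # ~5 away from the wrapped border
```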
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1057-7149 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; LAMP; 600.079; 600.078 |
Approved |
no |
|
|
Call Number |
Admin @ si @ MoW2015a |
Serial |
2568 |
|