Records |
Author |
David Castells; Vinh Ngo; Juan Borrego-Carazo; Marc Codina; Carles Sanchez; Debora Gil; Jordi Carrabina |
Title |
A Survey of FPGA-Based Vision Systems for Autonomous Cars |
Type |
Journal Article |
Year |
2022 |
Publication |
IEEE Access |
Abbreviated Journal |
ACCESS |
Volume |
10 |
Issue |
|
Pages |
132525-132563 |
Keywords |
Autonomous automobile; Computer vision; field programmable gate arrays; reconfigurable architectures |
Abstract |
On the road to making self-driving cars a reality, academic and industrial researchers are working hard to continue to increase safety while meeting technical and regulatory constraints. Understanding the surrounding environment is a fundamental task in self-driving cars. It requires combining complex computer vision algorithms. Although state-of-the-art algorithms achieve good accuracy, their implementations often require powerful computing platforms with high power consumption. In some cases, the processing speed does not meet real-time constraints. FPGA platforms are often used to implement a category of latency-critical algorithms that demand maximum performance and energy efficiency. Since self-driving car computer vision functions fall into this category, one could expect to see wide adoption of FPGAs in autonomous cars. In this paper, we survey the FPGA-based computer vision works from the literature targeting automotive applications over the last decade. Based on the survey, we identify the strengths and weaknesses of FPGAs in this domain, as well as future research opportunities and challenges. |
Address |
16 December 2022 |
Corporate Author |
|
Thesis |
|
Publisher |
IEEE |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM; 600.166 |
Approved |
no |
Call Number |
Admin @ si @ CNB2022 |
Serial |
3760 |
Permanent link to this record |
|
|
|
Author |
Saad Minhas; Zeba Khanam; Shoaib Ehsan; Klaus McDonald-Maier; Aura Hernandez-Sabate |
Title |
Weather Classification by Utilizing Synthetic Data |
Type |
Journal Article |
Year |
2022 |
Publication |
Sensors |
Abbreviated Journal |
SENS |
Volume |
22 |
Issue |
9 |
Pages |
3193 |
Keywords |
Weather classification; synthetic data; dataset; autonomous car; computer vision; advanced driver assistance systems; deep learning; intelligent transportation systems |
Abstract |
Weather prediction from real-world images is a complex task when targeting classification using neural networks. Moreover, the available datasets can contain a huge amount of variance between the locations and the weather conditions the images represent. In this article, the capabilities of a custom-built driver simulator are explored, specifically to simulate a wide range of weather conditions. Moreover, the performance of a new synthetic dataset generated by this simulator is also assessed. The results indicate that the use of synthetic datasets in conjunction with real-world datasets can increase the training efficiency of the CNNs by as much as 74%. The article paves the way forward to tackle the persistent problem of bias in vision-based datasets. |
Address |
21 April 2022 |
Corporate Author |
|
Thesis |
|
Publisher |
MDPI |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM; 600.139; 600.159; 600.166; 600.145 |
Approved |
no |
Call Number |
Admin @ si @ MKE2022 |
Serial |
3761 |
Permanent link to this record |
|
|
|
Author |
Eduardo Aguilar; Bhalaji Nagarajan; Beatriz Remeseiro; Petia Radeva |
Title |
Bayesian deep learning for semantic segmentation of food images |
Type |
Journal Article |
Year |
2022 |
Publication |
Computers and Electrical Engineering |
Abbreviated Journal |
CEE |
Volume |
103 |
Issue |
|
Pages |
108380 |
Keywords |
Deep learning; Uncertainty quantification; Bayesian inference; Image segmentation; Food analysis |
Abstract |
Deep learning has provided promising results in various applications; however, algorithms tend to be overconfident in their predictions, even though they may be entirely wrong. Particularly for critical applications, the model should provide answers only when it is very sure of them. This article presents a Bayesian version of two different state-of-the-art semantic segmentation methods to perform multi-class segmentation of foods and estimate the uncertainty about the given predictions. The proposed methods were evaluated on three public pixel-annotated food datasets. As a result, we can conclude that Bayesian methods improve the performance achieved by the baseline architectures and, in addition, provide information to improve decision-making. Furthermore, based on the extracted uncertainty map, we propose three measures to rank the images according to the degree of noisy annotations they contain. Note that the top 135 images ranked by one of these measures include more than half of the worst-labeled food images. |
Address |
October 2022 |
Corporate Author |
|
Thesis |
|
Publisher |
Science Direct |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB |
Approved |
no |
Call Number |
Admin @ si @ ANR2022 |
Serial |
3763 |
Permanent link to this record |
|
|
|
Author |
Zhen Xu; Sergio Escalera; Adrien Pavao; Magali Richard; Wei-Wei Tu; Quanming Yao; Huan Zhao; Isabelle Guyon |
Title |
Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform |
Type |
Journal Article |
Year |
2022 |
Publication |
Patterns |
Abbreviated Journal |
PATTERNS |
Volume |
3 |
Issue |
7 |
Pages |
100543 |
Keywords |
Machine learning; data science; benchmark platform; reproducibility; competitions |
Abstract |
Obtaining a standardized benchmark of computational methods is a major issue in data-science communities. Dedicated frameworks enabling fair benchmarking in a unified environment are yet to be developed. Here, we introduce Codabench, an open-sourced and community-driven meta-benchmark platform for benchmarking algorithms or software agents versus datasets or tasks. A public instance of Codabench is open to everyone free of charge and allows benchmark organizers to fairly compare submissions under the same setting (software, hardware, data, algorithms), with custom protocols and data formats. Codabench has unique features facilitating easy organization of flexible and reproducible benchmarks, such as the possibility of reusing templates of benchmarks and supplying compute resources on demand. Codabench has been used internally and externally on various applications, attracting more than 130 users and receiving 2,500 submissions. As illustrative use cases, we introduce four diverse benchmarks covering graph machine learning, cancer heterogeneity, clinical diagnosis, and reinforcement learning. |
Address |
June 24, 2022 |
Corporate Author |
|
Thesis |
|
Publisher |
Science Direct |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HuPBA |
Approved |
no |
Call Number |
Admin @ si @ XEP2022 |
Serial |
3764 |
Permanent link to this record |
|
|
|
Author |
Ajian Liu; Chenxu Zhao; Zitong Yu; Jun Wan; Anyang Su; Xing Liu; Zichang Tan; Sergio Escalera; Junliang Xing; Yanyan Liang; Guodong Guo; Zhen Lei; Stan Z. Li; Shenshen Du |
Title |
Contrastive Context-Aware Learning for 3D High-Fidelity Mask Face Presentation Attack Detection |
Type |
Journal Article |
Year |
2022 |
Publication |
IEEE Transactions on Information Forensics and Security |
Abbreviated Journal |
TIForensicSEC |
Volume |
17 |
Issue |
|
Pages |
2497-2507 |
Keywords |
|
Abstract |
Face presentation attack detection (PAD) is essential to secure face recognition systems, primarily from high-fidelity mask attacks. Most existing 3D mask PAD benchmarks suffer from several drawbacks: 1) a limited number of mask identities, types of sensors, and total number of videos; 2) low-fidelity quality of facial masks. Basic deep models and remote photoplethysmography (rPPG) methods achieve acceptable performance on these benchmarks but are still far from the needs of practical scenarios. To bridge the gap to real-world applications, we introduce a large-scale High-Fidelity Mask dataset, namely HiFiMask. Specifically, a total of 54,600 videos are recorded from 75 subjects with 225 realistic masks by 7 new kinds of sensors. Along with the dataset, we propose a novel Contrastive Context-aware Learning (CCL) framework. CCL is a new training methodology for supervised PAD tasks, which is able to learn by accurately leveraging rich contexts (e.g., subjects, mask material and lighting) among pairs of live faces and high-fidelity mask attacks. Extensive experimental evaluations on HiFiMask and three additional 3D mask datasets demonstrate the effectiveness of our method. The codes and dataset will be released soon. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
IEEE |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HuPBA |
Approved |
no |
Call Number |
Admin @ si @ LZY2022 |
Serial |
3778 |
Permanent link to this record |
|
|
|
Author |
Hugo Bertiche; Meysam Madadi; Sergio Escalera |
Title |
Neural Cloth Simulation |
Type |
Journal Article |
Year |
2022 |
Publication |
ACM Transactions on Graphics |
Abbreviated Journal |
ACMTGraph |
Volume |
41 |
Issue |
6 |
Pages |
1-14 |
Keywords |
|
Abstract |
We present a general framework for the garment animation problem through unsupervised deep learning inspired by physically based simulation. Existing trends in the literature already explore this possibility. Nonetheless, these approaches do not handle cloth dynamics. Here, we propose the first methodology able to learn realistic cloth dynamics in an unsupervised manner, and hence, a general formulation for neural cloth simulation. The key to achieving this is to adapt an existing optimization scheme for motion from simulation-based methodologies to deep learning. Then, analyzing the nature of the problem, we devise an architecture able to automatically disentangle static and dynamic cloth subspaces by design. We show how this improves model performance. Additionally, this opens the possibility of a novel motion augmentation technique that greatly improves generalization. Finally, we show it also allows controlling the level of motion in the predictions. This is a useful, never-before-seen tool for artists. We provide a detailed analysis of the problem to establish the bases of neural cloth simulation and guide future research into the specifics of this domain. |
Address |
Dec 2022 |
Corporate Author |
|
Thesis |
|
Publisher |
ACM |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
|
Approved |
no |
Call Number |
Admin @ si @ BME2022b |
Serial |
3779 |
Permanent link to this record |
|
|
|
Author |
Joakim Bruslund Haurum; Meysam Madadi; Sergio Escalera; Thomas B. Moeslund |
Title |
Multi-scale hybrid vision transformer and Sinkhorn tokenizer for sewer defect classification |
Type |
Journal Article |
Year |
2022 |
Publication |
Automation in Construction |
Abbreviated Journal |
AC |
Volume |
144 |
Issue |
|
Pages |
104614 |
Keywords |
Sewer Defect Classification; Vision Transformers; Sinkhorn-Knopp; Convolutional Neural Networks; Closed-Circuit Television; Sewer Inspection |
Abstract |
A crucial part of image classification consists of capturing non-local spatial semantics of image content. This paper describes the multi-scale hybrid vision transformer (MSHViT), an extension of the classical convolutional neural network (CNN) backbone, for multi-label sewer defect classification. To better model spatial semantics in the images, features are aggregated at different scales non-locally through the use of a lightweight vision transformer, and a smaller set of tokens is produced through a novel Sinkhorn clustering-based tokenizer using distinct cluster centers. The proposed MSHViT and Sinkhorn tokenizer were evaluated on the Sewer-ML multi-label sewer defect classification dataset, showing consistent performance improvements of up to 2.53 percentage points. |
Address |
Dec 2022 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HuPBA |
Approved |
no |
Call Number |
Admin @ si @ BME2022c |
Serial |
3780 |
Permanent link to this record |
|
|
|
Author |
Swathikiran Sudhakaran; Sergio Escalera; Oswald Lanz |
Title |
Gate-Shift-Fuse for Video Action Recognition |
Type |
Journal Article |
Year |
2023 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
Volume |
45 |
Issue |
9 |
Pages |
10913-10928 |
Keywords |
Action Recognition; Video Classification; Spatial Gating; Channel Fusion |
Abstract |
Convolutional Neural Networks are the de facto models for image recognition. However, 3D CNNs, the straightforward extension of 2D CNNs to video recognition, have not achieved the same success on standard action recognition benchmarks. One of the main reasons for this reduced performance of 3D CNNs is their increased computational complexity, which requires large-scale annotated datasets to train them at scale. 3D kernel factorization approaches have been proposed to reduce the complexity of 3D CNNs, but existing kernel factorization approaches follow hand-designed and hard-wired techniques. In this paper we propose Gate-Shift-Fuse (GSF), a novel spatio-temporal feature extraction module which controls interactions in spatio-temporal decomposition and learns to adaptively route features through time and combine them in a data-dependent manner. GSF leverages grouped spatial gating to decompose the input tensor and channel weighting to fuse the decomposed tensors. GSF can be inserted into existing 2D CNNs to convert them into efficient and high-performing spatio-temporal feature extractors, with negligible parameter and compute overhead. We perform an extensive analysis of GSF using two popular 2D CNN families and achieve state-of-the-art or competitive performance on five standard action recognition benchmarks. |
Address |
1 Sept. 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HUPBA; not mentioned |
Approved |
no |
Call Number |
Admin @ si @ SEL2023 |
Serial |
3814 |
Permanent link to this record |
|
|
|
Author |
Guillermo Torres; Debora Gil; Antoni Rosell; S. Mena; Carles Sanchez |
Title |
Virtual Radiomics Biopsy for the Histological Diagnosis of Pulmonary Nodules – Intermediate Results of the RadioLung Project |
Type |
Journal Article |
Year |
2023 |
Publication |
International Journal of Computer Assisted Radiology and Surgery |
Abbreviated Journal |
IJCARS |
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM |
Approved |
no |
Call Number |
Admin @ si @ TGM2023 |
Serial |
3830 |
Permanent link to this record |
|
|
|
Author |
Kunal Biswas; Palaiahnakote Shivakumara; Umapada Pal; Tong Lu; Michel Blumenstein; Josep Llados |
Title |
Classification of aesthetic natural scene images using statistical and semantic features |
Type |
Journal Article |
Year |
2023 |
Publication |
Multimedia Tools and Applications |
Abbreviated Journal |
MTAP |
Volume |
82 |
Issue |
9 |
Pages |
13507-13532 |
Keywords |
|
Abstract |
Aesthetic image analysis is essential for improving the performance of multimedia image retrieval systems, especially from a repository of social media and multimedia content stored on mobile devices. This paper presents a novel method for classifying aesthetic natural scene images by studying the naturalness of image content using statistical features, and by reading text in the images using semantic features. Unlike existing methods that focus only on image quality with human information, the proposed approach focuses on image features as well as text-based semantic features, without human intervention, to reduce the gap between subjectivity and objectivity in the classification. The aesthetic classes considered in this work are (i) Very Pleasant, (ii) Pleasant, (iii) Normal and (iv) Unpleasant. The naturalness is represented by features of focus, defocus, perceived brightness, perceived contrast, blurriness and noisiness, while semantics are represented by text recognition, descriptions of the images, labels of images, profile pictures, and banner images. Furthermore, we propose a deep learning model that fuses statistical and semantic features in a novel way for the classification of aesthetic natural scene images. Experiments on our own dataset and the standard datasets demonstrate that the proposed approach achieves 92.74%, 88.67% and 83.22% average classification rates on our own dataset, the AVA dataset and the CUHKPQ dataset, respectively. Furthermore, a comparative study of the proposed model with the existing methods shows that the proposed method is effective for the classification of aesthetic social media images. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ BSP2023 |
Serial |
3873 |
Permanent link to this record |
|
|
|
Author |
Javier Selva; Anders S. Johansen; Sergio Escalera; Kamal Nasrollahi; Thomas B. Moeslund; Albert Clapes |
Title |
Video transformers: A survey |
Type |
Journal Article |
Year |
2023 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
Volume |
45 |
Issue |
11 |
Pages |
12922-12943 |
Keywords |
Artificial Intelligence; Computer Vision; Self-Attention; Transformers; Video Representations |
Abstract |
Transformer models have shown great success handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated when dealing with the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we delve into how videos are handled at the input level first. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with less computational complexity. |
Address |
1 Nov. 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HUPBA; not mentioned |
Approved |
no |
Call Number |
Admin @ si @ SJE2023 |
Serial |
3823 |
Permanent link to this record |
|
|
|
Author |
Ruben Tito; Dimosthenis Karatzas; Ernest Valveny |
Title |
Hierarchical multimodal transformers for Multi-Page DocVQA |
Type |
Journal Article |
Year |
2023 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
Volume |
144 |
Issue |
|
Pages |
109834 |
Keywords |
|
Abstract |
Document Visual Question Answering (DocVQA) refers to the task of answering questions from document images. Existing work on DocVQA only considers single-page documents. However, in real scenarios documents are mostly composed of multiple pages that should be processed altogether. In this work we extend DocVQA to the multi-page scenario. For that, we first create a new dataset, MP-DocVQA, where questions are posed over multi-page documents instead of single pages. Second, we propose a new hierarchical method, Hi-VT5, based on the T5 architecture, that overcomes the limitations of current methods to process long multi-page documents. The proposed method is based on a hierarchical transformer architecture where the encoder summarizes the most relevant information of every page and then, the decoder takes this summarized information to generate the final answer. Through extensive experimentation, we demonstrate that our method is able, in a single stage, to answer the questions and provide the page that contains the relevant information to find the answer, which can be used as a kind of explainability measure. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0031-3203 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG; 600.155; 600.121 |
Approved |
no |
Call Number |
Admin @ si @ TKV2023 |
Serial |
3825 |
Permanent link to this record |
|
|
|
Author |
Souhail Bakkali; Zuheng Ming; Mickael Coustaty; Marçal Rusiñol; Oriol Ramos Terrades |
Title |
VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification |
Type |
Journal Article |
Year |
2023 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
Volume |
139 |
Issue |
|
Pages |
109419 |
Keywords |
|
Abstract |
Multimodal learning from document data has achieved great success lately, as it allows to pre-train semantically meaningful features as a prior into a learnable downstream approach. In this paper, we approach the document classification problem by learning cross-modal representations through language and vision cues, considering intra- and inter-modality relationships. Instead of merging features from different modalities into a common representation space, the proposed method exploits high-level interactions and learns relevant semantic information from effective attention flows within and across modalities. The proposed learning objective is devised between intra- and inter-modality alignment tasks, where the similarity distribution per task is computed by contracting positive sample pairs while simultaneously contrasting negative ones in the common feature representation space. Extensive experiments on public document classification datasets demonstrate the effectiveness and the generalization capacity of our model on both low-scale and large-scale datasets. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0031-3203 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG; 600.140; 600.121 |
Approved |
no |
Call Number |
Admin @ si @ BMC2023 |
Serial |
3826 |
Permanent link to this record |
|
|
|
Author |
Juan Borrego-Carazo; Carles Sanchez; David Castells; Jordi Carrabina; Debora Gil |
Title |
A benchmark for the evaluation of computational methods for bronchoscopic navigation |
Type |
Journal Article |
Year |
2022 |
Publication |
International Journal of Computer Assisted Radiology and Surgery |
Abbreviated Journal |
IJCARS |
Volume |
17 |
Issue |
1 |
Pages |
|
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM |
Approved |
no |
Call Number |
Admin @ si @ BSC2022 |
Serial |
3832 |
Permanent link to this record |
|
|
|
Author |
Antoni Rosell; Sonia Baeza; S. Garcia-Reina; JL. Mate; Ignasi Guasch; I. Nogueira; I. Garcia-Olive; Guillermo Torres; Carles Sanchez; Debora Gil |
Title |
EP01.05-001 Radiomics to Increase the Effectiveness of Lung Cancer Screening Programs. Radiolung Preliminary Results |
Type |
Journal Article |
Year |
2022 |
Publication |
Journal of Thoracic Oncology |
Abbreviated Journal |
JTO |
Volume |
17 |
Issue |
9 |
Pages |
S182 |
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM |
Approved |
no |
Call Number |
Admin @ si @ RBG2022b |
Serial |
3834 |
Permanent link to this record |