|
Records |
Links |
|
Author |
Ilke Demir; Dena Bazazian; Adriana Romero; Viktoriia Sharmanska; Lyne P. Tchapmi |
|
|
Title |
WiCV 2018: The Fourth Women In Computer Vision Workshop |
Type |
Conference Article |
|
Year |
2018 |
Publication |
4th Women in Computer Vision Workshop |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1941-19412 |
|
|
Keywords |
Conferences; Computer vision; Industries; Object recognition; Engineering profession; Collaboration; Machine learning |
|
|
Abstract |
We present WiCV 2018 – the Women in Computer Vision Workshop, organized in conjunction with CVPR 2018 to increase the visibility and inclusion of women researchers in the computer vision field. Computer vision and machine learning have made incredible progress over the past years, yet the number of female researchers is still low both in academia and industry. WiCV is organized to raise the visibility of female researchers, to increase collaboration, and to provide mentorship and opportunities to female-identifying junior researchers in the field. In its fourth year, we are proud to present the changes and improvements over the past years and a summary of statistics for presenters and attendees, followed by expectations for future generations. |
|
|
Address |
Salt Lake City; USA; June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WiCV |
|
|
Notes |
DAG; 600.121; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DBR2018 |
Serial |
3222 |
|
Permanent link to this record |
|
|
|
|
Author |
Arnau Baro; Pau Riba; Alicia Fornes |
|
|
Title |
A Starting Point for Handwritten Music Recognition |
Type |
Conference Article |
|
Year |
2018 |
Publication |
1st International Workshop on Reading Music Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
5-6 |
|
|
Keywords |
Optical Music Recognition; Long Short-Term Memory; Convolutional Neural Networks; MUSCIMA++; CVCMUSCIMA |
|
|
Abstract |
In recent years, interest in Optical Music Recognition (OMR) has reawakened, especially since the advent of deep learning. However, very few works address handwritten scores. In this work we describe a full OMR pipeline for handwritten music scores using Convolutional and Recurrent Neural Networks that could serve as a baseline for the research community. |
|
|
Address |
Paris; France; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WORMS |
|
|
Notes |
DAG; 600.097; 601.302; 601.330; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ BRF2018 |
Serial |
3223 |
|
Permanent link to this record |
|
|
|
|
Author |
Laura Lopez-Fuentes; Alessandro Farasin; Harald Skinnemoen; Paolo Garza |
|
|
Title |
Deep Learning models for passability detection of flooded roads |
Type |
Conference Article |
|
Year |
2018 |
Publication |
MediaEval 2018 Multimedia Benchmark Workshop |
Abbreviated Journal |
|
|
|
Volume |
2283 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
In this paper we study and compare several approaches to detect floods and evidence for the passability of roads by conventional means on Twitter. We focus on tweets containing both visual information (a picture shared by the user) and metadata, a combination of text and related extra information intrinsic to the Twitter API. This work has been done in the context of the MediaEval 2018 Multimedia Satellite Task. |
|
|
Address |
Sophia Antipolis; France; October 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MediaEval |
|
|
Notes |
LAMP; 600.084; 600.109; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ LFS2018 |
Serial |
3224 |
|
Permanent link to this record |
|
|
|
|
Author |
Abel Gonzalez-Garcia; Davide Modolo; Vittorio Ferrari |
|
|
Title |
Objects as context for detecting their semantic parts |
Type |
Conference Article |
|
Year |
2018 |
Publication |
31st IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
6907 - 6916 |
|
|
Keywords |
Proposals; Semantics; Wheels; Automobiles; Context modeling; Task analysis; Object detection |
|
|
Abstract |
We present a semantic part detection approach that effectively leverages object information. We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to
detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare
to other part detection methods on both PASCAL-Part and CUB200-2011 datasets. |
|
|
Address |
Salt Lake City; USA; June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
LAMP; 600.109; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GMF2018 |
Serial |
3229 |
|
Permanent link to this record |
|
|
|
|
Author |
Gemma Rotger; Felipe Lumbreras; Francesc Moreno-Noguer; Antonio Agudo |
|
|
Title |
2D-to-3D Facial Expression Transfer |
Type |
Conference Article |
|
Year |
2018 |
Publication |
24th International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2008 - 2013 |
|
|
Keywords |
|
|
|
Abstract |
Automatically changing the expression and physical features of a face from an input image is a topic that has been traditionally tackled in a 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes his/her 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape –obtained from standard factorization approaches over the input video– using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
MSIAU; 600.086; 600.130; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RLM2018 |
Serial |
3232 |
|
Permanent link to this record |
|
|
|
|
Author |
Simone Balocco; Mauricio Gonzalez; Ricardo Ñancule; Petia Radeva; Gabriel Thomas |
|
|
Title |
Calcified Plaque Detection in IVUS Sequences: Preliminary Results Using Convolutional Nets |
Type |
Conference Article |
|
Year |
2018 |
Publication |
International Workshop on Artificial Intelligence and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
11047 |
Issue |
|
Pages |
34-42 |
|
|
Keywords |
Intravascular ultrasound images; Convolutional nets; Deep learning; Medical image analysis |
|
|
Abstract |
The manual inspection of intravascular ultrasound (IVUS) images to detect clinically relevant patterns is a difficult and laborious task performed routinely by physicians. In this paper, we present a framework based on convolutional nets for the quick selection of IVUS frames containing arterial calcification, a pattern whose detection plays a vital role in the diagnosis of atherosclerosis. Preliminary experiments on a dataset acquired from eighty patients show that convolutional architectures improve the detections of a shallow classifier in terms of F1-measure, precision and recall. |
|
|
Address |
Cuba; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IWAIPR |
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ BGÑ2018 |
Serial |
3237 |
|
Permanent link to this record |
|
|
|
|
Author |
Ozan Caglayan; Adrien Bardet; Fethi Bougares; Loic Barrault; Kai Wang; Marc Masana; Luis Herranz; Joost Van de Weijer |
|
|
Title |
LIUM-CVC Submissions for WMT18 Multimodal Translation Task |
Type |
Conference Article |
|
Year |
2018 |
Publication |
3rd Conference on Machine Translation |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This paper describes the multimodal Neural Machine Translation systems developed by LIUM and CVC for the WMT18 Shared Task on Multimodal Translation. This year we propose several modifications to our previous multimodal attention architecture in order to better integrate convolutional features and refine them using encoder-side information. Our final constrained submissions ranked first for English→French and second for English→German among the constrained submissions, according to the automatic evaluation metric METEOR. |
|
|
Address |
Brussels; Belgium; October 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WMT |
|
|
Notes |
LAMP; 600.106; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CBB2018 |
Serial |
3240 |
|
Permanent link to this record |
|
|
|
|
Author |
Lu Yu; Yongmei Cheng; Joost Van de Weijer |
|
|
Title |
Weakly Supervised Domain-Specific Color Naming Based on Attention |
Type |
Conference Article |
|
Year |
2018 |
Publication |
24th International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3019 - 3024 |
|
|
Keywords |
|
|
|
Abstract |
The majority of existing color naming methods focus on the eleven basic color terms of the English language. However, in many applications, different sets of color names are used for the accurate description of objects. Labeling data to learn these domain-specific color names is an expensive and laborious task. Therefore, in this article we aim to learn color names from weakly labeled data. For this purpose, we add an attention branch to the color naming network. The attention branch is used to modulate the pixel-wise color naming predictions of the network. In experiments, we illustrate that the attention branch correctly identifies the relevant regions. Furthermore, we show that our method obtains state-of-the-art results for pixel-wise and image-wise classification on the EBAY dataset and is able to learn color names for various domains. |
|
|
Address |
Beijing; August 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
LAMP; 600.109; 602.200; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ YCW2018 |
Serial |
3243 |
|
Permanent link to this record |
|
|
|
|
Author |
Ana Maria Ares; Jorge Bernal; Maria Jesus Nozal; F. Javier Sanchez; Jose Bernal |
|
|
Title |
Results of the use of Kahoot! gamification tool in a course of Chemistry |
Type |
Conference Article |
|
Year |
2018 |
Publication |
4th International Conference on Higher Education Advances |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1215-1222 |
|
|
Keywords |
|
|
|
Abstract |
The present study examines the use of Kahoot! as a gamification tool to explore mixed learning strategies. We analyze its use in two different groups of a theoretical subject of the third course of the Degree in Chemistry. An empirical-analytical methodology was followed, using Kahoot! in two different groups of students with different frequencies. The academic results of these two groups of students were compared with each other and with those obtained in the previous course, in which Kahoot! was not employed, with the aim of measuring the evolution in the students' knowledge. The results showed, in all cases, that the use of Kahoot! led to a significant increase in the overall marks and in the number of students who passed the subject. Moreover, some differences were also observed in students' academic performance according to the group. Finally, it can be concluded that the use of a gamification tool (Kahoot!) in a university classroom generally improved students' learning and marks, and that this improvement is more prevalent in those students who achieved a better Kahoot! performance. |
|
|
Address |
Valencia; June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
HEAD |
|
|
Notes |
MV; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ ABN2018 |
Serial |
3246 |
|
Permanent link to this record |
|
|
|
|
Author |
Chenshen Wu; Luis Herranz; Xialei Liu; Joost Van de Weijer; Bogdan Raducanu |
|
|
Title |
Memory Replay GANs: Learning to Generate New Categories without Forgetting |
Type |
Conference Article |
|
Year |
2018 |
Publication |
32nd Annual Conference on Neural Information Processing Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
5966-5976 |
|
|
Keywords |
|
|
|
Abstract |
Previous works on sequential learning address the problem of forgetting in discriminative models. In this paper we consider the case of generative models. In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion. We first show that sequential fine-tuning renders the network unable to properly generate images from previous categories (i.e., forgetting). Addressing this problem, we propose Memory Replay GANs (MeRGANs), a conditional GAN framework that integrates a memory replay generator. We study two methods to prevent forgetting by leveraging these replays, namely joint training with replay and replay alignment. Qualitative and quantitative experimental results on the MNIST, SVHN and LSUN datasets show that our memory replay approach can generate competitive images while significantly mitigating the forgetting of previous categories. |
|
|
Address |
Montreal; Canada; December 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
NIPS |
|
|
Notes |
LAMP; 600.106; 600.109; 602.200; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ WHL2018 |
Serial |
3249 |
|
Permanent link to this record |
|
|
|
|
Author |
Santi Puch; Irina Sanchez; Aura Hernandez-Sabate; Gemma Piella; Vesna Prckovska |
|
|
Title |
Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation |
Type |
Conference Article |
|
Year |
2018 |
Publication |
International MICCAI Brainlesion Workshop |
Abbreviated Journal |
|
|
|
Volume |
11384 |
Issue |
|
Pages |
393-405 |
|
|
Keywords |
Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution |
|
|
Abstract |
In this work, we introduce the Global Planar Convolution module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on these two architectures, ContextNet, that includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parametrized and to showcase slow rates of convergence. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. We finally participate in the 2018 edition of the BraTS challenge with our best performing models, which are based on ContextNet, and report the evaluation scores on the validation and the test sets of the challenge. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MICCAIW |
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PSH2018 |
Serial |
3251 |
|
Permanent link to this record |
|
|
|
|
Author |
Carles Sanchez; Miguel Viñas; Coen Antens; Agnes Borras; Debora Gil |
|
|
Title |
Back to Front Architecture for Diagnosis as a Service |
Type |
Conference Article |
|
Year |
2018 |
Publication |
20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
343-346 |
|
|
Keywords |
|
|
|
Abstract |
Software as a Service (SaaS) is a cloud computing model in which a provider hosts applications on a server that customers use via the internet. Since SaaS does not require installing applications on customers' own computers, it allows multiple users to use highly specialized software without extra expenses for hardware acquisition or licensing. A SaaS tailored for clinical needs would not only alleviate licensing costs but also facilitate easy access to new methods for diagnosis assistance. This paper presents a SaaS client-server architecture for Diagnosis as a Service (DaaS). The server is based on Docker technology in order to allow the execution of software implemented in different languages with the highest portability and scalability. The client is a content management system allowing the design of websites with multimedia content and interactive visualization of results that supports user editing. We explain a use case that uses our DaaS as a crowdsourcing platform in a multicentric pilot study carried out to evaluate the clinical benefits of software for the assessment of central airway obstruction. |
|
|
Address |
Timisoara; Rumania; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
SYNASC |
|
|
Notes |
IAM; 600.145 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SVA2018 |
Serial |
3360 |
|
Permanent link to this record |
|
|
|
|
Author |
Bojana Gajic; Ramon Baldrich |
|
|
Title |
Cross-domain fashion image retrieval |
Type |
Conference Article |
|
Year |
2018 |
Publication |
CVPR 2018 Workshop on Women in Computer Vision (WiCV 2018, 4th Edition) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
19500-19502 |
|
|
Keywords |
|
|
|
Abstract |
Cross-domain image retrieval is a challenging task that implies matching images from one domain to their pairs from another domain. In this paper we focus on fashion image retrieval, which involves matching an image of a fashion item taken by a user to images of the same item taken in controlled conditions, usually by a professional photographer. When facing this problem, we have different products at training and test time, and we use a triplet loss to train the network. We stress the importance of proper training of a simple architecture, as well as adapting general models to the specific task. |
|
|
Address |
Salt Lake City; USA; 22 June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPRW |
|
|
Notes |
CIC; 600.087 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3709 |
|
Permanent link to this record |
|
|
|
|
Author |
Patrick Brandao; O. Zisimopoulos; E. Mazomenos; G. Ciuti; Jorge Bernal; M. Visentini-Scarzanella; A. Menciassi; P. Dario; A. Koulaouzidis; A. Arezzo; D.J. Hawkes; D. Stoyanov |
|
|
Title |
Towards a computed-aided diagnosis system in colonoscopy: Automatic polyp segmentation using convolution neural networks |
Type |
Journal |
|
Year |
2018 |
Publication |
Journal of Medical Robotics Research |
Abbreviated Journal |
JMRR |
|
|
Volume |
3 |
Issue |
2 |
Pages |
|
|
|
Keywords |
convolutional neural networks; colonoscopy; computer aided diagnosis |
|
|
Abstract |
Early diagnosis is essential for the successful treatment of bowel cancers including colorectal cancer (CRC), and capsule endoscopic imaging with robotic actuation can be a valuable diagnostic tool when combined with automated image analysis. We present a deep learning rooted detection and segmentation framework for recognizing lesions in colonoscopy and capsule endoscopy images. We restructure established convolution architectures, such as VGG and ResNets, by converting them into fully-connected convolution networks (FCNs), fine-tune them and study their capabilities for polyp segmentation and detection. We additionally use Shape-from-Shading (SfS) to recover depth and provide a richer representation of the tissue's structure in colonoscopy images. Depth is incorporated into our network models as an additional input channel to the RGB information and we demonstrate that the resulting network yields improved performance. Our networks are tested on publicly available datasets and the most accurate segmentation model achieved a mean segmentation IU of 47.78% and 56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp detection, the top performing models we propose surpass the current state of the art with detection recalls superior to 90% for all datasets tested. To our knowledge, we present the first work to use FCNs for polyp segmentation, in addition to proposing a novel combination of SfS and RGB that boosts performance. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MV; not mentioned |
Approved |
no |
|
|
Call Number |
BZM2018 |
Serial |
2976 |
|
Permanent link to this record |
|
|
|
|
Author |
Esmitt Ramirez; Carles Sanchez; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil |
|
|
Title |
BronchoX: bronchoscopy exploration software for biopsy intervention planning |
Type |
Journal |
|
Year |
2018 |
Publication |
Healthcare Technology Letters |
Abbreviated Journal |
HTL |
|
|
Volume |
5 |
Issue |
5 |
Pages |
177–182 |
|
|
Keywords |
|
|
|
Abstract |
Virtual bronchoscopy (VB) is a non-invasive exploration tool for intervention planning and navigation of possible pulmonary lesions (PLs). VB software involves locating a PL and calculating a route, starting from the trachea, to reach it. The selection of VB software can be a complex process, and there is no consensus in the medical software developer community on which system or framework is best suited. The authors present Bronchoscopy Exploration (BronchoX), a VB software tool for planning biopsy interventions that generates physician-readable instructions to reach the PLs. The authors' solution is open source, multiplatform, and extensible for future functionalities, designed by their multidisciplinary research and development group. BronchoX combines different algorithms for segmentation, visualisation, and navigation of the respiratory tract. The reported results focus on testing the effectiveness of the proposal as exploration software and on measuring its accuracy as a guiding system to reach PLs. To this end, 40 different virtual planning paths were created to guide physicians to distal bronchioles. These results show that BronchoX is functional and demonstrate that, by following simple instructions, it is possible to reach distal lesions from the trachea. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; 600.096; 600.075; 601.323; 601.337; 600.145 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RSB2018a |
Serial |
3132 |
|
Permanent link to this record |