Records | |||||
---|---|---|---|---|---|
Author | Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Marçal Rusiñol; Francesc J. Ferri | ||||
Title | Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction | Type | Journal Article | ||
Year | 2018 | Publication | Journal of Mathematical Imaging and Vision | Abbreviated Journal | JMIV |
Volume | 60 | Issue | 4 | Pages | 512-524 |
Keywords | |||||
Abstract | This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel extension of the known Discriminative Common Vectors method with kernels. Our method combines the advantages of kernel methods, which model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed: a first one based on the kernel trick (KT), and a second one based on the nonlinear projection trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model. | ||||
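The nonlinear projection trick mentioned in the abstract replaces implicit kernel computations with an explicit finite-dimensional embedding obtained from the centered kernel matrix. The sketch below illustrates that general idea only; it is not the authors' KGDCV implementation, and the function names, RBF kernel choice, and `gamma` parameter are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then a Gaussian (RBF) kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def npt_embedding(X, gamma=1.0, tol=1e-10):
    """Nonlinear projection trick (illustrative sketch): turn kernel
    similarities into an explicit embedding via eigendecomposition of
    the centered kernel matrix, so linear methods can run on top."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)
    keep = w > tol                          # drop numerically null directions
    # Rows are images of the samples in the (finite-dimensional) kernel space
    return V[:, keep] * np.sqrt(w[keep])

X = np.random.RandomState(0).randn(20, 5)
Phi = npt_embedding(X)
# Inner products of the embedding reproduce the centered kernel matrix
```

Once the embedding `Phi` is explicit, any linear discriminant procedure can be applied to it directly, which is the efficiency argument behind the NPT variant.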
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; ADAS; 600.086; 600.130; 600.121; 600.118; 600.129 | Approved | no | ||
Call Number | Admin @ si @ DMH2018a | Serial | 3062 | ||
Permanent link to this record | |||||
Author | Katerine Diaz; Jesus Martinez del Rincon; Marçal Rusiñol; Aura Hernandez-Sabate | ||||
Title | Feature Extraction by Using Dual-Generalized Discriminative Common Vectors | Type | Journal Article | ||
Year | 2019 | Publication | Journal of Mathematical Imaging and Vision | Abbreviated Journal | JMIV |
Volume | 61 | Issue | 3 | Pages | 331-351 |
Keywords | Online feature extraction; Generalized discriminative common vectors; Dual learning; Incremental learning; Decremental learning | ||||
Abstract | In this paper, a dual online subspace-based learning method called dual-generalized discriminative common vectors (Dual-GDCV) is presented. The method extends incremental GDCV by simultaneously exploiting the concepts of incremental and decremental learning for supervised feature extraction and classification. Our methodology is able to update the feature representation space without recalculating the full projection or accessing the previously processed training data. It allows both adding information and removing unnecessary data from a knowledge base in an efficient way, while retaining the previously acquired knowledge. The proposed method has been theoretically proven and empirically validated on six standard face recognition and classification datasets, under two scenarios: (1) removing and adding samples of existing classes, and (2) removing and adding new classes to a classification problem. Results show a considerable computational gain without compromising the accuracy of the model, in comparison with both batch methodologies and other state-of-the-art adaptive methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; ADAS; 600.084; 600.118; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ DRR2019 | Serial | 3172 | ||
Permanent link to this record | |||||
Author | Daniel Rato; Miguel Oliveira; Vitor Santos; Manuel Gomes; Angel Sappa | ||||
Title | A sensor-to-pattern calibration framework for multi-modal industrial collaborative cells | Type | Journal Article | ||
Year | 2022 | Publication | Journal of Manufacturing Systems | Abbreviated Journal | JMANUFSYST |
Volume | 64 | Issue | Pages | 497-507 | |
Keywords | Calibration; Collaborative cell; Multi-modal; Multi-sensor | ||||
Abstract | Collaborative robotic industrial cells are workspaces where robots collaborate with human operators. In this context, safety is paramount, which requires a complete perception of the space in which the collaborative robot operates. To ensure this, collaborative cells are equipped with a large set of sensors of multiple modalities, covering the entire work volume. However, the fusion of information from all these sensors requires an accurate extrinsic calibration. The calibration of such complex systems is challenging, due to the number of sensors and modalities, and also due to the small overlapping fields of view between the sensors, which are positioned to capture different viewpoints of the cell. This paper proposes a sensor-to-pattern methodology that can calibrate a complex system such as a collaborative cell in a single optimization procedure. Our methodology can tackle RGB and depth cameras, as well as LiDARs. Results show that our methodology is able to accurately calibrate a collaborative cell containing three RGB cameras, a depth camera and three 3D LiDARs. | ||||
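The core building block of any sensor-to-pattern calibration is estimating one rigid transform that aligns points detected by a sensor with the known pattern geometry. The sketch below shows that single-sensor step via the standard Kabsch (SVD) alignment; the paper's framework optimizes all sensor transforms jointly in one procedure, which this minimal sketch does not attempt, and the function name is hypothetical.

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch alignment (sketch): least-squares rotation R and translation t
    such that R @ p_i + t ~= q_i. One sensor-to-pattern extrinsic only;
    the paper's single optimization couples many such transforms."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # enforce a proper rotation
    t = cQ - R @ cP
    return R, t
```

A joint calibration would stack residuals `R_s @ p + t_s - q` over every sensor `s` and minimize them together, so that the weakly overlapping fields of view constrain each other through the shared pattern.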
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Science Direct | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MSIAU; MACO | Approved | no | ||
Call Number | Admin @ si @ ROS2022 | Serial | 3750 | ||
Permanent link to this record | |||||
Author | Mariella Dimiccoli; Jean-Pascal Jacob; Lionel Moisan | ||||
Title | Particle detection and tracking in fluorescence time-lapse imaging: a contrario approach | Type | Journal Article | ||
Year | 2016 | Publication | Journal of Machine Vision and Applications | Abbreviated Journal | MVAP |
Volume | 27 | Issue | Pages | 511-527 | |
Keywords | particle detection; particle tracking; a-contrario approach; time-lapse fluorescence imaging | ||||
Abstract | In this work, we propose a probabilistic approach for the detection and tracking of particles in biological images. In the presence of very noisy and poor-quality data, particles and trajectories can be characterized by an a-contrario model, which estimates the probability of observing the structures of interest in random data. This approach, first introduced in the modeling of human visual perception and then successfully applied to many image processing tasks, leads to algorithms that require neither a previous learning stage nor tedious parameter tuning, and are very robust to noise. Comparative evaluations against a well-established baseline show that the proposed approach outperforms the state of the art. | ||||
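The a-contrario principle behind this line of work scores a candidate structure by its Number of False Alarms (NFA): the expected count of structures at least as good arising in pure noise, typically a number of tests times a binomial tail. The sketch below shows that generic computation, not the particle-specific model of the paper; the function names and the ε-meaningfulness threshold convention are assumptions drawn from the general a-contrario literature.

```python
from math import comb

def binomial_tail(n, k, p):
    # P[B(n, p) >= k]: probability of k or more successes in n noise trials
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def nfa(num_tests, n, k, p):
    """Number of False Alarms: expected number of structures at least as
    good as the observed one (k of n aligned observations, each with
    noise probability p) over num_tests candidate structures."""
    return num_tests * binomial_tail(n, k, p)

# Convention: a structure is epsilon-meaningful when NFA < epsilon (often 1),
# which gives a detection threshold with a direct false-alarm interpretation.
```

Controlling the NFA rather than a raw score is what removes the need for learning stages and per-sequence parameter tuning mentioned in the abstract.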
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; | Approved | no | ||
Call Number | Admin @ si @ DJM2016 | Serial | 2735 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Oriol Pujol; Petia Radeva | ||||
Title | Error-Correcting Output Codes Library | Type | Journal Article | ||
Year | 2010 | Publication | Journal of Machine Learning Research | Abbreviated Journal | JMLR |
Volume | 11 | Issue | Pages | 661-664 | |
Keywords | |||||
Abstract | In this paper, we present an open source Error-Correcting Output Codes (ECOC) library. The ECOC framework is a powerful tool to deal with multi-class categorization problems. This library contains both state-of-the-art coding designs (one-versus-one, one-versus-all, dense random, sparse random, DECOC, forest-ECOC, and ECOC-ONE) and decoding designs (Hamming, Euclidean, inverse Hamming, Laplacian, β-density, attenuated, loss-based, probabilistic kernel-based, and loss-weighted), with the parameters defined by the authors, as well as the option to include your own coding, decoding, and base classifier. | ||||
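The simplest coding/decoding pair from the designs listed above can be sketched in a few lines: a one-versus-all coding matrix and Hamming decoding over the binary classifiers' sign outputs. This is a minimal illustration of the ECOC idea, not the library's API; the function names are hypothetical.

```python
import numpy as np

def one_vs_all_matrix(n_classes):
    # Coding matrix M in {-1, +1}: one dichotomizer per class,
    # each trained to separate its class (+1) from the rest (-1)
    return 2 * np.eye(n_classes) - 1

def hamming_decode(M, predictions):
    """Assign the class whose codeword (row of M) is closest, in Hamming
    distance over {-1, +1} outputs, to the binary classifiers' predictions."""
    d = ((1 - predictions[None, :] * M) / 2).sum(axis=1)
    return int(np.argmin(d))

M = one_vs_all_matrix(4)
# Suppose the 4 binary classifiers output these signs for a test sample:
pred = np.array([-1, -1, 1, -1])
label = hamming_decode(M, pred)   # codeword of class 2 matches exactly
```

Richer designs (dense random, DECOC, ECOC-ONE) change only the matrix `M`; the decoding step stays structurally the same, which is why the library can mix and match them.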
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1532-4435 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | MILAB;HUPBA | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ EPR2010c | Serial | 1286 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Vassilis Athitsos; Isabelle Guyon | ||||
Title | Challenges in multimodal gesture recognition | Type | Journal Article | ||
Year | 2016 | Publication | Journal of Machine Learning Research | Abbreviated Journal | JMLR |
Volume | 17 | Issue | Pages | 1-54 | |
Keywords | Gesture Recognition; Time Series Analysis; Multimodal Data Analysis; Computer Vision; Pattern Recognition; Wearable sensors; Infrared Cameras; Kinect | ||||
Abstract | This paper surveys the state of the art in multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015. We began right at the start of the Kinect revolution, when inexpensive infrared cameras providing image depth recordings became available. We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision using multimodal data in this area of application. Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available for further research. We also review recent state-of-the-art work on gesture recognition based on a proposed taxonomy, discussing challenges and future lines of research. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | Zhuowen Tu | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ EAG2016 | Serial | 2764 | ||
Permanent link to this record | |||||
Author | Diego Velazquez; Pau Rodriguez; Josep M. Gonfaus; Xavier Roca; Jordi Gonzalez | ||||
Title | A Closer Look at Embedding Propagation for Manifold Smoothing | Type | Journal Article | ||
Year | 2022 | Publication | Journal of Machine Learning Research | Abbreviated Journal | JMLR |
Volume | 23 | Issue | 252 | Pages | 1-27 |
Keywords | Regularization; semi-supervised learning; self-supervised learning; adversarial robustness; few-shot classification | ||||
Abstract | Supervised training of neural networks requires a large amount of manually annotated data, and the resulting networks tend to be sensitive to out-of-distribution (OOD) data. Self- and semi-supervised training schemes reduce the amount of annotated data required during the training process. However, OOD generalization remains a major challenge for most methods. Strategies that promote smoother decision boundaries play an important role in out-of-distribution generalization. For example, embedding propagation (EP) for manifold smoothing has recently been shown to considerably improve the OOD performance for few-shot classification. EP achieves smoother class manifolds by building a graph from sample embeddings and propagating information through the nodes in an unsupervised manner. In this work, we extend the original EP paper, providing additional evidence and experiments showing that it attains smoother class embedding manifolds and improves results in settings beyond few-shot classification. Concretely, we show that EP improves the robustness of neural networks against multiple adversarial attacks as well as semi- and self-supervised learning performance. | ||||
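The graph-based smoothing described in the abstract can be illustrated with a label-propagation-style operator: build a similarity graph over the embeddings, symmetrically normalize it, and apply the closed-form propagator. This is a generic sketch of the idea, not the paper's EP implementation; `alpha`, `gamma`, and the function name are illustrative assumptions.

```python
import numpy as np

def embedding_propagation(E, alpha=0.5, gamma=1.0):
    """Smooth embeddings E (one row per sample) by propagating information
    over a similarity graph built from the embeddings themselves."""
    d2 = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)
    A = np.exp(-gamma * d2)                # Gaussian similarity graph
    np.fill_diagonal(A, 0.0)               # no self-loops
    dinv = 1.0 / A.sum(1)
    S = np.sqrt(dinv)[:, None] * A * np.sqrt(dinv)[None, :]  # sym. normalization
    P = np.linalg.inv(np.eye(len(E)) - alpha * S)            # propagator
    return P @ E                           # each row mixes in its graph neighbors

E = np.random.RandomState(1).randn(30, 8)
Z = embedding_propagation(E)
```

Because every output row is a weighted mixture of graph neighbors, class manifolds become smoother, which is the property the paper links to OOD and adversarial robustness.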
Address | 9/2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ VRG2022 | Serial | 3762 | ||
Permanent link to this record | |||||
Author | Adrien Pavao; Isabelle Guyon; Anne-Catherine Letournel; Dinh-Tuan Tran; Xavier Baro; Hugo Jair Escalante; Sergio Escalera; Tyler Thomas; Zhen Xu | ||||
Title | CodaLab Competitions: An Open Source Platform to Organize Scientific Challenges | Type | Journal Article | ||
Year | 2023 | Publication | Journal of Machine Learning Research | Abbreviated Journal | JMLR |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | CodaLab Competitions is an open source web platform designed to help data scientists and research teams crowd-source the resolution of machine learning problems through the organization of competitions, also called challenges or contests. CodaLab Competitions provides useful features such as multiple phases, results and code submissions, multi-score leaderboards, and jobs running inside Docker containers. The platform is very flexible and can handle large-scale experiments by allowing organizers to upload large datasets and provide their own CPU or GPU compute workers. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA | Approved | no | ||
Call Number | Admin @ si @ PGL2023 | Serial | 3973 | ||
Permanent link to this record | |||||
Author | Xavier Carrillo; E Fernandez-Nofrerias; Francesco Ciompi; O. Rodriguez-Leor; Petia Radeva; Neus Salvatella; Oriol Pujol; J. Mauri; A. Bayes | ||||
Title | Changes in Radial Artery Volume Assessed Using Intravascular Ultrasound: A Comparison of Two Vasodilator Regimens in Transradial Coronary Intervention | Type | Journal Article | ||
Year | 2011 | Publication | Journal of Invasive Cardiology | Abbreviated Journal | JOIC |
Volume | 23 | Issue | 10 | Pages | 401-404 |
Keywords | radial; vasodilator treatment; percutaneous coronary intervention; IVUS; volumetric IVUS analysis | ||||
Abstract | OBJECTIVES: This study used intravascular ultrasound (IVUS) to evaluate radial artery volume changes after intra-arterial administration of nitroglycerin and/or verapamil. BACKGROUND: Radial artery spasm, which is associated with radial artery size, is the main limitation of the transradial approach in percutaneous coronary interventions (PCI). METHODS: This prospective, randomized study compared the effect of two intra-arterial vasodilator regimens on radial artery volume: 0.2 mg of nitroglycerin plus 2.5 mg of verapamil (Group 1; n = 15) versus 2.5 mg of verapamil alone (Group 2; n = 15). Radial artery lumen volume was assessed using IVUS at two time points: at baseline (5 minutes after sheath insertion) and post-vasodilator (1 minute after drug administration). The luminal volume of the radial artery was computed using ECOC Random Fields (ECOC-RF), a technique used for automatic segmentation of luminal borders in longitudinal cut images from IVUS sequences. RESULTS: There was a significant increase in arterial lumen volume in both groups: from 451 ± 177 mm³ to 508 ± 192 mm³ (p = 0.001) in Group 1, and from 456 ± 188 mm³ to 509 ± 170 mm³ (p = 0.001) in Group 2. There were no significant differences between the groups in absolute volume increase (58 mm³ versus 53 mm³; p = 0.65) or in relative volume increase (14% versus 20%; p = 0.69). CONCLUSIONS: Administration of nitroglycerin plus verapamil or verapamil alone to the radial artery resulted in similar increases in arterial lumen volume according to ECOC-RF IVUS measurements. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | Admin @ si @ CFC2011 | Serial | 1797 | ||
Permanent link to this record | |||||
Author | Arnau Ramisa; Alex Goldhoorn; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras | ||||
Title | Combining Invariant Features and the ALV Homing Method for Autonomous Robot Navigation Based on Panoramas | Type | Journal Article | ||
Year | 2011 | Publication | Journal of Intelligent and Robotic Systems | Abbreviated Journal | JIRC |
Volume | 64 | Issue | 3-4 | Pages | 625-649 |
Keywords | |||||
Abstract | Biologically inspired homing methods, such as the Average Landmark Vector, are an interesting solution for local navigation due to their simplicity. However, they usually require modifying the environment by placing artificial landmarks in order to work reliably. In this paper we combine the Average Landmark Vector with invariant feature points automatically detected in panoramic images to overcome this limitation. The proposed approach was evaluated first in simulation and, as promising results were found, also on two datasets of panoramas from real-world environments. | ||||
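The Average Landmark Vector idea underlying the paper is compact enough to sketch: average the unit vectors pointing at the perceived landmarks, and the difference between the current and home ALVs approximates the homing direction. This is a generic textbook-style sketch (feature points here are abstracted to bearings); the function names are illustrative.

```python
import numpy as np

def alv(bearings):
    """Average Landmark Vector: mean of the unit vectors pointing toward
    each perceived landmark, with bearings given in radians."""
    return np.stack([np.cos(bearings), np.sin(bearings)]).mean(axis=1)

def home_vector(alv_current, alv_home):
    # The ALV difference approximates the direction back to the home position
    return alv_current - alv_home

# Landmarks spread symmetrically around the robot average out to ~zero:
v_home = alv(np.array([0.0, np.pi / 2, np.pi, -np.pi / 2]))
```

In the paper's setting, the bearings come from invariant feature points matched across panoramas rather than from artificial landmarks, which is exactly the limitation the combination removes.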
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Netherlands | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0921-0296 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | RV;ADAS | Approved | no | ||
Call Number | Admin @ si @ RGA2011 | Serial | 1728 | ||
Permanent link to this record | |||||
Author | Arnau Ramisa; David Aldavert; Shrihari Vasudevan; Ricardo Toledo; Ramon Lopez de Mantaras | ||||
Title | Evaluation of Three Vision Based Object Perception Methods for a Mobile Robot | Type | Journal Article | ||
Year | 2012 | Publication | Journal of Intelligent and Robotic Systems | Abbreviated Journal | JIRC |
Volume | 68 | Issue | 2 | Pages | 185-208 |
Keywords | |||||
Abstract | This paper addresses visual object perception applied to mobile robotics. Being able to perceive household objects in unstructured environments is a key capability for making robots suitable to perform complex tasks in home environments. However, finding a solution for this task is daunting: it requires the ability to handle the variability in image formation of a moving camera under tight time constraints. The paper brings to attention some of the issues with applying three state-of-the-art object recognition and detection methods in a mobile robotics scenario, and proposes methods to deal with windowing/segmentation. Thus, this work aims at evaluating the state of the art in object perception in an attempt to develop a lightweight solution for mobile robotics use/research in typical indoor settings. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Netherlands | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0921-0296 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ RAV2012 | Serial | 2150 | ||
Permanent link to this record | |||||
Author | Xavier Otazu; Maria Vanrell | ||||
Title | Perceptual representation of textured images | Type | Journal Article | ||
Year | 2005 | Publication | Journal of Imaging Science and Technology (IF: 0.522) | Abbreviated Journal | |
Volume | 49 | Issue | 3 | Pages | 262-271 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ OtV2005b | Serial | 542 | ||
Permanent link to this record | |||||
Author | C. Alejandro Parraga; Robert Benavente; Maria Vanrell; Ramon Baldrich | ||||
Title | Psychophysical measurements to model inter-colour regions of colour-naming space | Type | Journal Article | ||
Year | 2009 | Publication | Journal of Imaging Science and Technology | Abbreviated Journal | |
Volume | 53 | Issue | 3 | Pages | 031106 (8 pages) |
Keywords | image processing; Analysis | ||||
Abstract | JCR Impact Factor 2009: 0.391. In this paper, we present a fuzzy set of parametric functions which segment the CIE Lab space into eleven regions corresponding to the group of common universal categories present in all evolved languages, as identified by anthropologists and linguists. The set of functions is intended to model a color-name assignment task by humans and differs from other models in its emphasis on the inter-color boundary regions, which were explicitly measured by means of a psychophysics experiment. In our particular implementation, the CIE Lab space was segmented into eleven color categories using a Triple Sigmoid as the fuzzy-set basis, whose parameters are included in this paper. The model's parameters were adjusted according to the psychophysical results of a yes/no discrimination paradigm where observers had to choose (English) names for isoluminant colors belonging to regions in between neighboring categories. These colors were presented on a calibrated CRT monitor (14-bit × 3 precision). The experimental results show that inter-color boundary regions are much less defined than expected, and that color samples other than those near the most representative ones are needed to define the position and shape of the boundaries between categories. The extended set of model parameters is given as a table. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ PBV2009 | Serial | 1157 | ||
Permanent link to this record | |||||
Author | Javier Vazquez; C. Alejandro Parraga; Maria Vanrell; Ramon Baldrich | ||||
Title | Color Constancy Algorithms: Psychophysical Evaluation on a New Dataset | Type | Journal Article | ||
Year | 2009 | Publication | Journal of Imaging Science and Technology | Abbreviated Journal | |
Volume | 53 | Issue | 3 | Pages | 031105–9 |
Keywords | |||||
Abstract | The estimation of the illuminant of a scene from a digital image has been the goal of a large amount of research in computer vision. Color constancy algorithms have dealt with this problem by defining different heuristics to select a unique solution from within the feasible set. The performance of these algorithms has shown that there is still a long way to go to globally solve this problem as a preliminary step in computer vision. In general, performance evaluation has been done by comparing the angular error between the estimated chromaticity and the chromaticity of a canonical illuminant, which is highly dependent on the image dataset. Recently, some workers have used high-level constraints to estimate illuminants; in this case selection is based on increasing the performance on the subsequent steps of the systems. In this paper we propose a new performance measure, the perceptual angular error. It evaluates the performance of a color constancy algorithm according to the perceptual preferences of humans, or naturalness (instead of the actual optimal solution) and is independent of the visual task. We show the results of a new psychophysical experiment comparing solutions from three different color constancy algorithms. Our results show that in more than a half of the judgments the preferred solution is not the one closest to the optimal solution. Our experiments were performed on a new dataset of images acquired with a calibrated camera with an attached neutral grey sphere, which better copes with the illuminant variations of the scene. | ||||
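The baseline metric the paper's perceptual variant is contrasted with, the angular error between an estimated and a reference illuminant chromaticity, is a one-liner worth making explicit. The sketch below shows the standard recovery angular error; it is not the paper's perceptual angular error, whose definition depends on the psychophysical preference data.

```python
import numpy as np

def angular_error(est, ref):
    """Standard colour-constancy angular error (degrees) between an
    estimated illuminant and a reference (e.g. canonical) illuminant."""
    est = est / np.linalg.norm(est)
    ref = ref / np.linalg.norm(ref)
    cos = np.clip(est @ ref, -1.0, 1.0)   # guard against rounding past ±1
    return np.degrees(np.arccos(cos))

# A perfect estimate has zero angular error:
e = angular_error(np.array([1.0, 1.0, 1.0]), np.array([2.0, 2.0, 2.0]))
```

The paper's finding, that in over half the judgments the humanly preferred solution is not the one minimizing this quantity, is precisely an argument that this geometric error and perceptual preference can disagree.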
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ VPV2009a | Serial | 1171 | ||
Permanent link to this record | |||||
Author | David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville | ||||
Title | A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images | Type | Journal Article | ||
Year | 2017 | Publication | Journal of Healthcare Engineering | Abbreviated Journal | JHCE |
Volume | Issue | Pages | 2040-2295 | ||
Keywords | Colonoscopy images; Deep Learning; Semantic Segmentation | ||||
Abstract | Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss-rate and the inability to perform a visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCN). We perform a comparative study to show that FCNs significantly outperform, without any further post-processing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 | Approved | no | ||
Call Number | VBS2017b | Serial | 2940 | ||
Permanent link to this record |