|
Records |
Links |
|
Author |
Fernando Vilariño; Panagiota Spyridonos; Jordi Vitria; Fernando Azpiroz; Petia Radeva |
|
|
Title |
Automatic Detection of Intestinal Juices in Wireless Capsule Video Endoscopy |
Type |
Conference Article |
|
Year |
2006 |
Publication |
18th International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
4 |
Issue |
|
Pages |
719-722 |
|
|
Keywords |
Clinical diagnosis; Endoscopes; Fluids and secretions; Gabor filters; Hospitals; Image sequence analysis; Intestines; Lighting; Shape; Visualization |
|
|
Abstract |
Wireless capsule video endoscopy is a novel and challenging clinical technique, whose major reported drawback is the large amount of time needed for video visualization. In this paper, we propose a method for rejecting the parts of the video that are not valid for analysis, by means of automatic detection of intestinal juices. We applied Gabor filters for the characterization of the bubble-like shape of intestinal juices in fasting patients. Our method achieves a significant reduction in visualization time, with no relevant loss of valid frames. The proposed approach is easily extensible to other image analysis scenarios where the described pattern of bubbles can be found. |
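The bubble-pattern characterization described in this abstract can be sketched with a small Gabor filter bank: filter each frame at several orientations and summarize the response energy. The kernel parameters, orientations, and the energy-thresholding decision rule below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lambd=6.0, gamma=0.5):
    """Real part of a Gabor kernel (parameters are illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lambd)

def gabor_energy(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean squared filter response over a small bank of orientations.
    A frame whose energy exceeds a tuned threshold would be flagged as
    covered by bubbles (hypothetical decision rule)."""
    energies = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        # valid-mode 2D correlation via sliding windows (small images only)
        windows = sliding_window_view(image, k.shape)
        resp = np.tensordot(windows, k, axes=([2, 3], [0, 1]))
        energies.append(np.mean(resp**2))
    return float(np.mean(energies))
```

Frames with bubble-like texture respond strongly across orientations, so their energy separates them from clear mucosa frames.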
|
|
Address |
Hong Kong |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1051-4651 |
ISBN |
0-7695-2521-0 |
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
ICPR |
|
|
Notes |
MV;OR;MILAB;SIAI |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ VSV2006b; IAM @ iam @ VSV2006g |
Serial |
727 |
|
Permanent link to this record |
|
|
|
|
Author |
Fernando Vilariño; Ludmila I. Kuncheva; Petia Radeva |
|
|
Title |
ROC curves and video analysis optimization in intestinal capsule endoscopy |
Type |
Journal Article |
|
Year |
2006 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
27 |
Issue |
8 |
Pages |
875–881 |
|
|
Keywords |
ROC curves; Classification; Classifiers ensemble; Detection of intestinal contractions; Imbalanced classes; Wireless capsule endoscopy |
|
|
Abstract |
Wireless capsule endoscopy involves inspection of hours of video material by a highly qualified professional. Time episodes corresponding to intestinal contractions, which are of interest to the physician, constitute about 1% of the video. The problem is to automatically label time episodes containing contractions so that only a fraction of the video needs inspection. As the classes of contraction and non-contraction images in the video are largely imbalanced, ROC curves are used to optimize the trade-off between false positive and false negative rates. Classifier ensemble methods and simple classifiers were examined. Our results reinforce the claims from recent literature that classifier ensemble methods specifically designed for imbalanced problems have substantial advantages over simple classifiers and standard classifier ensembles. By using ROC curves with the bagging ensemble method, the inspection time can be drastically reduced at the expense of a small fraction of missed contractions. |
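The operating-point selection this abstract describes can be sketched as walking the ROC curve and picking the most selective score threshold that still keeps the miss rate small; the 95% recall target below is a hypothetical choice, and the classifier producing the scores is left abstract.

```python
import numpy as np

def threshold_for_recall(scores, labels, min_tpr=0.95):
    """Most selective score threshold keeping the true positive rate
    (recall on contractions, label 1) at or above min_tpr.
    Frames scoring below the returned value would be skipped."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    order = np.argsort(-scores)                      # descending by score
    s, l = scores[order], labels[order]
    tpr = np.cumsum(l == 1) / max((l == 1).sum(), 1)  # recall after each cut
    return float(s[np.argmax(tpr >= min_tpr)])
```

Lowering `min_tpr` trades a few missed contractions for a smaller fraction of video shown to the physician, which is the trade-off the paper optimizes.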
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
|
|
|
Notes |
MILAB;MV;SIAI |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ VKR2006; IAM @ iam @ VKR2006 |
Serial |
647 |
|
Permanent link to this record |
|
|
|
|
Author |
Michal Drozdzal; Santiago Segui; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria |
|
|
Title |
Motility bar: a new tool for motility analysis of endoluminal videos |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Computers in Biology and Medicine |
Abbreviated Journal |
CBM |
|
|
Volume |
65 |
Issue |
|
Pages |
320-330 |
|
|
Keywords |
Small intestine; Motility; WCE; Computer vision; Image classification |
|
|
Abstract |
Wireless Capsule Endoscopy (WCE) provides a new perspective of the small intestine, since it enables, for the first time, visualization of the entire organ. However, the long visual video analysis time, due to the large amount of data in a single WCE study, was an important factor impeding the widespread use of the capsule as a tool for detecting intestinal abnormalities. Therefore, the introduction of WCE triggered a new field for the application of computational methods, and in particular, of computer vision. In this paper, we follow the computational approach and come up with a new perspective on the small intestine motility problem. Our approach consists of three steps: first, we review a tool for the visualization of the motility information contained in WCE video; second, we propose algorithms for the characterization of two motility building blocks: contraction detection and lumen size estimation; finally, we introduce an approach to detect segments of stable motility behavior. Our claims are supported by an evaluation performed with 10 WCE videos, suggesting that our methods ably capture the intestinal motility information. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ DSR2015 |
Serial |
2635 |
|
Permanent link to this record |
|
|
|
|
Author |
Michal Drozdzal |
|
|
Title |
Sequential image analysis for computer-aided wireless endoscopy |
Type |
Book Whole |
|
Year |
2014 |
Publication |
PhD Thesis, Universitat de Barcelona-CVC |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Wireless Capsule Endoscopy (WCE) is a technique for inner-visualization of the entire small intestine and, thus, offers an interesting perspective on intestinal motility. The two major drawbacks of this technique are: 1) the huge amount of data acquired by WCE makes the motility analysis tedious and 2) since the capsule is the first tool that offers complete inner-visualization of the small intestine, the exact importance of the observed events is still an open issue. Therefore, in this thesis, a novel computer-aided system for intestinal motility analysis is presented. The goal of the system is to provide an easily comprehensible visual description of motility-related intestinal events to a physician. In order to do so, several tools based either on computer vision concepts or on machine learning techniques are presented. A method for transforming the 3D video signal into a holistic image of intestinal motility, called the motility bar, is proposed. The method calculates the optimal mapping from video into image from the intestinal motility point of view.
To characterize intestinal motility, methods for automatic extraction of motility information from WCE are presented. Two of them are based on the motility bar and two of them are based on frame-by-frame analysis. In particular, four algorithms dealing with the problems of intestinal contraction detection, lumen size estimation, intestinal content characterization and wrinkle frame detection are proposed and validated. The results of the algorithms are converted into sequential features using an online statistical test. This test is designed to work with multivariate data streams. To this end, we propose a novel formulation of a concentration inequality that is introduced into a robust adaptive windowing algorithm for multivariate data streams. The algorithm is used to obtain a robust representation of segments with constant intestinal motility activity. The obtained sequential features are shown to be discriminative in the problem of abnormal motility characterization.
Finally, we tackle the problem of efficient labeling. To this end, we incorporate active learning concepts into the problems present in WCE data and propose two approaches. The first one is based on the concepts of sequential learning and the second one adapts partition-based active learning to an error-free labeling scheme. All these steps are sufficient to provide an extensive visual description of intestinal motility that can be used by an expert as a decision support system. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
Ph.D. thesis |
|
|
Publisher |
Ediciones Graficas Rey |
Place of Publication |
|
Editor |
Petia Radeva |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-84-940902-3-3 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ Dro2014 |
Serial |
2486 |
|
Permanent link to this record |
|
|
|
|
Author |
Santiago Segui; Michal Drozdzal; Fernando Vilariño; Carolina Malagelada; Fernando Azpiroz; Petia Radeva; Jordi Vitria |
|
|
Title |
Categorization and Segmentation of Intestinal Content Frames for Wireless Capsule Endoscopy |
Type |
Journal Article |
|
Year |
2012 |
Publication |
IEEE Transactions on Information Technology in Biomedicine |
Abbreviated Journal |
TITB |
|
|
Volume |
16 |
Issue |
6 |
Pages |
1341-1352 |
|
|
Keywords |
|
|
|
Abstract |
Wireless capsule endoscopy (WCE) is a device that allows the direct visualization of the gastrointestinal tract with minimal discomfort for the patient, but at the price of a large amount of screening time. In order to reduce this time, several works have proposed to automatically remove all the frames showing intestinal content. These methods label frames as {intestinal content – clear} without discriminating between types of content (with different physiological meaning) or the portion of the image covered. In addition, since the presence of intestinal content has been identified as an indicator of intestinal motility, its accurate quantification has potential clinical relevance. In this paper, we present a method for the robust detection and segmentation of intestinal content in WCE images, together with its further discrimination between turbid liquid and bubbles. Our proposal is based on a twofold system. First, frames presenting intestinal content are detected by a support vector machine classifier using color and textural information. Second, intestinal content frames are segmented into {turbid, bubbles, and clear} regions. We show a detailed validation using a large dataset. Our system outperforms previous methods and, for the first time, discriminates turbid from bubble media. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1089-7771 |
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; MV; OR;SIAI |
Approved |
no |
|
|
Call Number |
Admin @ si @ SDV2012 |
Serial |
2124 |
|
Permanent link to this record |
|
|
|
|
Author |
Frederic Sampedro; Sergio Escalera; Anna Domenech; Ignasi Carrio |
|
|
Title |
Automatic Tumor Volume Segmentation in Whole-Body PET/CT Scans: A Supervised Learning Approach |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Journal of Medical Imaging and Health Informatics |
Abbreviated Journal |
JMIHI |
|
|
Volume |
5 |
Issue |
2 |
Pages |
192-201 |
|
|
Keywords |
CONTEXTUAL CLASSIFICATION; PET/CT; SUPERVISED LEARNING; TUMOR SEGMENTATION; WHOLE BODY |
|
|
Abstract |
Whole-body 3D PET/CT tumoral volume segmentation provides relevant diagnostic and prognostic information in clinical oncology and nuclear medicine. Carrying out this procedure manually by a medical expert is time consuming and suffers from inter- and intra-observer variability. In this paper, a completely automatic approach to this task is presented. First, the problem is stated and described both in clinical and technological terms. Then, a novel supervised learning segmentation framework is introduced. The segmentation-by-learning approach is defined within a cascade of AdaBoost classifiers and a 3D contextual proposal of Multiscale Stacked Sequential Learning. Segmentation accuracy results on 200 breast cancer whole-body PET/CT volumes show a mean 49% sensitivity, 99.993% specificity and 39% Jaccard overlap index, which represents good performance at both the clinical and technological levels. |
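The three figures this abstract reports (sensitivity, specificity, Jaccard overlap index) follow directly from the voxel-level confusion counts between a predicted and a ground-truth binary volume; a minimal sketch:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Sensitivity, specificity and Jaccard index for binary volumes,
    computed from true/false positive and negative voxel counts."""
    pred = np.asarray(pred, bool)
    gt = np.asarray(gt, bool)
    tp = int(np.sum(pred & gt))    # tumor voxels correctly segmented
    tn = int(np.sum(~pred & ~gt))  # background correctly left out
    fp = int(np.sum(pred & ~gt))   # background wrongly segmented
    fn = int(np.sum(~pred & gt))   # tumor voxels missed
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    jaccard = tp / max(tp + fp + fn, 1)
    return sensitivity, specificity, jaccard
```

Because tumor voxels are a tiny fraction of a whole-body volume, specificity is near 1 almost by construction, which is why the Jaccard index is the more informative overlap figure.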
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SED2015 |
Serial |
2584 |
|
Permanent link to this record |
|
|
|
|
Author |
Mariella Dimiccoli; Marc Bolaños; Estefania Talavera; Maedeh Aghaei; Stavri G. Nikolov; Petia Radeva |
|
|
Title |
SR-Clustering: Semantic Regularized Clustering for Egocentric Photo Streams Segmentation |
Type |
Journal Article |
|
Year |
2017 |
Publication |
Computer Vision and Image Understanding |
Abbreviated Journal |
CVIU |
|
|
Volume |
155 |
Issue |
|
Pages |
55-69 |
|
|
Keywords |
|
|
|
Abstract |
While wearable cameras are becoming increasingly popular, locating relevant information in large unstructured collections of egocentric images is still a tedious and time-consuming process. This paper addresses the problem of organizing egocentric photo streams acquired by a wearable camera into semantically meaningful segments. First, contextual and semantic information is extracted for each image by employing a Convolutional Neural Network approach. Later, by integrating language processing, a vocabulary of concepts is defined in a semantic space. Finally, by exploiting the temporal coherence in photo streams, images which share contextual and semantic attributes are grouped together. The resulting temporal segmentation is particularly suited for further analysis, ranging from activity and event recognition to semantic indexing and summarization. Experiments over egocentric sets of nearly 17,000 images show that the proposed approach outperforms state-of-the-art methods. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; 601.235 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DBT2017 |
Serial |
2714 |
|
Permanent link to this record |
|
|
|
|
Author |
Galadrielle Humblot-Renaux; Sergio Escalera; Thomas B. Moeslund |
|
|
Title |
Beyond AUROC & co. for evaluating out-of-distribution detection performance |
Type |
Conference Article |
|
Year |
2023 |
Publication |
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3880-3889 |
|
|
Keywords |
|
|
|
Abstract |
While there has been a growing research interest in developing out-of-distribution (OOD) detection methods, there has been comparably little discussion around how these methods should be evaluated. Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs. In this work, we take a closer look at the go-to metrics for evaluating OOD detection, and question the approach of exclusively reducing OOD detection to a binary classification task with little consideration for the detection threshold. We illustrate the limitations of current metrics (AUROC & its friends) and propose a new metric – Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples. Scripts and data are available at https://github.com/glhr/beyond-auroc |
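The idea behind the proposed AUTC can be illustrated by integrating the error rates over the whole threshold range rather than evaluating a single operating point. The score convention (higher = more OOD) and the normalization below are simplifying assumptions; the paper gives the exact definition.

```python
import numpy as np

def autc_sketch(id_scores, ood_scores, n=201):
    """Illustrative area-under-the-threshold-curve: the mean of FPR(t)
    and FNR(t), integrated over the score range. Near 0 for perfectly
    separated ID/OOD scores, ~0.5 for fully overlapping ones.
    Simplified, not the paper's exact formula."""
    id_s = np.asarray(id_scores, float)
    ood_s = np.asarray(ood_scores, float)
    lo = min(id_s.min(), ood_s.min())
    hi = max(id_s.max(), ood_s.max())
    ts = np.linspace(lo, hi, n)
    fpr = np.array([(id_s >= t).mean() for t in ts])  # ID flagged as OOD
    fnr = np.array([(ood_s < t).mean() for t in ts])  # OOD missed
    curve = (fpr + fnr) / 2
    dt = ts[1] - ts[0]
    area = float(np.sum((curve[:-1] + curve[1:]) / 2) * dt)  # trapezoid rule
    return area / (hi - lo)
```

Unlike AUROC, this quantity worsens when the ID and OOD score distributions sit close together even if their ranking is perfect, which is the separation property the paper argues matters in practice.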
|
|
Address |
Vancouver; Canada; June 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPRW |
|
|
Notes |
HUPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ HEM2023 |
Serial |
3918 |
|
Permanent link to this record |
|
|
|
|
Author |
Jaume Amores |
|
|
Title |
MILDE: multiple instance learning by discriminative embedding |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Knowledge and Information Systems |
Abbreviated Journal |
KAIS |
|
|
Volume |
42 |
Issue |
2 |
Pages |
381-407 |
|
|
Keywords |
Multi-instance learning; Codebook; Bag of words |
|
|
Abstract |
While the objective of the standard supervised learning problem is to classify feature vectors, in the multiple instance learning problem, the objective is to classify bags, where each bag contains multiple feature vectors. This represents a generalization of the standard problem, and this generalization becomes necessary in many real applications such as drug activity prediction, content-based image retrieval, and others. While the existing paradigms are based on learning the discriminant information either at the instance level or at the bag level, we propose to incorporate both levels of information. This is done by defining a discriminative embedding of the original space based on the responses of cluster-adapted instance classifiers. Results clearly show the advantage of the proposed method over the state of the art: we tested performance on a variety of well-known databases drawn from real problems, and we also included an analysis of performance using synthetically generated data. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer London |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0219-1377 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 601.042; 600.057; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ Amo2015 |
Serial |
2383 |
|
Permanent link to this record |
|
|
|
|
Author |
Daniel Sanchez; Meysam Madadi; Marc Oliu; Sergio Escalera |
|
|
Title |
Multi-task human analysis in still images: 2D/3D pose, depth map, and multi-part segmentation |
Type |
Conference Article |
|
Year |
2019 |
Publication |
14th IEEE International Conference on Automatic Face and Gesture Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
While many individual tasks in the domain of human analysis have recently received an accuracy boost from deep learning approaches, multi-task learning has mostly been ignored due to a lack of data. New synthetic datasets are being released, filling this gap with synthetically generated data. In this work, we analyze four related human analysis tasks in still images in a multi-task scenario by leveraging such datasets. Specifically, we study the correlation of 2D/3D pose estimation, body part segmentation and full-body depth estimation. These tasks are learned via the well-known Stacked Hourglass module such that each of the task-specific streams shares information with the others. The main goal is to analyze how training these four related tasks together can benefit each individual task for better generalization. Results on the newly released SURREAL dataset show that all four tasks benefit from the multi-task approach, but with different combinations of tasks: while combining all four tasks improves 2D pose estimation the most, 2D pose improves neither 3D pose nor full-body depth estimation. On the other hand, 2D part segmentation can benefit from 2D pose but not from 3D pose. In all cases, as expected, the maximum improvement is achieved on those human body parts that show more variability in terms of spatial distribution, appearance and shape, e.g. wrists and ankles. |
|
|
Address |
Lille; France; May 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
FG |
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ SMO2019 |
Serial |
3326 |
|
Permanent link to this record |
|
|
|
|
Author |
Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio |
|
|
Title |
FitNets: Hints for Thin Deep Nets |
Type |
Conference Article |
|
Year |
2015 |
Publication |
3rd International Conference on Learning Representations (ICLR 2015) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Computer Science ; Learning; Computer Science ;Neural and Evolutionary Computing |
|
|
Abstract |
While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network. |
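The hint mechanism the abstract describes can be sketched in a few lines: a learned regressor maps the thinner student layer to the teacher's wider hint layer, and an L2 loss on that mapping guides training. The layer sizes, learning rate, and the plain linear regressor here are illustrative assumptions (the paper uses a convolutional regressor).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: the student's hint layer is thinner than the teacher's.
batch, d_student, d_teacher = 8, 32, 64
student_h = rng.normal(size=(batch, d_student))   # student hidden activations
teacher_h = rng.normal(size=(batch, d_teacher))   # teacher hint activations

W = rng.normal(0.0, 0.1, (d_student, d_teacher))  # regressor parameters

def hint_loss(W):
    """Stage-1 FitNets objective (sketched): L2 distance between the
    regressed student features and the teacher's hint layer."""
    return float(np.mean((student_h @ W - teacher_h) ** 2))

# One gradient step on the regressor (the real method also updates the
# student layers below the hint at this stage):
grad_W = 2 * student_h.T @ (student_h @ W - teacher_h) / (batch * d_teacher)
W_new = W - 0.1 * grad_W
```

After this hint-based pretraining stage, the whole student is fine-tuned with knowledge distillation on the teacher's soft outputs.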
|
|
Address |
San Diego; CA; May 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICLR |
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ RBK2015 |
Serial |
2593 |
|
Permanent link to this record |
|
|
|
|
Author |
Alloy Das; Sanket Biswas; Umapada Pal; Josep Llados |
|
|
Title |
Diving into the Depths of Spotting Text in Multi-Domain Noisy Scenes |
Type |
Conference Article |
|
Year |
2024 |
Publication |
IEEE International Conference on Robotics and Automation |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
When used in a real-world noisy environment, the capacity to generalize to multiple domains is essential for any autonomous scene text spotting system. However, existing state-of-the-art methods employ pretraining and fine-tuning strategies on natural scene datasets, which do not exploit the feature interaction across other complex domains. In this work, we explore and investigate the problem of domain-agnostic scene text spotting, i.e., training a model on multi-domain source data such that it can directly generalize to target domains rather than being specialized for a specific domain or scenario. In this regard, we present to the community a text spotting validation benchmark called Under-Water Text (UWT) for noisy underwater scenes, to establish an important case study. Moreover, we also design an efficient super-resolution-based end-to-end transformer baseline called DA-TextSpotter, which achieves comparable or superior performance over existing text spotting architectures for both regular and arbitrary-shaped scene text spotting benchmarks in terms of both accuracy and model efficiency. The dataset, code and pre-trained models will be released upon acceptance. |
|
|
Address |
Yokohama; Japan; May 2024 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICRA |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ DBP2024 |
Serial |
3979 |
|
Permanent link to this record |
|
|
|
|
Author |
Manuel Carbonell; Joan Mas; Mauricio Villegas; Alicia Fornes; Josep Llados |
|
|
Title |
End-to-End Handwritten Text Detection and Transcription in Full Pages |
Type |
Conference Article |
|
Year |
2019 |
Publication |
2nd International Workshop on Machine Learning |
Abbreviated Journal |
|
|
|
Volume |
5 |
Issue |
|
Pages |
29-34 |
|
|
Keywords |
Handwritten Text Recognition; Layout Analysis; Text segmentation; Deep Neural Networks; Multi-task learning |
|
|
Abstract |
When transcribing handwritten document images, inaccuracies in the text segmentation step often cause errors in the subsequent transcription step. For this reason, some recent methods propose to perform the recognition at paragraph level. But still, errors in the segmentation of paragraphs can affect the transcription performance. In this work, we propose an end-to-end framework to transcribe full pages. Joint text detection and transcription removes the layout analysis requirement at test time. The experimental results show that our approach can achieve results comparable to models that assume segmented paragraphs, and suggest that joining the two tasks brings an improvement over doing the two tasks separately. |
|
|
Address |
Sydney; Australia; September 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICDAR WML |
|
|
Notes |
DAG; 600.140; 601.311; 600.140 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CMV2019 |
Serial |
3353 |
|
Permanent link to this record |
|
|
|
|
Author |
Marc Masana; Idoia Ruiz; Joan Serrat; Joost Van de Weijer; Antonio Lopez |
|
|
Title |
Metric Learning for Novelty and Anomaly Detection |
Type |
Conference Article |
|
Year |
2018 |
Publication |
29th British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
When neural networks process images which do not resemble the distribution seen during training, so-called out-of-distribution images, they often make wrong predictions, and do so too confidently. The capability to detect out-of-distribution images is therefore crucial for many real-world applications. We divide out-of-distribution detection into novelty detection (images of classes which are not in the training set but are related to those) and anomaly detection (images of classes which are unrelated to the training set). By related we mean they contain the same type of objects, like digits in MNIST and SVHN. Most existing work has focused on anomaly detection, and has addressed this problem considering networks trained with the cross-entropy loss. Differently from them, we propose to use metric learning, which avoids the drawback of the softmax layer (inherent to cross-entropy methods) of forcing the network to divide its prediction power over the learned classes. We perform extensive experiments and evaluate both novelty and anomaly detection, even in a relevant application such as traffic sign recognition, obtaining comparable or better results than previous works. |
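Detection in a metric-learned space as described here typically reduces to a distance-based score; the sketch below uses distance to the nearest class center as that score, which is a simplified stand-in for the paper's learned embedding and scoring.

```python
import numpy as np

def novelty_score(embedding, class_means):
    """Distance from an image embedding to the nearest learned class
    center; a large distance flags the image as novel or anomalous.
    (Simplified stand-in for the paper's metric-learned scoring.)"""
    class_means = np.asarray(class_means, float)
    embedding = np.asarray(embedding, float)
    dists = np.linalg.norm(class_means - embedding, axis=1)
    return float(dists.min())
```

A threshold on this score then separates in-distribution images (close to some class center) from novel or anomalous ones, without the softmax forcing probability mass onto known classes.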
|
|
Address |
Newcastle; uk; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
LAMP; ADAS; 601.305; 600.124; 600.106; 602.200; 600.120; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ MRS2018 |
Serial |
3156 |
|
Permanent link to this record |
|
|
|
|
Author |
Javier Vazquez; J. Kevin O'Regan; Maria Vanrell; Graham D. Finlayson |
|
|
Title |
A new spectrally sharpened basis to predict colour naming, unique hues, and hue cancellation |
Type |
Journal Article |
|
Year |
2012 |
Publication |
Journal of Vision |
Abbreviated Journal |
VSS |
|
|
Volume |
12 |
Issue |
6 (7) |
Pages |
1-14 |
|
|
Keywords |
|
|
|
Abstract |
When light is reflected off a surface, there is a linear relation between the three human photoreceptor responses to the incoming light and the three photoreceptor responses to the reflected light. Different colored surfaces have different linear relations. Recently, Philipona and O'Regan (2006) showed that when this relation is singular in a mathematical sense, then the surface is perceived as having a highly nameable color. Furthermore, white light reflected by that surface is perceived as corresponding precisely to one of the four psychophysically measured unique hues. However, Philipona and O'Regan's approach seems unrelated to classical psychophysical models of color constancy. In this paper we make this link. We begin by transforming cone sensors to spectrally sharpened counterparts. In sharp color space, illumination change can be modeled by simple von Kries type scalings of response values within each of the spectrally sharpened response channels. In this space, Philipona and O'Regan's linear relation is captured by a simple Land-type color designator defined by dividing reflected light by incident light. This link between Philipona and O'Regan's theory and Land's notion of color designator gives the model biological plausibility. We then show that Philipona and O'Regan's singular surfaces are surfaces which are very close to activating only one or only two of such newly defined spectrally sharpened sensors, instead of the usual three. Closeness to zero is quantified in a new simplified measure of singularity which is also shown to relate to the chromaticness of colors. As in Philipona and O'Regan's original work, our new theory accounts for a large variety of psychophysical color data. |
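The Land-type color designator this abstract defines (reflected light divided by incident light, per channel, in the sharpened basis) is easy to write down; the closeness-to-singularity measure below is an illustrative toy, not the paper's exact formulation.

```python
import numpy as np

def color_designator(reflected, incident):
    """Land-type color designator in a sharpened sensor basis:
    per-channel ratio of reflected to incident responses
    (a von Kries-style description of the surface)."""
    return np.asarray(reflected, float) / np.asarray(incident, float)

def singularity_measure(designator):
    """Toy closeness-to-singularity: ratio of the smallest to the largest
    channel of the designator; near zero when the surface effectively
    activates only one or two sharpened sensors. (Illustrative, not the
    paper's exact measure.)"""
    d = np.abs(np.asarray(designator, float))
    return float(d.min() / d.max())
```

Under the paper's account, surfaces whose designator is close to singular in this sense are the ones perceived as highly nameable colors.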
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ VOV2012 |
Serial |
1998 |
|
Permanent link to this record |