Records
Author Susana Alvarez; Maria Vanrell
Title Texton theory revisited: a bag-of-words approach to combine textons Type Journal Article
Year 2012 Publication Pattern Recognition Abbreviated Journal PR
Volume 45 Issue 12 Pages 4312-4325
Keywords
Abstract The aim of this paper is to revisit an old theory of texture perception and update its computational implementation by extending it to colour. With this in mind we try to capture the optimality of perceptual systems. This is achieved in the proposed approach by sharing well-known early stages of the visual processes and extracting low-dimensional features that perfectly encode adequate properties for a large variety of textures without needing further learning stages. We propose several descriptors in a bag-of-words framework that are derived from different quantisation models onto the feature spaces. Our perceptual features are directly given by the shape and colour attributes of image blobs, which are the textons. In this way we avoid learning visual words and directly build the vocabularies on these low-dimensional texton spaces. The main differences between the proposed descriptors lie in how the co-occurrence of blob attributes is represented in the vocabularies. Our approach outperforms the current state of the art in colour texture description, as demonstrated in several experiments on large texture datasets.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0031-3203 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number (up) Admin @ si @ AlV2012a Serial 2130
Permanent link to this record
 

 
Author Ariel Amato
Title Moving cast shadow detection Type Journal Article
Year 2014 Publication Electronic letters on computer vision and image analysis Abbreviated Journal ELCVIA
Volume 13 Issue 2 Pages 70-71
Keywords
Abstract Motion perception is an amazing innate ability of the creatures on the planet. This adroitness entails a functional advantage that enables species to compete better in the wild. The motion perception ability is employed at different levels, from the simplest interaction with the ’physis’ up to the most transcendental survival tasks. Among the five classical perception systems, vision is the most widely used in the motion perception field. Millions of years of evolution have led to a highly specialized visual system in humans, characterized by tremendous accuracy as well as extraordinary robustness. Although humans and an immense diversity of species can distinguish moving objects with seeming simplicity, it has proven to be a difficult and nontrivial problem from a computational perspective. In the field of Computer Vision, the detection of moving objects is a challenging and fundamental research area. It can be regarded as the ’origin’ of vast and numerous vision-based research sub-areas. Nevertheless, from the bottom to the top of this hierarchical analysis, the foundations still rely on when and where motion has occurred in an image. Pixels corresponding to moving objects in image sequences can be identified by measuring changes in their values. However, a pixel’s value (representing a combination of color and brightness) can also vary due to other factors, such as variation in scene illumination, camera noise and nonlinear sensor responses, among others. The challenge lies in detecting whether the changes in pixel values are caused by genuine object movement or not. An additional challenging aspect in motion detection is represented by moving cast shadows. The paradox arises because a moving object and its cast shadow share similar motion patterns. However, a moving cast shadow is not a moving object.
In fact, a shadow is a photometric illumination effect caused by the relative position of the object with respect to the light sources. Shadow detection methods are mainly divided into two domains depending on the application field. The first normally deals with static images where shadows are cast by static objects, whereas the second is concerned with image sequences where shadows are cast by moving objects. In the first case, shadows can provide additional geometric and semantic cues about the shape and position of the casting object as well as the localization of the light source. Although this information can be extracted from static images as well as video sequences, the main focus in the second area is usually change detection, scene matching or surveillance. In this context, a shadow can severely affect the analysis and interpretation of the scene. The work in this thesis focuses on the second case: it addresses the problem of detecting and removing moving cast shadows in video sequences in order to enhance the detection of moving objects.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number (up) Admin @ si @ Ama2014 Serial 2870
Permanent link to this record
 

 
Author Ariel Amato; Mikhail Mozerov; Andrew Bagdanov; Jordi Gonzalez
Title Accurate Moving Cast Shadow Suppression Based on Local Color Constancy detection Type Journal Article
Year 2011 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 20 Issue 10 Pages 2954 - 2966
Keywords
Abstract This paper describes a novel framework for the detection and suppression of properly shadowed regions for most scenarios occurring in real video sequences. Our approach requires no prior knowledge about the scene, nor is it restricted to specific scene structures. Furthermore, the technique can detect both achromatic and chromatic shadows even in the presence of camouflage, which occurs when foreground regions are very similar in color to shadowed regions. The method exploits local color constancy properties due to reflectance suppression over shadowed regions. To detect shadowed regions in a scene, the values of the background image are divided by the values of the current frame in the RGB color space. We show how this luminance ratio can be used to identify segments with low gradient constancy, which in turn distinguishes shadows from foreground. Experimental results on a collection of publicly available datasets illustrate the superior performance of our method compared with the most sophisticated state-of-the-art shadow detection algorithms. These results show that our approach is robust and accurate over a broad range of shadow types and challenging video conditions.
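The central cue of the abstract, a background-to-frame ratio that stays spatially smooth over shadows, can be approximated in a few lines. This is a hedged sketch only: the thresholds and the `shadow_candidates` name are illustrative, and the full method involves much more than this.

```python
import numpy as np

def shadow_candidates(background, frame, ratio_min=1.05, grad_thresh=0.1):
    """Flag pixels that are darker than the background (ratio > 1) and
    whose background/frame ratio is locally constant (low gradient),
    the colour-constancy signature of a cast shadow."""
    ratio = background.astype(float) / (frame.astype(float) + 1e-6)
    mean_ratio = ratio.mean(axis=2)            # average the ratio over RGB
    gy, gx = np.gradient(mean_ratio)
    smooth = np.hypot(gx, gy) < grad_thresh    # ratio is locally constant
    darkened = mean_ratio > ratio_min          # pixel darker than background
    return darkened & smooth
```

On a synthetic frame where a block of the background is uniformly halved in brightness, the interior of the block is flagged while unchanged pixels are not.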
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number (up) Admin @ si @ AMB2011 Serial 1716
Permanent link to this record
 

 
Author Egils Avots; Meysam Madadi; Sergio Escalera; Jordi Gonzalez; Xavier Baro; Paul Pallin; Gholamreza Anbarjafari
Title From 2D to 3D geodesic-based garment matching Type Journal Article
Year 2019 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 78 Issue 18 Pages 25829–25853
Keywords Shape matching; Geodesic distance; Texture mapping; RGBD image processing; Gaussian mixture model
Abstract A new approach for 2D to 3D garment retexturing is proposed based on Gaussian mixture models and thin plate splines (TPS). An automatically segmented garment of an individual is matched to a new source garment and rendered, resulting in augmented images in which the target garment has been retextured using the texture of the source garment. We divide the problem into garment boundary matching based on Gaussian mixture models and then interpolate inner points using surface topology extracted through geodesic paths, which leads to a more realistic result than standard approaches. We evaluated and compared our system quantitatively by root mean square error (RMS) and qualitatively using the mean opinion score (MOS), showing the benefits of the proposed methodology on our gathered dataset.
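The surface-topology interpolation mentioned above relies on geodesic distances, i.e. distances measured along the garment surface rather than in a straight line. A minimal sketch via Dijkstra on a 4-connected pixel graph inside a binary mask follows; the function name and graph construction are assumptions, and the GMM boundary matching and TPS warp of the paper are not reproduced here.

```python
import heapq
import numpy as np

def geodesic_distance(mask, seed):
    """Shortest-path distance from `seed` to every pixel, travelling only
    through True pixels of `mask` (4-connected), i.e. along the surface
    rather than across gaps."""
    h, w = mask.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:                     # stale queue entry
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and d + 1.0 < dist[ny, nx]:
                dist[ny, nx] = d + 1.0
                heapq.heappush(heap, (d + 1.0, (ny, nx)))
    return dist
```

On a C-shaped mask the geodesic distance between the two ends of the C is much larger than the straight-line distance, which is exactly the property that makes the interpolation respect the surface.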
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; ISE; 600.098; 600.119; 602.133 Approved no
Call Number (up) Admin @ si @ AME2019 Serial 3317
Permanent link to this record
 

 
Author O.F.Ahmad; Y.Mori; M.Misawa; S.Kudo; J.T.Anderson; Jorge Bernal
Title Establishing key research questions for the implementation of artificial intelligence in colonoscopy: a modified Delphi method Type Journal Article
Year 2021 Publication Endoscopy Abbreviated Journal END
Volume 53 Issue 9 Pages 893-901
Keywords
Abstract BACKGROUND : Artificial intelligence (AI) research in colonoscopy is progressing rapidly but widespread clinical implementation is not yet a reality. We aimed to identify the top implementation research priorities. METHODS : An established modified Delphi approach for research priority setting was used. Fifteen international experts, including endoscopists and translational computer scientists/engineers, from nine countries participated in an online survey over 9 months. Questions related to AI implementation in colonoscopy were generated as a long-list in the first round, and then scored in two subsequent rounds to identify the top 10 research questions. RESULTS : The top 10 ranked questions were categorized into five themes. Theme 1: clinical trial design/end points (4 questions), related to optimum trial designs for polyp detection and characterization, determining the optimal end points for evaluation of AI, and demonstrating impact on interval cancer rates. Theme 2: technological developments (3 questions), including improving detection of more challenging and advanced lesions, reduction of false-positive rates, and minimizing latency. Theme 3: clinical adoption/integration (1 question), concerning the effective combination of detection and characterization into one workflow. Theme 4: data access/annotation (1 question), concerning more efficient or automated data annotation methods to reduce the burden on human experts. Theme 5: regulatory approval (1 question), related to making regulatory approval processes more efficient. CONCLUSIONS : This is the first reported international research priority setting exercise for AI in colonoscopy. The study findings should be used as a framework to guide future research with key stakeholders to accelerate the clinical implementation of AI in endoscopy.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number (up) Admin @ si @ AMM2021 Serial 3670
Permanent link to this record
 

 
Author Jaume Amores
Title Multiple Instance Classification: review, taxonomy and comparative study Type Journal Article
Year 2013 Publication Artificial Intelligence Abbreviated Journal AI
Volume 201 Issue Pages 81-105
Keywords Multi-instance learning; Codebook; Bag-of-Words
Abstract Multiple Instance Learning (MIL) has become an important topic in the pattern recognition community, and many solutions to this problem have been proposed to date. Despite this, there is a lack of comparative studies that shed light on the characteristics and behavior of the different methods. In this work we provide such an analysis, focused on the classification task (i.e., leaving out other learning tasks such as regression). To perform our study, we implemented fourteen methods grouped into three different families. We analyze the performance of the approaches across a variety of well-known databases, and we also study their behavior in synthetic scenarios in order to highlight their characteristics. As a result of this analysis, we conclude that methods extracting global bag-level information show a clearly superior performance in general. In this sense, the analysis allows us to understand why some types of methods are more successful than others, and to establish guidelines for the design of new MIL methods.
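A minimal instance of the bag-level family the review favours: collapse a variable-size bag into one fixed-length vector of global statistics that any standard classifier can consume. The statistics chosen here (per-feature mean and max) are illustrative, not the review's taxonomy:

```python
import numpy as np

def bag_embedding(bag):
    """Collapse a bag (n_instances x n_features) into one fixed vector by
    concatenating the per-feature mean and max over its instances."""
    bag = np.asarray(bag, dtype=float)
    return np.concatenate([bag.mean(axis=0), bag.max(axis=0)])
```

A two-instance bag over two features yields a four-dimensional vector regardless of bag size, which is what lets ordinary classifiers operate at the bag level.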
Address
Corporate Author Thesis
Publisher Elsevier Science Publishers Ltd. Essex, UK Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0004-3702 ISBN Medium
Area Expedition Conference
Notes ADAS; 601.042; 600.057 Approved no
Call Number (up) Admin @ si @ Amo2013 Serial 2273
Permanent link to this record
 

 
Author Jaume Amores
Title MILDE: multiple instance learning by discriminative embedding Type Journal Article
Year 2015 Publication Knowledge and Information Systems Abbreviated Journal KAIS
Volume 42 Issue 2 Pages 381-407
Keywords Multi-instance learning; Codebook; Bag of words
Abstract While the objective of the standard supervised learning problem is to classify feature vectors, in the multiple instance learning problem, the objective is to classify bags, where each bag contains multiple feature vectors. This represents a generalization of the standard problem, and this generalization becomes necessary in many real applications such as drug activity prediction, content-based image retrieval, and others. While the existing paradigms are based on learning the discriminant information either at the instance level or at the bag level, we propose to incorporate both levels of information. This is done by defining a discriminative embedding of the original space based on the responses of cluster-adapted instance classifiers. Results clearly show the advantage of the proposed method over the state of the art, where we tested the performance through a variety of well-known databases that come from real problems, and we also included an analysis of the performance using synthetically generated data.
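The embedding idea can be sketched as follows, with plain linear scorers standing in for MILDE's cluster-adapted instance classifiers. The names and the max-pooling choice are assumptions consistent with the abstract, not the paper's exact formulation:

```python
import numpy as np

def embed_bag(bag, weights, biases):
    """Embed a bag as the strongest response of each instance scorer over
    the bag's instances; entry j is max_i (w_j . x_i + b_j)."""
    scores = (np.asarray(bag, dtype=float) @ np.asarray(weights, dtype=float).T
              + np.asarray(biases, dtype=float))
    return scores.max(axis=0)
```

The resulting fixed-length vector combines instance-level discriminant information (the scorers) with a bag-level summary (the pooling), mirroring the two levels the abstract says are incorporated.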
Address
Corporate Author Thesis
Publisher Springer London Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0219-1377 ISBN Medium
Area Expedition Conference
Notes ADAS; 601.042; 600.057; 600.076 Approved no
Call Number (up) Admin @ si @ Amo2015 Serial 2383
Permanent link to this record
 

 
Author Eduardo Aguilar; Bhalaji Nagarajan; Beatriz Remeseiro; Petia Radeva
Title Bayesian deep learning for semantic segmentation of food images Type Journal Article
Year 2022 Publication Computers and Electrical Engineering Abbreviated Journal CEE
Volume 103 Issue Pages 108380
Keywords Deep learning; Uncertainty quantification; Bayesian inference; Image segmentation; Food analysis
Abstract Deep learning has provided promising results in various applications; however, algorithms tend to be overconfident in their predictions, even though they may be entirely wrong. Particularly for critical applications, the model should provide answers only when it is very sure of them. This article presents a Bayesian version of two different state-of-the-art semantic segmentation methods to perform multi-class segmentation of foods and estimate the uncertainty about the given predictions. The proposed methods were evaluated on three public pixel-annotated food datasets. As a result, we can conclude that Bayesian methods improve the performance achieved by the baseline architectures and, in addition, provide information to improve decision-making. Furthermore, based on the extracted uncertainty map, we proposed three measures to rank the images according to the degree of noisy annotations they contained. Note that the top 135 images ranked by one of these measures include more than half of the worst-labeled food images.
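Given T stochastic forward passes of a segmentation network (e.g. via Monte Carlo dropout), a per-pixel uncertainty map of the kind used to rank noisily annotated images can be derived from the predictive entropy. A minimal sketch, with the entropy choice assumed rather than taken from the article:

```python
import numpy as np

def predictive_uncertainty(prob_samples):
    """prob_samples: class probabilities from T stochastic passes,
    shape (T, H, W, C). Returns the mean prediction and the per-pixel
    predictive entropy, usable as an uncertainty map."""
    probs = np.asarray(prob_samples, dtype=float)
    mean = probs.mean(axis=0)                                  # average over passes
    entropy = -(mean * np.log(mean + 1e-12)).sum(axis=-1)      # per-pixel entropy
    return mean, entropy
```

When the passes agree the entropy is near zero; when they split evenly between two classes it reaches log 2, flagging the pixel as uncertain.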
Address October 2022
Corporate Author Thesis
Publisher Science Direct Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number (up) Admin @ si @ ANR2022 Serial 3763
Permanent link to this record
 

 
Author Eduardo Aguilar; Beatriz Remeseiro; Marc Bolaños; Petia Radeva
Title Grab, Pay, and Eat: Semantic Food Detection for Smart Restaurants Type Journal Article
Year 2018 Publication IEEE Transactions on Multimedia Abbreviated Journal
Volume 20 Issue 12 Pages 3266 - 3275
Keywords
Abstract The increased awareness of people towards their nutritional habits has drawn considerable attention to the field of automatic food analysis. Focusing on the self-service restaurant environment, automatic food analysis is not only useful for extracting nutritional information from the foods selected by customers; it is also of high interest for speeding up service by solving the bottleneck produced at the cashiers in times of high demand. In this paper, we address the problem of automatic food tray analysis in canteens and restaurants, which consists of predicting the multiple foods placed on a tray image. We propose a new approach to food analysis based on convolutional neural networks, which we name Semantic Food Detection, that integrates food localization, recognition and segmentation in the same framework. We demonstrate that our method improves the state of the art in food detection by a considerable margin on the public UNIMIB2016 dataset, achieving about 90% in terms of F-measure, and thus provides a significant technological advance towards automatic billing in restaurant environments.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number (up) Admin @ si @ ARB2018 Serial 3236
Permanent link to this record
 

 
Author David Aldavert; Marçal Rusiñol; Ricardo Toledo; Josep Llados
Title A Study of Bag-of-Visual-Words Representations for Handwritten Keyword Spotting Type Journal Article
Year 2015 Publication International Journal on Document Analysis and Recognition Abbreviated Journal IJDAR
Volume 18 Issue 3 Pages 223-234
Keywords Bag-of-Visual-Words; Keyword spotting; Handwritten documents; Performance evaluation
Abstract The Bag-of-Visual-Words (BoVW) framework has gained popularity among the document image analysis community, specifically as a representation of handwritten words for recognition or spotting purposes. Although in the computer vision field the BoVW method has been greatly improved, most of the approaches in the document image analysis domain still rely on the basic implementation of the BoVW method disregarding such latest refinements. In this paper, we present a review of those improvements and its application to the keyword spotting task. We thoroughly evaluate their impact against a baseline system in the well-known George Washington dataset and compare the obtained results against nine state-of-the-art keyword spotting methods. In addition, we also compare both the baseline and improved systems with the methods presented at the Handwritten Keyword Spotting Competition 2014.
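The baseline BoVW representation that the paper's refinements build on can be sketched as hard assignment of local descriptors to a learned codebook followed by histogram normalisation. This is a generic illustration of the framework, not the paper's exact pipeline:

```python
import numpy as np

def bovw_descriptor(descriptors, codebook):
    """Hard-assign each local descriptor to its nearest codebook word and
    accumulate an L2-normalised histogram over the vocabulary."""
    D = np.asarray(descriptors, dtype=float)
    C = np.asarray(codebook, dtype=float)
    d2 = ((D[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)   # squared distances
    words = d2.argmin(axis=1)                                  # nearest word per descriptor
    hist = np.bincount(words, minlength=len(C)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)
```

Word images described this way can then be compared by any vector distance for spotting, which is where refinements such as soft assignment or spatial information enter.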
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1433-2833 ISBN Medium
Area Expedition Conference
Notes DAG; ADAS; 600.055; 600.061; 601.223; 600.077; 600.097 Approved no
Call Number (up) Admin @ si @ ART2015 Serial 2679
Permanent link to this record
 

 
Author Cristhian A. Aguilera-Carrasco; Angel Sappa; Cristhian Aguilera; Ricardo Toledo
Title Cross-Spectral Local Descriptors via Quadruplet Network Type Journal Article
Year 2017 Publication Sensors Abbreviated Journal SENS
Volume 17 Issue 4 Pages 873
Keywords
Abstract This paper presents a novel CNN-based architecture, referred to as Q-Net, to learn local feature descriptors that are useful for matching image patches from two different spectral bands. Given correctly matched and non-matching cross-spectral image pairs, a quadruplet network is trained to map input image patches to a common Euclidean space, regardless of the input spectral band. Our approach is inspired by the recent success of triplet networks in the visible spectrum, but adapted for cross-spectral scenarios, where, for each matching pair, there are always two possible non-matching patches: one for each spectrum. Experimental evaluations on a public cross-spectral VIS-NIR dataset show that the proposed approach improves on the state of the art. Moreover, the proposed technique can also be used in mono-spectral settings, obtaining a similar performance to triplet network descriptors, but requiring less training data.
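A toy version of a quadruplet objective consistent with the abstract: pull the matching cross-spectral pair together while pushing it away from the closest of the two non-matching patches, one per spectrum. The margin form and names are assumptions, not Q-Net's exact loss:

```python
import numpy as np

def quadruplet_loss(a_vis, p_nir, n_vis, n_nir, margin=1.0):
    """Margin loss over a quadruplet: a matching cross-spectral pair
    (a_vis, p_nir) plus one non-matching patch per spectrum."""
    dist = lambda u, v: float(np.linalg.norm(np.asarray(u, float) - np.asarray(v, float)))
    pos = dist(a_vis, p_nir)                           # matching pair distance
    neg = min(dist(a_vis, n_nir), dist(p_nir, n_vis))  # hardest non-match
    return max(0.0, margin + pos - neg)
```

The loss is zero once the non-matches in both spectra are pushed beyond the margin, and grows as a non-match in either spectrum approaches the matching pair.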
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.086; 600.118 Approved no
Call Number (up) Admin @ si @ ASA2017 Serial 2914
Permanent link to this record
 

 
Author Susana Alvarez; Anna Salvatella; Maria Vanrell; Xavier Otazu
Title Low-dimensional and Comprehensive Color Texture Description Type Journal Article
Year 2012 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU
Volume 116 Issue I Pages 54-67
Keywords
Abstract Image retrieval can be addressed by combining standard descriptors, such as those of MPEG-7, which are defined independently for each visual cue (e.g. SCD or CLD for color, HTD for texture or EHD for edges). A common problem is combining similarities coming from descriptors that represent different concepts in different spaces. In this paper we propose a color texture description that bypasses this problem by its inherent definition. It is based on a low-dimensional space with six perceptual axes. Texture is described in a 3D space derived from a direct implementation of the original Julesz Texton theory, and color is described in a 3D perceptual space. This early fusion through the blob concept in these two bounded spaces avoids the problem and allows us to derive a sparse color-texture descriptor that achieves performance similar to MPEG-7 in image retrieval. Moreover, our descriptor presents comprehensive qualities, since it can also be applied to segmentation or browsing: (a) a dense image representation is defined from the descriptor, showing reasonable performance in locating texture patterns included in complex images; and (b) a vocabulary of basic terms is derived to build an intermediate-level descriptor in natural language, improving browsing by bridging the semantic gap.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1077-3142 ISBN Medium
Area Expedition Conference
Notes CAT;CIC Approved no
Call Number (up) Admin @ si @ ASV2012 Serial 1827
Permanent link to this record
 

 
Author Aymen Azaza; Joost Van de Weijer; Ali Douik; Marc Masana
Title Context Proposals for Saliency Detection Type Journal Article
Year 2018 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU
Volume 174 Issue Pages 1-11
Keywords
Abstract One of the fundamental properties of a salient object region is its contrast with the immediate context. The problem is that numerous object regions exist which could potentially all be salient. One way to prevent an exhaustive search over all object regions is to use object proposal algorithms. These return a limited set of regions which are most likely to contain an object. Several saliency estimation methods have used object proposals. However, they focus on the saliency of the proposal only, and the importance of its immediate context has not been evaluated. In this paper, we aim to improve salient object detection. We therefore extend object proposal methods with context proposals, which allow us to incorporate the immediate context in the saliency computation. We propose several saliency features which are computed from the context proposals. In the experiments, we evaluate five object proposal methods for the task of saliency segmentation, and find that Multiscale Combinatorial Grouping outperforms the others. Furthermore, experiments show that the proposed context features improve performance, and that our method matches results on the FT dataset and obtains competitive results on three other datasets (PASCAL-S, MSRA-B and ECSSD).
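One way to turn a context proposal into a saliency cue, consistent with the contrast idea in the abstract, is to compare the proposal's appearance histogram with that of its immediate context. The histogram features and the chi-squared distance here are illustrative choices, not the paper's feature set:

```python
import numpy as np

def context_contrast(obj_hist, ctx_hist):
    """Chi-squared contrast between a proposal's appearance histogram and
    the histogram of its immediate context; higher values suggest a more
    salient region."""
    p = np.asarray(obj_hist, dtype=float)
    q = np.asarray(ctx_hist, dtype=float)
    return 0.5 * float(((p - q) ** 2 / (p + q + 1e-12)).sum())
```

A proposal whose appearance matches its surroundings scores near zero, while a proposal with a disjoint appearance distribution scores high, reflecting the contrast property of salient regions.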
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.109; 600.109; 600.120 Approved no
Call Number (up) Admin @ si @ AWD2018 Serial 3241
Permanent link to this record
 

 
Author Sumit K. Banchhor; Tadashi Araki; Narendra D. Londhe; Nobutaka Ikeda; Petia Radeva; Ayman El-Baz; Luca Saba; Andrew Nicolaides; Shoaib Shafique; John R. Laird; Jasjit S. Suri
Title Five multiresolution-based calcium volume measurement techniques from coronary IVUS videos: A comparative approach Type Journal Article
Year 2016 Publication Computer Methods and Programs in Biomedicine Abbreviated Journal CMPB
Volume 134 Issue Pages 237-258
Keywords
Abstract BACKGROUND AND OBJECTIVE:
Fast intravascular ultrasound (IVUS) video processing is required for calcium volume computation during the planning phase of percutaneous coronary interventional (PCI) procedures. Nonlinear multiresolution techniques are generally applied to improve the processing time by down-sampling the video frames.
METHODS:
This paper presents four different segmentation methods for calcium volume measurement, namely Threshold-based, Fuzzy c-Means (FCM), K-means, and Hidden Markov Random Field (HMRF), each embedded with five different kinds of multiresolution techniques (bilinear, bicubic, wavelet, Lanczos, and Gaussian pyramid). This leads to 20 different combinations. IVUS image data sets consisting of 38,760 IVUS frames taken from 19 patients were collected using a 40 MHz IVUS catheter (Atlantis® SR Pro, Boston Scientific®, pullback speed of 0.5 mm/sec). The performance of these 20 systems is compared with and without multiresolution using the following metrics: (a) computational time; (b) calcium volume; (c) image quality degradation ratio; and (d) quality assessment ratio.
RESULTS:
Among the four segmentation methods embedded with five kinds of multiresolution techniques, FCM segmentation combined with wavelet-based multiresolution gave the best performance. FCM and wavelet experienced the highest percentage mean improvement in computational time of 77.15% and 74.07%, respectively. Wavelet interpolation experiences the highest mean precision-of-merit (PoM) of 94.06 ± 3.64% and 81.34 ± 16.29% as compared to other multiresolution techniques for volume level and frame level respectively. Wavelet multiresolution technique also experiences the highest Jaccard Index and Dice Similarity of 0.7 and 0.8, respectively. Multiresolution is a nonlinear operation which introduces bias and thus degrades the image. The proposed system also provides a bias correction approach to enrich the system, giving a better mean calcium volume similarity for all the multiresolution-based segmentation methods. After including the bias correction, bicubic interpolation gives the largest increase in mean calcium volume similarity of 4.13% compared to the rest of the multiresolution techniques. The system is automated and can be adapted in clinical settings.
CONCLUSIONS:
We demonstrated the time improvement in calcium volume computation without compromising the quality of IVUS image. Among the 20 different combinations of multiresolution with calcium volume segmentation methods, the FCM embedded with wavelet-based multiresolution gave the best performance.
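The best-performing segmentation family in this comparison, fuzzy c-means, can be sketched minimally on 1-D intensities. This is a generic FCM sketch under the standard update equations, not the study's implementation; the function name and parameters are assumptions:

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on a 1-D sample (e.g. IVUS intensities).
    Returns cluster centres and the soft membership matrix u (n x c)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)   # weighted means
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12          # point-centre distances
        inv = 1.0 / d ** (2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)                   # standard FCM update
    return centres, u
```

Soft memberships are what distinguish FCM from K-means here: each pixel contributes to every cluster in proportion to its membership, which is useful when calcium boundaries are fuzzy.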
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; Approved no
Call Number (up) Admin @ si @ BAL2016 Serial 2830
Permanent link to this record
 

 
Author R.A.Bendezu; E.Barba; E.Burri; D.Cisternas; Carolina Malagelada; Santiago Segui; Anna Accarino; S.Quiroga; E.Monclus; I.Navazo
Title Intestinal gas content and distribution in health and in patients with functional gut symptoms Type Journal Article
Year 2015 Publication Neurogastroenterology & Motility Abbreviated Journal NEUMOT
Volume 27 Issue 9 Pages 1249-1257
Keywords
Abstract BACKGROUND:
The precise relation of intestinal gas to symptoms, particularly abdominal bloating and distension remains incompletely elucidated. Our aim was to define the normal values of intestinal gas volume and distribution and to identify abnormalities in relation to functional-type symptoms.
METHODS:
Abdominal computed tomography scans were evaluated in healthy subjects (n = 37) and in patients in three conditions: basal (when they were feeling well; n = 88), during an episode of abdominal distension (n = 82) and after a challenge diet (n = 24). Intestinal gas content and distribution were measured by an original analysis program. Identification of patients outside the normal range was performed by machine learning techniques (one-class classifier). Results are expressed as median (IQR) or mean ± SE, as appropriate.
KEY RESULTS:
In healthy subjects the gut contained 95 (71, 141) mL gas distributed along the entire lumen. No differences were detected between patients studied under asymptomatic basal conditions and healthy subjects. However, either during a spontaneous bloating episode or once challenged with a flatulogenic diet, luminal gas was found to be increased and/or abnormally distributed in about one-fourth of the patients. These patients detected outside the normal range by the classifier exhibited a significantly greater number of abnormal features than those within the normal range (3.7 ± 0.4 vs 0.4 ± 0.1; p < 0.001).
CONCLUSIONS & INFERENCES:
The analysis of a large cohort of subjects using original techniques provides unique and heretofore unavailable information on the volume and distribution of intestinal gas in normal conditions and in relation to functional gastrointestinal symptoms.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number (up) Admin @ si @ BBB2015 Serial 2667
Permanent link to this record