Author C. Alejandro Parraga; Javier Vazquez; Maria Vanrell
  Title A new cone activation-based natural images dataset Type Journal Article
  Year 2009 Publication Perception Abbreviated Journal PER  
  Volume 36 Issue Pages 180  
  Keywords  
  Abstract We generated a new dataset of digital natural images where each colour plane corresponds to the human LMS (long-, medium-, short-wavelength) cone activations. The images were chosen to represent five different visual environments (e.g. forest, seaside, mountain snow, urban, motorways) and were taken under natural illumination at different times of day. At the bottom-left corner of each picture there was a matte grey ball of approximately constant spectral reflectance (across the camera's response spectrum) and nearly Lambertian reflective properties, which makes it possible to compute (and remove, if necessary) the illuminant's colour and intensity. The camera (Sigma Foveon SD10) was calibrated by measuring its sensor's spectral responses using a set of 31 spectrally narrowband interference filters. This allowed conversion of the final camera-dependent RGB colour space into the Smith and Pokorny (1975) cone activation space by means of a polynomial transformation, optimised for a set of 1269 Munsell chip reflectances. This new method is an improvement over the usual 3 × 3 matrix transformation, which is only accurate for spectrally narrowband colours. The camera-to-LMS transformation can be recalculated to consider other non-human visual systems. The dataset is available to download from our website.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes CIC Approved no
  Call Number CAT @ cat @ PVV2009 Serial 1193
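As an illustration of the camera-to-cone-space mapping described in this record's abstract, here is a minimal sketch of fitting a polynomial RGB-to-LMS transform by least squares. It assumes hypothetical calibration data (synthetic RGB/LMS pairs stand in for the Sigma Foveon SD10 responses and the 1269 Munsell chip reflectances), and the second-order expansion is an assumption, not necessarily the polynomial used in the paper.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of camera RGB values.
    rgb: (N, 3) array -> (N, 10) design matrix [1, R, G, B, R^2, G^2, B^2, RG, RB, GB]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * r, g * g, b * b, r * g, r * b, g * b], axis=1)

def fit_rgb_to_lms(rgb_train, lms_train):
    """Least-squares fit of a polynomial RGB -> LMS transform.
    rgb_train, lms_train: camera responses and cone activations for the
    same set of calibration reflectances (e.g. Munsell chips)."""
    A = poly_features(rgb_train)
    coeffs, *_ = np.linalg.lstsq(A, lms_train, rcond=None)  # (10, 3)
    return coeffs

def apply_rgb_to_lms(rgb, coeffs):
    return poly_features(rgb) @ coeffs

# Toy usage with synthetic data standing in for real calibration measurements.
rng = np.random.default_rng(0)
rgb = rng.uniform(0.0, 1.0, size=(1269, 3))             # hypothetical camera RGB
true_matrix = rng.uniform(0.0, 1.0, size=(3, 3))
lms = rgb @ true_matrix + 0.05 * rgb**2 @ true_matrix    # hypothetical cone responses
coeffs = fit_rgb_to_lms(rgb, lms)
print(np.abs(apply_rgb_to_lms(rgb, coeffs) - lms).max())
```

A plain 3 × 3 matrix corresponds to keeping only the linear columns of the design matrix; the extra polynomial terms are what give the fit room to handle broadband reflectances.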
 

 
Author Joost Van de Weijer; Cordelia Schmid; Jakob Verbeek; Diane Larlus
  Title Learning Color Names for Real-World Applications Type Journal Article
  Year 2009 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 18 Issue 7 Pages 1512–1524  
  Keywords  
  Abstract Color names are required in real-world applications such as image retrieval and image annotation. Traditionally, they are learned from a collection of labelled color chips. These color chips are labelled with color names within a well-defined experimental setup by human test subjects. However, naming colors in real-world images differs significantly from this experimental setting. In this paper, we investigate how color names learned from color chips compare to color names learned from real-world images. To avoid hand-labelling real-world images with color names, we use Google Image to collect a data set. Due to the limitations of Google Image, this data set contains a substantial quantity of wrongly labelled data. We propose several variants of the PLSA model to learn color names from this noisy data. Experimental results show that color names learned from real-world images significantly outperform color names learned from labelled color chips for both image retrieval and image annotation.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference
  Notes Approved no
  Call Number CAT @ cat @ WSV2009 Serial 1195
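For reference, here is a minimal sketch of a basic PLSA model trained with EM on a toy image-by-color-bin count matrix. The noise-robust PLSA variants actually proposed in the paper, and the Google Image data, are not reproduced here; all sizes and names below are illustrative assumptions.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Basic PLSA trained with EM.
    counts: (n_docs, n_words) co-occurrence matrix, e.g. images x color bins.
    Returns P(topic | doc) and P(word | topic)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z | d, w), shape (n_docs, n_topics, n_words).
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate the two conditional distributions.
        weighted = counts[:, None, :] * joint              # n(d, w) * P(z | d, w)
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# Toy usage: 20 "images", 64 color bins, 11 basic color names as latent topics.
rng = np.random.default_rng(1)
counts = rng.integers(0, 5, size=(20, 64)).astype(float)
p_z_d, p_w_z = plsa(counts, n_topics=11)
print(p_w_z.shape)  # (11, 64): per-color-name distribution over color bins
```

Each latent topic plays the role of a color name; P(word | topic) then describes which quantized colors that name covers.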
 

 
Author Mariano Vazquez; Ruth Aris; Guillaume Houzeaux; R. Aubry; P. Villar; Jaume Garcia; Debora Gil; Francesc Carreras
  Title A massively parallel computational electrophysiology model of the heart Type Journal Article
  Year 2011 Publication International Journal for Numerical Methods in Biomedical Engineering Abbreviated Journal IJNMBE  
  Volume 27 Issue Pages 1911-1929  
  Keywords computational electrophysiology; parallelization; finite element methods  
  Abstract This paper presents a patient-sensitive simulation strategy capable of using high-performance computational resources in the most efficient way. The proposed strategy directly involves three different players: Computational Mechanics Scientists (CMS), Image Processing Scientists and Cardiologists, each one mastering its own expertise area within the project. This paper describes the general integrative scheme but, focusing on the CMS side, presents a massively parallel implementation of computational electrophysiology applied to cardiac tissue simulation. The paper covers different angles of the computational problem: equations, numerical issues, the algorithm and the parallel implementation. The proposed methodology is illustrated with numerical simulations testing all the different possibilities, ranging from small domains up to very large ones. A key issue is the almost ideal scalability, not only for large and complex problems but also for medium-size meshes. The explicit formulation is particularly well suited for solving these highly transient problems with very short time scales.
  Address Swansea (UK)  
  Corporate Author John Wiley & Sons, Ltd. Thesis  
  Publisher John Wiley & Sons, Ltd. Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes IAM Approved no
  Call Number IAM @ iam @ VAH2011 Serial 1198
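To illustrate the explicit, highly transient time integration the abstract refers to, here is a minimal 1-D finite-difference sketch using FitzHugh-Nagumo kinetics. This is an assumption-laden stand-in, not the paper's finite-element formulation, cell model, or parallel implementation; all parameters are arbitrary.

```python
import numpy as np

# Illustrative 1-D explicit scheme with FitzHugh-Nagumo kinetics; a toy
# stand-in for the paper's finite-element, massively parallel formulation.
nx, dx, dt, nsteps = 200, 0.1, 0.005, 4000
D, a, b, eps = 0.1, 0.1, 0.5, 0.01

u = np.zeros(nx)          # transmembrane potential (dimensionless)
w = np.zeros(nx)          # recovery variable
u[:10] = 1.0              # stimulus at the left end

for _ in range(nsteps):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # diffusion term
    du = D * lap + u * (1 - u) * (u - a) - w              # reaction term
    dw = eps * (b * u - w)
    u += dt * du                                          # explicit Euler update
    w += dt * dw

print(f"max potential after {nsteps} steps: {u.max():.3f}")
```

Because each update only reads local neighbour values, an explicit scheme of this kind partitions naturally across processors, which is the property the paper exploits at much larger scale.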
 

 
Author Mikhail Mozerov; Ignasi Rius; Xavier Roca; Jordi Gonzalez
  Title Nonlinear synchronization for automatic learning of 3D pose variability in human motion sequences Type Journal Article
  Year 2010 Publication EURASIP Journal on Advances in Signal Processing Abbreviated Journal EURASIPJ  
  Volume Issue Pages  
  Keywords  
  Abstract Article ID 507247
A dense matching algorithm that solves the problem of synchronizing prerecorded human motion sequences, which show different speeds and accelerations, is proposed. The approach is based on minimization of an MRF energy and solves the problem using Dynamic Programming. Additionally, an optimal sequence is automatically selected from the input dataset to be a time-scale pattern for all other sequences. The paper utilizes an action-specific model which automatically learns the variability of 3D human postures observed in a set of training sequences. The model is trained using the public CMU motion capture dataset for the walking action, and a mean walking performance is automatically learnt. Additionally, statistics about the observed variability of the postures and motion direction are also computed at each time step. The synchronized motion sequences are used to learn a model of human motion for action recognition and full-body tracking purposes.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1110-8657 ISBN Medium  
  Area Expedition Conference
  Notes ISE Approved no
  Call Number ISE @ ise @ MRR2010 Serial 1208
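A minimal dynamic-programming alignment sketch in the spirit of the synchronization described above; it uses a plain squared-distance cost and a DTW-style recursion rather than the paper's MRF energy, and the pose sequences below are hypothetical.

```python
import numpy as np

def dp_align(ref, seq):
    """Align `seq` to the reference time-scale by dynamic programming (DTW-style).
    ref: (m, d) reference pose sequence, seq: (n, d) sequence to synchronize.
    Returns the list of (ref_index, seq_index) matches on the optimal path."""
    m, n = len(ref), len(seq)
    cost = np.full((m + 1, n + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d = np.sum((ref[i - 1] - seq[j - 1]) ** 2)
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], m, n
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy usage: a slower copy of the same 1-D "pose" trajectory.
t = np.linspace(0, 1, 50)
ref = np.sin(2 * np.pi * t)[:, None]
seq = np.sin(2 * np.pi * np.linspace(0, 1, 80))[:, None]
print(len(dp_align(ref, seq)))
```

In the paper's setting, the reference would be the automatically selected time-scale pattern and the matching cost would come from the learned action-specific model.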
 

 
Author Pau Baiget
  Title Modeling Human Behavior for Image Sequence Understanding and Generation Type Book Whole
  Year 2009 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract The comprehension of animal behavior, and especially human behavior, is one of the most ancient and most studied problems since the beginning of civilization. The large number of factors that interact to determine a person's actions requires the collaboration of different disciplines, such as psychology, biology, or sociology. In recent years the analysis of human behavior has also received great attention from the computer vision community, given the latest advances in the acquisition of human motion data from image sequences.

Despite the increasing availability of such data, there still exists a gap towards obtaining a conceptual representation of the observations. Human behavior analysis is based on a qualitative interpretation of the results, and therefore the assignment of concepts to quantitative data is linked to a certain ambiguity.

This Thesis tackles the problem of obtaining a proper representation of human behavior in the contexts of computer vision and animation. On the one hand, a good behavior model should permit the recognition and explanation of the observed activity in image sequences. On the other hand, such a model must allow the generation of new synthetic instances, which model the behavior of virtual agents.

First, we propose methods to automatically learn the models from observations. Given a set of quantitative results output by a vision system, a normal behavior model is learnt. The resulting model provides a tool to determine the normality or abnormality of future observations. However, machine learning methods are unable to provide a richer description of the observations. We confront this problem by means of a new method that incorporates prior knowledge about the environment and about the expected behaviors. This framework, formed by the reasoning engine FMTL and the modeling tool SGT, allows the generation of conceptual descriptions of activity in new image sequences. Finally, we demonstrate the suitability of the proposed framework to simulate the behavior of virtual agents, which are introduced into real image sequences and interact with observed real agents, thereby easing the generation of augmented reality sequences.

The set of approaches presented in this Thesis has a growing set of potential applications. The analysis and description of behavior in image sequences has its principal application in the domain of smart video surveillance, in order to detect suspicious or dangerous behaviors. Other applications include automatic sport commentaries, elderly monitoring, road traffic analysis, and the development of semantic video search engines. Alternatively, behavioral virtual agents allow the accurate simulation of real situations, such as fires or crowds. Moreover, the inclusion of virtual agents into real image sequences has been widely deployed in the games and cinema industries.
 
  Address Bellaterra (Spain)  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Jordi Gonzalez;Xavier Roca  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes Approved no
  Call Number Admin @ si @ Bai2009 Serial 1210
 

 
Author Jordi Gonzalez; Dani Rowe; Javier Varona; Xavier Roca
  Title Understanding Dynamic Scenes based on Human Sequence Evaluation Type Journal Article
  Year 2009 Publication Image and Vision Computing Abbreviated Journal IMAVIS  
  Volume 27 Issue 10 Pages 1433–1444  
  Keywords Image Sequence Evaluation; High-level processing of monitored scenes; Segmentation and tracking in complex scenes; Event recognition in dynamic scenes; Human motion understanding; Human behaviour interpretation; Natural-language text generation; Realistic demonstrators  
  Abstract In this paper, a Cognitive Vision System (CVS) is presented, which explains the human behaviour of monitored scenes using natural-language texts. This cognitive analysis of human movements recorded in image sequences is here referred to as Human Sequence Evaluation (HSE), which defines a set of transformation modules involved in the automatic generation of semantic descriptions from pixel values. In essence, the trajectories of human agents are obtained to generate textual interpretations of their motion, and also to infer the conceptual relationships of each agent w.r.t. its environment. For this purpose, a human behaviour model based on Situation Graph Trees (SGTs) is considered, which permits both bottom-up (hypothesis generation) and top-down (hypothesis refinement) analysis of dynamic scenes. The resulting system prototype interprets different kinds of behaviour and reports textual descriptions in multiple languages.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes ISE Approved no
  Call Number ISE @ ise @ GRV2009 Serial 1211
 

 
Author Carles Fernandez; Pau Baiget; Xavier Roca; Jordi Gonzalez
  Title Exploiting Natural Language Generation in Scene Interpretation Type Book Chapter
  Year 2009 Publication Human–Centric Interfaces for Ambient Intelligence Abbreviated Journal  
  Volume 4 Issue Pages 71–93  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Science and Tech Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes ISE Approved no
  Call Number ISE @ ise @ FBR2009 Serial 1212
 

 
Author Fadi Dornaika; Bogdan Raducanu
  Title Three-Dimensional Face Pose Detection and Tracking Using Monocular Videos: Tool and Application Type Journal Article
  Year 2009 Publication IEEE Transactions on Systems, Man, and Cybernetics, Part B Abbreviated Journal TSMCB
  Volume 39 Issue 4 Pages 935–944  
  Keywords  
  Abstract Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods, the initialization and the tracking, for enhancing the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes OR;MV Approved no
  Call Number BCNPCL @ bcnpcl @ DoR2009a Serial 1218
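The initialization scheme mentioned in the abstract relies on a 2-D face detector and an eigenface system; below is a generic PCA "eigenfaces" sketch on hypothetical aligned face crops. It is only the standard building block, not the paper's detector or pose-initialization pipeline.

```python
import numpy as np

def fit_eigenfaces(faces, n_components=16):
    """PCA 'eigenfaces' from a stack of aligned, flattened face crops.
    faces: (n_samples, n_pixels). Returns the mean face and the principal components."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data; rows of vt are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Coefficients of a face crop in the eigenface basis."""
    return components @ (face - mean)

# Toy usage with random data standing in for detected 32x32 face crops.
rng = np.random.default_rng(0)
faces = rng.random((100, 32 * 32))
mean, comps = fit_eigenfaces(faces)
coeffs = project(faces[0], mean, comps)
reconstruction = mean + comps.T @ coeffs   # approximate face from 16 coefficients
print(coeffs.shape, reconstruction.shape)
```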
 

 
Author Oriol Ramos Terrades; Ernest Valveny; Salvatore Tabbone
  Title Optimal Classifier Fusion in a Non-Bayesian Probabilistic Framework Type Journal Article
  Year 2009 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 31 Issue 9 Pages 1630–1644  
  Keywords  
  Abstract The combination of the output of classifiers has been one of the strategies used to improve classification rates in general-purpose classification systems. Some of the most common approaches can be explained using Bayes' formula. In this paper, we tackle the problem of the combination of classifiers using a non-Bayesian probabilistic framework. This approach permits us to derive two linear combination rules that minimize misclassification rates under some constraints on the distribution of classifiers. In order to show the validity of this approach, we have compared it with other popular combination rules from a theoretical viewpoint using a synthetic data set, and experimentally using two standard databases: the MNIST handwritten digit database and the GREC symbol database. Results on the synthetic data set show the validity of the theoretical approach. Indeed, results on real data show that the proposed methods outperform other common combination schemes.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium  
  Area Expedition Conference
  Notes DAG Approved no
  Call Number DAG @ dag @ RVT2009 Serial 1220
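As context for the linear combination rules discussed in the abstract, here is a generic sketch of fusing per-class classifier scores with fixed weights. The paper's derivation of optimal weights under its non-Bayesian constraints is not reproduced; the weights below are arbitrary assumptions.

```python
import numpy as np

def linear_fusion(scores, weights):
    """Combine per-classifier class scores with a weighted linear rule.
    scores: (n_classifiers, n_samples, n_classes) posterior-like outputs.
    weights: (n_classifiers,) non-negative weights (normalized internally).
    Returns predicted class indices."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    fused = np.tensordot(weights, scores, axes=1)   # (n_samples, n_classes)
    return fused.argmax(axis=1)

# Toy usage: three classifiers, 5 samples, 4 classes.
rng = np.random.default_rng(0)
scores = rng.random((3, 5, 4))
scores /= scores.sum(axis=2, keepdims=True)         # normalize to pseudo-posteriors
print(linear_fusion(scores, weights=[0.5, 0.3, 0.2]))
```

In the paper, the weights are chosen to minimize the misclassification rate under constraints on the classifier distributions, rather than set by hand as here.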
 

 
Author Angel Sappa; Niki Aifanti; Sotiris Malassiotis; Michael G. Strintzis
  Title Prior Knowledge Based Motion Model Representation Type Book Chapter
  Year 2009 Publication Progress in Computer Vision and Image Analysis Abbreviated Journal  
  Volume 16 Issue Pages  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor Horst Bunke; Juan Jose Villanueva; Gemma Sanchez
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes ADAS Approved no
  Call Number ADAS @ adas @ SAM2009 Serial 1235
 

 
Author Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat; Antonio Lopez
  Title Predicting Missing Ratings in Recommender Systems: Adapted Factorization Approach Type Journal Article
  Year 2009 Publication International Journal of Electronic Commerce Abbreviated Journal  
  Volume 14 Issue 1 Pages 89-108  
  Keywords  
  Abstract The paper presents a factorization-based approach to make predictions in recommender systems. These systems are widely used in electronic commerce to help customers find products according to their preferences. Taking into account the customer's ratings of some products available in the system, the recommender system tries to predict the ratings the customer would give to other products in the system. The proposed factorization-based approach uses all the information provided to compute the predicted ratings, in the same way as approaches based on Singular Value Decomposition (SVD). The main advantage of this technique versus SVD-based approaches is that it can deal with missing data. It also has a smaller computational cost. Experimental results with public data sets are provided to show that the proposed adapted factorization approach gives better predicted ratings than a widely used SVD-based approach.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1086-4415 ISBN Medium  
  Area Expedition Conference
  Notes ADAS Approved no
  Call Number ADAS @ adas @ JSL2009b Serial 1237
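A minimal sketch of the general idea behind factorization with missing ratings: fit a low-rank model using only the observed entries, so no imputation of missing values is needed. This is a generic gradient-descent formulation on synthetic data, not the adapted factorization scheme evaluated in the paper.

```python
import numpy as np

def factorize(R, mask, rank=5, lr=0.01, reg=0.02, n_iter=2000, seed=0):
    """Low-rank factorization R ~= U @ V.T fitted only on observed entries.
    R: (n_users, n_items) ratings, mask: boolean array of observed entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(n_iter):
        E = mask * (R - U @ V.T)            # error on observed entries only
        U += lr * (E @ V - reg * U)         # gradient step for user factors
        V += lr * (E.T @ U - reg * V)       # gradient step for item factors
    return U, V

# Toy usage: a random low-rank ratings matrix with 60% of entries missing.
rng = np.random.default_rng(1)
true = rng.random((30, 5)) @ rng.random((5, 25))
mask = rng.random(true.shape) < 0.4
U, V = factorize(true, mask)
pred = U @ V.T
print(np.abs((pred - true)[~mask]).mean())   # error on the held-out (missing) entries
```

Fitting only where `mask` is true is what lets the approach deal with missing data directly, in contrast with SVD-based methods that require a complete matrix.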
 

 
Author Dena Bazazian
  Title Fully Convolutional Networks for Text Understanding in Scene Images Type Book Whole
  Year 2018 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Text understanding in scene images has gained plenty of attention in the computer vision community, and it is an important task in many applications, as text carries semantically rich information about scene content and context. For instance, reading text in a scene can be applied to autonomous driving, scene understanding or assisting visually impaired people. The general aim of scene text understanding is to localize and recognize text in scene images. Text regions are first localized in the original image by a trained detector model and afterwards fed into a recognition module. The tasks of localization and recognition are highly correlated, since an inaccurate localization can affect the recognition task.
The main purpose of this thesis is to devise efficient methods for scene text understanding. We investigate how the latest results on deep learning can advance text understanding pipelines. Recently, Fully Convolutional Networks (FCNs) and derived methods have achieved significant performance on semantic segmentation and pixel-level classification tasks. Therefore, we took advantage of the strengths of FCN approaches in order to detect text in natural scenes. In this thesis we have focused on two challenging tasks of scene text understanding: Text Detection and Word Spotting. For the task of text detection, we have proposed an efficient text proposal technique for scene images. We have considered the Text Proposals method as the baseline, which is an approach to reduce the search space of possible text regions in an image. In order to improve the Text Proposals method, we combined it with Fully Convolutional Networks to efficiently reduce the number of proposals while maintaining the same level of accuracy, thus gaining a significant speed-up. Our experiments demonstrate that this text proposal approach yields significantly higher recall rates than line-based text localization techniques, while also producing better-quality localization. We have also applied this technique to compressed images, such as videos from wearable egocentric cameras. For the task of word spotting, we have introduced a novel mid-level word representation method. We have proposed a technique to create and exploit an intermediate representation of images based on text attributes which roughly correspond to character probability maps. Our representation extends the concept of the Pyramidal Histogram Of Characters (PHOC) by exploiting Fully Convolutional Networks to derive a pixel-wise mapping of the character distribution within candidate word regions. We call this representation the Soft-PHOC. Furthermore, we show how to use Soft-PHOC descriptors for word spotting tasks through an efficient text line proposal algorithm. To evaluate the detected text, we propose a novel line-based evaluation along with the classic bounding-box-based approach. We test our method on incidental scene text images, which comprise real-life scenarios such as urban scenes. The importance of incidental scene text images is due to the complexity of backgrounds, perspective, variety of script and language, short text and little linguistic context. All of these factors together make incidental scene text images challenging.
 
  Address November 2018  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Dimosthenis Karatzas;Andrew Bagdanov  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-84-948531-1-1 Medium  
  Area Expedition Conference
  Notes DAG; 600.121 Approved no
  Call Number Admin @ si @ Baz2018 Serial 3220
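For reference, a minimal sketch of the standard binary PHOC descriptor that Soft-PHOC extends. The levels, alphabet and 50% overlap rule below follow the common PHOC formulation and are assumptions, not necessarily the thesis' exact configuration; the pixel-wise FCN character maps of Soft-PHOC are not reproduced here.

```python
import string

def phoc(word, levels=(1, 2, 3), alphabet=string.ascii_lowercase):
    """Pyramidal Histogram Of Characters: at each pyramid level the word is
    split into equal regions, and each region gets a binary occupancy vector
    over the alphabet. Returns a flat list of 0/1 values."""
    word = word.lower()
    n = len(word)
    descriptor = []
    for level in levels:
        for region in range(level):
            start, end = region / level, (region + 1) / level
            present = set()
            for i, ch in enumerate(word):
                # Character i occupies the normalized interval [i/n, (i+1)/n).
                c_start, c_end = i / n, (i + 1) / n
                overlap = min(end, c_end) - max(start, c_start)
                if ch in alphabet and overlap / (c_end - c_start) >= 0.5:
                    present.add(ch)
            descriptor += [1 if ch in present else 0 for ch in alphabet]
    return descriptor

d = phoc("text")
print(len(d), sum(d))   # (1 + 2 + 3) * 26 = 156 entries
```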
 

 
Author Arnau Ramisa; Adriana Tapus; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras
  Title Robust Vision-Based Localization using Combinations of Local Feature Region Detectors Type Journal Article
  Year 2009 Publication Autonomous Robots Abbreviated Journal AR  
  Volume 27 Issue 4 Pages 373-385  
  Keywords  
  Abstract This paper presents a vision-based approach for mobile robot localization. The model of the environment is topological. The new approach characterizes a place using a signature. This signature consists of a constellation of descriptors computed over different types of local affine covariant regions extracted from an omnidirectional image acquired by rotating a standard camera with a pan-tilt unit. This type of representation permits reliable and distinctive environment modelling. Our objectives were to validate the proposed method in indoor environments and also to find out whether combining complementary local feature region detectors improves localization compared with using a single region detector. Our experimental results show that, if false matches are effectively rejected, the combination of different covariant affine region detectors notably increases the performance of the approach by combining the different strengths of the individual detectors. In order to reduce the localization time, two strategies are evaluated: re-ranking the map nodes using a global similarity measure, and using a standard perspective field of view of 45°.
In order to systematically test topological localization methods, another contribution of this work is a novel method to measure the degradation in localization performance as the robot moves away from the point where the original signature was acquired. This makes it possible to assess the robustness of the proposed signature. To be effective, this must be done in several varied environments that test all the possible situations in which the robot may have to perform localization.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0929-5593 ISBN Medium  
  Area Expedition Conference
  Notes ADAS Approved no
  Call Number Admin @ si @ RTA2009 Serial 1245
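A generic sketch of the kind of signature matching the abstract describes: descriptors from several complementary detectors are matched channel by channel, and a nearest-neighbour ratio test rejects false matches before the per-channel scores are pooled. The detector channels, descriptor sizes and ratio threshold are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def ratio_test_matches(desc_query, desc_node, ratio=0.8):
    """Count descriptor matches that pass Lowe's nearest-neighbour ratio test."""
    if len(desc_query) == 0 or len(desc_node) < 2:
        return 0
    # Pairwise Euclidean distances between query and node descriptors.
    d = np.linalg.norm(desc_query[:, None, :] - desc_node[None, :, :], axis=2)
    two_nn = np.sort(d, axis=1)[:, :2]
    return int(np.sum(two_nn[:, 0] < ratio * two_nn[:, 1]))

def localization_score(query_sets, node_sets):
    """Sum matches over several detector/descriptor channels (e.g. one per
    affine covariant region detector); each channel has its own signature."""
    return sum(ratio_test_matches(q, n) for q, n in zip(query_sets, node_sets))

# Toy usage: two hypothetical channels with random 64-D descriptors.
rng = np.random.default_rng(0)
query = [rng.random((50, 64)), rng.random((30, 64))]
node = [rng.random((200, 64)), rng.random((120, 64))]
print(localization_score(query, node))
```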
 

 
Author Carlo Gatta; Oriol Pujol; Oriol Rodriguez-Leor; J. M. Ferre; Petia Radeva
  Title Fast Rigid Registration of Vascular Structures in IVUS Sequences Type Journal Article
  Year 2009 Publication IEEE Transactions on Information Technology in Biomedicine Abbreviated Journal  
  Volume 13 Issue 6 Pages 1006-1011
  Keywords  
  Abstract Intravascular ultrasound (IVUS) technology permits visualization of high-resolution images of internal vascular structures. IVUS is a unique image-guiding tool to display a longitudinal view of the vessels and to estimate the length and size of vascular structures with the goal of accurate diagnosis. Unfortunately, due to the pulsatile contraction and expansion of the heart, the captured images are affected by different motion artifacts that make visual inspection difficult. In this paper, we propose an efficient algorithm that aligns vascular structures and strongly reduces the saw-shaped oscillation, simplifying the inspection of longitudinal cuts; it reduces the motion artifacts caused by the displacement of the catheter in the short-axis plane and by the catheter rotation due to vessel tortuosity. The algorithm prototype aligns 3.16 frames/s and clearly outperforms state-of-the-art methods with similar computational cost. The speed of the algorithm is crucial, since it allows the corrected sequence to be inspected during the patient intervention. Moreover, we improved an indirect methodology for the evaluation of IVUS rigid registration algorithms.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1089-7771 ISBN Medium  
  Area Expedition Conference
  Notes MILAB;HuPBA Approved no
  Call Number BCNPCL @ bcnpcl @ GPL2009 Serial 1250
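As a building block for the kind of frame-to-frame rigid alignment described above, here is a minimal phase-correlation sketch that estimates an integer in-plane translation between two frames. It is a generic technique demonstrated on synthetic data, not the paper's algorithm, which also handles catheter rotation.

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the integer (dy, dx) displacement of frame_b relative to frame_a
    using phase correlation in the Fourier domain."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above the half-size point to negative shifts.
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dy, dx

# Toy usage: a synthetic texture shifted by (3, -5) pixels.
rng = np.random.default_rng(0)
base = rng.random((128, 128))
shifted = np.roll(base, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(base, shifted))   # expected (3, -5)
```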
 

 
Author Fosca De Iorio; Carolina Malagelada; Fernando Azpiroz; M. Maluenda; C. Violanti; Laura Igual; Jordi Vitria; Juan R. Malagelada
  Title Intestinal motor activity, endoluminal motion and transit Type Journal Article
  Year 2009 Publication Neurogastroenterology & Motility Abbreviated Journal NEUMOT  
  Volume 21 Issue 12 Pages 1264–e119  
  Keywords  
  Abstract A programme for the evaluation of intestinal motility has recently been developed based on endoluminal image analysis using computer vision methodology and machine learning techniques. Our aim was to determine the effect of intestinal muscle inhibition on wall motion, dynamics of luminal content and transit in the small bowel. Fourteen healthy subjects ingested the endoscopic capsule (Pillcam, Given Imaging) in fasting conditions. Seven of them received glucagon (4.8 microg kg(-1) bolus followed by a 9.6 microg kg(-1) h(-1) infusion during 1 h) and in the other seven, as controls, fasting activity was recorded. This dose of glucagon has previously been shown to inhibit both tonic and phasic intestinal motor activity. Endoluminal images and displacement were analyzed by means of a computer vision programme specifically developed for the evaluation of muscular activity (contractile and non-contractile patterns), intestinal contents, endoluminal motion and transit. Thirty-minute periods before, during and after glucagon infusion were analyzed and compared with equivalent periods in controls. No differences were found in the parameters measured during the baseline (pretest) periods when comparing glucagon and control experiments. During glucagon infusion, there was a significant reduction in contractile activity (0.2 +/- 0.1 vs 4.2 +/- 0.9 luminal closures per min, P < 0.05; 0.4 +/- 0.1 vs 3.4 +/- 1.2% of images with radial wrinkles, P < 0.05) and a significant reduction of endoluminal motion (82 +/- 9 vs 21 +/- 10% of static images, P < 0.05). Endoluminal image analysis, by means of computer vision and machine learning techniques, can reliably detect reduced intestinal muscle activity and motion.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes OR;MILAB;MV Approved no
  Call Number BCNPCL @ bcnpcl @ DMA2009 Serial 1251