Author Sergio Escalera; Alicia Fornes; O. Pujol; Petia Radeva; Gemma Sanchez; Josep Llados
  Title Blurred Shape Model for Binary and Grey-level Symbol Recognition Type Journal Article
  Year 2009 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 30 Issue 15 Pages 1424–1433  
  Abstract Many symbol recognition problems require the use of robust descriptors in order to obtain rich information from the data. However, the search for a good descriptor is still an open issue due to the high variability of symbol appearance. Rotation, partial occlusions, elastic deformations, intra-class and inter-class variations, or high variability among symbols due to different writing styles, are just a few problems. In this paper, we introduce a symbol shape description to deal with the changes in appearance that these types of symbols suffer. The shape of the symbol is aligned based on principal components to make the recognition invariant to rotation and reflection. Then, we present the Blurred Shape Model descriptor (BSM), where new features encode the probability of appearance of each pixel that outlines the symbol's shape. Moreover, we include the new descriptor in a system to deal with multi-class symbol categorization problems. Adaboost is used to train the binary classifiers, learning the BSM features that best split symbol classes. Then, the binary problems are embedded in an Error-Correcting Output Codes (ECOC) framework to deal with the multi-class case. The methodology is evaluated on different synthetic and real data sets. State-of-the-art descriptors and classifiers are compared, showing the robustness and better performance of the present scheme when classifying symbols with high variability of appearance.  
  Notes HuPBA; DAG; MILAB Approved no  
  Call Number BCNPCL @ bcnpcl @ EFP2009a Serial 1180  
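The abstract above centres on the BSM descriptor: the shape is overlaid with a coarse grid and each contour pixel votes into nearby cells with a weight that decays with distance, yielding a blurred spatial histogram. The sketch below illustrates that idea only; the grid size, weighting function, and normalisation are assumptions for illustration, not the parameters used in the paper, and the Adaboost/ECOC classification stage is omitted.

```python
# Minimal BSM-style descriptor sketch, assuming a binary shape image and an
# n x n grid; parameters are illustrative, not the paper's implementation.
import numpy as np

def bsm_descriptor(shape_img, grid=8):
    """Accumulate foreground pixels into grid cells, spreading each vote over
    the cell centres with weights that decrease with distance."""
    h, w = shape_img.shape
    cell_h, cell_w = h / grid, w / grid
    cy = (np.arange(grid) + 0.5) * cell_h      # cell centre rows
    cx = (np.arange(grid) + 0.5) * cell_w      # cell centre columns
    hist = np.zeros((grid, grid))
    ys, xs = np.nonzero(shape_img)             # contour / foreground pixels
    for y, x in zip(ys, xs):
        d = np.sqrt((cy - y)[:, None] ** 2 + (cx - x)[None, :] ** 2)
        wgt = 1.0 / (1.0 + d)                  # closer centres receive larger votes
        hist += wgt / wgt.sum()                # each pixel contributes a unit vote
    total = hist.sum()
    return (hist / total).ravel() if total > 0 else hist.ravel()
```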
 
Author I. Sorodoc; S. Pezzelle; A. Herbelot; Mariella Dimiccoli; R. Bernardi
  Title Learning quantification from images: A structured neural architecture Type Journal Article
  Year 2018 Publication Natural Language Engineering Abbreviated Journal NLE  
  Volume 24 Issue 3 Pages 363-392  
  Abstract Major advances have recently been made in merging language and vision representations. Most tasks considered so far have confined themselves to the processing of objects and lexicalised relations amongst objects (content words). We know, however, that humans (even pre-school children) can abstract over raw multimodal data to perform certain types of higher-level reasoning, expressed in natural language by function words. A case in point is given by their ability to learn quantifiers, i.e. expressions like few, some and all. From formal semantics and cognitive linguistics, we know that quantifiers are relations over sets which, as a simplification, we can see as proportions. For instance, in most fish are red, most encodes the proportion of fish which are red fish. In this paper, we study how well current neural network strategies model such relations. We propose a task where, given an image and a query expressed by an object–property pair, the system must return a quantifier expressing the proportion of the queried objects that have the queried property. Our contributions are twofold. First, we show that the best performance on this task involves coupling state-of-the-art attention mechanisms with a network architecture mirroring the logical structure assigned to quantifiers by classic linguistic formalisation. Second, we introduce a new balanced dataset of image scenarios associated with quantification queries, which we hope will foster further research in this area.  
  Notes MILAB; not mentioned Approved no  
  Call Number Admin @ si @ SPH2018 Serial 3021  
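As a rough illustration of the quantification task described above, the snippet below covers only the final step: once some upstream model has produced, per candidate object, a probability that it matches the queried object–property pair, the estimated proportion is mapped to a quantifier label. The label set and thresholds are assumptions for illustration; the paper's attention-based architecture is not reproduced here.

```python
# Map an estimated proportion to a quantifier label.
# QUANTIFIERS and the threshold values are assumed, not taken from the paper.
import numpy as np

QUANTIFIERS = ["no", "few", "some", "most", "all"]

def proportion_to_quantifier(match_probs):
    """match_probs: per-object probabilities that the queried property holds."""
    p = float(np.mean(match_probs))            # proportion of matching objects
    if p == 0.0:
        return "no"
    if p < 0.3:
        return "few"
    if p < 0.6:
        return "some"
    if p < 1.0:
        return "most"
    return "all"

print(proportion_to_quantifier([0.9, 0.8, 0.1, 0.95]))   # -> "most"
```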
 
Author Amir A. Amini; Yasheng Chen; Mohamed Elayyadi; Petia Radeva
  Title Tag Surface Reconstruction and Tracking of Myocardial Beads from SPAMM-MRI with Parametric B-Spline Surfaces Type Journal
  Year 2001 Publication IEEE Transactions on Medical Imaging Abbreviated Journal TMI  
  Volume 20 Issue 2 Pages 94–103  
  Keywords B-spline surfaces, cardiac motion, myocardial beads, myocardial infarction, tagged MRI.  
  Abstract Magnetic resonance imaging (MRI) is unique in its ability to noninvasively and selectively alter tissue magnetization, and create tag planes intersecting image slices. The resulting grid of signal voids allows for tracking deformations of tissues in otherwise homogeneous-signal myocardial regions. In this paper, we propose a specific spatial modulation of magnetization (SPAMM) imaging protocol together with efficient techniques for measurement of three-dimensional (3-D) motion of material points of the human heart (referred to as myocardial beads) from images collected with the SPAMM method. The techniques make use of tagged images in orthogonal views by explicitly reconstructing 3-D B-spline surface representation of tag planes (tag planes in two orthogonal orientations intersecting the short-axis (SA) image slices and tag planes in an orientation orthogonal to the short-axis tag planes intersecting long-axis (LA) image slices). The developed methods allow for viewing deformations of 3-D tag surfaces, spatial correspondence of long-axis and short-axis image slice and tag positions, as well as nonrigid movement of myocardial beads as a function of time.  
  Notes MILAB Approved no  
  Call Number BCNPCL @ bcnpcl @ ACE2001; IAM @ iam @ ACE2001 Serial 180  
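The method above represents each tag plane as a parametric B-spline surface fitted to tag points extracted from orthogonal image slices. A minimal stand-in for that fitting step, using SciPy's SmoothBivariateSpline on synthetic points, is sketched below; the coordinate ranges, noise level, and smoothing factor are illustrative assumptions, and the SPAMM tag-detection and bead-tracking stages are not shown.

```python
# Fit a smooth cubic B-spline surface z = f(x, y) to scattered 3-D tag points.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
# Synthetic tag-point cloud: a gently deformed plane plus noise (units assumed mm)
x = rng.uniform(0, 100, 400)
y = rng.uniform(0, 100, 400)
z = 20 + 0.05 * x + 2.0 * np.sin(y / 15.0) + rng.normal(0, 0.3, x.size)

surf = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=x.size)  # cubic B-spline fit

# Evaluate the reconstructed surface on a regular grid, e.g. to intersect it
# with image slices or to follow its deformation over time.
xi = np.linspace(0, 100, 50)
yi = np.linspace(0, 100, 50)
zi = surf(xi, yi)                    # 50 x 50 grid of surface heights
print(zi.shape)
```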
 
Author G. Zahnd; Simone Balocco; A. Serusclat; P. Moulin; M. Orkisz; D. Vray
  Title Progressive attenuation of the longitudinal kinetics in the common carotid artery: preliminary in vivo assessment Type Journal Article
  Year 2015 Publication Ultrasound in Medicine and Biology Abbreviated Journal UMB  
  Volume 41 Issue 1 Pages 339-345  
  Keywords Arterial stiffness; Atherosclerosis; Common carotid artery; Longitudinal kinetics; Motion tracking; Ultrasound imaging  
  Abstract Longitudinal kinetics (LOKI) of the arterial wall consists of the shearing motion of the intima-media complex over the adventitia layer in the direction parallel to the blood flow during the cardiac cycle. The aim of this study was to investigate the local variability of LOKI amplitude along the length of the vessel. By use of a previously validated motion-estimation framework, 35 in vivo longitudinal B-mode ultrasound cine loops of healthy common carotid arteries were analyzed. Results indicated that LOKI amplitude is progressively attenuated along the length of the artery, as it is larger in regions located on the proximal side of the image (i.e., toward the heart) and smaller in regions located on the distal side of the image (i.e., toward the head), with an average attenuation coefficient of -2.5 ± 2.0%/mm. Reported for the first time in this study, this phenomenon is likely to be of great importance in improving understanding of atherosclerosis mechanisms, and has the potential to be a novel index of arterial stiffness.  
  Notes MILAB Approved no  
  Call Number Admin @ si @ ZBS2014 Serial 2556  
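The key quantity reported above is the attenuation coefficient of LOKI amplitude along the vessel, about -2.5 ± 2.0 %/mm. A minimal sketch of that kind of measurement is given below, assuming a straight-line fit of amplitude versus longitudinal position normalised by the mean amplitude; the paper's exact normalisation may differ, and the numbers are synthetic.

```python
# Estimate an attenuation coefficient of LOKI amplitude along the vessel, in %/mm.
import numpy as np

def attenuation_coefficient(position_mm, amplitude_mm):
    """Negative values mean the motion decreases toward the distal side."""
    slope, _ = np.polyfit(position_mm, amplitude_mm, deg=1)  # mm of motion per mm of length
    return 100.0 * slope / np.mean(amplitude_mm)             # normalisation is an assumption

# Toy example: amplitude decaying from the proximal (0 mm) to the distal (30 mm) side
pos = np.linspace(0, 30, 16)
amp = 0.60 - 0.01 * pos                                      # synthetic amplitudes in mm
print(round(attenuation_coefficient(pos, amp), 2))           # about -2.2 %/mm
```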
 
Author Pedro Martins; Paulo Carvalho; Carlo Gatta
  Title Context-aware features and robust image representations Type Journal Article
  Year 2014 Publication Journal of Visual Communication and Image Representation Abbreviated Journal JVCIR  
  Volume 25 Issue 2 Pages 339-348  
  Abstract Local image features are often used to efficiently represent image content. The limited number of types of features that a local feature extractor responds to might be insufficient to provide a robust image representation. To overcome this limitation, we propose a context-aware feature extraction formulated under an information theoretic framework. The algorithm does not respond to a specific type of features; the idea is to retrieve complementary features which are relevant within the image context. We empirically validate the method by investigating the repeatability, the completeness, and the complementarity of context-aware features on standard benchmarks. In a comparison with strictly local features, we show that our context-aware features produce more robust image representations. Furthermore, we study the complementarity between strictly local features and context-aware ones to produce an even more robust representation.  
  Notes LAMP; 600.079; MILAB Approved no  
  Call Number Admin @ si @ MCG2014 Serial 2467  
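The abstract above describes context-aware feature extraction under an information-theoretic framework, aimed at retrieving complementary features rather than responding to a single feature type. The toy sketch below only gestures at that idea: it scores candidate regions by local intensity entropy and greedily keeps high-entropy regions that are spatially spread out, as a crude proxy for complementarity. It is not the authors' algorithm, and all parameters are assumptions.

```python
# Greedy selection of high-entropy, spatially spread image regions
# (an illustrative proxy for "complementary" context-aware features).
import numpy as np

def patch_entropy(patch, bins=32):
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_regions(image, patch=16, stride=8, k=20, min_dist=24):
    h, w = image.shape
    candidates = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            candidates.append((patch_entropy(image[y:y+patch, x:x+patch]), y, x))
    candidates.sort(reverse=True)              # highest entropy first
    selected = []
    for score, y, x in candidates:
        if all((y - sy) ** 2 + (x - sx) ** 2 >= min_dist ** 2 for sy, sx in selected):
            selected.append((y, x))
        if len(selected) == k:
            break
    return selected

img = np.random.default_rng(1).random((128, 128))   # stand-in image with values in [0, 1]
print(len(select_regions(img)))
```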