Author Rada Deeb; Damien Muselet; Mathieu Hebert; Alain Tremeau; Joost Van de Weijer
  Title 3D color charts for camera spectral sensitivity estimation Type Conference Article
  Year 2017 Publication 28th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Estimating spectral data such as camera sensor responses or illuminant spectral power distributions from raw RGB camera outputs is crucial in many computer vision applications. Usually, 2D color charts with various patches of known spectral reflectance are used as references for this purpose. Deducing n-D spectral data (n ≫ 3) from 3D RGB inputs is an ill-posed problem that requires a high number of inputs. Unfortunately, most natural color surfaces have spectral reflectances that are well described by low-dimensional linear models, i.e., each spectral reflectance can be approximated by a weighted sum of the others. It has been shown that adding patches to color charts does not help in practice, because the information they add is redundant with that provided by the first set of patches. In this paper, we propose to use spectral data of higher dimensionality by using 3D color charts that create inter-reflections between their surfaces. These inter-reflections multiply natural spectral curves together and so produce non-linear spectral curves. We show that such data provide enough information for accurate spectral data estimation.
 
  Address London; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes LAMP; 600.109; 600.120 Approved no  
  Call Number Admin @ si @ DMH2017b Serial 3037  
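A minimal Python sketch of the estimation problem this abstract describes: per channel, the response to patch k is c_k = sum over λ of r_k(λ) e(λ) s(λ), and inter-reflections add product curves r_i(λ) r_j(λ) to the reflectance set. Everything below (flat illuminant, Gaussian toy sensitivity, ridge regularizer) is an illustrative assumption, not the paper's data or method:

import numpy as np

wl = np.arange(400, 701, 10)                      # wavelength samples (nm)
n = wl.size
rng = np.random.default_rng(0)

R_flat = rng.uniform(0.05, 0.95, size=(24, n))    # flat-chart patch reflectances
e = np.ones(n)                                    # toy flat illuminant

# Inter-reflections between facing 3D-chart surfaces multiply reflectances,
# adding rows of the form r_i(λ) * r_j(λ): non-linear curves a flat chart of
# (low-dimensional) natural reflectances cannot provide. Note 24 flat patches
# alone cannot span the 31 unknowns; the product rows can.
pairs = [(i, j) for i in range(24) for j in range(i + 1, 24)]
R_inter = np.array([R_flat[i] * R_flat[j] for i, j in pairs[:100]])
R = np.vstack([R_flat, R_inter])

s_true = np.exp(-0.5 * ((wl - 550) / 40.0) ** 2)  # toy "green" sensitivity
c = R @ (e * s_true)                              # one channel's raw responses

A = R * e                                         # rows r_k(λ) e(λ)
lam = 1e-3                                        # small Tikhonov regularizer
s_est = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ c)
print(np.abs(s_est - s_true).max())               # recovery error (small for noiseless toy data)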
 

 
Author R.A. Bendezu; E. Barba; E. Burri; D. Cisternas; Carolina Malagelada; Santiago Segui; Anna Accarino; S. Quiroga; E. Monclus; I. Navazo
  Title Intestinal gas content and distribution in health and in patients with functional gut symptoms Type Journal Article
  Year 2015 Publication Neurogastroenterology & Motility Abbreviated Journal NEUMOT  
  Volume 27 Issue 9 Pages 1249-1257  
  Keywords  
  Abstract BACKGROUND:
The precise relation of intestinal gas to symptoms, particularly abdominal bloating and distension, remains incompletely elucidated. Our aim was to define normal values of intestinal gas volume and distribution and to identify abnormalities in relation to functional-type symptoms.
METHODS:
Abdominal computed tomography scans were evaluated in healthy subjects (n = 37) and in patients in three conditions: basal (when they were feeling well; n = 88), during an episode of abdominal distension (n = 82) and after a challenge diet (n = 24). Intestinal gas content and distribution were measured by an original analysis program. Identification of patients outside the normal range was performed by machine learning techniques (one-class classifier). Results are expressed as median (IQR) or mean ± SE, as appropriate.
KEY RESULTS:
In healthy subjects the gut contained 95 (71, 141) mL of gas distributed along the entire lumen. No differences were detected between patients studied under asymptomatic basal conditions and healthy subjects. However, either during a spontaneous bloating episode or once challenged with a flatulogenic diet, luminal gas was found to be increased and/or abnormally distributed in about one-fourth of the patients. The patients detected outside the normal range by the classifier exhibited a significantly greater number of abnormal features than those within the normal range (3.7 ± 0.4 vs 0.4 ± 0.1; p < 0.001).
CONCLUSIONS & INFERENCES:
The analysis of a large cohort of subjects using original techniques provides unique and heretofore unavailable information on the volume and distribution of intestinal gas in normal conditions and in relation to functional gastrointestinal symptoms.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number Admin @ si @ BBB2015 Serial 2667  
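A hedged sketch of the one-class classification step mentioned under METHODS: fit on healthy-subject features only, then flag patients falling outside the learned envelope. It assumes scikit-learn's OneClassSVM as the classifier (the abstract does not name one) and uses synthetic stand-in features, not the study's CT measurements:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# toy features: [total gas volume (mL), a gas-distribution index]
healthy = rng.normal([95.0, 0.5], [25.0, 0.10], size=(37, 2))
patients = rng.normal([140.0, 0.8], [40.0, 0.20], size=(82, 2))

scaler = StandardScaler().fit(healthy)
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
clf.fit(scaler.transform(healthy))                # learn the normal range only

outside = clf.predict(scaler.transform(patients)) == -1   # -1 = outside normal range
print(f"{outside.mean():.0%} of patients flagged outside the normal range")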
 

 
Author R. Xandri
  Title Un mètode de vectorització basat en l'aprimament [A vectorization method based on thinning] Type Report
  Year 2002 Publication CVC Technical Report # 62 Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address CVC (UAB)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number Admin @ si @ Xan2002 Serial 331  
 

 
Author R. Valenti; Theo Gevers
  Title Combining Head Pose and Eye Location Information for Gaze Estimation Type Journal Article
  Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
  Volume 21 Issue 2 Pages 802-815  
  Keywords  
  Abstract [Impact factor 2010: 2.92; 2011/12 (?): 3.32]
Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ VaG 2012b Serial 1851  
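A minimal sketch of how the two cues can be combined: the head tracker is assumed to yield a rotation matrix, the eye locator a pupil offset in pixels within the eye region, and the gaze ray is the eye-in-head direction rotated by the head pose. The small-angle eyeball model and the radius value are illustrative assumptions, not the paper's formulation:

import numpy as np

def gaze_direction(R_head, pupil_offset_px, eyeball_radius_px=12.0):
    # Eye-in-head ray from the 2D pupil displacement (small-angle model:
    # tangent of the eye rotation ~ offset / eyeball radius).
    dx, dy = pupil_offset_px
    eye_dir = np.array([dx, dy, eyeball_radius_px], dtype=float)
    eye_dir /= np.linalg.norm(eye_dir)
    # Rotate into camera coordinates by the head pose; with a centered
    # pupil this reduces to the head's facing direction.
    return R_head @ eye_dir

print(gaze_direction(np.eye(3), (3.0, -1.0)))     # frontal head, gaze right/up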
 

 
Author R. Valenti; N. Sebe; Theo Gevers
  Title What are you looking at? Improving Visual Gaze Estimation by Saliency Type Journal Article
  Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV  
  Volume 98 Issue 3 Pages 324-334  
  Keywords  
  Abstract [Impact factor 2010: 5.15; 2011/12 (?): 5.36]
In this paper we present a novel mechanism to obtain enhanced gaze estimation for subjects looking at a scene or an image. The system makes use of prior knowledge about the scene (e.g., an image on a computer screen) to define a probability map of where the subject is gazing, in order to find the most probable location. The proposed system corrects fixations that are erroneously estimated by the gaze estimation device by employing a saliency framework to adjust the resulting gaze point vector. The system is tested on three scenarios: using eye tracking data, enhancing a low-accuracy webcam-based eye tracker, and using a head pose tracker. The correlation between subjects in the commercial eye tracking data is improved by an average of 13.91%. The correlation on the low-accuracy eye gaze tracker is improved by 59.85%, and for the head pose tracker we obtain an improvement of 10.23%. These results show the potential of the system as a way to enhance and self-calibrate different visual gaze estimation systems.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0920-5691 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ VSG2012 Serial 1848  
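A minimal sketch of the correction idea: snap a noisy fixation estimate to the most salient point in a window around it. The fixed radius and hard argmax are illustrative simplifications of the paper's probabilistic framework:

import numpy as np

def correct_fixation(saliency, gaze_xy, radius=40):
    # saliency: per-pixel map (H, W); gaze_xy: estimated fixation (x, y)
    x, y = (int(round(v)) for v in gaze_xy)
    h, w = saliency.shape
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    win = saliency[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(win), win.shape)
    return (x0 + dx, y0 + dy)                     # corrected gaze point

sal = np.zeros((480, 640)); sal[200, 310] = 1.0   # one salient spot
print(correct_fixation(sal, (300, 210)))          # -> (310, 200)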
 

 
Author R. Valenti; Theo Gevers
  Title Accurate Eye Center Location through Invariant Isocentric Patterns Type Journal Article
  Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 34 Issue 9 Pages 1785-1798  
  Keywords  
  Abstract [Impact factor 2010: 5.308; 2011/12 (?): 5.96]
Locating the center of the eyes allows valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance have been proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye-center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale-space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ VaG 2012a Serial 1849  
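A compact numpy sketch of isophote-based center voting: each pixel votes at the center of its osculating isophote circle, displaced by d = -(Lx, Ly)(Lx^2 + Ly^2) / (Ly^2 Lxx - 2 Lx Lxy Ly + Lx^2 Lyy), and the accumulator peak is taken as the eye center. Filter scales are guesses; the full method additionally gates votes by curvature sign (dark pupil centers) and operates over a scale-space pyramid:

import numpy as np
from scipy import ndimage

def isocenter_vote(gray, sigma=2.0):
    L = ndimage.gaussian_filter(gray.astype(float), sigma)
    Lx = ndimage.sobel(L, axis=1)
    Ly = ndimage.sobel(L, axis=0)
    Lxx = ndimage.sobel(Lx, axis=1)
    Lxy = ndimage.sobel(Lx, axis=0)
    Lyy = ndimage.sobel(Ly, axis=0)
    denom = Ly**2 * Lxx - 2 * Lx * Lxy * Ly + Lx**2 * Lyy
    denom[np.abs(denom) < 1e-9] = np.nan         # flat regions cast no vote
    k = (Lx**2 + Ly**2) / denom
    dx, dy = -Lx * k, -Ly * k                    # displacement to isophote center
    ys, xs = np.mgrid[0:L.shape[0], 0:L.shape[1]]
    cx, cy = np.rint(xs + dx), np.rint(ys + dy)
    ok = (np.isfinite(cx) & np.isfinite(cy) &
          (cx >= 0) & (cx < L.shape[1]) & (cy >= 0) & (cy < L.shape[0]))
    acc = np.zeros_like(L)
    w = np.sqrt(Lxx**2 + 2 * Lxy**2 + Lyy**2)    # curvedness as vote weight
    np.add.at(acc, (cy[ok].astype(int), cx[ok].astype(int)), w[ok])
    acc = ndimage.gaussian_filter(acc, sigma)
    return np.unravel_index(np.argmax(acc), acc.shape)   # (row, col) estimate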
 

 
Author R. Herault; Franck Davoine; Fadi Dornaika; Y. Grandvalet
  Title Simultaneous and robust face and facial action tracking Type Miscellaneous
  Year 2006 Publication 15ème Congrès Francophone AFRIF-AFIA de Reconnaissance des Formes et Intelligence Artificielle (RFIA'06) Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Tours (France)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number Admin @ si @ HDD2006 Serial 735  
 

 
Author R. de Nijs; Sebastian Ramos; Gemma Roig; Xavier Boix; Luc Van Gool; K. Kühnlenz
  Title On-line Semantic Perception Using Uncertainty Type Conference Article
  Year 2012 Publication International Conference on Intelligent Robots and Systems Abbreviated Journal IROS  
  Volume Issue Pages 4185-4191  
  Keywords Semantic Segmentation  
  Abstract Visual perception capabilities are still highly unreliable in unconstrained settings, and solutions might not be accurate in all regions of an image. Awareness of the uncertainty of perception is a fundamental requirement for proper high-level decision making in a robotic system. Yet, the uncertainty measure is often sacrificed to account for dependencies between object/region classifiers. This is the case of Conditional Random Fields (CRFs), whose success stems from their ability to infer the most likely world configuration, but which do not directly allow estimating the uncertainty of the solution. In this paper, we consider the setting of assigning semantic labels to the pixels of an image sequence. Instead of using a CRF, we employ a Perturb-and-MAP Random Field, a recently introduced probabilistic model that allows performing fast approximate sampling from its probability density function. This makes it possible to effectively compute the uncertainty of the solution, indicating the reliability of the most likely labeling in each region of the image. We report results on the CamVid dataset, a standard benchmark for semantic labeling of urban image sequences. In our experiments, we show the benefits of exploiting the uncertainty by putting more computational effort on the regions of the image that are less reliable, and using more efficient techniques for other regions, showing little decrease of performance.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IROS  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ NRR2012 Serial 2378  
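A sketch of Perturb-and-MAP sampling reduced to its core: perturb the unary scores with Gumbel noise, re-solve MAP, and read per-pixel uncertainty off the label frequencies. The paper performs MAP over a full random field with pairwise terms; map_solver below is a hypothetical stand-in, demonstrated here with the independent-pixel (argmax) case:

import numpy as np

def pam_uncertainty(unaries, map_solver, n_samples=20, seed=0):
    # unaries: (H, W, K) label scores, higher = better; map_solver returns
    # an (H, W) labeling for the perturbed scores.
    rng = np.random.default_rng(seed)
    H, W, K = unaries.shape
    counts = np.zeros((H, W, K))
    for _ in range(n_samples):
        gumbel = -np.log(-np.log(rng.uniform(size=unaries.shape)))
        labels = map_solver(unaries + gumbel)     # one approximate sample
        counts[np.arange(H)[:, None], np.arange(W)[None, :], labels] += 1
    p = counts / n_samples                        # per-pixel label marginals
    entropy = -(p * np.log(np.clip(p, 1e-12, None))).sum(-1)
    return p, entropy                             # high entropy = unreliable region

# With no pairwise terms, MAP is a per-pixel argmax (the Gumbel-max trick):
p, ent = pam_uncertainty(np.random.rand(4, 4, 3), lambda u: u.argmax(-1))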
 

 
Author R. Clariso; David Masip; A. Rius
  Title Student projects empowering mobile learning in higher education Type Journal
  Year 2014 Publication Revista de Universidad y Sociedad del Conocimiento Abbreviated Journal RUSC  
  Volume 11 Issue Pages 192-207  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1698-580X ISBN Medium  
  Area Expedition Conference  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ CMR2014 Serial 2619  
 

 
Author R. Bertrand; P. Gomez-Krämer; Oriol Ramos Terrades; P. Franco; Jean-Marc Ogier
  Title A System Based On Intrinsic Features for Fraudulent Document Detection Type Conference Article
  Year 2013 Publication 12th International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume Issue Pages 106-110  
  Keywords paper document; document analysis; fraudulent document; forgery; fake  
  Abstract Paper documents still represent a large share of the information supports used nowadays and may contain critical data. Even though official documents are secured with techniques such as printed patterns or artwork, paper documents suffer from a lack of security. However, the high availability of cheap scanning and printing hardware allows non-experts to easily create fake documents. As the use of a watermarking system added during the document production step is hardly possible, solutions have to be proposed to distinguish a genuine document from a forged one. In this paper, we present an automatic forgery detection method based on a document's intrinsic features at the character level. This method is based, on the one hand, on outlier character detection in a discriminant feature space and, on the other hand, on the detection of strictly similar characters. Therefore, a feature set is computed for all characters; then, based on a distance between characters of the same class, each character is classified as genuine or potentially forged.
 
  Address Washington; USA; August 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1520-5363 ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; 600.061 Approved no  
  Call Number Admin @ si @ BGR2013a Serial 2332  
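A sketch of the two intrinsic-feature checks the abstract pairs together, for one character class: instances far from the class statistics (possible retouching) and instance pairs that are suspiciously identical (possible copy-move). Features, distances, and thresholds are illustrative placeholders:

import numpy as np

def flag_characters(features, z_thresh=3.5, dup_thresh=1e-3):
    # features: (n, d) descriptors for all instances of one character class
    mu, sd = features.mean(0), features.std(0) + 1e-12
    z = np.abs((features - mu) / sd).max(1)       # worst-coordinate z-score
    outliers = np.where(z > z_thresh)[0]          # possible retouching
    d = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    iu = np.triu_indices(len(features), k=1)
    copies = [(i, j) for i, j in zip(*iu) if d[i, j] < dup_thresh]   # near-identical glyphs
    return outliers, copies

feats = np.random.default_rng(2).normal(size=(40, 8))
feats[7] += 5.0                                   # a retouched-looking outlier
feats[12] = feats[3]                              # a copy-moved glyph
print(flag_characters(feats))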
 

 
Author R. Bertrand; Oriol Ramos Terrades; P. Gomez-Krämer; P. Franco; Jean-Marc Ogier
  Title A Conditional Random Field model for font forgery detection Type Conference Article
  Year 2015 Publication 13th International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume Issue Pages 576-580  
  Keywords  
  Abstract Nowadays, document forgery is becoming a real issue. A large amount of documents that contain critical information as payment slips, invoices or contracts, are constantly subject to fraudster manipulation because of the lack of security regarding this kind of document. Previously, a system to detect fraudulent documents based on its intrinsic features has been presented. It was especially designed to retrieve copy-move forgery and imperfection due to fraudster manipulation. However, when a set of characters is not present in the original document, copy-move forgery is not feasible. Hence, the fraudster will use a text toolbox to add or modify information in the document by imitating the font or he will cut and paste characters from another document where the font properties are similar. This often results in font type errors. Thus, a clue to detect document forgery consists of finding characters, words or sentences in a document with font properties different from their surroundings. To this end, we present in this paper an automatic forgery detection method based on document font features. Using the Conditional Random Field a measurement of probability that a character belongs to a specific font is made by comparing the character font features to a knowledge database. Then, the character is classified as a genuine or a fake one by comparing its probability to belong to a certain font type with those of the neighboring characters.  
  Address Nancy; France; August 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; 600.077 Approved no  
  Call Number Admin @ si @ BRG2015 Serial 2725  
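A simplified, independent-decision stand-in for the CRF described above: score each character against a font knowledge base, softmax the scores into a posterior over fonts, and flag characters whose best font contradicts a unanimous neighbourhood. A real CRF couples these decisions jointly rather than post hoc:

import numpy as np

def font_posterior(char_feat, font_db):
    # font_db: dict font_name -> (m, d) reference feature vectors
    score = {f: -np.linalg.norm(refs - char_feat, axis=1).min()
             for f, refs in font_db.items()}
    s = np.array(list(score.values()))
    p = np.exp(s - s.max()); p /= p.sum()         # softmax over fonts
    return dict(zip(score, p))

def flag_forgeries(line_feats, font_db, window=2):
    best = []
    for feat in line_feats:
        p = font_posterior(feat, font_db)
        best.append(max(p, key=p.get))            # most probable font per character
    flags = []
    for i, b in enumerate(best):
        nbrs = best[max(0, i - window):i] + best[i + 1:i + 1 + window]
        # fake if every neighbour agrees on a font that differs from ours
        flags.append(bool(nbrs) and len(set(nbrs)) == 1 and b not in nbrs)
    return flags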
 

 
Author Quentin Angermann; Jorge Bernal; Cristina Sanchez Montes; Maroua Hammami; Gloria Fernandez Esparrach; Xavier Dray; Olivier Romain; F. Javier Sanchez; Aymeric Histace
  Title Real-Time Polyp Detection in Colonoscopy Videos: A Preliminary Study For Adapting Still Frame-based Methodology To Video Sequences Analysis Type Conference Article
  Year 2017 Publication 31st International Congress and Exhibition on Computer Assisted Radiology and Surgery Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Barcelona; Spain; June 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CARS  
  Notes MV; no mention Approved no  
  Call Number Admin @ si @ ABS2017 Serial 2947  
 

 
Author Quentin Angermann; Jorge Bernal; Cristina Sanchez Montes; Maroua Hammami; Gloria Fernandez Esparrach; Xavier Dray; Olivier Romain; F. Javier Sanchez; Aymeric Histace
  Title Clinical Usability Quantification Of a Real-Time Polyp Detection Method In Videocolonoscopy Type Conference Article
  Year 2017 Publication 25th United European Gastroenterology Week Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Barcelona; October 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ESGE  
  Notes MV; no mention Approved no  
  Call Number Admin @ si @ ABS2017c Serial 2978  
 

 
Author Quentin Angermann; Jorge Bernal; Cristina Sanchez Montes; Gloria Fernandez Esparrach; Xavier Dray; Olivier Romain; F. Javier Sanchez; Aymeric Histace
  Title Towards Real-Time Polyp Detection in Colonoscopy Videos: Adapting Still Frame-Based Methodologies for Video Sequences Analysis Type Conference Article
  Year 2017 Publication 4th International Workshop on Computer Assisted and Robotic Endoscopy Abbreviated Journal  
  Volume Issue Pages 29-41  
  Keywords Polyp detection; colonoscopy; real time; spatio-temporal coherence  
  Abstract Colorectal cancer is the second leading cause of cancer death in the United States: detection of precursor lesions (polyps) is key for patient survival. Though colonoscopy is the gold-standard screening tool, some polyps are still missed. Several computational systems have been proposed, but none of them is used in the clinical room, mainly due to computational constraints. Besides, most of them are built over still-frame databases, which decreases their performance on video analysis due to the lack of output stability and failure to cope with the associated variability in image quality and polyp appearance. We propose a strategy to adapt these methods to video analysis by adding a spatio-temporal stability module and studying a combination of features to capture polyp appearance variability. We validate our strategy, incorporated in a real-time detection method, on a public video database. The resulting method detects all polyps under real-time constraints, with its performance increased by our adaptation strategy.
 
  Address Quebec; Canada; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CARE  
  Notes MV; 600.096; 600.075 Approved no  
  Call Number Admin @ si @ ABS2017b Serial 2977  
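A sketch of a spatio-temporal stability module of the kind the abstract adds on top of a per-frame detector: a candidate is reported only after persisting near the same location for k consecutive frames, and the reported position is the smoothed track. The persistence and distance thresholds are illustrative, not the paper's:

import numpy as np

class TemporalStabilizer:
    def __init__(self, k=5, max_dist=30.0):
        self.k, self.max_dist = k, max_dist
        self.track = []                           # recent consistent detections

    def update(self, detection):
        # detection: (x, y) polyp-candidate center for this frame, or None
        if detection is None:
            self.track = []
            return None
        if self.track and np.hypot(*np.subtract(detection, self.track[-1])) > self.max_dist:
            self.track = [detection]              # jumped too far: start a new track
        else:
            self.track.append(detection)
        if len(self.track) >= self.k:             # stable: report smoothed center
            return tuple(np.mean(self.track[-self.k:], axis=0))
        return None

stab = TemporalStabilizer()
for frame_det in [(100, 100), (102, 99), (101, 101), (103, 100), (102, 102)]:
    out = stab.update(frame_det)                  # None until 5 coherent frames
print(out)                                        # ~ (101.6, 100.4)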
 

 
Author Quan-sen Sun; Zhong Jin; Pheng-ann Heng; De-shen Xia
  Title A novel feature fusion method based on partial least squares regression Type Book Chapter
  Year 2005 Publication Pattern Recognition and Data Mining, Lecture Notes in Computer Science, 3686: 268–277 Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Bath (United Kingdom)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number Admin @ si @ SJH2005 Serial 626  