Author: Fernando Vilariño; Panagiota Spyridonos; Jordi Vitria; Fernando Azpiroz; Petia Radeva
Title: Cascade analysis for intestinal contraction detection
Type: Conference Article
Year: 2006
Publication: 20th International Congress and Exhibition on Computer Assisted Radiology and Surgery
Pages: 9-10
Keywords: intestine video analysis, anisotropic features, support vector machine, cascade of classifiers
Abstract: In this work, we address the study of intestinal contractions with a novel approach based on a machine learning framework for processing data from wireless capsule video endoscopy. Wireless endoscopy provides a unique way to visualize intestinal motility by producing long videos of intestinal dynamics. We argue that, in order to analyze the huge amount of wireless endoscopy data and to define robust methods for contraction detection, the approach should be based on sophisticated machine learning techniques. In particular, we propose a cascade of classifiers that removes different physiological phenomena and extracts the motility pattern of the small intestine. Our results show high specificity and sensitivity rates, which highlight the efficiency of the selected approach and support the feasibility of the proposed methodology for the automatic detection and analysis of intestinal contractions.
Address: Osaka (Japan)
Area: 800
Conference: CARS
Notes: MV;OR;MILAB;SIAI
Approved: no
Call Number: BCNPCL @ bcnpcl @ VSV2006a; IAM @ iam @ VSV2006h
Serial: 726
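A minimal sketch of the cascade idea described in the abstract above, assuming precomputed per-frame feature vectors X, binary contraction labels y, and a hypothetical per-stage rejection threshold; this is illustrative only, not the authors' implementation.

```python
# Illustrative cascade-of-classifiers sketch (not the paper's code).
# Assumes X: (n_samples, n_features) array of per-frame features, y: 0/1 labels.
import numpy as np
from sklearn.svm import SVC

def train_cascade(X, y, n_stages=2, reject_below=0.1):
    """Train a cascade of SVMs; frames rejected by a stage never reach later stages."""
    stages, keep = [], np.ones(len(y), dtype=bool)
    for _ in range(n_stages):
        if keep.sum() == 0:
            break
        clf = SVC(kernel="rbf", probability=True).fit(X[keep], y[keep])
        stages.append(clf)
        # Drop samples the stage confidently rejects (hypothetical threshold).
        scores = clf.predict_proba(X[keep])[:, 1]
        idx = np.flatnonzero(keep)
        keep[idx[scores <= reject_below]] = False
    return stages

def predict_cascade(stages, X, reject_below=0.1):
    """A frame is labelled a contraction only if every stage accepts it."""
    accepted = np.ones(len(X), dtype=bool)
    for clf in stages:
        accepted &= clf.predict_proba(X)[:, 1] > reject_below
    return accepted
```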
 

 
Author: Fernando Vilariño; Panagiota Spyridonos; Jordi Vitria; Fernando Azpiroz; Petia Radeva
Title: Automatic Detection of Intestinal Juices in Wireless Capsule Video Endoscopy
Type: Conference Article
Year: 2006
Publication: 18th International Conference on Pattern Recognition
Volume: 4
Pages: 719-722
Keywords: Clinical diagnosis, Endoscopes, Fluids and secretions, Gabor filters, Hospitals, Image sequence analysis, Intestines, Lighting, Shape, Visualization
Abstract: Wireless capsule video endoscopy is a novel and challenging clinical technique whose major reported drawback is the large amount of time needed for video visualization. In this paper, we propose a method for rejecting the parts of the video that are not valid for analysis by automatically detecting intestinal juices. We apply Gabor filters to characterize the bubble-like appearance of intestinal juices in fasting patients. Our method achieves a significant reduction in visualization time, with no relevant loss of valid frames. The proposed approach is easily extensible to other image analysis scenarios where the described bubble pattern can be found.
Address: Hong Kong
ISSN: 1051-4651
ISBN: 0-7695-2521-0
Area: 800
Conference: ICPR
Notes: MV;OR;MILAB;SIAI
Approved: no
Call Number: BCNPCL @ bcnpcl @ VSV2006b; IAM @ iam @ VSV2006g
Serial: 727
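A minimal sketch of the Gabor-based bubble characterization described above, using scikit-image; the filter frequencies, orientations and the rejection threshold are assumptions, not the paper's parameters.

```python
# Illustrative Gabor filter-bank features for bubble-like texture (not the paper's exact setup).
import numpy as np
from skimage.filters import gabor

def gabor_energy(gray, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Mean response energy of a small Gabor bank applied to a grayscale frame."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(gray, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.mean(real.astype(float) ** 2 + imag.astype(float) ** 2))
    return np.asarray(feats)

def looks_like_intestinal_juices(gray, threshold=5e-3):
    """Hypothetical rejection rule: frames with high aggregate Gabor energy are skipped."""
    return gabor_energy(gray).mean() > threshold
```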
 

 
Author: Fernando Vilariño; Panagiota Spyridonos; Petia Radeva; Jordi Vitria; Fernando Azpiroz; Juan Malagelada
Title: Method for automatic classification of in vivo images
Type: Patent
Year: 2010
Publication: US 2010/0046816
Abstract: A method for automatically detecting a post-duodenal boundary in an image stream of the gastrointestinal (GI) tract. The image stream is sampled to obtain a reduced set of images for processing. The reduced set of images is filtered to remove non-valid frames or non-valid portions of frames, thereby generating a filtered set of valid images. A polar representation of the valid images is generated. Textural features of the polar representation are processed to detect the post-duodenal boundary of the GI tract.
Area: 800
Notes: MV;OR;MILAB;SIAI
Approved: no
Call Number: IAM @ iam @ VSR2010
Serial: 1702
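The polar-representation step in the abstract above can be illustrated with OpenCV's warpPolar (available in recent OpenCV releases); the output size and the use of the image centre are assumptions for the example, and this is not the patented pipeline.

```python
# Illustrative polar unwrapping of a capsule-endoscopy frame (sketch only).
import cv2

def to_polar(frame, out_size=(256, 256)):
    """Unwrap the frame around its centre; texture can then be analysed along
    the radial and angular axes of the result."""
    h, w = frame.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = 0.5 * min(h, w)
    return cv2.warpPolar(frame, out_size, center, max_radius, cv2.WARP_POLAR_LINEAR)
```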
 

 
Author: Fernando Vilariño; Panagiota Spyridonos; Petia Radeva; Jordi Vitria; Fernando Azpiroz; Juan Malagelada
Title: Device, system and method for measurement and analysis of contractile activity
Type: Patent
Year: 2009
Publication: US 2009/0202117 A1
Abstract: A method and system for determining an intestinal dysfunction condition are provided by classifying and analyzing image frames captured in vivo. The method and system also relate to the detection of contractile activity in intestinal tracts, to the automatic detection of video image frames taken in the gastrointestinal tract that include contractile activity, and more particularly to the measurement and analysis of contractile activity of the GI tract based on the image intensity of in vivo image data.
Address: Pearl Cohen Zedek Latzer
Area: 800
Notes: MV;OR;MILAB;SIAI
Approved: no
Call Number: IAM @ iam @ VSR2009
Serial: 1704
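A minimal sketch of the kind of per-frame intensity signal the patent abstract refers to; the smoothing window is arbitrary and the actual device, system and analysis are considerably more involved.

```python
# Illustrative intensity-based signal for contractile-activity analysis (sketch only).
import numpy as np

def intensity_signal(frames):
    """frames: iterable of grayscale images (2-D arrays); returns one value per frame."""
    return np.array([float(f.mean()) for f in frames])

def smoothed(signal, window=5):
    """Simple moving average to suppress frame-to-frame noise (arbitrary window)."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")
```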
 

 
Author: Fernando Vilariño; Petia Radeva
Title: Patch-Optimized Discriminant Active Contours for Medical Image Segmentation
Type: Conference Article
Year: 2002
Publication: Iberoamerican Conference on Artificial Intelligence
Address: Sevilla, Spain
Publisher: Springer Verlag
Conference: IBERAMIA
Notes: MV;MILAB;SIAI
Approved: no
Call Number: BCNPCL @ bcnpcl @ ViR2002; IAM @ iam @ VRa2003
Serial: 320
 

 
Author: Fernando Vilariño; Petia Radeva
Title: Cardiac Segmentation with Discriminant Active Contours
Type: Book Chapter
Year: 2003
Pages: 211-217
Abstract: Dynamic tracking of the moving heart is a relevant target in medical imaging and can help in analyzing heart dynamics for the study of several cardiac diseases. To this end, a prior segmentation problem for such structures is stated, based on certain relevant features (such as edges, intensity levels, or textures). Classical active models have been used, but they fail when overlapping structures or poorly defined contours are present. Automatic feature learning systems can be a powerful tool here, and discriminant active contours give optimal results in this kind of problem. They are a type of deformable model that converges to an optimal object segmentation by dynamically adapting to the object contour. The feature space is designed from a filter bank in order to guarantee the search for, and learning of, the set of relevant features for optimal classification on each part of the object. Tracking of the target evolution is obtained through the whole set of images, using information from the current and previous stages. Feedback systems are implemented to guarantee a minimal, well-separable classification set in each segmentation step. Our implementation has been tested on several Magnetic Resonance series, with improved segmentation results compared to previous methods.
Address: Palma de Mallorca
Publisher: IOS Press
Conference: CCIA
Notes: MV;MILAB;SIAI
Approved: no
Call Number: BCNPCL @ bcnpcl @ ViR2003; IAM @ iam @ VRa2003
Serial: 426
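A minimal sketch of the discriminative ingredient outlined in the abstract above: a classifier learned from filter-bank responses scores candidate positions near each contour point, and the contour moves greedily to the best-scoring candidates. The feature extraction (feature_fn) is left as a placeholder, and this is not the chapter's implementation.

```python
# Illustrative greedy evolution step for a discriminant active contour (sketch only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_boundary_discriminant(on_boundary_feats, off_boundary_feats):
    """Learn a linear discriminant separating boundary from non-boundary filter-bank features."""
    X = np.vstack([on_boundary_feats, off_boundary_feats])
    y = np.r_[np.ones(len(on_boundary_feats)), np.zeros(len(off_boundary_feats))]
    return LinearDiscriminantAnalysis().fit(X, y)

def evolve_contour(points, offsets, feature_fn, lda):
    """Move each contour point to the candidate offset with the highest boundary score."""
    new_points = []
    for p in points:
        candidates = [p + d for d in offsets]
        scores = lda.decision_function(np.array([feature_fn(c) for c in candidates]))
        new_points.append(candidates[int(np.argmax(scores))])
    return np.array(new_points)
```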
 

 
Author: Fernando Vilariño; Stephan Ameling; Gerard Lacey; Stephen Patchett; Hugh Mulcahy
Title: Eye Tracking Search Patterns in Expert and Trainee Colonoscopists: A Novel Method of Assessing Endoscopic Competency?
Type: Journal Article
Year: 2009
Publication: Gastrointestinal Endoscopy
Abbreviated Journal: GI
Volume: 69
Issue: 5
Pages: 370
Area: 800
Notes: MV;SIAI
Approved: no
Call Number: fernando @ fernando @
Serial: 2420
 

 
Author: Ferran Diego
Title: Alignment of Videos Recorded from Moving Vehicles
Type: Report
Year: 2007
Publication: CVC Technical Report #111
Address: CVC (UAB)
Notes: ADAS
Approved: no
Call Number: Admin @ si @ DiS2007
Serial: 825
 

 
Author: Ferran Diego
Title: Probabilistic Alignment of Video Sequences Recorded by Moving Cameras
Type: Book Whole
Year: 2011
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: Video alignment consists of integrating multiple video sequences recorded independently into a single video sequence. This means registering them both in time (frame synchronization) and in space (image registration) so that the two video sequences can be fused or compared pixel-wise. Despite being relatively unknown, many applications today may benefit from the availability of robust and efficient video alignment methods. For instance, video surveillance requires integrating video sequences recorded of the same scene at different times in order to detect changes. The problem of aligning videos has been addressed before, but only in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, most works rely on restrictive assumptions that reduce the difficulty of the problem, such as a linear time correspondence or knowledge of the complete trajectories of corresponding scene points in the images; to some extent, these assumptions limit the practical applicability of the solutions developed until now. In this thesis, we focus on the challenging problem of aligning sequences recorded at different times from independently moving cameras following similar, but not coincident, trajectories. More precisely, this thesis covers four studies that advance the state of the art in video alignment. First, we analyze and develop a probabilistic framework for video alignment, that is, a principled way to integrate multiple observations and prior information. Two different approaches are presented to exploit the combination of several purely visual features (image intensities, visual words and a dense motion field descriptor) with global positioning system (GPS) information. Second, we reformulate the problem into a single alignment framework, since previous works on video alignment adopt a divide-and-conquer strategy, i.e., they first solve the synchronization and then register corresponding frames; this also generalizes the 'classic' case of a fixed geometric transform and linear time mapping. Third, we exploit the time domain of the video sequences directly in order to avoid an exhaustive cross-frame search; this provides relevant information used for learning the temporal mapping between pairs of video sequences. Finally, we adapt these methods to the online setting for road detection and vehicle geolocation. The qualitative and quantitative results presented in this thesis on a variety of real-world pairs of video sequences show that the proposed method is robust to varying imaging conditions, different image content (e.g., incoming and outgoing vehicles), variations in camera velocity, and different scenarios (indoor and outdoor), going beyond the state of the art. Moreover, the online video alignment has been successfully applied to road detection and vehicle geolocation, achieving promising results.
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Joan Serrat
Notes: ADAS
Approved: no
Call Number: Admin @ si @ Die2011
Serial: 1787
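The temporal side of video alignment can be illustrated with a generic dynamic-programming search for a monotonic frame correspondence. This is a DTW-style sketch over a precomputed dissimilarity matrix D, not the thesis' probabilistic (MAP/Bayesian-network) formulation.

```python
# Illustrative monotonic frame-to-frame alignment by dynamic programming (DTW-style sketch).
import numpy as np

def align_frames(D):
    """D[i, j]: dissimilarity between frame i of video A and frame j of video B.
    Returns the minimum-cost monotonic correspondence path and its total cost."""
    n, m = D.shape
    C = np.full((n + 1, m + 1), np.inf)
    C[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            C[i, j] = D[i - 1, j - 1] + min(C[i - 1, j - 1], C[i - 1, j], C[i, j - 1])
    # Backtrack from the end to recover the correspondence path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([C[i - 1, j - 1], C[i - 1, j], C[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], float(C[n, m])
```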
 

 
Author: Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
Title: Video Alignment for Difference-spotting
Type: Miscellaneous
Year: 2008
Publication: Proceedings of the ECCV workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications (M2SFA2 2008), Marseille (France)
Keywords: video alignment
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ DPS2008
Serial: 1079
 

 
Author: Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
Title: Video alignment for automotive applications
Type: Miscellaneous
Year: 2009
Publication: BMVA one-day technical meeting on vision for automotive applications
Keywords: video alignment
Address: London, UK
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ DPS2009
Serial: 1271
 

 
Author: Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
Title: Vehicle geolocalization based on video synchronization
Type: Conference Article
Year: 2010
Publication: 13th Annual International Conference on Intelligent Transportation Systems
Pages: 1511-1516
Keywords: video alignment
Abstract: This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an online video synchronization that finds, for each frame recorded by the camera on a second drive through the same track, the corresponding frame in the georeferenced video sequence. Once the corresponding frame has been found, its geospatial information is transferred to the current frame. The key advantages of this method are: 1) an increase in the update rate and the geospatial accuracy with respect to a standard low-cost GPS, and 2) the ability to localize a vehicle even when a GPS is not available or not reliable enough, as in certain urban areas. Experimental results for an urban environment are presented, showing an average relative accuracy of 1.5 meters.
Address: Madeira Island (Portugal)
ISSN: 2153-0009
ISBN: 978-1-4244-7657-2
Conference: ITSC
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ DPS2010
Serial: 1423
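A minimal sketch of the geospatial transfer step described above: once the corresponding georeferenced frame has been retrieved, its GPS tag is copied to the live frame. The frame descriptors and the Euclidean distance are assumptions; the paper's online synchronization is not reproduced here.

```python
# Illustrative GPS transfer by nearest-frame retrieval (not the paper's synchronization method).
import numpy as np

def transfer_gps(live_desc, ref_descs, ref_gps):
    """live_desc: (d,) descriptor of the current frame; ref_descs: (N, d) descriptors of the
    georeferenced drive; ref_gps: (N, 2) latitude/longitude per reference frame."""
    dists = np.linalg.norm(ref_descs - live_desc, axis=1)
    best = int(np.argmin(dists))
    return ref_gps[best], best  # estimated position and matched reference frame index
```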
 

 
Author: Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
Title: Video Alignment for Change Detection
Type: Journal Article
Year: 2011
Publication: IEEE Transactions on Image Processing
Abbreviated Journal: TIP
Volume: 20
Issue: 7
Pages: 1858-1869
Keywords: video alignment
Abstract: In this work, we address the problem of aligning two video sequences. Such alignment refers to synchronization, i.e., the establishment of temporal correspondence between frames of the first and second video, followed by spatial registration of all the temporally corresponding frames. Video synchronization and alignment have been attempted before, but most often in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, restrictive assumptions have been applied, including linear time correspondence or the knowledge of the complete trajectories of corresponding scene points; to some extent, these assumptions limit the practical applicability of any solutions developed. We intend to solve the more general problem of aligning video sequences recorded by independently moving cameras that follow similar trajectories, based only on the fusion of image intensity and GPS information. The novelty of our approach is to pose the synchronization as a MAP inference problem on a Bayesian network including the observations from these two sensor types, which have been proved complementary. Alignment results are presented in the context of videos recorded from vehicles driving along the same track at different times, for different road types. In addition, we explore two applications of the proposed video alignment method, both based on change detection between aligned videos. One is the detection of vehicles, which could be of use in ADAS. The other is online difference spotting in videos of surveillance rounds.
Notes: ADAS; IF
Approved: no
Call Number: DPS 2011; ADAS @ adas @ dps2011
Serial: 1705
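The sensor-fusion idea in the abstract above can be sketched as combining, for each candidate frame correspondence, an appearance term with a GPS-proximity term before running the MAP search. The Gaussian form and the noise scales below are assumptions, not the paper's Bayesian network.

```python
# Illustrative fused score for a candidate frame correspondence (sketch only).
def fused_log_likelihood(appearance_dist, gps_dist_m, sigma_app=1.0, sigma_gps=10.0):
    """Higher is better; each term penalizes its own mismatch under an assumed Gaussian model."""
    return -(appearance_dist / sigma_app) ** 2 - (gps_dist_m / sigma_gps) ** 2
```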
 

 
Author: Ferran Diego; G.D. Evangelidis; Joan Serrat
Title: Night-time outdoor surveillance by mobile cameras
Type: Conference Article
Year: 2012
Publication: 1st International Conference on Pattern Recognition Applications and Methods
Volume: 2
Pages: 365-371
Abstract: This paper addresses the problem of video surveillance by mobile cameras. We present a method that allows online change detection in night-time outdoor surveillance. Because of the camera movement, background frames are not available and must be "localized" in former sequences and registered with the current frames. To this end, we propose a Frame Localization And Registration (FLAR) approach that solves the problem efficiently. Frames of former sequences define a database that is queried by the current frames in turn. To quickly retrieve nearest neighbors, the database is indexed through a visual dictionary method based on the SURF descriptor. Furthermore, frame localization benefits from a temporal filter that exploits the temporal coherence of videos. Next, the recently proposed ECC alignment scheme is used to spatially register the synchronized frames. Finally, change detection methods are applied to the aligned frames in order to mark suspicious areas. Experiments with real night sequences recorded by in-vehicle cameras demonstrate the performance of the proposed method and verify its efficiency and effectiveness against other methods.
Address: Algarve, Portugal
Conference: ICPRAM
Notes: ADAS
Approved: no
Call Number: Admin @ si @ DES2012
Serial: 2035
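The registration and change-detection steps of the pipeline above can be illustrated with OpenCV's ECC alignment (cv2.findTransformECC). The SURF visual-dictionary retrieval step is omitted, the motion model is fixed to Euclidean, and the difference threshold is arbitrary; this is a sketch, not the full FLAR method.

```python
# Illustrative ECC registration + frame differencing (not the full FLAR pipeline).
import cv2
import numpy as np

def align_and_diff(background_gray, current_gray, iterations=50, eps=1e-6, diff_thresh=40):
    """Register the retrieved background frame onto the current frame and return a change mask.
    Both inputs are single-channel uint8 images of the same size."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iterations, eps)
    _, warp = cv2.findTransformECC(current_gray, background_gray, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria)
    registered = cv2.warpAffine(background_gray, warp,
                                (current_gray.shape[1], current_gray.shape[0]),
                                flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    diff = cv2.absdiff(current_gray, registered)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```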
 

 
Author: Ferran Diego; Joan Serrat; Antonio Lopez
Title: Joint spatio-temporal alignment of sequences
Type: Journal Article
Year: 2013
Publication: IEEE Transactions on Multimedia
Abbreviated Journal: TMM
Volume: 15
Issue: 6
Pages: 1377-1387
Keywords: video alignment
Abstract: Video alignment is important in different areas of computer vision such as wide baseline matching, action recognition, change detection, video copy detection and frame dropping prevention. Current video alignment methods usually deal with the relatively simple case of fixed or rigidly attached cameras or simultaneous acquisition. Therefore, in this paper we propose a joint video alignment for bringing two video sequences into spatio-temporal alignment. Specifically, the novelty of the paper is to formulate the video alignment so as to fold the spatial and the temporal alignment into a single alignment framework. This simultaneously satisfies a frame-correspondence and a frame-alignment similarity, exploiting the knowledge among neighboring frames through a standard pairwise Markov random field (MRF). The new formulation is able to handle the alignment of sequences recorded at different times by independently moving cameras that follow a similar trajectory, and also generalizes the particular cases of a fixed geometric transformation and/or a linear temporal mapping. We conduct experiments on different scenarios, such as sequences recorded simultaneously or by moving cameras, to validate the robustness of the proposed approach. The proposed method provides the highest video alignment accuracy compared to the state-of-the-art methods on sequences recorded from vehicles driving along the same track at different times.
ISSN: 1520-9210
Notes: ADAS
Approved: no
Call Number: Admin @ si @ DSL2013; ADAS @ adas @
Serial: 2228
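A minimal sketch of the kind of pairwise energy such a joint-alignment MRF evaluates: a unary term measures how well each frame of one video matches its chosen candidate in the other, and a pairwise term encourages neighboring frames to pick temporally close candidates. The specific potentials and the smoothness weight are assumptions; the paper's actual model and optimizer are not reproduced.

```python
# Illustrative pairwise-MRF energy for a joint-alignment labelling (not the paper's model).
import numpy as np

def mrf_energy(labels, unary, smoothness=1.0):
    """labels[i]: candidate frame chosen for frame i; unary[i, k]: matching cost of candidate k."""
    data_term = sum(unary[i, k] for i, k in enumerate(labels))
    smooth_term = smoothness * sum(abs(labels[i + 1] - labels[i])
                                   for i in range(len(labels) - 1))
    return data_term + smooth_term
```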