|
Records |
Links |
|
Author |
Andrew Nolan; Daniel Serrano; Aura Hernandez-Sabate; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Obstacle mapping module for quadrotors on outdoor Search and Rescue operations |
Type |
Conference Article |
|
Year |
2013 |
Publication |
International Micro Air Vehicle Conference and Flight Competition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
UAV |
|
|
Abstract |
Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAV), due to their limited payload capacity for carrying advanced sensors. Unlike larger vehicles, MAV can only carry lightweight sensors, for instance a camera, which is our main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse PADE performance and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential application of PADE for MAV to perform safe autonomous navigation in unknown and unstructured environments. |
|
|
Address |
Toulouse; France; September 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor ![sorted by Editor field, ascending order (up)](img/sort_asc.gif) |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IMAV |
|
|
Notes |
ADAS; 600.054; 600.057; IAM |
Approved |
no |
|
|
Call Number |
Admin @ si @ NSH2013 |
Serial |
2371 |
|
Permanent link to this record |
|
|
|
|
Author |
Anastasios Doulamis; Nikolaos Doulamis; Marco Bertini; Jordi Gonzalez; Thomas B. Moeslund |
|
|
Title |
Analysis and Retrieval of Tracked Events and Motion in Imagery Streams |
Type |
Miscellaneous |
|
Year |
2013 |
Publication |
ACM/IEEE International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Barcelona; October 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ DDB2013 |
Serial |
2372 |
|
Permanent link to this record |
|
|
|
|
Author |
Lluis Pere de las Heras; Ahmed Sheraz; Marcus Liwicki; Ernest Valveny; Gemma Sanchez |
|
|
Title |
Statistical Segmentation and Structural Recognition for Floor Plan Interpretation |
Type |
Journal Article |
|
Year |
2014 |
Publication |
International Journal on Document Analysis and Recognition |
Abbreviated Journal |
IJDAR |
|
|
Volume |
17 |
Issue |
3 |
Pages |
221-237 |
|
|
Keywords |
|
|
|
Abstract |
A generic method for floor plan analysis and interpretation is presented in this article. The method, which is mainly inspired by the way engineers draw and interpret floor plans, applies two recognition steps in a bottom-up manner. First, basic building blocks, i.e., walls, doors, and windows, are detected using a statistical patch-based segmentation approach. Second, a graph is generated, and structural pattern recognition techniques are applied to further locate the main entities, i.e., rooms of the building. The proposed approach is able to analyze any type of floor plan regardless of the notation used. We have evaluated our method on different publicly available datasets of real architectural floor plans with different notations. The overall detection and recognition accuracy is about 95%, which is significantly better than any other state-of-the-art method. Our approach is generic enough that it could easily be adapted to the recognition and interpretation of any other printed machine-generated structured documents. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1433-2833 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; ADAS; 600.076; 600.077 |
Approved |
no |
|
|
Call Number |
HSL2014 |
Serial |
2370 |
|
Permanent link to this record |
|
|
|
|
Author |
H. Emrah Tasli; Cevahir Çigla; Theo Gevers; A. Aydin Alatan |
|
|
Title |
Super pixel extraction via convexity induced boundary adaptation |
Type |
Conference Article |
|
Year |
2013 |
Publication |
14th IEEE International Conference on Multimedia and Expo |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1-6 |
|
|
Keywords |
|
|
|
Abstract |
This study presents an efficient super-pixel extraction algorithm with major contributions to the state of the art in terms of accuracy and computational complexity. Segmentation accuracy is improved through convexity-constrained geodesic distance utilization, while computational efficiency is achieved by replacing complete region processing with a boundary adaptation scheme. Starting from uniformly distributed, equal-sized rectangular super-pixels, region boundaries are adapted to intensity edges iteratively by assigning boundary pixels to the most similar neighboring super-pixels. At each iteration, super-pixel regions are updated and hence progressively converge to compact pixel groups. Experimental results with state-of-the-art comparisons validate the performance of the proposed technique in terms of both accuracy and speed. |
|
|
Address |
San Jose; USA; July 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1945-7871 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICME |
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ TÇG2013 |
Serial |
2367 |
|
Permanent link to this record |
|
|
|
|
Author |
H. Emrah Tasli; Jan van Gemert; Theo Gevers |
|
|
Title |
Spot the differences: from a photograph burst to the single best picture |
Type |
Conference Article |
|
Year |
2013 |
Publication |
21ST ACM International Conference on Multimedia |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
729-732 |
|
|
Keywords |
|
|
|
Abstract |
With the rise of the digital camera, people nowadays typically take several near-identical photos of the same scene to maximize the chances of a good shot. This paper proposes a user-friendly tool for exploring a personal photo gallery and selecting, or even creating, the best shot of a scene from its multiple alternatives. This functionality is realized through a graphical user interface where the best viewpoint can be selected from a generated panorama of the scene. Once the viewpoint is selected, the user can explore possible alternatives coming from the other images. Using this tool, one can explore a photo gallery efficiently. Moreover, additional compositions from other images are also possible. With such additional compositions, one can go from a burst of photographs to the single best one. Even playful compositions, where a person can be duplicated in the same image, are possible with our proposed tool. |
|
|
Address |
Barcelona |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACM-MM |
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
TGG2013 |
Serial |
2368 |
|
Permanent link to this record |
|
|
|
|
Author |
Sezer Karaoglu; Jan van Gemert; Theo Gevers |
|
|
Title |
Con-text: text detection using background connectivity for fine-grained object classification |
Type |
Conference Article |
|
Year |
2013 |
Publication |
21ST ACM International Conference on Multimedia |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
757-760 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACM-MM |
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ KGG2013 |
Serial |
2369 |
|
Permanent link to this record |
|
|
|
|
Author |
Ivo Everts; Jan van Gemert; Theo Gevers |
|
|
Title |
Evaluation of Color STIPs for Human Action Recognition |
Type |
Conference Article |
|
Year |
2013 |
Publication |
IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2850-2857 |
|
|
Keywords |
|
|
|
Abstract |
This paper is concerned with recognizing realistic human actions in videos based on spatio-temporal interest points (STIPs). Existing STIP-based action recognition approaches operate on intensity representations of the image data. Because of this, these approaches are sensitive to disturbing photometric phenomena such as highlights and shadows. Moreover, valuable information is neglected by discarding chromaticity from the photometric representation. These issues are addressed by Color STIPs. Color STIPs are multi-channel reformulations of existing intensity-based STIP detectors and descriptors, for which we consider a number of chromatic representations derived from the opponent color space. This enhanced modeling of appearance improves the quality of subsequent STIP detection and description. Color STIPs are shown to substantially outperform their intensity-based counterparts on the challenging UCF Sports, UCF11 and UCF50 action recognition benchmarks. Moreover, the results show that color STIPs are currently the single best low-level feature choice for STIP-based approaches to human action recognition. |
|
|
Address |
Portland; Oregon; June 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1063-6919 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ EGG2013 |
Serial |
2364 |
|
Permanent link to this record |
|
|
|
|
Author |
Fares Alnajar; Theo Gevers; Roberto Valenti; Sennay Ghebreab |
|
|
Title |
Calibration-free Gaze Estimation using Human Gaze Patterns |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
137-144 |
|
|
Keywords |
|
|
|
Abstract |
We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look at [12]. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering the fact that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators. |
|
|
Address |
Sydney |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCV |
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ AGV2013 |
Serial |
2365 |
|
Permanent link to this record |
|
|
|
|
Author |
Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers |
|
|
Title |
Like Father, Like Son: Facial Expression Dynamics for Kinship Verification |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1497-1504 |
|
|
Keywords |
|
|
|
Abstract |
Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics in this problem. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and verify that it is indeed possible to recognize kinship by resemblance of facial expressions. The proposed method is tested on different kin relationships. On the average, 72.89% verification accuracy is achieved on spontaneous smiles. |
|
|
Address |
Sydney |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCV |
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ DSG2013 |
Serial |
2366 |
|
Permanent link to this record |
|
|
|
|
Author |
Jorge Bernal |
|
|
Title |
Use of Projection and Back-projection Methods in Bidimensional Computed Tomography Image Reconstruction |
Type |
Report |
|
Year |
2009 |
Publication |
CVC Technical Report |
Abbreviated Journal |
|
|
|
Volume |
141 |
Issue |
|
Pages |
|
|
|
Keywords |
Projection, Back-projection, CT scan, Euclidean geometry, Radon transform |
|
|
Abstract |
One of the biggest drawbacks of CT scanners is their associated cost, in memory and in time. This project studies several methods that simulate their functioning in a more feasible way, from an industrial point of view.
The main group of techniques used is known as ‘back-projection’. The underlying concept is to simulate the X-ray emission of CT scans by lines that cross the image we want to reconstruct.
In the first part of this document, Euclidean geometry is used to address the tasks of projection and back-projection. Analysis of the results shows that this approach does not lead to a fully perfect reconstruction, and that it also suffers from problems related to running time and memory cost. For this reason, the second part of the document introduces the ‘Filtered Back-projection’ method in order to improve the results.
Filtered Back-projection methods rely on mathematical transforms (Fourier, Radon) to provide more accurate results that can be obtained in much less time. The main cause of these better results is a filtering step applied before back-projection, which avoids errors caused by high frequencies.
As a result of this project, two different implementations (one for each approach) were developed in order to compare their performance. |
|
|
Address |
|
|
|
Corporate Author |
Computer Vision Center |
Thesis |
Master's thesis |
|
|
Publisher |
|
Place of Publication |
Barcelona, Spain |
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
|
|
|
Notes |
MV; |
Approved |
no |
|
|
Call Number |
IAM @ iam @ Ber2009 |
Serial |
1693 |
|
Permanent link to this record |
|
|
|
|
Author |
Jasper Uijlings; Koen E.A. van de Sande; Theo Gevers; Arnold Smeulders |
|
|
Title |
Selective Search for Object Recognition |
Type |
Journal Article |
|
Year |
2013 |
Publication |
International Journal of Computer Vision |
Abbreviated Journal |
IJCV |
|
|
Volume |
104 |
Issue |
2 |
Pages |
154-171 |
|
|
Keywords |
|
|
|
Abstract |
This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html). |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0920-5691 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ USG2013 |
Serial |
2362 |
|
Permanent link to this record |
|
|
|
|
Author |
Zeynep Yucel; Albert Ali Salah; Çetin Meriçli; Tekin Meriçli; Roberto Valenti; Theo Gevers |
|
|
Title |
Joint Attention by Gaze Interpolation and Saliency |
Type |
Journal Article |
|
Year |
2013 |
Publication |
IEEE Transactions on Cybernetics |
Abbreviated Journal |
T-CIBER |
|
|
Volume |
43 |
Issue |
3 |
Pages |
829-842 |
|
|
Keywords |
|
|
|
Abstract |
Joint attention, which is the ability of coordination of a common point of reference with the communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stability and high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted to interpolate the gaze direction. Then, we combine gaze interpolation with image-based saliency to improve the target point estimates and test three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2168-2267 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ YSM2013 |
Serial |
2363 |
|
Permanent link to this record |
|
|
|
|
Author |
Jorge Bernal; F. Javier Sanchez; Fernando Vilariño |
|
|
Title |
Integration of Valley Orientation Distribution for Polyp Region Identification in Colonoscopy |
Type |
Conference Article |
|
Year |
2011 |
Publication |
MICCAI 2011 Workshop on Computational and Clinical Applications in Abdominal Imaging |
Abbreviated Journal |
|
|
|
Volume |
6668 |
Issue |
|
Pages |
76-83 |
|
|
Keywords |
|
|
|
Abstract |
This work presents a region descriptor based on the integration of the information that the depth of valleys image provides. The depth of valleys image is based on the presence of intensity valleys around polyps due to the image acquisition. Our proposed method consists of defining, for each point, a series of radial sectors around it and then accumulates the maxima of the depth of valleys image only if the orientation of the intensity valley coincides with the orientation of the sector above. We apply our descriptor to a prior segmentation of the images and we present promising results on polyp detection, outperforming other approaches that also integrate depth of valleys information. |
|
|
Address |
Toronto, Canada |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Link |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
Lecture Notes in Computer Science |
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
ABI |
|
|
Notes |
MV;SIAI |
Approved |
no |
|
|
Call Number |
IAM @ iam @ BSV2011d |
Serial |
1698 |
|
Permanent link to this record |
|
|
|
|
Author |
Jorge Bernal; F. Javier Sanchez; Fernando Vilariño |
|
|
Title |
Depth of Valleys Accumulation Algorithm for Object Detection |
Type |
Conference Article |
|
Year |
2011 |
Publication |
14th Congrès Català en Intel·ligencia Artificial |
Abbreviated Journal |
|
|
|
Volume |
1 |
Issue |
1 |
Pages |
71-80 |
|
|
Keywords |
Object Recognition, Object Region Identification, Image Analysis, Image Processing |
|
|
Abstract |
This work aims at detecting in which regions the objects in the image are by using information about the intensity of valleys, which appear to surround objects in images where the source of light is in the same direction as the camera. We present our depth of valleys accumulation method, which consists of two stages: first, the definition of the depth of valleys image, which combines the output of a ridges and valleys detector with the morphological gradient to measure how deep a point is inside a valley; and second, an algorithm that labels as interior to objects those points of the image which lie inside complete or incomplete boundaries in the depth of valleys image. To evaluate the performance of our method we have tested it on several application domains. Our results on object region identification are promising, especially in the field of polyp detection in colonoscopy videos, and we also show its applicability in different areas. |
|
|
Address |
Lleida |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-1-60750-841-0 |
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
CCIA |
|
|
Notes |
MV;SIAI |
Approved |
no |
|
|
Call Number |
IAM @ iam @ BSV2011b |
Serial |
1699 |
|
Permanent link to this record |
|
|
|
|
Author |
Petia Radeva; Jordi Vitria; Fernando Vilariño; Panagiota Spyridonos; Fernando Azpiroz; Juan Malagelada; Fosca de Iorio; Anna Accarino |
|
|
Title |
Cascade analysis for intestinal contraction detection |
Type |
Patent |
|
Year |
2009 |
Publication |
US 2009/0284589 A1 |
Abbreviated Journal |
USPO |
|
|
Volume |
|
Issue |
|
Pages |
1-25 |
|
|
Keywords |
|
|
|
Abstract |
A method and system of cascade analysis for intestinal contraction detection is provided, based on extraction from image frames captured in-vivo. The method and system also relate to the detection of turbid liquids in intestinal tracts, to the automatic detection of video image frames taken in the gastrointestinal tract that include a field of view obstructed by turbid media, and more particularly, to the extraction of image data obstructed by turbid media. |
|
|
Address |
|
|
|
Corporate Author |
US Patent Office |
Thesis |
|
|
|
Publisher |
US Patent Office |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; OR; MV;SIAI |
Approved |
no |
|
|
Call Number |
IAM @ iam @ RVV2009 |
Serial |
1700 |
|
Permanent link to this record |