Records
Author | David Geronimo; Joan Serrat; Antonio Lopez; Ramon Baldrich | ||||
Title | Traffic sign recognition for computer vision project-based learning | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Education | Abbreviated Journal | T-EDUC |
Volume | 56 | Issue | 3 | Pages | 364-371 |
Keywords | traffic signs | ||||
Abstract | This paper presents a graduate course project on computer vision. The aim of the project is to detect and recognize traffic signs in video sequences recorded by an on-board vehicle camera. This is a demanding problem, given that traffic sign recognition is one of the most challenging problems for driving assistance systems. Equally, it is motivating for the students given that it is a real-life problem. Furthermore, it gives them the opportunity to appreciate the difficulty of real-world vision problems and to assess the extent to which this problem can be solved by modern computer vision and pattern classification techniques taught in the classroom. The learning objectives of the course are introduced, as are the constraints imposed on its design, such as the diversity of students' background and the amount of time they and their instructors dedicate to the course. The paper also describes the course contents, schedule, and how the project-based learning approach is applied. The outcomes of the course are discussed, including both the students' marks and their personal feedback. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0018-9359 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; CIC | Approved | no | ||
Call Number | Admin @ si @ GSL2013; ADAS @ adas @ | Serial | 2160 | ||
Author | Ariel Amato; Angel Sappa; Alicia Fornes; Felipe Lumbreras; Josep Llados | ||||
Title | Divide and Conquer: Atomizing and Parallelizing a Task in a Mobile Crowdsourcing Platform | Type | Conference Article | |
Year | 2013 | Publication | 2nd International ACM Workshop on Crowdsourcing for Multimedia | Abbreviated Journal | |
Volume | Issue | Pages | 21-22 | ||
Keywords | |||||
Abstract | In this paper we present some conclusions about the advantages of an efficient task formulation when a crowdsourcing platform is used. In particular, we show how task atomization and distribution can help to obtain results efficiently. Our proposal is based on a recursive splitting of the original task into a set of smaller and simpler tasks. As a result, both more accurate and faster solutions are obtained. Our evaluation is performed on a set of ancient documents that need to be digitized. | ||||
Address | Barcelona; October 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-2396-3 | Medium | ||
Area | Expedition | Conference | CrowdMM | ||
Notes | ADAS; ISE; DAG; 600.054; 600.055; 600.045; 600.061; 602.006 | Approved | no | ||
Call Number | Admin @ si @ SLA2013 | Serial | 2335 | ||
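The recursive split-into-atomic-subtasks idea summarized in the abstract above can be sketched in a few lines of Python. This is a hedged illustration: the list-of-work-items task model and the `atomic_size` threshold are assumptions made for the sketch, not details taken from the paper.

```python
# Sketch of recursive task atomization for crowdsourcing: a task is
# halved until each piece is small enough for a single worker.
# The task model (a plain list of work items) is an assumption.

def atomize(task, atomic_size=1):
    """Recursively split `task` into order-preserving atomic subtasks."""
    if len(task) <= atomic_size:
        return [task]
    mid = len(task) // 2
    return atomize(task[:mid], atomic_size) + atomize(task[mid:], atomic_size)

# Eight document pages become four two-page subtasks that could be
# distributed to crowd workers in parallel.
subtasks = atomize(list(range(8)), atomic_size=2)
```

Because the splitting is recursive, partial results can be merged back in the reverse order of the splits, which is what makes the divide-and-conquer framing natural for this kind of platform.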
Author | Angel Sappa; Jordi Vitria | ||||
Title | Multimodal Interaction in Image and Video Applications | Type | Book Whole | ||
Year | 2013 | Publication | Multimodal Interaction in Image and Video Applications | Abbreviated Journal | |
Volume | 48 | Issue | Pages | ||
Keywords | |||||
Abstract | Part of the book series: Intelligent Systems Reference Library | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1868-4394 | ISBN | 978-3-642-35931-6 | Medium | |
Area | Expedition | Conference | |||
Notes | ADAS; OR; MV | Approved | no | |
Call Number | Admin @ si @ SaV2013 | Serial | 2199 | ||
Author | Jose Manuel Alvarez; Theo Gevers; Ferran Diego; Antonio Lopez | ||||
Title | Road Geometry Classification by Adaptive Shape Models | Type | Journal Article | |
Year | 2013 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 14 | Issue | 1 | Pages | 459-468 |
Keywords | road detection | ||||
Abstract | Vision-based road detection is important for different applications in transportation, such as autonomous driving, vehicle collision warning, and pedestrian crossing detection. Common approaches to road detection are based on low-level road appearance (e.g., color or texture) and neglect the scene geometry and context. Hence, using only low-level features makes these algorithms highly dependent on structured roads, road homogeneity, and lighting conditions. Therefore, the aim of this paper is to classify road geometries for road detection through the analysis of scene composition and temporal coherence. Road geometry classification is proposed by building corresponding models from training images containing prototypical road geometries. We propose adaptive shape models where spatial pyramids are steered by the inherent spatial structure of road images. To reduce the influence of lighting variations, invariant features are used. Large-scale experiments show that the proposed road geometry classifier yields a high recognition rate of 73.57% ± 13.1, clearly outperforming other state-of-the-art methods. Including road shape information improves road detection results over existing appearance-based methods. Finally, it is shown that invariant features and temporal information provide robustness against disturbing imaging conditions. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1524-9050 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS;ISE | Approved | no | ||
Call Number | Admin @ si @ AGD2013; ADAS @ adas @ | Serial | 2269 | |
Author | Jose Manuel Alvarez; Theo Gevers; Antonio Lopez | ||||
Title | Evaluating Color Representation for Online Road Detection | Type | Conference Article | ||
Year | 2013 | Publication | ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars | Abbreviated Journal | |
Volume | Issue | Pages | 594-595 | ||
Keywords | |||||
Abstract | Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real-driving situations. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVVT:E2M | ||
Notes | ADAS;ISE | Approved | no | ||
Call Number | Admin @ si @ AGL2013 | Serial | 2794 | ||
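For readers of the record above: two of the standard colour planes that such evaluations typically include, normalized rgb and the opponent colour space, are easy to write down. The formulas below are textbook definitions given for illustration, not results or code from the paper.

```python
# Textbook colour representations derived from RGB. Normalized rgb
# discards intensity (useful under shadows and lighting changes); the
# opponent space separates intensity (o3) from chromaticity (o1, o2).

def normalized_rgb(r, g, b):
    """Chromaticity coordinates; invariant to uniform intensity scaling."""
    s = r + g + b
    if s == 0:
        return (0.0, 0.0, 0.0)
    return (r / s, g / s, b / s)

def opponent(r, g, b):
    """Opponent colour space: o1, o2 carry chromaticity, o3 intensity."""
    o1 = (r - g) / 2 ** 0.5
    o2 = (r + g - 2 * b) / 6 ** 0.5
    o3 = (r + g + b) / 3 ** 0.5
    return (o1, o2, o3)
```

A quick sanity check of the invariance: a pixel and the same pixel at double the illumination map to the same normalized rgb triple, while only o3 changes in the opponent space for an achromatic pixel.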
Author | H. Emrah Tasli; Cevahir Çigla; Theo Gevers; A. Aydin Alatan | ||||
Title | Super pixel extraction via convexity induced boundary adaptation | Type | Conference Article | ||
Year | 2013 | Publication | 14th IEEE International Conference on Multimedia and Expo | Abbreviated Journal | |
Volume | Issue | Pages | 1-6 | ||
Keywords | |||||
Abstract | This study presents an efficient super-pixel extraction algorithm with major contributions to the state of the art in terms of accuracy and computational complexity. Segmentation accuracy is improved through convexity-constrained geodesic distance utilization, while computational efficiency is achieved by replacing complete region processing with a boundary adaptation scheme. Starting from uniformly distributed rectangular equal-sized super-pixels, region boundaries are adapted to intensity edges iteratively by assigning boundary pixels to the most similar neighboring super-pixels. At each iteration, super-pixel regions are updated and hence progressively converge to compact pixel groups. Experimental results with state-of-the-art comparisons validate the performance of the proposed technique in terms of both accuracy and speed. | ||||
Address | San Jose; USA; July 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1945-7871 | ISBN | Medium | ||
Area | Expedition | Conference | ICME | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ TÇG2013 | Serial | 2367 | ||
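The boundary-adaptation step that the abstract above credits for the speedup can be illustrated on a 1-D toy signal: only pixels at segment boundaries are visited, and each is reassigned to the neighbouring segment whose mean intensity is closer. Everything below (1-D data, plain mean-intensity similarity) is a simplifying assumption for the sketch, not the paper's actual 2-D geodesic formulation.

```python
# 1-D toy of iterative boundary adaptation: start from fixed segments and
# move only boundary pixels toward the more similar neighbouring segment.
# The real method works on 2-D superpixels with convexity-constrained
# geodesic distances; mean-intensity similarity is assumed here.

def adapt_boundaries(pixels, labels, iters=10):
    labels = list(labels)
    for _ in range(iters):
        # Per-segment mean intensity (recomputed each sweep).
        groups = {}
        for p, l in zip(pixels, labels):
            groups.setdefault(l, []).append(p)
        means = {l: sum(v) / len(v) for l, v in groups.items()}
        changed = False
        for i in range(1, len(pixels) - 1):
            for j in (i - 1, i + 1):
                if labels[j] != labels[i] and \
                        abs(pixels[i] - means[labels[j]]) < abs(pixels[i] - means[labels[i]]):
                    labels[i] = labels[j]  # reassign the boundary pixel
                    changed = True
                    break
        if not changed:  # converged: boundaries stopped moving
            break
    return labels

# A step edge at index 4; the initial segmentation is misaligned with it.
result = adapt_boundaries([0, 0, 0, 0, 10, 10, 10, 10],
                          [0, 0, 0, 1, 1, 1, 2, 2])
```

After a couple of sweeps the segment boundary snaps to the intensity edge, without ever re-processing the interior pixels of any segment.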
Author | H. Emrah Tasli; Jan van Gemert; Theo Gevers | ||||
Title | Spot the differences: from a photograph burst to the single best picture | Type | Conference Article | ||
Year | 2013 | Publication | 21st ACM International Conference on Multimedia | Abbreviated Journal | |
Volume | Issue | Pages | 729-732 | ||
Keywords | |||||
Abstract | With the rise of the digital camera, people nowadays typically take several near-identical photos of the same scene to maximize the chances of a good shot. This paper proposes a user-friendly tool for exploring a personal photo gallery and selecting, or even creating, the best shot of a scene from its multiple alternatives. This functionality is realized through a graphical user interface in which the best viewpoint can be selected from a generated panorama of the scene. Once the viewpoint is selected, the user can explore possible alternatives coming from the other images. Using this tool, one can explore a photo gallery efficiently; moreover, additional compositions from other images are also possible. With such compositions, one can go from a burst of photographs to the single best one. Even playful compositions, in which a person is duplicated within the same image, are possible with the proposed tool. | ||||
Address | Barcelona | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACM-MM | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | TGG2013 | Serial | 2368 | ||
Author | Sezer Karaoglu; Jan van Gemert; Theo Gevers | ||||
Title | Con-text: text detection using background connectivity for fine-grained object classification | Type | Conference Article | ||
Year | 2013 | Publication | 21st ACM International Conference on Multimedia | Abbreviated Journal | |
Volume | Issue | Pages | 757-760 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACM-MM | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ KGG2013 | Serial | 2369 | ||
Author | Ivo Everts; Jan van Gemert; Theo Gevers | ||||
Title | Evaluation of Color STIPs for Human Action Recognition | Type | Conference Article | ||
Year | 2013 | Publication | IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 2850-2857 | ||
Keywords | |||||
Abstract | This paper is concerned with recognizing realistic human actions in videos based on spatio-temporal interest points (STIPs). Existing STIP-based action recognition approaches operate on intensity representations of the image data. Because of this, these approaches are sensitive to disturbing photometric phenomena such as highlights and shadows. Moreover, valuable information is neglected by discarding chromaticity from the photometric representation. These issues are addressed by Color STIPs. Color STIPs are multi-channel reformulations of existing intensity-based STIP detectors and descriptors, for which we consider a number of chromatic representations derived from the opponent color space. This enhanced modeling of appearance improves the quality of subsequent STIP detection and description. Color STIPs are shown to substantially outperform their intensity-based counterparts on the challenging UCF Sports, UCF11 and UCF50 action recognition benchmarks. Moreover, the results show that color STIPs are currently the single best low-level feature choice for STIP-based approaches to human action recognition. | ||||
Address | Portland; Oregon; June 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | Medium | ||
Area | Expedition | Conference | CVPR | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ EGG2013 | Serial | 2364 | ||
Author | Fares Alnajar; Theo Gevers; Roberto Valenti; Sennay Ghebreab | ||||
Title | Calibration-free Gaze Estimation using Human Gaze Patterns | Type | Conference Article | ||
Year | 2013 | Publication | 15th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 137-144 | ||
Keywords | |||||
Abstract | We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look at [12]. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering the fact that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators. | ||||
Address | Sydney | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ AGV2013 | Serial | 2365 | ||
Author | Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers | ||||
Title | Like Father, Like Son: Facial Expression Dynamics for Kinship Verification | Type | Conference Article | ||
Year | 2013 | Publication | 15th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 1497-1504 | ||
Keywords | |||||
Abstract | Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics in this problem. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and verify that it is indeed possible to recognize kinship by resemblance of facial expressions. The proposed method is tested on different kin relationships. On average, 72.89% verification accuracy is achieved on spontaneous smiles. | ||||
Address | Sydney | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ DSG2013 | Serial | 2366 | ||
Author | Jasper Uijlings; Koen E.A. van de Sande; Theo Gevers; Arnold Smeulders | |
Title | Selective Search for Object Recognition | Type | Journal Article | ||
Year | 2013 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 104 | Issue | 2 | Pages | 154-171 |
Keywords | |||||
Abstract | This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html). | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0920-5691 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ USG2013 | Serial | 2362 | ||
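A heavily simplified sketch of the hierarchical grouping at the heart of selective search: start from an initial set of regions, greedily merge the most similar pair, and keep every intermediate bounding box as an object proposal. The region representation (a colour histogram plus a box) and the unweighted histogram averaging are illustrative simplifications; the actual method starts from an image oversegmentation and diversifies over several similarity measures and colour spaces.

```python
# Toy hierarchical grouping: every merge contributes one box proposal.
# Regions are (histogram, bounding_box) pairs given by hand; a real
# implementation derives them from an oversegmentation of the image.

def hist_intersection(h1, h2):
    """Histogram intersection similarity (higher = more similar)."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def merge_boxes(b1, b2):
    """Tight bounding box around two (x0, y0, x1, y1) boxes."""
    return (min(b1[0], b2[0]), min(b1[1], b2[1]),
            max(b1[2], b2[2]), max(b1[3], b2[3]))

def selective_search(regions):
    """Greedy pairwise merging; returns all boxes seen along the way."""
    regions = list(regions)
    proposals = [box for _, box in regions]
    while len(regions) > 1:
        i, j = max(((i, j) for i in range(len(regions))
                    for j in range(i + 1, len(regions))),
                   key=lambda ij: hist_intersection(regions[ij[0]][0],
                                                    regions[ij[1]][0]))
        (h1, b1), (h2, b2) = regions[i], regions[j]
        merged = ([(a + b) / 2 for a, b in zip(h1, h2)],  # unweighted: a simplification
                  merge_boxes(b1, b2))
        regions = [r for k, r in enumerate(regions) if k not in (i, j)]
        regions.append(merged)
        proposals.append(merged[1])
    return proposals
```

With n starting regions this yields 2n - 1 proposals, from fine to coarse, which is what lets a single pass cover objects at several scales.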
Author | Zeynep Yucel; Albert Ali Salah; Çetin Meriçli; Tekin Meriçli; Roberto Valenti; Theo Gevers | ||||
Title | Joint Attention by Gaze Interpolation and Saliency | Type | Journal Article | |
Year | 2013 | Publication | IEEE Transactions on Cybernetics | Abbreviated Journal | T-CIBER |
Volume | 43 | Issue | 3 | Pages | 829-842 |
Keywords | |||||
Abstract | Joint attention, which is the ability of coordination of a common point of reference with the communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stability and high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted to interpolate the gaze direction. Then, we combine gaze interpolation with image-based saliency to improve the target point estimates and test three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2168-2267 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ YSM2013 | Serial | 2363 | ||
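The regression idea in the abstract above, interpolating gaze direction (hard to measure) from head pose (easy to track), can be illustrated with ordinary least squares on synthetic 1-D data. The paper contrasts Gaussian process regression with neural networks; plain least squares and the synthetic yaw values below are assumptions made purely for this sketch.

```python
# Toy regression from head yaw to gaze yaw. The linear model and the
# synthetic training pairs are illustrative; the paper interpolates with
# Gaussian process regression / neural networks on tracked head pose.

def fit_line(xs, ys):
    """Ordinary least squares for y = a * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic calibration pairs: the eyes rotate further than the head.
head_yaw = [-30.0, -15.0, 0.0, 15.0, 30.0]
gaze_yaw = [-45.0, -22.5, 0.0, 22.5, 45.0]

a, b = fit_line(head_yaw, gaze_yaw)

def predict_gaze(pose):
    """Interpolated gaze direction for an unseen head pose."""
    return a * pose + b
```

Once fitted, the regressor provides gaze estimates for head poses never seen during calibration, which is exactly the interpolation role it plays in the joint-attention pipeline.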
Author | Shida Beigpour | ||||
Title | Illumination and object reflectance modeling | Type | Book Whole | ||
Year | 2013 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | More realistic and accurate models of scene illumination and object reflectance can greatly improve the quality of many computer vision and computer graphics tasks. With such models, more profound knowledge about the interaction of light with object surfaces can be established, which proves crucial to a variety of computer vision applications. In the current work, we investigate the various existing approaches to illumination and reflectance modeling and analyze their shortcomings in capturing the complexity of real-world scenes. Based on this analysis, we propose improvements to different aspects of reflectance and illumination estimation in order to model real-world scenes more realistically in the presence of complex lighting phenomena (i.e., multiple illuminants, interreflections, and shadows). Moreover, we captured our own multi-illuminant dataset, which consists of complex scenes and illumination conditions recorded both outdoors and in the laboratory. In addition, we investigate the use of synthetic data to facilitate the construction of datasets and improve the process of obtaining ground-truth information. | ||||
Address | Barcelona | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Joost Van de Weijer;Ernest Valveny | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ Bei2013 | Serial | 2267 | ||
Author | Olivier Penacchio; Xavier Otazu; Laura Dempere-Marco | ||||
Title | A Neurodynamical Model of Brightness Induction in V1 | Type | Journal Article | ||
Year | 2013 | Publication | PloS ONE | Abbreviated Journal | Plos |
Volume | 8 | Issue | 5 | Pages | e64086 |
Keywords | |||||
Abstract | Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Recent neurophysiological evidence suggests that brightness information might be explicitly represented in V1, in contrast to the more common assumption that the striate cortex is an area mostly responsive to sensory information. Here we investigate possible neural mechanisms that offer a plausible explanation for such phenomenon. To this end, a neurodynamical model which is based on neurophysiological evidence and focuses on the part of V1 responsible for contextual influences is presented. The proposed computational model successfully accounts for well known psychophysical effects for static contexts and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. This work suggests that intra-cortical interactions in V1 could, at least partially, explain brightness induction effects and reveals how a common general architecture may account for several different fundamental processes, such as visual saliency and brightness induction, which emerge early in the visual processing pathway. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ POD2013 | Serial | 2242 | ||