Records | |||||
---|---|---|---|---|---|
Author | S. Grau; Anna Puig; Sergio Escalera; Maria Salamo | ||||
Title | Intelligent Interactive Volume Classification | Type | Conference Article | ||
Year | 2013 | Publication | Pacific Graphics | Abbreviated Journal | |
Volume | 32 | Issue | 7 | Pages | 23-28 |
Keywords | |||||
Abstract | This paper defines an intelligent and interactive framework to classify multiple regions of interest from the original data on demand, without requiring any preprocessing or previous segmentation. The proposed approach is divided into three stages: visualization, training and testing. First, users visualize and label some samples directly on slices of the volume. Training and testing are based on a framework of Error-Correcting Output Codes and AdaBoost classifiers that learn to classify each region the user has painted. At the testing stage, each classifier is applied to the remaining samples, and their outputs are combined to perform multi-class labeling, which is used in the final rendering. We also parallelized the training stage with a GPU-based implementation to obtain rapid interaction and classification. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-905674-50-7 | Medium | ||
Area | Expedition | Conference | PG | ||
Notes | HuPBA; 600.046;MILAB | Approved | no | ||
Call Number | Admin @ si @ GPE2013b | Serial | 2355 | ||
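The ECOC-plus-AdaBoost scheme described in the abstract above combines several binary classifiers into a multi-class labeling. As an illustrative sketch only (not the authors' implementation), decoding in an Error-Correcting Output Codes framework can be as simple as choosing the class whose codeword is nearest, in Hamming distance, to the vector of binary predictions; the 3-class code matrix below is a made-up example:

```python
import numpy as np

# Hypothetical 3-class ECOC code matrix: one row per class, one column
# per binary classifier; each entry is the label that binary learner
# is trained to output for samples of that class.
CODE = np.array([
    [+1, +1, -1],
    [-1, +1, +1],
    [-1, -1, -1],
])

def ecoc_decode(bit_predictions, code=CODE):
    """Assign each sample to the class whose codeword is closest
    (in Hamming distance) to its vector of binary predictions."""
    bits = np.atleast_2d(bit_predictions)
    # distance from every sample's bit vector to every class codeword
    dists = np.array([[np.sum(b != cw) for cw in code] for b in bits])
    return dists.argmin(axis=1)

# e.g. binary learners voting (+1, +1, -1) match class 0 exactly
print(ecoc_decode([[+1, +1, -1], [-1, -1, -1]]))  # [0 2]
```

In the paper's setting, each column's binary problem would be learned by an AdaBoost classifier; here the binary outputs are just given directly.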
Author | S.Grau; Anna Puig; Sergio Escalera; Maria Salamo; Oscar Amoros | ||||
Title | Efficient complementary viewpoint selection in volume rendering | Type | Conference Article | ||
Year | 2013 | Publication | 21st WSCG Conference on Computer Graphics | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | Dual camera; Visualization; Interactive Interfaces; Dynamic Time Warping | ||||
Abstract | A major goal of visualization is to appropriately express knowledge of scientific data. Gathering the visual information contained in volume data often requires considerable expertise from the end user to set up the visualization parameters. One way of alleviating this problem is to show the position of inner structures from different viewpoint locations, enhancing perception and the construction of a mental image. To this end, traditional illustrations use two or three different views of the regions of interest. Similarly, with the aim of helping users easily place a good viewpoint, this paper proposes an automatic and interactive method that locates complementary viewpoints relative to a reference camera in volume datasets. Specifically, the proposed method combines the quantity of information each camera provides for each structure with the shape similarity of the projections of the remaining viewpoints, measured with Dynamic Time Warping. The selected complementary viewpoints allow a better understanding of the focused structure in several applications. Thus, the user interactively receives feedback based on several viewpoints that helps them understand the visual information. A live-user evaluation on different datasets shows good convergence to useful complementary viewpoints. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-808694374-9 | Medium | ||
Area | Expedition | Conference | WSCG | ||
Notes | HuPBA; 600.046;MILAB | Approved | no | ||
Call Number | Admin @ si @ GPE2013a | Serial | 2255 | ||
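The shape-similarity term mentioned in the abstract above is based on Dynamic Time Warping. As a hedged sketch (the paper's exact formulation and shape signatures are not reproduced here), the classic DTW distance between two 1-D sequences can be computed with the standard dynamic program:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) Dynamic Time Warping distance between
    two 1-D sequences (e.g. shape signatures of two projections)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost table
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0, 1, 2], [0, 1, 2]))     # 0.0
print(dtw_distance([0, 1, 2], [0, 0, 1, 2]))  # 0.0 (warping absorbs the shift)
```

The second call illustrates why DTW suits this comparison: a time-shifted copy of the same shape still yields zero distance, unlike a pointwise metric.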
Author | David Vazquez | ||||
Title | Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection | Type | Book Whole | ||
Year | 2013 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | 1 | Issue | 1 | Pages | 1-105 |
Keywords | Pedestrian Detection; Domain Adaptation | ||||
Abstract | Pedestrian detection is of paramount interest for many applications, e.g. Advanced Driver Assistance Systems, Intelligent Video Surveillance and Multimedia systems. The most promising pedestrian detectors rely on appearance-based classifiers trained with annotated data. However, the required annotation step is an intensive and subjective task for humans, which makes it worthwhile to minimize their intervention by using computational tools such as realistic virtual worlds. The reason to use such tools lies in the fact that they allow the automatic generation of precise and rich annotations of visual information. Nevertheless, the use of this kind of data raises the following question: can a pedestrian appearance model learnt with virtual-world data work successfully for pedestrian detection in real-world scenarios? To answer this question, we conduct different experiments that suggest a positive answer. However, pedestrian classifiers trained with virtual-world data can suffer from the so-called dataset shift problem, just as real-world-based classifiers do. Accordingly, we have designed different domain adaptation techniques to face this problem, all of them integrated into the same framework (V-AYLA). We have explored different methods to train a domain-adapted pedestrian classifier by collecting a few pedestrian samples from the target domain (real world) and combining them with many samples from the source domain (virtual world). The extensive experiments we present show that pedestrian detectors developed within the V-AYLA framework do achieve domain adaptation. Ideally, we would like to adapt our system without any human intervention. Therefore, as a first proof of concept, we also propose an unsupervised domain adaptation technique that avoids human intervention during the adaptation process. To the best of our knowledge, this Thesis is the first to demonstrate adaptation of virtual and real worlds for developing an object detector. Last but not least, we also assessed a different strategy to avoid dataset shift that consists of collecting real-world samples and retraining with them in such a way that no bounding boxes of real-world pedestrians have to be provided. We show that the resulting classifier is competitive with a counterpart trained with samples collected by manually annotating pedestrian bounding boxes. The results presented in this Thesis not only end with a proposal for adapting a virtual-world pedestrian detector to the real world, but also go further by pointing out a new methodology that would allow the system to adapt to different situations, which we hope will provide the foundations for future research in this unexplored area. | ||||
Address | Barcelona | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Barcelona | Editor | Antonio Lopez;Daniel Ponsa |
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940530-1-6 | Medium | ||
Area | Expedition | Conference | |||
Notes | adas | Approved | yes | ||
Call Number | ADAS @ adas @ Vaz2013 | Serial | 2276 | ||
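The abstract above describes combining a few target-domain (real-world) samples with many source-domain (virtual-world) samples. As a purely hypothetical illustration of that sample-mixing idea (V-AYLA's actual training procedure is more elaborate; the function name and weighting scheme here are invented), one might build a single weighted training set in which the scarce target samples are up-weighted so they are not drowned out:

```python
import numpy as np

def mix_domains(X_src, y_src, X_tgt, y_tgt, tgt_weight=10.0):
    """Stack many source (virtual) samples with a few target (real)
    samples, and return per-sample weights that emphasize the target
    domain; pass the weights to any learner that accepts sample_weight."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.concatenate([np.ones(len(y_src)),
                        np.full(len(y_tgt), tgt_weight)])
    return X, y, w
```

The design point this sketch captures is that the target data changes the decision boundary despite being a small fraction of the combined set.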
Author | Naveen Onkarappa | ||||
Title | Optical Flow in Driver Assistance Systems | Type | Book Whole | ||
Year | 2013 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Motion perception is one of the most important attributes of the human brain. Visual motion perception consists of inferring the speed and direction of elements in a scene based on visual inputs. Analogously, computer vision is assisted by motion cues in the scene. Motion detection in computer vision is useful for solving problems such as segmentation, depth from motion, structure from motion, compression, navigation and many others. These problems are common to several applications, for instance video surveillance, robot navigation and advanced driver assistance systems (ADAS). One of the most widely used techniques for motion detection is optical flow estimation. The work in this thesis attempts to make optical flow suitable for the requirements and conditions of driving scenarios. In this context, a novel space-variant representation called the reverse log-polar representation is proposed and shown to be better than the traditional log-polar space-variant representation for ADAS. Space-variant representations reduce the amount of data to be processed. Another major contribution of this research is the analysis of how specific characteristics of driving scenarios, such as vehicle speed and road texture, influence optical flow accuracy. From this study, it is inferred that the regularization weight has to be adapted according to the required error measure and to different speeds and road textures. It is also shown that polar-represented optical flow suits driving scenarios where the predominant motion is translation. Given the requirements of such a study and the lack of suitable datasets, a new synthetic dataset is presented; it contains: i) sequences of different speeds and road textures in an urban scenario; ii) sequences with complex motion of an on-board camera; and iii) sequences with additional moving vehicles in the scene. The ground-truth optical flow is generated by ray tracing. Furthermore, a few applications of optical flow in ADAS are shown. First, a robust RANSAC-based technique to estimate the horizon line is proposed. Then, an egomotion estimation is presented to compare the proposed space-variant representation with the classical one. As a final contribution, a modification of the regularization term is proposed that notably improves the results in ADAS applications. This adaptation is evaluated using a state-of-the-art optical flow technique. Experiments on a public dataset (KITTI) validate the advantages of the proposed modification. | ||||
Address | Bellaterra | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Angel Sappa | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940902-1-9 | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ Nav2013 | Serial | 2447 | ||
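The abstract above mentions a robust RANSAC-based horizon line estimator. As a generic sketch of the RANSAC idea only (the thesis' estimator works on optical flow cues, not the raw 2-D points used here), one repeatedly fits a line to a minimal sample and keeps the model with the most inliers:

```python
import random

def ransac_line(points, iters=200, thresh=1.0, seed=0):
    """Minimal RANSAC: fit y = m*x + c to two random points per
    iteration and keep the model that explains the most points."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # this simple sketch skips vertical candidate lines
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + c)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# collinear points on y = 2x + 1 plus two gross outliers
points = [(x, 2 * x + 1) for x in range(10)] + [(3.0, 30.0), (7.0, -5.0)]
model, inliers = ransac_line(points)
print(model, len(inliers))
```

The outliers never attract the winning model because any line passing near them explains far fewer points than the true line does.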
Author | Jorge Bernal; David Vazquez (eds) | ||||
Title | Computer vision Trends and Challenges | Type | Book Whole | ||
Year | 2013 | Publication | Computer vision Trends and Challenges | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | CVCRD; Computer Vision | ||||
Abstract | This book contains the papers presented at the Eighth CVC Workshop on Computer Vision Trends and Challenges (CVCR&D'2013). The workshop was held at the Computer Vision Center (Universitat Autònoma de Barcelona) on October 25th, 2013. The CVC workshops provide an excellent opportunity for young researchers and project engineers to share new ideas and knowledge about the progress of their work, and also to discuss challenges and future perspectives. In addition, the workshop is the welcome event for people who have recently joined the institute. The program of CVCR&D is organized as a single-track, single-day workshop comprising several sessions dedicated to specific topics. For each session, a doctor working on the topic introduces the general research lines, and the PhD students present their specific research. A poster session is held for open questions. Session topics cover the current research lines and development projects of the CVC: Medical Imaging, Color & Texture Analysis, Object Recognition, Image Sequence Evaluation, Advanced Driver Assistance Systems, Machine Vision, Document Analysis, Pattern Recognition and Applications. We want to thank all paper authors and Program Committee members. Their contribution shows that the CVC has a dynamic, active, and promising scientific community. We hope you all enjoy this Eighth workshop, and we look forward to meeting you and new people next year at the Ninth CVCR&D. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | Jorge Bernal; David Vazquez | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940902-2-6 | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | ADAS @ adas @ BeV2013 | Serial | 2339 | ||
Author | Carles Fernandez; Jordi Gonzalez; Joao Manuel R. S. Tavares; Xavier Roca | ||||
Title | Towards Ontological Cognitive System | Type | Book Chapter | ||
Year | 2013 | Publication | Topics in Medical Image Processing and Computational Vision | Abbreviated Journal | |
Volume | 8 | Issue | Pages | 87-99 | |
Keywords | |||||
Abstract | The increasing ubiquity of digital information in our daily lives has positioned video as a favored information vehicle and given rise to an astonishing volume of social media and surveillance footage. This raises a series of technological demands for automatic video understanding and management, which, together with the attentional limitations of human operators, have motivated the research community to work towards better attainment of such capabilities. As a result, current trends in cognitive vision promise to recognize complex events and self-adapt to different environments while managing and integrating several types of knowledge. Future directions suggest reinforcing the multi-modal fusion of information sources and the communication with end users. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Netherlands | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2212-9391 | ISBN | 978-94-007-0725-2 | Medium | |
Area | Expedition | Conference | |||
Notes | ISE; 605.203; 302.018; 600.049 | Approved | no | ||
Call Number | Admin @ si @ FGT2013 | Serial | 2287 | ||
Author | Carles Sanchez; Debora Gil; Antoni Rosell; Albert Andaluz; F. Javier Sanchez | ||||
Title | Segmentation of Tracheal Rings in Videobronchoscopy combining Geometry and Appearance | Type | Conference Article | ||
Year | 2013 | Publication | Proceedings of the International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 1 | Issue | Pages | 153-161 |
Keywords | Videobronchoscopy; tracheal ring segmentation; trachea geometric and appearance model | ||||
Abstract | Videobronchoscopy is a medical imaging technique that allows interactive navigation inside the respiratory pathways and minimally invasive interventions. Tracheal procedures are ordinary interventions that require measuring the percentage of obstructed pathway for injury (stenosis) assessment. Visual assessment of stenosis in videobronchoscopic sequences requires high expertise in tracheal anatomy and is prone to human error. Accurate detection of tracheal rings is the basis for automated estimation of the size of a stenosed trachea. Processing videobronchoscopic images acquired in the operating room is a challenging task due to the wide range of artifacts and acquisition conditions. We present a geometric-appearance model of tracheal rings for their detection in videobronchoscopic videos. Experiments on sequences acquired in the operating room show a performance close to inter-observer variability. | ||||
Address | Barcelona; February 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | SciTePress | Place of Publication | Portugal | Editor | Sebastiano Battiato and José Braz |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-989-8565-47-1 | Medium | ||
Area | 800 | Expedition | Conference | VISAPP | |
Notes | IAM;MV; 600.044; 600.047; 600.060; 605.203 | Approved | no | ||
Call Number | IAM @ iam @ SGR2013 | Serial | 2123 | ||