Records | |||||
Author | Jaume Amores; David Geronimo; Antonio Lopez | ||||
Title | Multiple instance and active learning for weakly-supervised object-class segmentation | Type | Conference Article | ||
Year | 2010 | Publication | 3rd IEEE International Conference on Machine Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Multiple Instance Learning; Active Learning; Object-class segmentation. | ||||
Abstract | In object-class segmentation, one of the most tedious tasks is manually segmenting many object examples in order to learn a model of the object category. Yet, there has been little research on reducing the degree of manual annotation for object-class segmentation. In this work we explore alternative strategies that do not require full manual segmentation of the object in the training set. In particular, we study the use of bounding boxes as a coarser and much cheaper form of segmentation, and we perform a comparative study of several Multiple-Instance Learning techniques that allow a model to be obtained from this type of weak annotation. We show that some of these methods, when used with coarse segmentations, can be competitive with methods that require full manual segmentation of the objects. Furthermore, we show how to combine active learning with this weakly supervised strategy. This combination reduces the amount of annotation and optimizes the number of examples that require full manual segmentation in the training set. | ||||
Address | Hong-Kong | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICMV | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ AGL2010b | Serial | 1429 | ||
Permanent link to this record | |||||
Author | Joan Serrat; Antonio Lopez | ||||
Title | Deteccion automatica de lineas de carril para la asistencia a la conduccion | Type | Miscellaneous | ||
Year | 2010 | Publication | UAB Divulga – Revista de divulgacion cientifica | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Camera-based detection of lane markings on roads can be an affordable solution to the driving risks posed by overtaking manoeuvres and lane departures. This work proposes a system that runs in real time and obtains very good results. The system is designed to identify lane markings under unfavourable visibility conditions, such as night-time driving or when other vehicles obstruct the view. | ||||
Address | Bellaterra (Spain) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ SeL2010 | Serial | 1430 | ||
Permanent link to this record | |||||
Author | Jose Manuel Alvarez | ||||
Title | Combining Context and Appearance for Road Detection | Type | Book Whole | ||
Year | 2010 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Road traffic crashes have become a major cause of death and injury throughout the world. Hence, in order to improve road safety, automobile manufacturers are moving towards the development of vehicles with autonomous functionalities such as keeping in the right lane, maintaining a safe distance between vehicles, or regulating the speed of the vehicle according to the traffic conditions. A key component of these systems is vision-based road detection, which aims to detect the free road surface ahead of the moving vehicle. Detecting the road using a monocular vision system is very challenging, since the road is an outdoor scenario imaged from a mobile platform. Hence, the detection algorithm must be able to deal with continuously changing imaging conditions such as the presence of different objects (vehicles, pedestrians), different environments (urban, highway, off-road), different road types (shape, color), and different imaging conditions (varying illumination, different viewpoints, and changing weather). Therefore, in this thesis we focus on vision-based road detection using a single color camera. More precisely, we first focus on analyzing and grouping pixels according to their low-level properties. Two different approaches are presented to exploit color and photometric invariance. Then, we focus the research of the thesis on exploiting context information. This information provides relevant knowledge about the road, derived not from pixel features of road regions but from semantic information obtained by analyzing the scene. We present two different approaches to infer the geometry of the road ahead of the moving vehicle. Finally, we focus on combining these context and appearance (color) approaches to improve the overall performance of road detection algorithms. The qualitative and quantitative results presented in this thesis on real-world driving sequences show that the proposed method is robust to varying imaging conditions, road types and scenarios, going beyond the state of the art. | ||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Antonio Lopez;Theo Gevers | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-937261-8-8 | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ Alv2010 | Serial | 1454 | ||
Permanent link to this record | |||||
Author | Angel Sappa (ed) | ||||
Title | Computer Graphics and Imaging | Type | Book Whole | ||
Year | 2010 | Publication | Computer Graphics and Imaging | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | Angel Sappa | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-0-88986-836-6 | Medium | |
Area | Expedition | Conference | CGIM | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ Sap2010 | Serial | 1468 | ||
Permanent link to this record | |||||
Author | Monica Piñol | ||||
Title | Adaptative Vocabulary Tree for Image Classification using Reinforcement Learning | Type | Report | ||
Year | 2010 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 162 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | Bellaterra (Barcelona) | ||||
Corporate Author | Computer Vision Center | Thesis | Master's thesis | ||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ Piñ2010 | Serial | 1936 | ||
Permanent link to this record | |||||
Author | Josep M. Gonfaus; Xavier Boix; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez | ||||
Title | Harmony Potentials for Joint Classification and Segmentation | Type | Conference Article | ||
Year | 2010 | Publication | 23rd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 3280–3287 | ||
Keywords | |||||
Abstract | Hierarchical conditional random fields have been successfully applied to object segmentation. One reason is their ability to incorporate contextual information at different scales. However, these models do not allow multiple labels to be assigned to a single node. At higher scales in the image, this yields an oversimplified model, since multiple classes can reasonably be expected to appear within one region. This simplified model especially limits the impact that observations at larger scales may have on the CRF model. Neglecting the information at larger scales is undesirable, since class-label estimates based on these scales are more reliable than those at smaller, noisier scales. To address this problem, we propose a new potential, called the harmony potential, which can encode any possible combination of class labels. We propose an effective sampling strategy that renders the underlying optimization problem tractable. Results show that our approach obtains state-of-the-art results on two challenging datasets: Pascal VOC 2009 and MSRC-21. | ||||
Address | San Francisco, CA, USA | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4244-6984-0 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | ADAS;CIC;ISE | Approved | no | ||
Call Number | ADAS @ adas @ GBW2010 | Serial | 1296 | ||
Permanent link to this record | |||||
Author | Jose Manuel Alvarez; Theo Gevers; Antonio Lopez | ||||
Title | 3D Scene Priors for Road Detection | Type | Conference Article | ||
Year | 2010 | Publication | 23rd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 57–64 | ||
Keywords | road detection | ||||
Abstract | Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features, and they assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. Contextual cues include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban and highway). Further, using the combined cues outperforms all individual cues. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods. | ||||
Address | San Francisco, CA, USA; June 2010 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4244-6984-0 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | ADAS;ISE | Approved | no | ||
Call Number | ADAS @ adas @ AGL2010a | Serial | 1302 | ||
Permanent link to this record | |||||
Author | Jose Manuel Alvarez; Felipe Lumbreras; Theo Gevers; Antonio Lopez | ||||
Title | Geographic Information for vision-based Road Detection | Type | Conference Article | ||
Year | 2010 | Publication | IEEE Intelligent Vehicles Symposium | Abbreviated Journal | |
Volume | Issue | Pages | 621–626 | ||
Keywords | road detection | ||||
Abstract | Road detection is a vital task for the development of autonomous vehicles. The knowledge of the free road surface ahead of the target vehicle can be used for autonomous driving and road departure warning, as well as to support advanced driver assistance systems like vehicle or pedestrian detection. Using vision to detect the road has several advantages over other sensors: richness of features, easy integration, low cost and low power consumption. Common vision-based road detection approaches use low-level features (such as color or texture) as visual cues to group pixels exhibiting similar properties. However, it is difficult to foresee a perfect clustering algorithm, since roads are outdoor scenarios imaged from a mobile platform. In this paper, we propose a novel high-level approach to vision-based road detection based on geographical information. The key idea of the algorithm is to exploit geographical information to provide a rough detection of the road. This segmentation is then refined at low level using color information to produce the final result. The results presented show the validity of our approach. | ||||
Address | San Diego, CA, USA | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IV | ||
Notes | ADAS;ISE | Approved | no | ||
Call Number | ADAS @ adas @ ALG2010 | Serial | 1428 | ||
Permanent link to this record | |||||
Author | Jose Manuel Alvarez; Theo Gevers; Antonio Lopez | ||||
Title | Learning photometric invariance for object detection | Type | Journal Article | ||
Year | 2010 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 90 | Issue | 1 | Pages | 45-61 |
Keywords | road detection | ||||
Abstract | [Impact factor: 3.508, the last available from JCR2009 SCI; position 4/103 in the category Computer Science, Artificial Intelligence.] Color is a powerful visual cue in many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restricted to model real-world scenes, in which different reflectance mechanisms can hold simultaneously. Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set is computed, composed of both color variants and invariants. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time. Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and outperforms state-of-the-art detection techniques in the field of object, skin and road recognition. Considering sequential data, the proposed method (extended to deal with future observations) outperforms the other methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer US | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0920-5691 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS;ISE | Approved | no | ||
Call Number | ADAS @ adas @ AGL2010c | Serial | 1451 | ||
Permanent link to this record | |||||
Author | Albert Ali Salah; E. Pauwels; R. Tavenard; Theo Gevers | ||||
Title | T-Patterns Revisited: Mining for Temporal Patterns in Sensor Data | Type | Journal Article | ||
Year | 2010 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 10 | Issue | 8 | Pages | 7496-7513 |
Keywords | sensor networks; temporal pattern extraction; T-patterns; Lempel-Ziv; Gaussian mixture model; MERL motion data | ||||
Abstract | The trend to use large amounts of simple sensors as opposed to a few complex sensors to monitor places and systems creates a need for temporal pattern mining algorithms to work on such data. The methods that try to discover re-usable and interpretable patterns in temporal event data have several shortcomings. We contrast several recent approaches to the problem, and extend the T-Pattern algorithm, which was previously applied for detection of sequential patterns in behavioural sciences. The temporal complexity of the T-pattern approach is prohibitive in the scenarios we consider. We remedy this with a statistical model to obtain a fast and robust algorithm to find patterns in temporal data. We test our algorithm on a recent database collected with passive infrared sensors with millions of events. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ SPT2010 | Serial | 1845 | ||
Permanent link to this record | |||||
Author | Koen E.A. van de Sande; Theo Gevers; C.G.M. Snoek | ||||
Title | Evaluating Color Descriptors for Object and Scene Recognition | Type | Journal Article | ||
Year | 2010 | Publication | IEEE Transaction on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 32 | Issue | 9 | Pages | 1582 - 1596 |
Keywords | |||||
Abstract | [Impact factor: 5.308.] Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ SGS2010 | Serial | 1846 | ||
Permanent link to this record | |||||
Author | O. Fors; J. Nuñez; Xavier Otazu; A. Prades; Robert D. Cardinal | ||||
Title | Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques | Type | Journal Article | ||
Year | 2010 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 10 | Issue | 3 | Pages | 1743–1752 |
Keywords | image processing; image deconvolution; faint stars; space debris; wavelet transform | ||||
Abstract | In this paper we show how the techniques of image deconvolution can increase the ability of image sensors, for example CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30% without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ FNO2010 | Serial | 1285 | ||
Permanent link to this record | |||||
Author | Naila Murray; Eduard Vazquez | ||||
Title | Lacuna Restoration: How to choose a neutral colour? | Type | Conference Article | ||
Year | 2010 | Publication | Proceedings of The CREATE 2010 Conference | Abbreviated Journal | |
Volume | Issue | Pages | 248–252 | ||
Keywords | |||||
Abstract | Painting restoration that involves filling in material loss (called lacuna) is a complex process. Several standard techniques exist to tackle lacuna restoration, and this article focuses on those techniques that employ a “neutral” colour to mask the defect. Restoration experts often disagree on the choice of such a colour and, in fact, the concept of a neutral colour is controversial. We posit that a neutral colour is one that attracts relatively little visual attention for a specific lacuna. We conducted an eye-tracking experiment to compare two common neutral colour selection methods, specifically the most common local colour and the mean local colour. The results obtained demonstrate that the most common local colour triggers less visual attention in general. Notwithstanding, we observed instances in which the most common colour triggers a significant amount of attention, when subjects spend time resolving their confusion about whether or not a lacuna is part of the painting. | ||||
Address | Gjovik, Norway | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CREATE | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ MuV2010 | Serial | 1297 | ||
Permanent link to this record | |||||
Author | Eduard Vazquez; Ramon Baldrich | ||||
Title | Non-supervised goodness measure for image segmentation | Type | Conference Article | ||
Year | 2010 | Publication | Proceedings of The CREATE 2010 Conference | Abbreviated Journal | |
Volume | Issue | Pages | 334–335 | ||
Keywords | |||||
Abstract | |||||
Address | Gjovik, Norway | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CREATE | ||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ VaB2010 | Serial | 1299 | ||
Permanent link to this record | |||||
Author | Jaime Moreno; Xavier Otazu; Maria Vanrell | ||||
Title | Local Perceptual Weighting in JPEG2000 for Color Images | Type | Conference Article | ||
Year | 2010 | Publication | 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science | Abbreviated Journal | |
Volume | Issue | Pages | 255–260 | ||
Keywords | |||||
Abstract | The aim of this work is to explain how to apply perceptual concepts to define a perceptual pre-quantizer and to improve the JPEG2000 compressor. The approach consists in quantizing wavelet transform coefficients using some properties of human visual system behavior. Noise is fatal to image compression performance, because it is both annoying for the observer and consumes excessive bandwidth when the imagery is transmitted. Perceptual pre-quantization reduces unperceivable details and thus improves both visual impression and transmission properties. The comparison between JPEG2000 with and without perceptual pre-quantization shows that the latter is not favorable in PSNR, but the recovered image is more compressed at the same or even better visual quality, measured with a weighted PSNR. Perceptual criteria were taken from the CIWaM (Chromatic Induction Wavelet Model). | ||||
Address | Joensuu, Finland | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 9781617388897 | Medium | ||
Area | Expedition | Conference | CGIV/MCS | ||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ MOV2010a | Serial | 1307 | ||
Permanent link to this record |