|
Records |
Links |
|
Author |
Patricia Suarez; Angel Sappa; Boris X. Vintimilla |
|
|
Title |
Learning to Colorize Infrared Images |
Type |
Conference Article |
|
Year |
2017 |
Publication |
15th International Conference on Practical Applications of Agents and Multi-Agent Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
CNN in multispectral imaging; Image colorization |
|
|
Abstract |
This paper focuses on near infrared (NIR) image colorization using a Generative Adversarial Network (GAN) architecture. The proposed architecture consists of two stages. First, it learns to colorize the given input, producing an RGB image. Then, in the second stage, a discriminative model estimates the probability that the generated image came from the training dataset rather than having been automatically generated. The proposed model is trained from scratch, because our set of images is very different from the datasets used in existing pre-trained models, so transfer learning strategies cannot be applied. Infrared image colorization is an important problem when human perception needs to be considered, e.g., in remote sensing applications. Experimental results on a large set of real images are provided, showing the validity of the proposed approach. |
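As a reference for the adversarial setup the abstract describes, the standard GAN objective can be written as follows (this is the generic two-player formulation; the paper's exact losses are not given here):

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{RGB}}}\big[\log D(x)\big]
+ \mathbb{E}_{n \sim p_{\mathrm{NIR}}}\big[\log\big(1 - D(G(n))\big)\big]
```

where the first-stage generator G colorizes a NIR image n into an RGB estimate, and the second-stage discriminator D estimates the probability that an image came from the training data.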
|
|
Address |
Porto; Portugal; June 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
PAAMS |
|
|
Notes |
ADAS; MSIAU; 600.086; 600.122; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
2919 |
|
Permanent link to this record |
|
|
|
|
Author |
Naveen Onkarappa; Angel Sappa |
|
|
Title |
Laplacian Derivative based Regularization for Optical Flow Estimation in Driving Scenario |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
8048 |
Issue |
|
Pages |
483-490 |
|
|
Keywords |
Optical flow; regularization; Driver Assistance Systems; Performance Evaluation |
|
|
Abstract |
Existing state-of-the-art optical flow approaches, which are evaluated on standard datasets such as Middlebury, do not necessarily perform as well when evaluated on driving scenarios. This drop in performance is due to several challenges that arise in real scenes during driving. In this direction, we propose a modification to the regularization term in a variational optical flow formulation that notably improves the results, especially in driving scenarios. The proposed modification consists in using the Laplacian derivatives of the flow components in the regularization term instead of their gradients. We show the improved results on a standard real image sequence dataset (KITTI). |
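In a generic variational optical flow formulation (notation assumed here, not taken from the paper), the proposed modification swaps the first-order smoothness term for a second-order one:

```latex
E(u,v) = E_{\mathrm{data}}
+ \alpha \int_{\Omega} \left( |\nabla u|^2 + |\nabla v|^2 \right) dx
\quad\longrightarrow\quad
E(u,v) = E_{\mathrm{data}}
+ \alpha \int_{\Omega} \left( |\Delta u|^2 + |\Delta v|^2 \right) dx
```

where (u, v) are the horizontal and vertical flow components and α weights the regularization against the data term.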
|
|
Address |
York; UK; August 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-40245-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
ADAS; 600.055; 601.215 |
Approved |
no |
|
|
Call Number |
Admin @ si @ OnS2013b |
Serial |
2244 |
|
Permanent link to this record |
|
|
|
|
Author |
Marcelo D. Pistarelli; Angel Sappa; Ricardo Toledo |
|
|
Title |
Multispectral Stereo Image Correspondence |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
8048 |
Issue |
|
Pages |
217-224 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a novel multispectral stereo image correspondence approach. It is evaluated using a stereo rig built from a visible spectrum camera and a long wave infrared spectrum camera. The novelty of the proposed approach lies in using Hough space as the correspondence search domain. In this way it avoids searching for correspondences in the original multispectral image domains, where information is weakly correlated, and instead uses a common domain. The proposed approach is intended for outdoor urban scenarios, where images contain a large number of edges. These edges are used as distinctive characteristics for matching in the Hough space. Experimental results are provided showing the validity of the proposed approach. |
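The core idea, mapping edge pixels into a (ρ, θ) Hough accumulator and matching there instead of in the raw image domains, can be sketched as follows. This is a minimal illustrative accumulator, not the authors' implementation:

```python
import numpy as np

def hough_accumulate(edge_points, n_theta=180, n_rho=400, rho_max=100.0):
    """Vote edge pixels into a (rho, theta) Hough accumulator.

    edge_points: iterable of (x, y) edge-pixel coordinates.
    Returns the accumulator (n_rho x n_theta) and the sampled angles.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cols = np.arange(n_theta)
    for x, y in edge_points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # signed distance per theta
        # map rho from [-rho_max, rho_max] to accumulator row indices
        rows = np.round((rho + rho_max) / (2.0 * rho_max) * (n_rho - 1)).astype(int)
        ok = (rows >= 0) & (rows < n_rho)
        acc[rows[ok], cols[ok]] += 1
    return acc, thetas

# All pixels of a vertical edge x = 30 vote for the single cell (rho = 30, theta = 0)
points = [(30, y) for y in range(50)]
acc, thetas = hough_accumulate(points)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
```

Collinear edge points from both spectra vote for the same (ρ, θ) cell, which is what makes the Hough domain usable as a common matching space for weakly correlated image pairs.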
|
|
Address |
York; UK; August 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-40245-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
ADAS; 600.055 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PST2013 |
Serial |
2561 |
|
Permanent link to this record |
|
|
|
|
Author |
Javier Marin; David Vazquez; Antonio Lopez; Jaume Amores; Bastian Leibe |
|
|
Title |
Random Forests of Local Experts for Pedestrian Detection |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2592 - 2599 |
|
|
Keywords |
ADAS; Random Forest; Pedestrian Detection |
|
|
Abstract |
Pedestrian detection is one of the most challenging tasks in computer vision and has received a lot of attention in recent years. Recently, some authors have shown the advantages of using combinations of part/patch-based detectors in order to cope with the large variability of poses and the existence of partial occlusions. In this paper, we propose a pedestrian detection method that efficiently combines multiple local experts by means of a Random Forest ensemble. The proposed method works with rich block-based representations such as HOG and LBP, in such a way that the same features are reused by the multiple local experts, so no extra computational cost is incurred with respect to a holistic method. Furthermore, we demonstrate how to integrate the proposed approach with a cascaded architecture in order to achieve not only high accuracy but also acceptable efficiency. In particular, the resulting detector operates at five frames per second on a laptop. We tested the proposed method on well-known challenging datasets such as Caltech, ETH, Daimler, and INRIA. The proposed method consistently ranks among the top performers on all of them, either being the best method or trailing the best one by a small margin. |
|
|
Address |
Sydney; Australia; December 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1550-5499 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCV |
|
|
Notes |
ADAS; 600.057; 600.054 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ MVL2013 |
Serial |
2333 |
|
Permanent link to this record |
|
|
|
|
Author |
Gemma Roig; Xavier Boix; R. de Nijs; Sebastian Ramos; K. Kühnlenz; Luc Van Gool |
|
|
Title |
Active MAP Inference in CRFs for Efficient Semantic Segmentation |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2312 - 2319 |
|
|
Keywords |
Semantic Segmentation |
|
|
Abstract |
Most MAP inference algorithms for CRFs optimize an energy function in which all the potentials are known. In this paper, we focus on CRFs where the computational cost of instantiating the potentials is orders of magnitude higher than that of MAP inference. This is often the case in semantic image segmentation, where most potentials are instantiated by slow classifiers fed with costly features. We introduce Active MAP inference 1) to select on the fly a subset of potentials to be instantiated in the energy function, leaving the rest of the parameters of the potentials unknown, and 2) to estimate the MAP labeling from such an incomplete energy function. Results on semantic segmentation benchmarks, namely PASCAL VOC 2010 [5] and MSRC-21 [19], show that Active MAP inference achieves similar levels of accuracy but with major efficiency gains. |
|
|
Address |
Sydney; Australia; December 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1550-5499 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCV |
|
|
Notes |
ADAS; 600.057 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ RBN2013 |
Serial |
2377 |
|
Permanent link to this record |
|
|
|
|
Author |
Felipe Codevilla; Antonio Lopez; Vladlen Koltun; Alexey Dosovitskiy |
|
|
Title |
On Offline Evaluation of Vision-based Driving Models |
Type |
Conference Article |
|
Year |
2018 |
Publication |
15th European Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
11219 |
Issue |
|
Pages |
246-262 |
|
|
Keywords |
Autonomous driving; deep learning |
|
|
Abstract |
Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics. |
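The reported finding, that low offline error need not mean good driving, can be illustrated with a toy correlation check. All numbers below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical scores for six driving models: mean squared prediction error
# on an offline validation set, and an online driving-quality score in [0, 1].
offline_mse     = np.array([0.10, 0.11, 0.18, 0.20, 0.25, 0.30])
driving_quality = np.array([0.90, 0.45, 0.85, 0.40, 0.35, 0.30])

# Models 0 and 1 have nearly identical offline error but very different
# driving quality -- the paper's central observation in miniature.
pearson_r = np.corrcoef(offline_mse, driving_quality)[0, 1]
```

A weak or unstable correlation between the two columns is exactly what motivates choosing the validation set and offline metric carefully before trusting offline rankings.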
|
|
Address |
Munich; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCV |
|
|
Notes |
ADAS; 600.124; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CLK2018 |
Serial |
3162 |
|
Permanent link to this record |
|
|
|
|
Author |
Chris Bahnsen; David Vazquez; Antonio Lopez; Thomas B. Moeslund |
|
|
Title |
Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data |
Type |
Conference Article |
|
Year |
2019 |
Publication |
14th International Conference on Computer Vision Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
123-130 |
|
|
Keywords |
Rain Removal; Traffic Surveillance; Image Denoising |
|
|
Abstract |
Rainfall is a problem in automated traffic surveillance. Rain streaks occlude road users and degrade overall visibility, which in turn decreases object detection performance. One way of alleviating this is to artificially remove the rain from the images. This requires knowledge of corresponding rainy and rain-free images, which are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to account for the fact that rain falls throughout the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks across the entire scene. We train a conditional Generative Adversarial Network for rain removal and apply it to traffic surveillance images from the SYNTHIA and AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8% increase in detection accuracy compared to the original rainy images. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance. |
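PSNR, one of the image-quality scores the paper contrasts with detection performance, has a standard definition that is straightforward to compute; a minimal sketch:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.full((64, 64), 128, dtype=np.uint8)
noisy = clean + 1  # uniform error of one gray level -> MSE = 1
value = psnr(clean, noisy)
```

A high PSNR only says the rain-removed frame is numerically close to a reference; as the abstract notes, that closeness does not guarantee that a detector such as YOLOv2 will perform well on it.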
|
|
Address |
Prague; Czech Republic; February 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISIGRAPP |
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ BVL2019 |
Serial |
3256 |
|
Permanent link to this record |
|
|
|
|
Author |
Muhammad Anwer Rao; David Vazquez; Antonio Lopez |
|
|
Title |
Color Contribution to Part-Based Person Detection in Different Types of Scenarios |
Type |
Conference Article |
|
Year |
2011 |
Publication |
14th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
6855 |
Issue |
II |
Pages |
463-470 |
|
|
Keywords |
Pedestrian Detection; Color |
|
|
Abstract |
Camera-based person detection is of paramount interest due to its potential applications. The task is difficult because of the great variety of backgrounds (scenarios, illumination) in which persons appear, as well as their intra-class variability (pose, clothing, occlusion). In fact, person is one of the classes included in the popular PASCAL visual object classes (VOC) challenge. A breakthrough for this challenge, regarding person detection, is due to Felzenszwalb et al. These authors proposed a part-based detector that relies on histograms of oriented gradients (HOG) and latent support vector machines (LatSVM) to learn a model of the whole human body and its constituent parts, as well as their relative positions. Since the approach of Felzenszwalb et al. appeared, new variants have been proposed, usually giving rise to more complex models. In this paper, we focus on an issue that has not attracted sufficient interest up to now: HOG is usually computed from the RGB color space, but other possibilities exist and deserve investigation. Here we challenge RGB space with the opponent color space (OPP), which is inspired by the human vision system. We compute HOG on top of OPP, then train and test the part-based human classifier of Felzenszwalb et al. using the PASCAL VOC challenge protocols and person database. Our experiments demonstrate that OPP outperforms RGB. We also investigate possible differences among types of scenarios: indoor, urban, and countryside. Interestingly, our experiments suggest that the benefits of OPP with respect to RGB mainly arise in indoor and countryside scenarios, those in which the human visual system evolved. |
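The opponent color space mentioned in the abstract has a standard linear definition from RGB; a minimal conversion sketch (this is the commonly used OPP transform, assumed rather than quoted from the paper):

```python
import numpy as np

def rgb_to_opponent(img):
    """Convert an RGB image of shape (H, W, 3) to opponent channels O1, O2, O3."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    o1 = (r - g) / np.sqrt(2.0)            # red-green opponent channel
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)  # yellow-blue opponent channel
    o3 = (r + g + b) / np.sqrt(3.0)        # intensity channel
    return np.stack([o1, o2, o3], axis=-1)

gray = np.full((1, 1, 3), 0.5)  # achromatic pixel: O1 = O2 = 0
opp = rgb_to_opponent(gray)
```

HOG would then be computed per opponent channel exactly as it is per RGB channel, so the detector pipeline itself is unchanged by the color-space swap.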
|
|
Address |
Seville, Spain |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
Berlin Heidelberg |
Editor |
P. Real, D. Diaz, H. Molina, A. Berciano, W. Kropatsch |
|
|
Language |
English |
Summary Language |
English |
Original Title |
Color Contribution to Part-Based Person Detection in Different Types of Scenarios |
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-23677-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ RVL2011b |
Serial |
1665 |
|
Permanent link to this record |
|
|
|
|
Author |
Aura Hernandez-Sabate; Debora Gil; David Roche; Monica M. S. Matsumoto; Sergio S. Furuie |
|
|
Title |
Inferring the Performance of Medical Imaging Algorithms |
Type |
Conference Article |
|
Year |
2011 |
Publication |
14th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
6854 |
Issue |
|
Pages |
520-528 |
|
|
Keywords |
Validation; Statistical Inference; Medical Imaging Algorithms |
|
|
Abstract |
Evaluation of the performance and limitations of medical imaging algorithms is essential to estimate their impact in social, economic, or clinical terms. However, validation of medical imaging techniques is a challenging task due to the variety of imaging and clinical problems involved, as well as the difficulty of systematically extracting a reliable ground truth. Although specific validation protocols are reported in every medical imaging paper, two major concerns remain: the definition of standardized methodologies transversal to all problems, and the generalization of conclusions to the whole clinical data set. We claim that both issues would be fully solved if we had a statistical model relating the ground truth and the output of computational imaging techniques. Such a statistical model could conclude to what extent the algorithm behaves like the ground truth from the analysis of a sample of the validation data set. We present a statistical inference framework that reports the agreement and describes the relationship between two quantities. We show its transversality by applying it to the validation of two different tasks: contour segmentation and landmark correspondence. |
|
|
Address |
Sevilla |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer-Verlag Berlin Heidelberg |
Place of Publication |
Berlin |
Editor |
Pedro Real; Daniel Diaz-Pernil; Helena Molina-Abril; Ainhoa Berciano; Walter Kropatsch |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
IAM; ADAS |
Approved |
no |
|
|
Call Number |
IAM @ iam @ HGR2011 |
Serial |
1676 |
|
Permanent link to this record |
|
|
|
|
Author |
Naveen Onkarappa; Angel Sappa |
|
|
Title |
Space Variant Representations for Mobile Platform Vision Applications |
Type |
Conference Article |
|
Year |
2011 |
Publication |
14th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
6855 |
Issue |
II |
Pages |
146-154 |
|
|
Keywords |
|
|
|
Abstract |
The log-polar space-variant representation, motivated by biological vision, has been widely studied in the literature. Its data reduction and invariance properties have made it useful in many vision applications. However, due to its nature, it fails to preserve features in the periphery. In the current work, as an attempt to overcome this problem, we propose a novel space-variant representation. It is evaluated and shown to be better than the log-polar representation at preserving peripheral information, which is crucial for on-board mobile vision applications. The evaluation is performed by comparing the log-polar and the proposed representations when used for estimating dense optical flow. |
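The log-polar representation the paper compares against can be sketched as a resampling of the image onto log-spaced radii and uniform angles; a minimal nearest-neighbor version (illustrative only, not the authors' code):

```python
import numpy as np

def to_log_polar(img, n_rho=64, n_theta=64):
    """Resample a 2-D grayscale image onto a log-polar grid centered at the
    image center, using nearest-neighbor sampling."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # log-spaced radii from 1 pixel out to r_max, uniformly spaced angles
    rhos = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rhos, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

img = np.arange(101 * 101, dtype=np.float64).reshape(101, 101)
lp = to_log_polar(img)
```

Because the radii are log-spaced, the fovea (small radii) is sampled densely while the periphery is sampled coarsely; that coarse peripheral sampling is precisely the information loss the abstract says the proposed representation mitigates.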
|
|
Address |
Seville, Spain |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
P. Real, D. Diaz, H. Molina, A. Berciano, W. Kropatsch |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-23677-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
NaS2011; ADAS @ adas @ |
Serial |
1686 |
|
Permanent link to this record |