|
|
Author |
Eduard Vazquez; Theo Gevers; M. Lucassen; Joost Van de Weijer; Ramon Baldrich |
|
|
Title |
Saliency of Color Image Derivatives: A Comparison between Computational Models and Human Perception |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Journal of the Optical Society of America A |
Abbreviated Journal |
JOSA A |
|
|
Volume |
27 |
Issue |
3 |
Pages |
613–621 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, computational methods are proposed to compute color edge saliency based on the information content of color edges. The computational methods are evaluated on bottom-up saliency in a psychophysical experiment, and on the more complex task of salient object detection in real-world images. The psychophysical experiment demonstrates the relevance of using information theory as a saliency processing model and shows that the proposed methods are significantly better at predicting color saliency (with a human-method correspondence of up to 74.75% and an observer agreement of 86.8%) than state-of-the-art models. Furthermore, results from salient object detection confirm that an early fusion of color and contrast provides accurate visual saliency, with a hit rate of up to 95.2%. |
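A minimal sketch of the information-content idea, assuming a plain per-pixel histogram model: color derivatives are scored by their rarity (-log probability). The function name, bin count, and color space here are illustrative, not the paper's implementation.

```python
import numpy as np

def color_edge_saliency(img, bins=64):
    """Score color edges by information content: rare derivative
    magnitudes carry more information (-log p). Illustrative only."""
    img = img.astype(np.float64)
    gx = np.gradient(img, axis=1)              # horizontal color derivative
    gy = np.gradient(img, axis=0)              # vertical color derivative
    energy = np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))  # color edge strength
    hist, edges = np.histogram(energy, bins=bins)
    p = hist / hist.sum()                      # empirical edge distribution
    idx = np.clip(np.digitize(energy, edges[1:-1]), 0, bins - 1)
    return -np.log(p[idx] + 1e-12)             # improbable edges are salient
```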
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; CIC |
Approved |
no |
|
|
Call Number |
CAT @ cat @ VGL2010 |
Serial |
1275 |
|
|
|
|
|
Author |
Carola Figueroa Flores; David Berga; Joost Van de Weijer; Bogdan Raducanu |
|
|
Title |
Saliency for free: Saliency prediction as a side-effect of object recognition |
Type |
Journal Article |
|
Year |
2021 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
150 |
Issue |
|
Pages |
1-7 |
|
|
Keywords |
Saliency maps; Unsupervised learning; Object recognition |
|
|
Abstract |
Saliency is the perceptual capacity of our visual system to focus our attention (i.e., gaze) on relevant objects instead of the background. So far, computational methods for saliency estimation have required the explicit generation of a saliency map, a process usually achieved via eye-tracking experiments on still images. This is a tedious process that needs to be repeated for each new dataset. In the current paper, we demonstrate that it is possible to automatically generate saliency maps without ground truth. In our approach, saliency maps are learned as a side effect of object recognition. Extensive experiments carried out on both real and synthetic datasets demonstrate that our approach is able to generate accurate saliency maps, achieving competitive results when compared with supervised methods. |
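The claim that saliency can fall out of a recognition network can be illustrated with a simple gradient-based map. This is an assumption-laden stand-in (input gradients of an off-the-shelf classifier), not the paper's architecture; the backbone choice is arbitrary.

```python
import torch
import torchvision.models as models

# Untrained backbone as a placeholder; any object recognition CNN works.
model = models.resnet18(weights=None).eval()

def recognition_saliency(x):
    """Saliency map derived from a recognition network: backpropagate the
    top class score to the pixels and take the max gradient per location."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits.max(dim=1).values.sum().backward()
    return x.grad.abs().amax(dim=1)            # (B, H, W) saliency map

sal = recognition_saliency(torch.rand(1, 3, 224, 224))
```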
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.147; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ FBW2021 |
Serial |
3559 |
|
|
|
|
|
Author |
Carola Figueroa Flores; Abel Gonzalez-Garcia; Joost Van de Weijer; Bogdan Raducanu |
|
|
Title |
Saliency for fine-grained object recognition in domains with scarce training data |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
94 |
Issue |
|
Pages |
62-73 |
|
|
Keywords |
|
|
|
Abstract |
This paper investigates the role of saliency in improving the classification accuracy of a Convolutional Neural Network (CNN) when scarce training data is available. Our approach consists of adding a saliency branch to an existing CNN architecture; this branch modulates the standard bottom-up visual features from the original image input, acting as an attentional mechanism that guides the feature extraction process. The main aim of the proposed approach is to enable the effective training of a fine-grained recognition model with limited training samples and to improve its performance on the task, thereby alleviating the need to annotate a large dataset. The vast majority of saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline. Our proposed pipeline makes it possible to evaluate saliency methods for the high-level task of object recognition. We perform extensive experiments on various fine-grained datasets (Flowers, Birds, Cars, and Dogs) under different conditions and show that saliency can considerably improve the network's performance, especially in the case of scarce training data. Furthermore, our experiments show that saliency methods that produce better saliency maps (as measured by traditional saliency benchmarks) also yield greater performance gains when applied in an object recognition pipeline. |
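A minimal sketch of the modulation idea, assuming a toy backbone: a small saliency branch predicts a spatial map in [0, 1] that multiplies the backbone features before classification. Layer shapes and names are illustrative, not the paper's architecture.

```python
import torch.nn as nn

class SaliencyModulatedNet(nn.Module):
    """Toy CNN with a saliency branch that modulates bottom-up features."""

    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Saliency branch: one attention map predicted from the image.
        self.saliency = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        feats = self.features(x) * self.saliency(x)     # spatial modulation
        return self.classifier(feats.mean(dim=(2, 3)))  # GAP + classifier
```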
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.109; 600.141; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ FGW2019 |
Serial |
3264 |
|
|
|
|
|
Author |
Sangheeta Roy; Palaiahnakote Shivakumara; Namita Jain; Vijeta Khare; Anjan Dutta; Umapada Pal; Tong Lu |
|
|
Title |
Rough-Fuzzy based Scene Categorization for Text Detection and Recognition in Video |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
80 |
Issue |
|
Pages |
64-82 |
|
|
Keywords |
Rough set; Fuzzy set; Video categorization; Scene image classification; Video text detection; Video text recognition |
|
|
Abstract |
Scene image or video understanding is a challenging task, especially when the number of video types increases drastically with high variations in background and foreground. This paper proposes a new method for categorizing scene videos into different classes, namely Animation, Outlet, Sports, e-Learning, Medical, Weather, Defense, Economics, Animal Planet, and Technology, to improve the performance of text detection and recognition, which is an effective approach to scene image or video understanding. For this purpose, we first present a new combination of rough and fuzzy concepts to study the irregular shapes of edge components in input scene videos, which helps to classify edge components into several groups. Next, the proposed method explores the gradient direction information of each pixel in each edge component group to extract stroke-based features by dividing each group into several intra- and inter-planes. We further extract correlation and covariance features to encode semantic features located inside planes or between planes. Features of the intra- and inter-planes of groups are then concatenated to obtain a feature matrix. Finally, the feature matrix is verified with temporal frames and fed to a neural network for categorization. Experimental results show that the proposed method outperforms the existing state-of-the-art methods; at the same time, the performance of text detection and recognition methods is also improved significantly thanks to categorization. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.097; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RSJ2018 |
Serial |
3096 |
|
|
|
|
|
Author |
Alicia Fornes; Josep Llados; Gemma Sanchez; Dimosthenis Karatzas |
|
|
Title |
Rotation Invariant Hand-Drawn Symbol Recognition based on a Dynamic Time Warping Model |
Type |
Journal Article |
|
Year |
2010 |
Publication |
International Journal on Document Analysis and Recognition |
Abbreviated Journal |
IJDAR |
|
|
Volume |
13 |
Issue |
3 |
Pages |
229–241 |
|
|
Keywords |
|
|
|
Abstract |
One of the major difficulties of handwritten symbol recognition is the high variability among symbols caused by different writer styles. In this paper, we introduce a robust approach for describing and recognizing hand-drawn symbols that is tolerant to these writer style differences. The method, which is invariant to scale and rotation, is based on the dynamic time warping (DTW) algorithm: the symbols are described by vector sequences, a variation of the DTW distance is used to compute the matching distance, and a K-Nearest Neighbor classifier assigns the labels. Our approach has been evaluated on two benchmarking scenarios consisting of hand-drawn symbols. Compared with state-of-the-art methods for symbol recognition, our method shows higher tolerance to the irregular deformations induced by hand-drawn strokes. |
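A minimal sketch of the matching-and-voting core, assuming symbols are already described as vector sequences: textbook DTW plus a k-NN vote. The paper's rotation- and scale-invariant description and its DTW variant are not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain DTW between two vector sequences a (m x d) and b (n x d)."""
    m, n = len(a), len(b)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]

def knn_classify(query, templates, labels, k=3):
    """k-NN vote over DTW distances to labeled symbol templates."""
    d = [dtw_distance(query, t) for t in templates]
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique([labels[i] for i in nearest], return_counts=True)
    return vals[np.argmax(counts)]
```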
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer-Verlag |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1433-2833 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; IF 2009: 1.213 |
Approved |
no |
|
|
Call Number |
DAG @ dag @ FLS2010a |
Serial |
1288 |
|
|
|
|
|
Author |
Fernando Vilariño; Ludmila I. Kuncheva; Petia Radeva |
|
|
Title |
ROC curves and video analysis optimization in intestinal capsule endoscopy |
Type |
Journal Article |
|
Year |
2006 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
27 |
Issue |
8 |
Pages |
875–881 |
|
|
Keywords |
ROC curves; Classification; Classifiers ensemble; Detection of intestinal contractions; Imbalanced classes; Wireless capsule endoscopy |
|
|
Abstract |
Wireless capsule endoscopy involves the inspection of hours of video material by a highly qualified professional. Time episodes corresponding to intestinal contractions, which are of interest to the physician, constitute about 1% of the video. The problem is to automatically label time episodes containing contractions so that only a fraction of the video needs inspection. As the classes of contraction and non-contraction images in the video are largely imbalanced, ROC curves are used to optimize the trade-off between false positive and false negative rates. Classifier ensemble methods and simple classifiers were examined. Our results reinforce the claims from recent literature that classifier ensemble methods specifically designed for imbalanced problems have substantial advantages over simple classifiers and standard classifier ensembles. By using ROC curves with the bagging ensemble method, the inspection time can be drastically reduced at the expense of a small fraction of missed contractions. |
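A minimal sketch of the evaluation style the abstract describes, on synthetic data: train a bagging ensemble on a roughly 1%-positive problem and use the ROC curve to pick an operating threshold that keeps missed contractions low. All numbers, names, and the base classifier are illustrative.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the contraction data: ~1% positives.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 10))
y = (rng.random(4000) < 0.01).astype(int)
X[y == 1] += 1.5                            # shift positives to be learnable

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                      stratify=y, random_state=0)
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50).fit(Xtr, ytr)
fpr, tpr, thr = roc_curve(yte, clf.predict_proba(Xte)[:, 1])
ok = tpr >= 0.95                            # keep >= 95% of contractions...
best = np.argmin(fpr[ok])                   # ...then minimize inspection load
print("threshold:", thr[ok][best], "FPR:", fpr[ok][best])
```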
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; MV; SIAI |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ VKR2006; IAM @ iam @ VKR2006 |
Serial |
647 |
|
|
|
|
|
Author |
Arnau Ramisa; Adriana Tapus; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras |
|
|
Title |
Robust Vision-Based Localization using Combinations of Local Feature Regions Detectors |
Type |
Journal Article |
|
Year |
2009 |
Publication |
Autonomous Robots |
Abbreviated Journal |
AR |
|
|
Volume |
27 |
Issue |
4 |
Pages |
373-385 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a vision-based approach to mobile robot localization. The model of the environment is topological. The new approach characterizes a place using a signature consisting of a constellation of descriptors computed over different types of local affine covariant regions, extracted from an omnidirectional image acquired by rotating a standard camera with a pan-tilt unit. This type of representation permits reliable and distinctive environment modelling. Our objectives were to validate the proposed method in indoor environments and also to find out whether combining complementary local feature region detectors improves localization compared with using a single region detector. Our experimental results show that, if false matches are effectively rejected, combining different affine covariant region detectors notably increases the performance of the approach by exploiting the different strengths of the individual detectors. To reduce the localization time, two strategies are evaluated: re-ranking the map nodes using a global similarity measure, and using a standard perspective field of view of 45°.
As a further contribution towards systematically testing topological localization methods, this work proposes a novel method to measure how localization performance degrades as the robot moves away from the point where the original signature was acquired, which reveals the robustness of the proposed signature. For this to be effective, it must be done in several varied environments that cover the situations in which the robot may have to perform localization. |
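A minimal sketch of the detector-combination idea using OpenCV, as an assumption-laden stand-in (SIFT plus MSER on a perspective image rather than affine covariant detectors on omnidirectional images): keypoints from both detectors are pooled, described with SIFT, and false matches are rejected with Lowe's ratio test.

```python
import cv2

def signature(gray):
    """Pool keypoints from two complementary detectors, describe with SIFT."""
    sift = cv2.SIFT_create()
    kps = (list(sift.detect(gray, None)) +
           list(cv2.MSER_create().detect(gray, None)))
    return sift.compute(gray, kps)          # -> (keypoints, descriptors)

def ratio_matches(desc_a, desc_b, ratio=0.75):
    """Reject false matches with Lowe's ratio test."""
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc_a, desc_b, k=2)
    return [m for m, n in pairs if m.distance < ratio * n.distance]
```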
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0929-5593 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RTA2009 |
Serial |
1245 |
|
|
|
|
|
Author |
Ariel Amato; Mikhail Mozerov; Xavier Roca; Jordi Gonzalez |
|
|
Title |
Robust Real-Time Background Subtraction Based on Local Neighborhood Patterns |
Type |
Journal Article |
|
Year |
2010 |
Publication |
EURASIP Journal on Advances in Signal Processing |
Abbreviated Journal |
EURASIPJ |
|
|
Volume |
|
Issue |
|
Pages |
7 |
|
|
Keywords |
|
|
|
Abstract |
Article ID 901205
This paper describes an efficient background subtraction technique for detecting moving objects. The proposed approach is able to overcome difficulties such as illumination changes and moving shadows. Our method introduces two discriminative features based on angular and modular patterns, which are formed by similarity measurements between two sets of RGB color vectors: one belonging to the background image and the other to the current image. We show how these patterns are used to improve foreground detection in the presence of moving shadows and when there are strong color similarities between background and foreground pixels. Experimental results over a collection of public and our own datasets of real image sequences demonstrate that the proposed technique achieves superior performance compared with state-of-the-art methods. Furthermore, both the low computational and space complexities make the presented algorithm feasible for real-time applications. |
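A minimal sketch of the two pattern types named in the abstract, with illustrative thresholds: per pixel, the angle between the background and current RGB vectors captures chromatic change, while the modulus ratio captures intensity change, so shadows (intensity drop, little chromatic change) need not be marked as foreground. Threshold values and the function name are assumptions.

```python
import numpy as np

def foreground_mask(bg, frame, t_ang=0.02, t_mod=(0.6, 1.4)):
    """Angular + modular comparison of RGB vectors, background vs. frame.
    Thresholds are illustrative, not the paper's."""
    bg = bg.astype(np.float64) + 1e-6
    fr = frame.astype(np.float64) + 1e-6
    nb, nf = np.linalg.norm(bg, axis=2), np.linalg.norm(fr, axis=2)
    angular = 1.0 - np.clip((bg * fr).sum(axis=2) / (nb * nf), -1.0, 1.0)
    modular = nf / nb                  # shadows: small angle, ratio below 1
    return (angular > t_ang) | (modular < t_mod[0]) | (modular > t_mod[1])
```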
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1110-8657 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ AMR2010 |
Serial |
1463 |
|
|
|
|
|
Author |
Pejman Rasti; Salma Samiei; Mary Agoyi; Sergio Escalera; Gholamreza Anbarjafari |
|
|
Title |
Robust non-blind color video watermarking using QR decomposition and entropy analysis |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Journal of Visual Communication and Image Representation |
Abbreviated Journal |
JVCIR |
|
|
Volume |
38 |
Issue |
|
Pages |
838-847 |
|
|
Keywords |
Video watermarking; QR decomposition; Discrete Wavelet Transformation; Chirp Z-transform; Singular value decomposition; Orthogonal–triangular decomposition |
|
|
Abstract |
Issues such as content identification, document and image security, audience measurement, ownership, and copyright, among others, can be settled by the use of digital watermarking. Many recent video watermarking methods show drops in the visual quality of the sequences. The present work addresses this issue by introducing a robust and imperceptible non-blind color video frame watermarking algorithm. The method divides frames into moving and non-moving parts, and the non-moving part of each color channel is processed separately using a block-based watermarking scheme. Blocks with an entropy lower than the average entropy of all blocks undergo a further process that embeds the watermark image. Finally, a watermarked frame is generated by adding the moving parts back. In the experiments, several signal processing attacks are applied to each watermarked frame, and the results are compared with those of some recent algorithms. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks. |
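A minimal sketch of the entropy-gated, QR-based embedding step, assuming grayscale blocks and a non-blind setting (extraction would compare against the original R factor). The modified element and the strength are illustrative choices, not the paper's.

```python
import numpy as np

def block_entropy(block, bins=256):
    """Shannon entropy of an 8-bit image block."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

def embed_bit(block, bit, strength=5.0):
    """Embed one watermark bit by nudging an element of the QR factor R.
    Per the abstract, only call this on blocks whose entropy is below
    the average entropy of all blocks."""
    Q, R = np.linalg.qr(block.astype(np.float64))
    R[0, 0] += strength if bit else -strength
    return Q @ R                             # watermarked block
```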
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ RSA2016 |
Serial |
2766 |
|
|
|
|
|
Author |
Antonio Lopez; Joan Serrat; Cristina Cañero; Felipe Lumbreras; T. Graf |
|
|
Title |
Robust lane markings detection and road geometry computation |
Type |
Journal Article |
|
Year |
2010 |
Publication |
International Journal of Automotive Technology |
Abbreviated Journal |
IJAT |
|
|
Volume |
11 |
Issue |
3 |
Pages |
395–407 |
|
|
Keywords |
lane markings |
|
|
Abstract |
Detection of lane markings based on a camera sensor can be a low-cost solution for lane departure and curve-over-speed warnings. A number of methods and implementations have been reported in the literature. However, reliable detection is still an issue because of, for example, cast shadows, worn and occluded markings, and variable ambient lighting conditions. We focus on increasing detection reliability in two ways. First, we employed an image feature other than the commonly used edges: ridges, which we claim address this problem better. Second, we adapted RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane lines to the image features, based on both ridgeness and ridge orientation. In addition, the model was fitted to the left and right lane lines simultaneously to enforce a consistent result. Four measures of interest for driver assistance applications were computed directly from the fitted parametric model at each frame: lane width, lane curvature, and vehicle yaw angle and lateral offset with regard to the lane medial axis. We qualitatively assessed our method on video sequences captured on several road types and under very different lighting conditions. We also quantitatively assessed it on synthetic but realistic video sequences for which the road geometry and vehicle trajectory ground truth are known. |
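A minimal sketch of the robust-fitting step, assuming ridge pixels are already extracted as (y, x) points: RANSAC over a single-line parabolic model. The paper fits both lane lines jointly and also uses ridge orientation; this only shows the RANSAC skeleton, and all parameters are illustrative.

```python
import numpy as np

def ransac_lane(points, n_iter=500, tol=2.0):
    """RANSAC a parabolic lane model x = a*y^2 + b*y + c to (y, x) pixels."""
    rng = np.random.default_rng(0)
    ys, xs = points[:, 0].astype(float), points[:, 1].astype(float)
    best = (0, None)
    for _ in range(n_iter):
        idx = rng.choice(len(points), size=3, replace=False)
        coeffs = np.polyfit(ys[idx], xs[idx], deg=2)   # minimal sample
        inliers = (np.abs(np.polyval(coeffs, ys) - xs) < tol).sum()
        if inliers > best[0]:
            best = (inliers, coeffs)
    return best[1], best[0]                 # model, inlier count
```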
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
The Korean Society of Automotive Engineers |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1229-9138 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ LSC2010 |
Serial |
1300 |
|
|
|
|
|
Author |
Laura Igual; Agata Lapedriza; Ricard Borras |
|
|
Title |
Robust Gait-Based Gender Classification using Depth Cameras |
Type |
Journal Article |
|
Year |
2013 |
Publication |
EURASIP Journal on Advances in Signal Processing |
Abbreviated Journal |
EURASIPJ |
|
|
Volume |
37 |
Issue |
1 |
Pages |
72-80 |
|
|
Keywords |
|
|
|
Abstract |
This article presents a new approach to gait-based gender recognition using depth cameras that can run in real time. The main contribution of this study is a new fast feature extraction strategy that uses the 3D point cloud obtained from the frames in a gait cycle. For each frame, these points are aligned according to their centroid and grouped. After that, they are projected onto their PCA plane, obtaining a representation of the cycle that is particularly robust against view changes. Final discriminative features are then computed by first building a histogram of the projected points and then applying linear discriminant analysis. To test the method we used the DGait database, which is currently the only publicly available database for gait analysis that includes depth information. We performed experiments on manually labeled cycles and over whole video sequences, and the results show that our method significantly improves accuracy compared with state-of-the-art systems that do not use depth information. Furthermore, our approach is insensitive to illumination changes, given that it discards the RGB information, which makes the method especially suitable for real applications, as illustrated in the last part of the experiments section. |
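A minimal sketch of the feature pipeline, assuming the 3D point clouds of one gait cycle are given: centroid alignment, projection on the PCA plane, a 2D histogram, and LDA on top. Bin counts and names are illustrative, and the per-frame grouping step is omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cycle_descriptor(clouds, bins=32):
    """Histogram of a gait cycle's 3D points projected on their PCA plane."""
    pts = np.vstack(clouds)                 # stack all frames of one cycle
    pts = pts - pts.mean(axis=0)            # align by centroid
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    proj = pts @ vt[:2].T                   # project on the PCA plane
    hist, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=bins)
    return (hist / hist.sum()).ravel()

# Final discriminative features, given descriptors X and gender labels y:
# lda = LinearDiscriminantAnalysis().fit(X, y)
```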
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; OR; MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ ILB2013 |
Serial |
2144 |
|
|
|
|
|
Author |
Jose Manuel Alvarez; Theo Gevers; Ferran Diego; Antonio Lopez |
|
|
Title |
Road Geometry Classification by Adaptive Shape Models |
Type |
Journal Article |
|
Year |
2013 |
Publication |
IEEE Transactions on Intelligent Transportation Systems |
Abbreviated Journal |
TITS |
|
|
Volume |
14 |
Issue |
1 |
Pages |
459-468 |
|
|
Keywords |
road detection |
|
|
Abstract |
Vision-based road detection is important for different applications in transportation, such as autonomous driving, vehicle collision warning, and pedestrian crossing detection. Common approaches to road detection are based on low-level road appearance (e.g., color or texture) and neglect the scene geometry and context. Hence, using only low-level features makes these algorithms highly dependent on structured roads, road homogeneity, and lighting conditions. Therefore, the aim of this paper is to classify road geometries for road detection through the analysis of scene composition and temporal coherence. Road geometry classification is performed by building corresponding models from training images containing prototypical road geometries. We propose adaptive shape models in which spatial pyramids are steered by the inherent spatial structure of road images. To reduce the influence of lighting variations, invariant features are used. Large-scale experiments show that the proposed road geometry classifier yields a high recognition rate of 73.57% ± 13.1, clearly outperforming other state-of-the-art methods. Including road shape information improves road detection results over existing appearance-based methods. Finally, it is shown that invariant features and temporal information provide robustness against disturbing imaging conditions. |
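For contrast with the adaptive models proposed in the paper, a minimal sketch of a fixed spatial pyramid (uniform grids, pooled and concatenated); the paper's contribution is precisely to steer these cells by the spatial structure of road images instead of using a uniform grid. Level counts and the function name are assumptions.

```python
import numpy as np

def spatial_pyramid(feature_map, levels=(1, 2, 4)):
    """Pool a per-pixel feature map over uniform grids and concatenate."""
    h, w = feature_map.shape[:2]
    cells = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                cell = feature_map[i * h // n:(i + 1) * h // n,
                                   j * w // n:(j + 1) * w // n]
                cells.append(np.atleast_1d(cell.mean(axis=(0, 1))))
    return np.concatenate(cells)
```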
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1524-9050 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ AGD2013; ADAS @ adas @ |
Serial |
2269 |
|
|
|
|
|
Author |
Jose Manuel Alvarez; Antonio Lopez |
|
|
Title |
Road Detection Based on Illuminant Invariance |
Type |
Journal Article |
|
Year |
2011 |
Publication |
IEEE Transactions on Intelligent Transportation Systems |
Abbreviated Journal |
TITS |
|
|
Volume |
12 |
Issue |
1 |
Pages |
184-193 |
|
|
Keywords |
road detection |
|
|
Abstract |
By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach relies on using a shadow-invariant feature space combined with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works in still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms. |
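A minimal sketch of a shadow-invariant feature space in the spirit the abstract describes, following the classic log-chromaticity construction: per-pixel log-chromaticities are projected onto a calibrated invariant direction theta. The angle and the downstream online road model are assumptions here, not the paper's exact formulation.

```python
import numpy as np

def illuminant_invariant(img, theta):
    """Project per-pixel log-chromaticity onto the invariant direction
    theta (radians), a camera-dependent calibration parameter."""
    img = img.astype(np.float64) + 1.0
    r = np.log(img[..., 0] / img[..., 1])   # log(R/G)
    b = np.log(img[..., 2] / img[..., 1])   # log(B/G)
    return np.cos(theta) * r + np.sin(theta) * b

# Road pixels can then be classified by an online model (e.g. a histogram
# of invariant values seeded from a region just in front of the vehicle).
```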
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ AlL2011 |
Serial |
1456 |
|
|
|
|
|
Author |
Fadi Dornaika; Angel Sappa |
|
|
Title |
Rigid and Non-rigid Face Motion Tracking by Aligning Texture Maps and Stereo 3D Models |
Type |
Journal Article |
|
Year |
2007 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
28 |
Issue |
15 |
Pages |
2116-2126 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ DoS2007c |
Serial |
877 |
|
|
|
|
|
Author |
Pichao Wang; Wanqing Li; Philip Ogunbona; Jun Wan; Sergio Escalera |
|
|
Title |
RGB-D-based Human Motion Recognition with Deep Learning: A Survey |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Computer Vision and Image Understanding |
Abbreviated Journal |
CVIU |
|
|
Volume |
171 |
Issue |
|
Pages |
118-139 |
|
|
Keywords |
Human motion recognition; RGB-D data; Deep learning; Survey |
|
|
Abstract |
Human motion recognition is one of the most important branches of human-centered research. In recent years, motion recognition based on RGB-D data has attracted much attention. Along with developments in artificial intelligence, deep learning techniques have gained remarkable success in computer vision. In particular, convolutional neural networks (CNN) have achieved great success in image-based tasks, and recurrent neural networks (RNN) are renowned for sequence-based problems. Specifically, deep learning methods based on the CNN and RNN architectures have been adopted for motion recognition using RGB-D data. In this paper, a detailed overview of recent advances in RGB-D-based motion recognition is presented. The reviewed methods are broadly categorized into four groups, depending on the modality adopted for recognition: RGB-based, depth-based, skeleton-based, and RGB+D-based. As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. In particular, we highlight methods for encoding the spatial-temporal-structural information inherent in video sequences and discuss potential directions for future research. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ WLO2018 |
Serial |
3123 |
|