|
Records |
Links |
|
Author |
Francisco Alvaro; Francisco Cruz; Joan Andreu Sanchez; Oriol Ramos Terrades; Jose Miguel Benedi |
|
|
Title |
Structure Detection and Segmentation of Documents Using 2D Stochastic Context-Free Grammars |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
150 |
Issue |
A |
Pages |
147-154 |
|
|
Keywords |
document image analysis; stochastic context-free grammars; text classification features |
|
|
Abstract |
In this paper we define a bidimensional extension of Stochastic Context-Free Grammars for structure detection and segmentation of images of documents. Two sets of text classification features are used to perform an initial classification of each zone of the page. Then, the document segmentation is obtained as the most likely hypothesis according to a stochastic grammar. We used a dataset of historical marriage license books to validate this approach. We also tested several inference algorithms for Probabilistic Graphical Models, and the results showed that the proposed grammatical model outperformed the other methods. Furthermore, grammars also provide the document structure along with its segmentation. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 601.158; 600.077; 600.061 |
Approved |
no |
|
|
Call Number |
Admin @ si @ ACS2015 |
Serial |
2531 |
|
|
|
|
|
Author |
Daniel Sanchez; Miguel Angel Bautista; Sergio Escalera |
|
|
Title |
HuPBA 8k+: Dataset and ECOC-GraphCut based Segmentation of Human Limbs |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
150 |
Issue |
A |
Pages |
173–188 |
|
|
Keywords |
Human limb segmentation; ECOC; Graph-Cuts |
|
|
Abstract |
Human multi-limb segmentation in RGB images has attracted a lot of interest in the research community because of the huge amount of possible applications in fields like Human-Computer Interaction, Surveillance, eHealth, or Gaming. Nevertheless, human multi-limb segmentation is a very hard task because of the changes in appearance produced by different points of view, clothing, lighting conditions, occlusions, and number of articulations of the human body. Furthermore, this huge pose variability makes the availability of large annotated datasets difficult. In this paper, we introduce the HuPBA8k+ dataset. The dataset contains more than 8000 labeled frames at pixel precision, including more than 120000 manually labeled samples of 14 different limbs. For completeness, the dataset is also labeled at frame-level with action annotations drawn from an 11 action dictionary which includes both single person actions and person-person interactive actions. Furthermore, we also propose a two-stage approach for the segmentation of human limbs. In a first stage, human limbs are trained using cascades of classifiers to be split in a tree-structure way, which is included in an Error-Correcting Output Codes (ECOC) framework to define a body-like probability map. This map is used to obtain a binary mask of the subject by means of GMM color modelling and GraphCuts theory. In a second stage, we embed a similar tree-structure in an ECOC framework to build a more accurate set of limb-like probability maps within the segmented user mask, that are fed to a multi-label GraphCut procedure to obtain final multi-limb segmentation. The methodology is tested on the novel HuPBA8k+ dataset, showing performance improvements in comparison to state-of-the-art approaches. In addition, a baseline of standard action recognition methods for the 11 actions categories of the novel dataset is also provided. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBE2015 |
Serial |
2552 |
|
|
|
|
|
Author |
Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas |
|
|
Title |
Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
175 |
Issue |
B |
Pages |
866–876 |
|
|
Keywords |
Head gestures recognition; Mirroring detection; Dyadic social interaction analysis; Wearable devices |
|
|
Abstract |
During face-to-face human interaction, nonverbal communication plays a fundamental role. A relevant aspect that takes part during social interactions is represented by mirroring, in which a person tends to mimic the non-verbal behavior (head and body gestures, vocal prosody, etc.) of the counterpart. In this paper, we introduce a computer vision-based system to detect mirroring in dyadic social interactions with the use of a wearable platform. In our context, mirroring is inferred as simultaneous head noddings displayed by the interlocutors. Our approach consists of the following steps: (1) facial features extraction; (2) facial features stabilization; (3) head nodding recognition; and (4) mirroring detection. Our system achieves a mirroring detection accuracy of 72% on a custom mirroring dataset. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.072; 600.068 |
Approved |
no |
|
|
Call Number |
Admin @ si @ TRM2016 |
Serial |
2721 |
|
|
|
|
|
Author |
Cesar Isaza; Joaquin Salas; Bogdan Raducanu |
|
|
Title |
Toward the Detection of Urban Infrastructures Edge Shadows |
Type |
Conference Article |
|
Year |
2010 |
Publication |
12th International Conference on Advanced Concepts for Intelligent Vision Systems |
Abbreviated Journal |
|
|
|
Volume |
6474 |
Issue |
I |
Pages |
30–37 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we propose a novel technique to detect the shadows cast by urban infrastructure, such as buildings, billboards, and traffic signs, using a sequence of images taken from a fixed camera. In our approach, we compute two different background models in parallel: one for the edges and one for the reflected light intensity. An algorithm is proposed to train the system to distinguish between moving edges in general and edges that belong to static objects, creating an edge background model. Then, during operation, a background intensity model allows us to separate moving from static objects. Those edges included in the moving objects and those that belong to the edge background model are subtracted from the current image edges. The remaining edges are the ones cast by urban infrastructure. Our method is tested on a typical crossroad scene and the results show that the approach is sound and promising. |
|
|
Address |
Sydney, Australia |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
Blanc-Talon et al. |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-17687-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACIVS |
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ ISR2010 |
Serial |
1458 |
|
|
|
|
|
Author |
Susana Alvarez; Anna Salvatella; Maria Vanrell; Xavier Otazu |
|
|
Title |
Low-dimensional and Comprehensive Color Texture Description |
Type |
Journal Article |
|
Year |
2012 |
Publication |
Computer Vision and Image Understanding |
Abbreviated Journal |
CVIU |
|
|
Volume |
116 |
Issue |
I |
Pages |
54-67 |
|
|
Keywords |
|
|
|
Abstract |
Image retrieval can be dealt with by combining standard descriptors, such as those of MPEG-7, which are defined independently for each visual cue (e.g. SCD or CLD for color, HTD for texture, or EHD for edges). A common problem is how to combine similarities coming from descriptors representing different concepts in different spaces. In this paper we propose a color texture description that bypasses this problem from its inherent definition. It is based on a low-dimensional space with 6 perceptual axes. Texture is described in a 3D space derived from a direct implementation of the original Julesz’s Texton theory, and color is described in a 3D perceptual space. This early fusion through the blob concept in these two bounded spaces avoids the problem and allows us to derive a sparse color-texture descriptor that achieves performance similar to MPEG-7 in image retrieval. Moreover, our descriptor presents comprehensive qualities, since it can also be applied to either segmentation or browsing: (a) a dense image representation is defined from the descriptor, showing a reasonable performance in locating texture patterns included in complex images; and (b) a vocabulary of basic terms is derived to build an intermediate-level descriptor in natural language, improving browsing by bridging the semantic gap. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1077-3142 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CAT;CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ ASV2012 |
Serial |
1827 |
|
|
|
|
|
Author |
Patricia Marquez; Debora Gil ; Aura Hernandez-Sabate |
|
|
Title |
Error Analysis for Lucas-Kanade Based Schemes |
Type |
Conference Article |
|
Year |
2012 |
Publication |
9th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
7324 |
Issue |
I |
Pages |
184-191 |
|
|
Keywords |
Optical flow, Confidence measure, Lucas-Kanade, Cardiac Magnetic Resonance |
|
|
Abstract |
Optical flow is a valuable tool for motion analysis in medical imaging sequences. A reliable application requires determining the accuracy of the computed optical flow. This is a main challenge given the absence of ground truth in medical sequences. This paper presents an error analysis of Lucas-Kanade schemes in terms of intrinsic design errors and numerical stability of the algorithm. Our analysis provides a confidence measure that is naturally correlated to the accuracy of the flow field. Our experiments show the higher predictive value of our confidence measure compared to existing measures. |
|
|
Address |
Aveiro, Portugal |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer-Verlag Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
english |
Summary Language |
|
Original Title |
|
|
|
Series Editor |
Campilho, Aurélio and Kamel, Mohamed |
Series Title |
Lecture Notes in Computer Science |
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-31294-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAR |
|
|
Notes |
IAM |
Approved |
no |
|
|
Call Number |
IAM @ iam @ MGH2012a |
Serial |
1899 |
|
|
|
|
|
Author |
Fernando Barrera; Felipe Lumbreras; Angel Sappa |
|
|
Title |
Evaluation of Similarity Functions in Multimodal Stereo |
Type |
Conference Article |
|
Year |
2012 |
Publication |
9th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
7324 |
Issue |
I |
Pages |
320-329 |
|
|
Keywords |
|
|
Abstract |
This paper presents an evaluation framework for multimodal stereo matching, which allows comparing the performance of four similarity functions. Additionally, it presents details of a multimodal stereo head that supplies thermal infrared and color images, as well as aspects of its calibration and rectification. The pipeline includes a novel method for disparity selection, which is suitable for evaluating the similarity functions. Finally, a benchmark for comparing different initializations of the proposed framework is presented. Similarity functions are based on mutual information, gradient orientation and scale space representations. Their evaluation is performed using two metrics: i) disparity error, and ii) number of correct matches on planar regions. In addition to the proposed evaluation, the current paper also shows that 3D sparse representations can be recovered from such a multimodal stereo head. |
|
|
Address |
Aveiro, Portugal |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-31294-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
BLS2012a |
Serial |
2014 |
|
|
|
|
|
Author |
Miguel Oliveira; Angel Sappa; V. Santos |
|
|
Title |
Color Correction using 3D Gaussian Mixture Models |
Type |
Conference Article |
|
Year |
2012 |
Publication |
9th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
7324 |
Issue |
I |
Pages |
97-106 |
|
|
Keywords |
|
|
|
Abstract |
The current paper proposes a novel color correction approach based on a probabilistic segmentation framework by using 3D Gaussian Mixture Models. Regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. The proposed approach is evaluated using both a recently published metric and two large data sets composed of seventy images. The evaluation is performed by comparing our algorithm with eight well known color correction algorithms. Results show that the proposed approach is the highest scoring color correction method. Also, the proposed single step 3D color space probabilistic segmentation reduces processing time over similar approaches. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-31294-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ OSS2012a |
Serial |
2015 |
|
|
|
|
|
Author |
Francesco Ciompi; Oriol Pujol; E Fernandez-Nofrerias; J. Mauri; Petia Radeva |
|
|
Title |
ECOC Random Fields for Lumen Segmentation in Radial Artery IVUS Sequences |
Type |
Conference Article |
|
Year |
2009 |
Publication |
12th International Conference on Medical Image and Computer Assisted Intervention |
Abbreviated Journal |
|
|
|
Volume |
5762 |
Issue |
II |
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
The measure of lumen volume on radial arteries can be used to evaluate the vessel response to different vasodilators. In this paper, we present a framework for automatic lumen segmentation in longitudinal cut images of the radial artery from intravascular ultrasound sequences. The segmentation is tackled as a classification problem where the contextual information is exploited by means of Conditional Random Fields (CRFs). A multi-class classification framework is proposed, and inference is achieved by combining binary CRFs according to the Error-Correcting-Output-Code technique. The results are validated against manually segmented sequences. Finally, the method is compared with other state-of-the-art classifiers. |
|
|
Address |
London, UK |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-04270-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MICCAI |
|
|
Notes |
MILAB;HuPBA |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ CPF2009 |
Serial |
1228 |
|
|
|
|
|
Author |
Marco Pedersoli; Jordi Gonzalez; Andrew Bagdanov; Juan J. Villanueva |
|
|
Title |
Recursive Coarse-to-Fine Localization for fast Object Recognition |
Type |
Conference Article |
|
Year |
2010 |
Publication |
11th European Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
6313 |
Issue |
II |
Pages |
280–293 |
|
|
Keywords |
|
|
|
Abstract |
Cascading techniques are commonly used to speed up the scan of an image for object detection. However, cascades of detectors are slow to train due to the high number of detectors and corresponding thresholds to learn. Furthermore, they do not use any prior knowledge about the scene structure to decide where to focus the search. To handle these problems, we propose a new way to scan an image, where we couple a recursive coarse-to-fine refinement with spatial constraints on the object location. To do so, we split an image into a set of uniformly distributed neighborhood regions, and for each of these we apply a local greedy search over feature resolutions. The neighborhood is defined as a scanning region that only one object can occupy. Therefore, the best hypothesis is obtained as the location with maximum score, and no thresholds are needed. We present an implementation of our method using a pyramid of HOG features and we evaluate it on two standard databases, VOC2007 and the INRIA dataset. Results show that the Recursive Coarse-to-Fine Localization (RCFL) achieves a 12x speed-up compared to standard sliding windows. Compared with a multiple-resolution cascade approach, our method has slightly better performance in speed and Average-Precision. Furthermore, in contrast to the cascading approach, the speed-up is independent of image conditions, the number of detected objects and clutter. |
|
|
Address |
Crete (Greece) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-15566-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCV |
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
DAG @ dag @ PGB2010 |
Serial |
1438 |
|
|
|
|
|
Author |
Carles Fernandez; Jordi Gonzalez; Xavier Roca |
|
|
Title |
Automatic Learning of Background Semantics in Generic Surveilled Scenes |
Type |
Conference Article |
|
Year |
2010 |
Publication |
11th European Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
6313 |
Issue |
II |
Pages |
678–692 |
|
|
Keywords |
|
|
|
Abstract |
Advanced surveillance systems for behavior recognition in outdoor traffic scenes depend strongly on the particular configuration of the scenario. Scene-independent trajectory analysis techniques statistically infer semantics in locations where motion occurs, and such inferences are typically limited to abnormality. Thus, it is interesting to design contributions that automatically categorize more specific semantic regions. State-of-the-art approaches for unsupervised scene labeling exploit trajectory data to segment areas like sources, sinks, or waiting zones. Our method, in addition, incorporates scene-independent knowledge to assign more meaningful labels like crosswalks, sidewalks, or parking spaces. First, a spatiotemporal scene model is obtained from trajectory analysis. Subsequently, a so-called GI-MRF inference process reinforces spatial coherence, and incorporates taxonomy-guided smoothness constraints. Our method achieves automatic and effective labeling of conceptual regions in urban scenarios, and is robust to tracking errors. Experimental validation on 5 surveillance databases has been conducted to assess the generality and accuracy of the segmentations. The resulting scene models are used for model-based behavior analysis. |
|
|
Address |
Crete (Greece) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-15551-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCV |
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ FGR2010 |
Serial |
1439 |
|
|
|
|
|
Author |
Muhammad Anwer Rao; David Vazquez; Antonio Lopez |
|
|
Title |
Color Contribution to Part-Based Person Detection in Different Types of Scenarios |
Type |
Conference Article |
|
Year |
2011 |
Publication |
14th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
6855 |
Issue |
II |
Pages |
463-470 |
|
|
Keywords |
Pedestrian Detection; Color |
|
|
Abstract |
Camera-based person detection is of paramount interest due to its potential applications. The task is difficult because of the great variety of backgrounds (scenarios, illumination) in which persons are present, as well as their intra-class variability (pose, clothing, occlusion). In fact, the person class is one of those included in the popular PASCAL visual object classes (VOC) challenge. A breakthrough for this challenge, regarding person detection, is due to Felzenszwalb et al. These authors proposed a part-based detector that relies on histograms of oriented gradients (HOG) and latent support vector machines (LatSVM) to learn a model of the whole human body and its constitutive parts, as well as their relative position. Since the approach of Felzenszwalb et al. appeared, new variants have been proposed, usually giving rise to more complex models. In this paper, we focus on an issue that has not attracted sufficient interest up to now. In particular, we refer to the fact that HOG is usually computed from the RGB color space, but other possibilities exist and deserve the corresponding investigation. In this paper we challenge RGB space with the opponent color space (OPP), which is inspired by the human vision system. We compute the HOG on top of OPP, then we train and test the part-based human classifier by Felzenszwalb et al. using the PASCAL VOC challenge protocols and person database. Our experiments demonstrate that OPP outperforms RGB. We also investigate possible differences among types of scenarios: indoor, urban and countryside. Interestingly, our experiments suggest that the benefits of OPP with respect to RGB mainly arise in indoor and countryside scenarios, those in which the human visual system was designed by evolution. |
|
|
Address |
Seville, Spain |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
Berlin Heidelberg |
Editor |
P. Real, D. Diaz, H. Molina, A. Berciano, W. Kropatsch |
|
|
Language |
English |
Summary Language |
english |
Original Title |
Color Contribution to Part-Based Person Detection in Different Types of Scenarios |
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-23677-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ RVL2011b |
Serial |
1665 |
|
|
|
|
|
Author |
Naveen Onkarappa; Angel Sappa |
|
|
Title |
Space Variant Representations for Mobile Platform Vision Applications |
Type |
Conference Article |
|
Year |
2011 |
Publication |
14th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
6855 |
Issue |
II |
Pages |
146-154 |
|
|
Keywords |
|
|
|
Abstract |
The log-polar space variant representation, motivated by biological vision, has been widely studied in the literature. Its data reduction and invariance properties made it useful in many vision applications. However, due to its nature, it fails in preserving features in the periphery. In the current work, as an attempt to overcome this problem, we propose a novel space-variant representation. It is evaluated and proved to be better than the log-polar representation in preserving the peripheral information, crucial for on-board mobile vision applications. The evaluation is performed by comparing log-polar and the proposed representation once they are used for estimating dense optical flow. |
|
|
Address |
Seville, Spain |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
P. Real, D. Diaz, H. Molina, A. Berciano, W. Kropatsch |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-23677-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
NaS2011; ADAS @ adas @ |
Serial |
1686 |
|
|
|
|
|
Author |
Ricard Borras; Agata Lapedriza; Laura Igual |
|
|
Title |
Depth Information in Human Gait Analysis: An Experimental Study on Gender Recognition |
Type |
Conference Article |
|
Year |
2012 |
Publication |
9th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
7325 |
Issue |
II |
Pages |
98-105 |
|
|
Keywords |
|
|
|
Abstract |
This work presents DGait, a new gait database acquired with a depth camera. This database contains videos from 53 subjects walking in different directions. The intent of this database is to provide a public set to explore whether depth can be used as an additional information source for gait classification purposes. Each video is labelled according to subject, gender and age. Furthermore, for each subject and viewpoint, we provide the initial and final frames of an entire walk cycle. In addition, we perform gait-based gender classification experiments with the DGait database, in order to illustrate the usefulness of depth information for this purpose. In our experiments, we extract 2D and 3D gait features based on shape descriptors, and compare the performance of these features for gender identification using a kernel SVM. The obtained results show that depth can be an information source of great relevance for gait classification problems. |
|
|
Address |
Aveiro, Portugal |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-31297-7 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAR |
|
|
Notes |
OR; MILAB;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ BLI2012 |
Serial |
2009 |
|
|
|
|
|
Author |
Laura Igual; Joan Carles Soliva; Roger Gimeno; Sergio Escalera; Oscar Vilarroya; Petia Radeva |
|
|
Title |
Automatic Internal Segmentation of Caudate Nucleus for Diagnosis of Attention Deficit Hyperactivity Disorder |
Type |
Conference Article |
|
Year |
2012 |
Publication |
9th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
7325 |
Issue |
II |
Pages |
222-229 |
|
|
Keywords |
|
|
|
Abstract |
Poster
Studies on volumetric brain Magnetic Resonance Imaging (MRI) showed neuroanatomical abnormalities in pediatric Attention-Deficit/Hyperactivity Disorder (ADHD). In particular, the diminished right caudate volume is one of the most replicated findings among ADHD samples in morphometric MRI studies. In this paper, we propose a fully-automatic method for internal caudate nucleus segmentation based on machine learning. Moreover, the ratio between the right caudate body volume and the bilateral caudate body volume is applied in an ADHD diagnostic test. We separately validate the automatic internal segmentation of the caudate into head and body structures and the diagnostic test using real data from ADHD and control subjects. As a result, we show accurate internal caudate segmentation and similar performance between the proposed automatic diagnostic test and manual annotation. |
|
|
Address |
Aveiro, Portugal |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-31297-7 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAR |
|
|
Notes |
OR; HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ISG2012 |
Serial |
2059 |
|